# YoloV3 Real Time Object Detector in tensorflow 2.2

yolo implementation in keras and tensorflow 2.2

Explore the docs »[1] · Report Bug[2] · Request Feature[3]

## TODO

- [ ] Transfer learning
- [ ] YoloV4 configuration
- [ ] Live plot losses
- [ ] Command line handler
- [ ] YoloV3 tiny
- [ ] Raspberry Pi support

## Table of Contents

- Getting Started (Prerequisites, Installation)
- Description
- Features
- Usage (Training, Augmentation, Evaluation, Detection)
- Contributing
- License
- Contact

## Getting Started

### Prerequisites

Here are the packages you'll need to install before starting to use the detector:

- pandas==1.0.3
- lxml==4.5.0
- opencv-python-headless==4.2.0.34
- imagesize==1.2.0
- seaborn==0.10.0
- tensorflow==2.2.0
- tensorflow-gpu==2.2.0
- numpy==1.18.2
- matplotlib==3.2.1
- imgaug==0.4.0

### Installation

Clone the repo:

```sh
git clone https://github.com/emadboctorx/yolov3-keras-tf2/
```

Install requirements:

```sh
pip install -r requirements.txt
```

or:

```sh
conda install --file requirements.txt
```

## Description

yolov3-keras-tf2 is an implementation of yolov3[4] (you only look once), a state-of-the-art, real-time object detection system that is extremely fast and accurate. There are many implementations that support tensorflow, but only a few that support tensorflow v2, and since I did not find a version that suited my needs, I decided to create this one, which is very flexible and customizable.
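As a quick sanity check after installation, the pinned versions above can be compared against what's actually installed. This is a minimal sketch, not part of the repo: `check_requirements` is a hypothetical helper, and `importlib.metadata` requires Python 3.8+ (on 3.6/3.7 the `importlib_metadata` backport offers the same API).

```python
from importlib.metadata import version, PackageNotFoundError

# Subset of the pinned requirements from the README
required = {
    "pandas": "1.0.3",
    "numpy": "1.18.2",
    "tensorflow": "2.2.0",
}

def check_requirements(required):
    """Map each package to (installed_version, matches_pin)."""
    report = {}
    for pkg, pin in required.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            installed = None
        report[pkg] = (installed, installed == pin)
    return report

for pkg, (installed, ok) in check_requirements(required).items():
    print(f"{pkg}: installed={installed} pinned={required[pkg]} match={ok}")
```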
It requires Python 3.6+, is not platform specific, and is MIT licensed, which means you can use, copy, modify, and distribute this software however you like.

## Features

### tensorflow 2.2 & keras functional api

This program leverages features that were introduced in tensorflow 2.0, including:

- **Eager execution**: an imperative programming environment that evaluates operations immediately, without building graphs (check here[5])
- **tf.function**: a JIT compilation decorator that speeds up some components of the program (check here[6])
- **tf.data**: an API for input pipelines (check here[7])

### CPU & GPU support

The program detects and uses available GPUs at runtime (training/detection); if no GPU is available, the CPU will be used (slow).

### Random weights and DarkNet weights support

Both options are available. **Note:** in case of using DarkNet yolov3 weights[8], you must maintain the same number of COCO classes[9] (80 classes), as transfer learning to models with different classes will be supported in future versions of this program.

### csv-xml annotation parsers

There are 2 currently supported formats that the program is able to read and translate to input.

XML VOC format, which looks like the following example:

```xml
<annotation>
	<folder>/path/to/image/folder</folder>
	<filename>image_filename.png</filename>
	<path>/path/to/image/folder/image_filename.png</path>
	<size>
		<width>1344</width>
		<height>756</height>
		<depth>3</depth>
	</size>
	<object>
		<name>Car</name>
		<bndbox>
			<xmin>873.0000007680001</xmin>
			<ymin>402.0000001920001</ymin>
			<xmax>1315.00000128</xmax>
			<ymax>697.0000000320001</ymax>
		</bndbox>
	</object>
	<object>
		<name>Car</name>
		<bndbox>
			<xmin>550.999999872</xmin>
			<ymin>404.999999838</ymin>
			<xmax>883.000000512</xmax>
			<ymax>711.000000018</ymax>
		</bndbox>
	</object>
	<object>
		<name>Car</name>
		<bndbox>
			<xmin>8.999999903999992</xmin>
			<ymin>374.999999976</ymin>
			<xmax>525.99999984</xmax>
			<ymax>736.000000344</ymax>
		</bndbox>
	</object>
	<object>
		<name>Traffic Lights</name>
		<bndbox>
			<xmin>857.999999808</xmin>
			<ymin>312.99999960599996</ymin>
			<xmax>903.9999991679999</xmax>
			<ymax>372.99999933</ymax>
		</bndbox>
	</object>
	<object>
		<name>Traffic Lights</name>
		<bndbox>
			<xmin>701.999999232</xmin>
			<ymin>207.00000014399998</ymin>
			<xmax>753.999998976</xmax>
			<ymax>275.000000184</ymax>
		</bndbox>
	</object>
	<object>
		<name>Street Sign</name>
		<bndbox>
			<xmin>798.99999984</xmin>
			<ymin>244.999999944</ymin>
			<xmax>881.00000016</xmax>
			<ymax>275.000000184</ymax>
		</bndbox>
	</object>
</annotation>
```

CSV with relative labels, which looks like the following example:

| Image | Object Name | Object Index | bx | by | bw | bh |
| --- | --- | --- | --- | --- | --- | --- |
| img1.png | dog | 2 | 0.438616071 | 0.51521164 | 0.079613095 | 0.123015873 |
| img1.png | car | 1 | 0.177827381 | 0.381613757 | 0.044642857 | 0.091269841 |
| img2.png | Street Sign | 5 | 0.674107143 | 0.44047619 | 0.040178571 | 0.084656085 |

### Anchor generator

A k-means[10] algorithm finds the optimal anchor sizes and generates the anchors, with process visualization.

### matplotlib visualization of all stages

Including:

- Precision and recall curves
- Augmentation options visualization: double-screen (before/after) visualization of the image
- Dataset pre- and post-augmentation visualization with bounding boxes

You can always visualize different stages of the program using my other repo labelpix[11], which is a tool for drawing bounding boxes, but can also be used to visualize bounding boxes over images using csv files in the format mentioned here[12].

### tf.data input pipeline

TFRecords[13] are a simple format for storing a sequence of binary records. Protocol buffers are a cross-platform, cross-language library for efficient serialization of structured data, and are used here as the input pipeline to store and read data efficiently. The program takes images and their respective annotations as input and builds training and (optional) validation TFRecords to be further used for all operations. TFRecords are also used in mid- and post-training evaluation, so it's valid to say you can delete the images to free space after conversion to TFRecords.

### pandas & numpy data handling

Most of the operations use numpy and pandas for efficiency and vectorization.

### imgaug augmentation pipeline (customizable)

Special thanks to the amazing imgaug[14] creators. An (optional) augmentation pipeline is available. **Note:** the augmentation is conducted before the training, not during it, due to technical complications of integrating tensorflow and imgaug.
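The k-means anchor generation mentioned above can be sketched in a few lines of numpy. This simplified version clusters relative box widths and heights with a Euclidean distance; YOLO implementations commonly use an IoU-based distance instead, so treat `kmeans_anchors` as an illustrative helper rather than the repo's exact algorithm.

```python
import numpy as np

def kmeans_anchors(boxes, k=9, seed=0, iterations=100):
    """Cluster (width, height) pairs into k anchor sizes.

    boxes: array of shape (n, 2) holding relative box widths and heights.
    Returns the k anchors sorted by area (smallest first).
    """
    rng = np.random.default_rng(seed)
    # Initialize anchors from k random boxes
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iterations):
        # Assign each box to its nearest anchor
        dists = np.linalg.norm(boxes[:, None, :] - anchors[None, :, :], axis=2)
        assignments = dists.argmin(axis=1)
        # Recompute anchors as cluster means (keep old anchor if cluster is empty)
        new_anchors = np.array([
            boxes[assignments == i].mean(axis=0) if np.any(assignments == i) else anchors[i]
            for i in range(k)
        ])
        if np.allclose(new_anchors, anchors):
            break
        anchors = new_anchors
    return anchors[np.argsort(anchors.prod(axis=1))]

# Random widths/heights stand in for a real dataset's boxes
boxes = np.random.default_rng(1).uniform(0.02, 0.9, size=(500, 2))
print(kmeans_anchors(boxes, k=9))
```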
If you have a small dataset, augmentation is an option, and it can be preconfigured before the training; check Augmentor.md[15].

### logging

Different operations are recorded using the logging module.

### All-in-1 custom Trainer class

For custom training, the Trainer class accepts configurations for augmentation, new anchor generation, new dataset (TFRecord(s)) creation, and mAP evaluation mid-training and post-training. So all you have to do is place images in Data > Photos, provide the configuration that suits you, and start the training process; all operations are managed from the same place for convenience. For detailed instructions check Trainer.md[16].

### Stop and resume training support

By default the trainer saves a checkpoint to Models > checkpoint_name.tf at the end of each training epoch, which enables the training to be resumed at any given point by loading the most recent checkpoint.

### Fully vectorized mAP evaluation

Evaluation is optional during the training every n epochs (not recommended for large datasets, as it predicts every image in the dataset), plus one optional evaluation at the end. Training and validation datasets can be evaluated separately to calculate mAP (mean average precision) as well as precision and recall curves for every class in the model; check Evaluator.md[17].

### labelpix support

You can check my other repo labelpix[18], which is a labeling tool for drawing bounding boxes over images. If you need to make custom datasets, the tool can help, and it is supported by the detector.
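Since the detector consumes the relative-label CSV format shown earlier, here is a sketch of converting those relative coordinates back to pixel boxes with pandas. `to_pixel_boxes` is a hypothetical helper, and it assumes bx/by are box centers, which matches common YOLO conventions but should be checked against the repo's parsers.

```python
import pandas as pd

# Hypothetical rows in the relative-label CSV format shown above
labels = pd.DataFrame({
    "Image": ["img1.png", "img1.png"],
    "Object Name": ["dog", "car"],
    "Object Index": [2, 1],
    "bx": [0.438616071, 0.177827381],
    "by": [0.51521164, 0.381613757],
    "bw": [0.079613095, 0.044642857],
    "bh": [0.123015873, 0.091269841],
})

def to_pixel_boxes(df, image_width, image_height):
    """Convert relative center-format (bx, by, bw, bh) boxes to pixel
    (x_min, y_min, x_max, y_max) corners."""
    out = df.copy()
    out["x_min"] = (df["bx"] - df["bw"] / 2) * image_width
    out["y_min"] = (df["by"] - df["bh"] / 2) * image_height
    out["x_max"] = (df["bx"] + df["bw"] / 2) * image_width
    out["y_max"] = (df["by"] + df["bh"] / 2) * image_height
    return out

print(to_pixel_boxes(labels, image_width=1344, image_height=756)[
    ["Image", "x_min", "y_min", "x_max", "y_max"]])
```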
You can use csv files in the format mentioned here[19] as labels, and load images if you need to preview any stage of the training/augmentation/evaluation/detection.

### Photo & video detection

Detections can be performed on photos or videos using the Predictor class; check Predictor.md[20].

## Usage

### Training

Here are the most basic steps to train using a custom dataset:

1. Copy images to Data > Photos.

2. If labels are in the XML VOC format[21], copy the label xml files to Data > Labels.

3. Create a classes .txt file that contains the classes delimited by \n:

```
dog
cat
car
person
boat
fan
laptop
```

4. Create a training instance and specify input_shape, classes_file, image_width and image_height:

```python
trainer = Trainer(
    input_shape=(416, 416, 3),
    classes_file='/path/to/classes_file.txt',
    image_width=1344,  # The original image width
    image_height=756   # The original image height
)
```

5. Create a dataset configuration (dict) that contains the following keys:

- dataset_name: TFRecord prefix (required)

and one of the following (required):

- relative_labels: path to a csv file in the following format[22], or …

and:

- test_size: percentage of the validation split, ex: 0.1 (optional)
- augmentation: True (optional)

If augmentation is enabled, this implies the following:

- sequences: (required) a list of augmentation sequences; check Augmentor.md[23]
- workers: (optional) defaults to 32 parallel augmentations
- batch_size: (optional) the augmentation batch size; defaults to 64 images to load at once

```python
dataset_conf = {
    'relative_labels': '/path/to/labels.csv',
    'dataset_name': 'dataset_name',
    'test_size': 0.2,
    'sequences': preset_1,  # check Config > augmentation_options.py
    'augmentation': True,
}
```

6. Create a new anchor generation configuration (dict) that contains the following keys:

7. Start the training.

**Note:** If you're going to use DarkNet yolov3 weights, make sure the classes file contains 80 classes (COCO classes) or you'll get an error.
Transfer learning to models with a different number of classes will be supported in future versions of the program.

```python
tr.train(
    epochs=100,
    batch_size=8,
    learning_rate=1e-3,
    dataset_name='dataset_name',
    merge_evaluation=False,
    min_overlaps=0.5,
    new_dataset_conf=dataset_conf,  # check step 5
    new_anchors_conf=anchors_conf,  # check step 6
    # weights='/path/to/weights'  # If you're using DarkNet weights or resuming training
)
```

After the training completes:

- The trained model is saved in the Models folder (which you can use to resume training later, or to predict photos or videos).
- The resulting TFRecords and their corresponding csv data are saved in Data > TFRecords.
- The resulting figures and evaluation results are saved in the Output folder.

### Augmentation

Here are the most basic steps to augment images (no training, just augmentation). If you need to augment photos and take your time to examine/visualize the results, here are the steps:

1. Copy images to Data > Photos, or specify the image_folder param.

2. Ensure you have a csv file containing the labels in the format mentioned here[24]. If you have labels in the xml VOC format, you can easily convert them using Helpers > annotation_parsers.py > parse_voc_folder() (everything is explained in the docstrings).

3. Create an augmentation instance:

```python
from Config.augmentation_options import augmentations
from Helpers.augmentor import DataAugment

aug = DataAugment(
    labels_file='/path/to/labels/csv/file',
    augmentation_map=augmentations
)
aug.create_sequences(sequences)  # check the docs
aug.augment_photo_folder()
```

After augmentation you'll find the augmented images in the Data > Photos folder, or the folder you specified (if you did specify one), and you should find 2 csv files in the Output folder:

- augmented_data_plus_original.csv: you can use this with labelpix[25] to visualize results with bounding boxes
- adjusted_data_plus_original.csv

Either of the 2 csv files above can be used in the new dataset configuration in the training.

### Evaluation

Here are the most basic steps to evaluate a trained model:

1. Create an evaluation instance:

```python
evaluator = Evaluator(
    input_shape=(416, 416, 3),
    train_tf_record='/path/to/train.tfrecord',
    valid_tf_record='/path/to/valid.tfrecord',
    classes_file='/path/to/classes.txt',
    anchors=anchors,     # defaults to yolov3 anchors
    score_threshold=0.1  # defaults to 0.5 but it's okay to be lower
)
```

2. Read actual and prediction results (that resulted from the training):

```python
actual = pd.read_csv('../Data/TFRecords/full_data.csv')
preds = pd.read_csv('../Output/full_dataset_predictions.csv')
```

3. Calculate mAP (mean average precision):

```python
evaluator.calculate_map(
    prediction_data=preds,
    actual_data=actual,
    min_overlaps=0.5,
    display_stats=True
)
```

After evaluation, you'll find the resulting plots and predictions in the Output folder.

### Detection

Here are the most basic steps to perform detection:

1. Create a detection instance:

```python
p = Detector(
    (416, 416, 3),
    '/path/to/classes_file.txt',
    score_threshold=0.5,
    iou_threshold=0.5,
    max_boxes=100,
    anchors=anchors  # Optional: if not specified, yolo default anchors are used
)
```

2. Perform detections:

A) Photos:

```python
photos = ['photo/path1', 'photo/path2']
p.predict_photos(
    photos=photos,
    trained_weights='/path/to/trained/weights'  # .tf or yolov3.weights (80 classes)
)
```

B) Video:

```python
p.detect_video(
    '/path/to/target/vid',
    '/path/to/trained/weights.tf',
)
```

After prediction is complete, you'll find the photos/video in Output > Detections.

## Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

## License

Distributed under the MIT License. See LICENSE[26] for more information.

## Show your support

Give a ⭐️ if this project helped you!

## Contact

Emad Boctor
Link: https://github.com/emadboctorx/yolov3-keras-tf2

To restore the repository, download the bundle:

```sh
wget https://archive.org/download/github.com-emadboctorx-yolov3-keras-tf2_-_2020-05-24_06-10-15/emadboctorx-yolov3-keras-tf2_-_2020-05-24_06-10-15.bundle
```

and run:

```sh
git clone emadboctorx-yolov3-keras-tf2_-_2020-05-24_06-10-15.bundle
```

Source: https://github.com/emadboctorx/yolov3-keras-tf2[27]
Uploader: emadboctorx[28]
Upload date: 2020-05-24

## References

1. Explore the docs » (github.com)
2. Report Bug (github.com)
3. Request Feature (github.com)
4. yolov3 (pjreddie.com)
5. here (www.tensorflow.org)
6. here (www.tensorflow.org)
7. here (www.tensorflow.org)
8. yolov3 weights (pjreddie.com)
9. COCO classes (gist.github.com)
10. k-means (en.wikipedia.org)
11. labelpix (github.com)
12. here (archive.org)
13. TFRecords (www.tensorflow.org)
14. imgaug (github.com)
15. Augmentor.md (archive.org)
16. Trainer.md (archive.org)
17. Evaluator.md (archive.org)
18. labelpix (github.com)
19. here (archive.org)
20. Predictor.md (archive.org)
21. format (archive.org)
22. format (archive.org)
23. Augmentor.md (archive.org)
24. here (archive.org)
25. labelpix (github.com)
26. LICENSE (archive.org)
27. https://github.com/emadboctorx/yolov3-keras-tf2 (github.com)
28. emadboctorx (github.com)
