minichall_abolande

This repository contains the code used to generate results for the ECE 50024 Mini Challenge.

TO RUN THE TEST FILE:

  1. Use 'converttoRGBfromgray_RGBA.py' to convert all images to RGB. Update the file path to the folder of testing images and set the save file name/path.
  2. Use 'crop_and_save_images_test.py' to crop and save all images in the test folder. Use the RGB files as input, and choose a new file name/path where the cropped images will be saved.
  3. Open 'testing_pytorch5_ordered.py', the testing file. Ensure that 'new_train_model_epoch_9.pth' is loaded for the model parameters. Update the path where the .csv results will be stored.
  4. Run the script (a minimal driver sketch that chains the three scripts is shown after this list).
  5. View the stored results in the .csv file selected in step 3.
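The sketch below is not part of the repository; it is just one way to run the three scripts in order, assuming the input/output paths have already been edited inside each file as described in steps 1-3.

```python
# Hypothetical driver for the test pipeline. Assumes each script already has its
# input/output paths configured internally, as described in steps 1-3 above.
import subprocess
import sys

steps = [
    "converttoRGBfromgray_RGBA.py",  # step 1: convert test images to RGB
    "crop_and_save_images_test.py",  # step 2: detect faces and crop to 224x224
    "testing_pytorch5_ordered.py",   # step 3: run the trained model and write the .csv
]

for script in steps:
    print(f"Running {script} ...")
    subprocess.run([sys.executable, script], check=True)
```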

Process description:

Training images pre-processing: A .csv file was created that merged the category.csv and train.csv files. This allowed the creation of a dictionary that mapped categories to classes. The script is called 'Got_csv_classes.py', and its output is a .csv with the combined contents of train.csv and category.csv.
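A minimal sketch of that merge, assuming pandas; the column names ('image', 'category', 'class_id') and the output filename are assumptions, not the actual names used in 'Got_csv_classes.py':

```python
# Sketch of the merge performed by 'Got_csv_classes.py' (column names are assumptions).
import pandas as pd

train = pd.read_csv("train.csv")          # assumed columns: image, category
categories = pd.read_csv("category.csv")  # assumed columns: category, class_id

# Join on the shared category column and save the combined table.
merged = train.merge(categories, on="category", how="left")
merged.to_csv("train_with_classes.csv", index=False)

# Dictionary mapping each category name to its class index.
category_to_class = dict(zip(merged["category"], merged["class_id"]))
```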

Once the new .csv file was created, all the training images were divided into class subfolders within the train folder. This was done with 'Got_classification_folders2.py', using the updated .csv file created in the step above.
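A sketch of that split, reusing the hypothetical merged .csv and column names from the sketch above; the folder layout is an assumption:

```python
# Sketch of splitting training images into per-class subfolders
# (the task handled by 'Got_classification_folders2.py'); paths and columns are assumptions.
import os
import shutil
import pandas as pd

df = pd.read_csv("train_with_classes.csv")  # hypothetical merged .csv from the previous step

for _, row in df.iterrows():
    class_dir = os.path.join("train", str(row["class_id"]))
    os.makedirs(class_dir, exist_ok=True)
    src = os.path.join("train", row["image"])
    if os.path.exists(src):
        shutil.move(src, os.path.join(class_dir, row["image"]))
```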

The files in the 'train' folder were pre-processed by first formatting each image as an RGB image. Then, a facial recognition tool was used to identify faces/eyes in each image. Any image with a face and at least one detected eye was cropped to 224x224, and all others were deleted, leaving a clean dataset. The file for converting to RGB images is 'converttoRGBfromgray_RGBA.py', and the file for facial recognition and cropping is 'crop_and_save_images_NEW_train.py'.
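A minimal sketch of that filtering and cropping step, assuming OpenCV Haar cascades (as in the linked opencv/haarcascades repo); the folder names are assumptions:

```python
# Sketch of the face/eye filter and 224x224 crop; input/output folders are assumptions.
import os
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

src_dir, dst_dir = "train_rgb", "train_cropped"
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    img = cv2.imread(os.path.join(src_dir, name))
    if img is None:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) >= 1:  # require a face with at least one eye
            crop = cv2.resize(img[y:y + h, x:x + w], (224, 224))
            cv2.imwrite(os.path.join(dst_dir, name), crop)
            break
    # images with no usable face are simply not copied (i.e. dropped from the clean set)
```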

Helpful links for pre-processing code:
https://www.youtube.com/watch?v=kwKfWBb6frs&t=1878s
https://www.tutorialspoint.com/convert-bgr-and-rgb-with-python-and-opencv#:~:text=To%20convert%20an%20image%20from,it%20using%20the%20matplotlib%20library
https://towardsdatascience.com/face-detection-in-2-minutes-using-opencv-python-90f89d7c0f81
https://www.tensorflow.org/tutorials/images/classification
https://github.com/opencv/opencv/tree/master/data/haarcascades
https://keras.io/examples/vision/image_classification_from_scratch/

Testing images pre-processing: The testing images were converted to RGB in the same way as described above. However, for facial recognition and cropping, rather than deleting images in which no face was detected, those images were stored in their original format. This ensures that the entire dataset is available for testing. Use 'crop_and_save_images_test.py'.
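The only difference from the training crop sketch above is the fallback for images with no detected face; a hedged sketch of that decision, with the same assumptions about cascades and paths:

```python
# Sketch of the test-set variant: crop when a face with an eye is found,
# otherwise save the image unchanged so no test sample is lost.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def crop_or_keep(in_path, out_path):
    img = cv2.imread(in_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        if len(eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])) >= 1:
            cv2.imwrite(out_path, cv2.resize(img[y:y + h, x:x + w], (224, 224)))
            return
    cv2.imwrite(out_path, img)  # no face/eye found: keep the original image
```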

Model training: Training started from the pre-trained EfficientNetB0 model, which provided an initial set of parameters before fine-tuning on the celebrity face dataset. Training was run for 15 epochs, with a .pth checkpoint stored after each epoch. Reviewing the results showed that overfitting began after the ninth epoch, so the ninth-epoch checkpoint was selected for testing. The training code is in 'train_pytorch3.py', and the selected model parameters were saved in 'new_train_model_epoch_9.pth'.
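A minimal sketch of this setup using torchvision's EfficientNetB0 (one of the linked resources); the transforms, batch size, learning rate, and folder name are assumptions, and the actual training lives in 'train_pytorch3.py':

```python
# Sketch of fine-tuning a pre-trained EfficientNetB0 on the class-subfolder dataset.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("train_cropped", transform=transform)  # class subfolders from pre-processing
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights, then replace the classifier head for this task.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, len(train_set.classes))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(15):
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    # a checkpoint is written after every epoch; the ninth was later picked as the best
    torch.save(model.state_dict(), f"new_train_model_epoch_{epoch}.pth")
```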

Helpful links for model training:
https://keras.io/api/applications/efficientnet/
https://pytorch.org/vision/main/models/generated/torchvision.models.efficientnet_b0.html
https://github.com/lukemelas/EfficientNet-PyTorch/tree/master/efficientnet_pytorch
https://pytorch.org/hub/nvidia_deeplearningexamples_efficientnet/

Testing: The test images were pre-processed using the steps described in 'Testing images pre-processing.' Open 'testing_pytorch5_ordered.py', where testing takes place. Ensure that the file path to the cropped images is set correctly, that the 'new_train_model_epoch_9.pth' trained model parameters are loaded, and that the path of the .csv file for storing the test results is updated.
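A minimal sketch of the inference and .csv export step; the test folder, output filename, class count, and column headers are assumptions, and the actual logic lives in 'testing_pytorch5_ordered.py':

```python
# Sketch of loading the epoch-9 checkpoint and writing ordered predictions to a .csv.
import csv
import os
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
num_classes = 100  # assumed class count; use the real number of celebrity classes

model = models.efficientnet_b0(weights=None)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)
model.load_state_dict(torch.load("new_train_model_epoch_9.pth", map_location=device))
model = model.to(device).eval()

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

test_dir = "test_cropped"  # assumed path to the cropped test images
with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "predicted_class"])
    for name in sorted(os.listdir(test_dir)):  # ordered output, as the script name suggests
        img = transform(Image.open(os.path.join(test_dir, name)).convert("RGB"))
        with torch.no_grad():
            pred = model(img.unsqueeze(0).to(device)).argmax(dim=1).item()
        writer.writerow([name, pred])
```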
