CIFAR-10 Image Classifier
This project demonstrates a CIFAR-10 image classification model. You can view images from
the dataset, see the model's predictions, and compare them with the actual labels.
Training Method
The model was trained using a ResNet architecture on the CIFAR-10 dataset. The architecture
includes the following components (a sketch in code follows the list):
- Convolutional layers with batch normalization and ReLU activation.
- Residual blocks to mitigate the vanishing gradient problem in deep networks.
- Global average pooling layer followed by a fully connected layer for classification.
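As an illustration, here is a minimal PyTorch sketch of one such residual block; the channel widths, strides, and block layout are assumptions for illustration, not the project's exact configuration:

```python
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convs with batch norm, plus a skip connection."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # 1x1 conv on the skip path when the shape changes, so the addition stays valid.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + self.shortcut(x)  # residual connection mitigates vanishing gradients
        return F.relu(out)
```

A full network stacks blocks like this and ends with global average pooling (e.g. `F.adaptive_avg_pool2d(out, 1)`) feeding a fully connected classification layer.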
The training process involved data augmentation techniques (illustrated in code after the list) such as:
- Random horizontal flipping and rotations to make the model invariant to these transformations.
- Random cropping and scaling to handle different object sizes and aspect ratios.
- Color jittering to simulate different lighting conditions.
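The list above maps naturally onto a torchvision transform pipeline. A sketch follows; the specific parameter values (rotation angle, crop scale, jitter strengths) are illustrative assumptions rather than the project's recorded settings:

```python
from torchvision import transforms

# CIFAR-10 channel statistics (commonly used normalization constants).
CIFAR10_MEAN = (0.4914, 0.4822, 0.4465)
CIFAR10_STD = (0.2470, 0.2435, 0.2616)

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),                    # random horizontal flipping
    transforms.RandomRotation(15),                        # small random rotations
    transforms.RandomResizedCrop(32, scale=(0.8, 1.0)),   # random cropping and scaling
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),  # lighting variation
    transforms.ToTensor(),
    transforms.Normalize(CIFAR10_MEAN, CIFAR10_STD),
])
```

Augmentation applies only to the training split; the test split typically gets just `ToTensor` and `Normalize`.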
The model was optimized using the Adam optimizer, and cross-entropy loss was
used as the loss function. The training was conducted over 100 epochs with a batch size of 64.
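A condensed sketch of that setup, assuming a standard PyTorch training loop: torchvision's `resnet18` stands in for the project's own architecture, the learning rate is an assumed default, and `train_transform` is the pipeline defined above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10
from torchvision.models import resnet18

device = "cuda" if torch.cuda.is_available() else "cpu"

# Reuses the train_transform defined in the augmentation sketch above.
train_set = CIFAR10(root="./data", train=True, download=True, transform=train_transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=2)

model = resnet18(num_classes=10).to(device)  # stand-in for the project's ResNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumed value
criterion = nn.CrossEntropyLoss()

for epoch in range(100):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```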
Detailed Training Steps:
- Implemented basic network layers including ReLU activation, fully-connected layers, and softmax
activation for output (see the from-scratch sketch after this list).
- Derived gradients and performed backpropagation through each layer to update the weights.
- Applied cross-entropy loss for classification and optimized with the AdamW optimizer and
learning rate decay (a scheduler sketch also follows the list).
- Experimented with different hyperparameters such as learning rate and hidden layer size to achieve
the best performance.
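To make the from-scratch steps concrete, here is a minimal NumPy sketch of a fully-connected layer with its backward pass, plus the combined softmax and cross-entropy gradient; the initialization scheme and helper names are illustrative assumptions:

```python
import numpy as np

class Dense:
    """From-scratch fully-connected layer with manual backprop."""
    def __init__(self, n_in, n_out):
        self.W = np.random.randn(n_in, n_out) * np.sqrt(2.0 / n_in)  # He init
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x                      # cache input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        self.dW = self.x.T @ grad_out   # gradient w.r.t. weights
        self.db = grad_out.sum(axis=0)  # gradient w.r.t. bias
        return grad_out @ self.W.T      # gradient w.r.t. input, passed upstream

def relu(x):
    return np.maximum(0, x)

def relu_backward(grad_out, x):
    return grad_out * (x > 0)           # gradient flows only through positive inputs

def softmax_cross_entropy(logits, labels):
    # Numerically stable softmax followed by mean cross-entropy loss.
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    n = logits.shape[0]
    loss = -np.log(probs[np.arange(n), labels]).mean()
    # The combined softmax + cross-entropy gradient simplifies to (probs - one_hot) / n.
    grad = probs.copy()
    grad[np.arange(n), labels] -= 1.0
    return loss, grad / n
```

For the AdamW step with learning-rate decay, a hedged PyTorch sketch (the step size, decay factor, and weight decay are assumed values; `model` is the network from the loop above):

```python
import torch
from torch.optim.lr_scheduler import StepLR

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.01)
scheduler = StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(100):
    ...  # one training epoch, as in the loop shown earlier
    scheduler.step()  # decay the learning rate every 30 epochs
```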
Model Performance:
Achieved a test accuracy of 93.6% on the CIFAR-10 test set after 100 epochs, a result that placed
in the top 3 on the Kaggle leaderboard during a university competition.