Survey of image classification models for transfer learning
Department of Computer Science, Western Illinois University, Macomb, IL, USA.
Research Article
World Journal of Advanced Research and Reviews, 2024, 21(01), 373–383
Publication history:
Received on 22 November 2023; revised on 01 January 2024; accepted on 03 January 2024
Abstract:
Training models for image classification is a time-consuming task. Training a model from scratch has always been a challenge for researchers and practitioners, partly because of the large datasets required, which are complex and sometimes nearly impossible to source. This has recently led to the widespread use of pre-trained models for image classification. Pre-trained models have gained popularity because they initialize a model with appropriate weights, significantly reducing both the training time and the amount of data required. Many pre-trained image classification models are in use today; this paper investigates the performance of ten leading models (ConvNeXt, DenseNet, EfficientNet, InceptionResNet, Inception, MobileNet, ResNet, VGG, Xception, NASNet) on the Caltech101 dataset, containing 101 object classes, and the Caltech256 dataset, containing 256 object classes. All models were pre-trained on the ImageNet-1k dataset. TensorFlow and Keras were used as the frameworks for developing the experiments. Accuracy, precision, recall, and F1-score were used as metrics for evaluating model performance. The findings and analysis underscore the significance of training time, number of epochs, and choice of model in image classification.
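To make the setup concrete, the following is a minimal transfer-learning sketch in TensorFlow/Keras along the lines the abstract describes: a backbone pre-trained on ImageNet-1k is loaded without its classification head, frozen, and topped with a new softmax layer for the target classes. The ResNet50 backbone, input size, optimizer, and single-dense-layer head are illustrative assumptions, not the paper's exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

NUM_CLASSES = 101  # Caltech101; set to the class count of the target dataset

# Load a backbone pre-trained on ImageNet-1k, dropping its 1000-way head.
base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pre-trained weights; only the new head is trained

# Attach a new classification head for the target dataset.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_ds, validation_data=val_ds, epochs=10)
# train_ds/val_ds are placeholders for tf.data pipelines of (image, label)
# batches; precision, recall, and F1-score can then be computed on held-out
# predictions, e.g. with sklearn.metrics.classification_report.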
Keywords:
Pre-trained models; Caltech101; Caltech256; Convolutional Neural Networks; Keras; Transfer Learning
Copyright information:
Copyright © 2024 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.