
A Comparative Analysis of Convolutional Neural Network (CNN): MobileNetV2 and Xception for Butterfly Species Classification

1Faculty of Computer Science, Universitas Dian Nuswantoro, Indonesia

2Software Engineer, The MathWorks, Inc., United States

Received: 5 May 2025; Revised: 25 May 2025; Accepted: 29 May 2025; Available online: 29 May 2025; Published: 31 May 2025.
Editor(s): Ferda Ernawan
Open Access Copyright (c) 2025 The authors. Published by the Department of Informatics, Universitas Diponegoro
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Abstract
This study compares the effectiveness and efficiency of two convolutional neural network architectures, MobileNetV2 and Xception, for automated butterfly species classification. As biodiversity monitoring gains significance, effective species identification technologies are crucial for conservation. The research used a dataset of 100 butterfly species comprising 12,594 training images and 1,000 images for validation and testing. Transfer learning with pre-trained ImageNet weights was implemented, and both models were extended with custom classification layers. Data augmentation and class weighting mitigated dataset imbalance. Experimental results show Xception attained 93.40% test accuracy compared to MobileNetV2's 93.20%. These high accuracy rates were achieved through transfer learning that preserved general feature extraction capabilities, comprehensive class balancing, and learning rate strategies tailored to each architecture. Despite the minimal performance difference, MobileNetV2 offers significant computational efficiency advantages, with 4.15M parameters against Xception's 25.27M, while Xception provides marginally better classification accuracy. This study contributes to entomological research and highlights the trade-off between model complexity and performance in fine-grained classification, supporting implementation decisions for butterfly identification systems in practical applications.
Keywords: Classification; Convolutional Neural Network; MobileNetV2; Xception; Butterfly
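
The abstract describes the training recipe only at a high level. As a minimal illustrative sketch, the Keras snippet below shows how such a pipeline might be assembled: a frozen ImageNet-pre-trained MobileNetV2 backbone, a custom classification head, on-the-fly data augmentation, and inverse-frequency class weights. The head layout, dropout rate, augmentation choices, learning rate, and the placeholder `train_labels` array are assumptions for illustration, not the authors' reported configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 100          # butterfly species in the dataset
IMG_SIZE = (224, 224)      # MobileNetV2's default input resolution

# Pre-trained ImageNet backbone with its original classifier removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet"
)
base.trainable = False     # freeze the general feature extraction layers

# Augmentation choices and head layout here are illustrative assumptions.
model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    layers.RandomFlip("horizontal"),            # data augmentation
    layers.RandomRotation(0.1),
    layers.Rescaling(1.0 / 127.5, offset=-1.0), # MobileNetV2 expects [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),                        # assumed regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # assumed rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Inverse-frequency class weights to counter dataset imbalance.
# `train_labels` stands in for the real per-image integer labels.
train_labels = np.random.randint(0, NUM_CLASSES, size=12594)  # placeholder
counts = np.bincount(train_labels, minlength=NUM_CLASSES)
class_weight = {
    i: len(train_labels) / (NUM_CLASSES * max(c, 1))
    for i, c in enumerate(counts)
}

# model.fit(train_ds, validation_data=val_ds, epochs=20,
#           class_weight=class_weight)
```

Substituting `tf.keras.applications.Xception` (which defaults to 299×299 inputs and uses the same [-1, 1] input scaling) would yield the second model in the comparison; the reported 4.15M vs. 25.27M parameter gap is what makes MobileNetV2 attractive for mobile or field deployment despite its slightly lower accuracy.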


