Comparison of Classification of Birds Using Lightweight Deep Convolutional Neural Networks

  Aldi Jakaria (1*), Hilman Ferdinandus Pardede (2)

(1) Fakultas Teknologi Informasi, Universitas Nusa Mandiri - Indonesia
(2) Research Center for Data and Information Sciences, National Research and Innovation Agency - Indonesia
(*) Corresponding Author

Received: September 21, 2022; Revised: November 23, 2022
Accepted: December 16, 2022; Published: December 31, 2022


How to cite (IEEE): A. Jakaria and H. F. Pardede, "Comparison of Classification of Birds Using Lightweight Deep Convolutional Neural Networks," Jurnal Elektronika dan Telekomunikasi, vol. 22, no. 2, pp. 87-94, Dec. 2022. doi: 10.55981/jet.503

Abstract

Environmental scientists often use birds to understand ecosystems because birds are sensitive to environmental changes, but few experts are available to identify them. An automatic system that can classify bird species is therefore needed to make recognition easier. Many models are available to choose from, but some require very high computational resources during training; reducing training time wastes less electrical energy and thus benefits the environment. For this reason, it is necessary to test whether a model with low computational complexity and a short training time can still produce good performance. Given the many neural network models available, this study classifies bird species using the EfficientNet, EfficientNetV2, MobileNet, MobileNetV2, and NasnetMobile models to determine whether these models can perform well. The results show that all of the tested models perform well, with accuracies between 95% and 97%. The MobileNetV2 model is the most efficient, achieving the smallest training time while maintaining good performance.
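
The five lightweight backbones named in the abstract are also available as pretrained models in tf.keras.applications. Below is a minimal sketch, not the authors' implementation, of how such a comparison could be set up as transfer learning in TensorFlow/Keras. The 224x224 input size, the frozen ImageNet backbone, the Adam optimizer, one-hot labels, and the num_classes value are illustrative assumptions; the B0 variants stand in for EfficientNet and EfficientNetV2 since the exact variants are not stated in the abstract, and tf.keras.applications.EfficientNetV2B0 requires a recent TensorFlow version (2.8 or later).

# Minimal sketch (assumptions noted above), not the paper's code.
import tensorflow as tf

num_classes = 400                # hypothetical number of bird species
input_shape = (224, 224, 3)      # assumed input resolution

# Lightweight backbones compared in the paper, as exposed by tf.keras.applications.
backbones = {
    "EfficientNetB0":   tf.keras.applications.EfficientNetB0,
    "EfficientNetV2B0": tf.keras.applications.EfficientNetV2B0,  # TF >= 2.8
    "MobileNet":        tf.keras.applications.MobileNet,
    "MobileNetV2":      tf.keras.applications.MobileNetV2,
    "NASNetMobile":     tf.keras.applications.NASNetMobile,
}

def build_classifier(backbone_fn):
    # Load the backbone with ImageNet weights, drop its classification head,
    # and global-average-pool its features.
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=input_shape, pooling="avg")
    base.trainable = False  # pure feature extraction; fine-tuning is optional
    inputs = tf.keras.Input(shape=input_shape)
    x = base(inputs, training=False)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    # Assumes one-hot encoded labels; each backbone also has its own
    # preprocess_input that a real data pipeline would apply.
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example: build the MobileNetV2 variant and inspect its size.
model = build_classifier(backbones["MobileNetV2"])
model.summary()

Training each variant on the same data pipeline and recording wall-clock training time alongside accuracy would reproduce the kind of efficiency comparison the abstract describes.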


  http://dx.doi.org/10.55981/jet.503

Keywords


EfficientNet; EfficientNetV2; MobileNet; MobileNetV2; NasnetMobile

Copyright (c) 2022 National Research and Innovation Agency

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.