Cataloguing of Animal Species Utilizing Convolutional Neural Networks
DOI: https://doi.org/10.63682/jns.v14i16S.4296

Keywords: N/A

Abstract
The growth of urban areas in contemporary times has led to significant habitat displacement in forested regions[1]. Consequently, wild animals are compelled to enter human settlements, which often disrupts their natural behaviors. Frequently, the search for food drives these animals to venture into populated areas. This situation poses a real threat to humans, who may inadvertently encounter these animals when they are in a heightened state of aggression. Therefore, there is a pressing need to identify wild animals at the periphery of human communities adjacent to natural habitats[2]. An effective, reliable, and robust early warning system would greatly reduce the risk of deadly human-animal conflicts, safeguarding human lives while protecting endangered species. Moreover, such a system would also be useful in wildlife sanctuaries and biosphere reserves for monitoring animal movement at the border areas of such establishments, which have often proved difficult to control[4]. The use of technology and robust cameras is not an alien concept in most major biosphere reserves and national parks around the world. Although there has been considerable progress, software-based tools have not been explored to a satisfactory extent in these use cases[3].
Computer vision has the ability to transform the tracking and monitoring process with the accuracy that its components and supporting techniques provide[5][8]. The automation-driven reduction of man-hours spent searching for and tracking wild animals is perhaps the biggest potential boon that computer vision can provide. The pre-processing involved in applying computer vision algorithms is often under-documented, although it plays a key role in the success of the algorithm[6]. A deep understanding of the nature of the inputs is necessary to make appropriate changes at crucial junctures of processing to meet the often-demanding criteria imposed by complicated deep learning algorithms. Transforming the images is invariably necessary because of the erratic nature of real-world data feeds[4]. The absence of an artificial synthesis element in the generation of inputs via raw camera stills adds to the intricacies involved in the image processing component. Many researchers have studied image classification. The purpose of this study is to establish a model that can classify animal images using a CNN. In this paper, first of all, a suitable data set is collected for the research[5]. Secondly, the parameters and layers of the neural network are chosen. Finally, a suitable criterion is established for evaluating the model so as to find the most suitable model to solve the research problem[8].
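The building blocks outlined above — transforming raw camera stills into feature maps via convolution, then downsampling them — can be sketched in plain NumPy. This is a minimal illustration of how a single CNN layer operates; the kernel, frame size, and pooling window are illustrative assumptions, not the layer parameters chosen in the paper.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1), the core CNN operation."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling, downsampling the feature map."""
    h, w = fmap.shape
    h2, w2 = h // size, w // size
    return fmap[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

# A 64x64 grayscale "camera still" (random stand-in for a real frame).
frame = np.random.rand(64, 64)
# A simple 3x3 vertical-edge filter; in a trained CNN, kernels are learned.
edge_kernel = np.array([[1, 0, -1]] * 3, dtype=float)

features = conv2d(frame, edge_kernel)  # (62, 62): output side is 64 - 3 + 1
pooled = max_pool(features)            # (31, 31): halved by 2x2 pooling
print(features.shape, pooled.shape)
```

The shape arithmetic shown in the comments (output side = input side − kernel side + 1 for a valid convolution, then integer division by the pooling window) is what governs how the layer and parameter choices mentioned above determine the size of each successive feature map.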
References
[1] Hanguen Kim, Jungmo Koo, Donghoon Kim, Sungwoo Jung, Jae-Uk Shin, Serin Lee, Hyun Myung, "Image-Based Monitoring of Jellyfish Using Deep Learning Architecture," IEEE Sensors Journal, vol. 16, no. 8, 2016.
[2] Carlos Silva, Daniel Welfer, Francisco Paulo Gioda, Claudia Dornelles, "Cattle Brand Recognition using Convolutional Neural Network and Support Vector Machines," IEEE Latin America Transactions, vol. 15, no. 2, pp. 310-316, 2017.
[3] S. Yang, L. Bo, J. Wang, and L. G. Shapiro, "Unsupervised template learning for fine-grained object recognition," in Advances in Neural Information Processing Systems, 2012, pp. 3122-3130.
[4] Dhruv Rathi, Sushant Jain, S. Indu, "Underwater Fish Species Classification using Convolutional Neural Network and Deep Learning," International Conference on Advances in Pattern Recognition, 2017.
[5] Mohamad Aqib Haqmi Abas, Nurlaila Ismail, Ahmad Ishan Mohd Yassin, Mohd Nasir Taib, "VGG16 for plant image classification with transfer learning and data augmentation," International Journal of Engineering & Technology, 2018.
[6] Kaggle.com, Animals10 dataset [Online]. Available at: https://www.kaggle.com/alessiocorrado99/animals10.
[7] M. A. Al-antari, P. Rivera, M. A. Al-masni, E. Valarezo, G. Gi, T. Y. Kim, H. M. Park, and T. S. Kim, "An automatic recognition of multiclass skin lesions via Deep Learning Convolutional Neural Networks," in ISIC2018: Skin Image Analysis Workshop and Challenge, 2018.
[8] L. Hu and Q. Ge, "Automatic facial expression recognition based on MobileNetV2 in real-time," in Journal of Physics: Conference Series, vol. 1549, no. 2, 2020.
[9] M. Hussain, J. J. Bird, D. R. Faria, "A study on CNN transfer learning for image classification," in UK Workshop on Computational Intelligence, Sep. 2018, pp. 191-202. Springer, Cham.
[10] Y. Gao and K. Mosalam, "Deep transfer learning for image-based structural damage recognition," Computer-Aided Civil and Infrastructure Engineering, 2018.
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
You are free to:
- Share — copy and redistribute the material in any medium or format
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.
Terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.