Utilizing Deep Learning for Automated Detection of Endangered Species in Camera Trap Data
Dr. Suhas Ballal, Assistant Professor, Department of Biochemistry, School of Sciences, JAIN (Deemed-to-be University), Karnataka, India. b.suhas@jainuniversity.ac.in. ORCID: 0000-0002-6041-8332
Ritika Mehra, School of Engineering & Computing, Dev Bhoomi Uttarakhand University, Dehradun, India. socse.ritika@dbuu.ac.in. ORCID: 0000-0002-2785-5856
Sachin Mittal, Centre of Research Impact and Outcome, Chitkara University, Rajpura, Punjab, India. sachin.mittal.orp@chitkara.edu.in. ORCID: 0009-0006-7510-6725
Aarsi Kumari, Assistant Professor, Department of Computer Science & IT, ARKA JAIN University, Jamshedpur, Jharkhand, India. aarsi.k@arkajainuniversity.ac.in. ORCID: 0009-0008-5355-234X
Dr. S. Murugan, Professor, Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India. murugan.cse@sathyabama.ac.in. ORCID: 0000-0001-5503-8960
Dr. Satya Narayan Satapathy, Associate Professor, Department of Entomology, Institute of Agricultural Sciences, Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India. satyanarayansatpathy@soa.ac.in. ORCID: 0000-0002-3202-1717
Keywords: Deep learning, endangered species, camera trap, biodiversity, conservation, image recognition, wildlife monitoring.
Abstract
Effective and efficient monitoring of endangered species is vital to biodiversity conservation, yet analysing camera trap data remains labour-intensive and prone to human error. Traditional image analysis frameworks suffer reduced classification accuracy when animals appear in low light, under occlusion, or in other complex environments. This study describes the design and development of an automated assessment pipeline based on deep learning, specifically convolutional neural networks (CNNs) with transfer learning from large-scale wildlife datasets. The framework processes raw camera trap images, assigns species identities, and flags endangered species with confidence scores, applying temporal data augmentation tailored to the environmental conditions under which the photographs were taken. Performance is assessed against both a traditional feature-engineered classifier and human expert annotations, using precision, recall, F1-score, and per-image processing time across a variety of ecologically meaningful zones and lighting contexts. The system achieves a detection accuracy of 94.6%, an improvement of more than 15% over baseline measures, and yields substantial savings in manual review time. The method offers greater efficiency for biodiversity studies involving large image volumes and supports timely conservation action where needed. The system bridges computational development and ecological fieldwork, supporting evidence-led wildlife management and policy decisions in the context of nature-inspired design for environmental engineering.
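To make the transfer-learning and confidence-scoring workflow described in the abstract concrete, the sketch below shows one way such a pipeline can be assembled in PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the ResNet-50 backbone, the species count, the confidence threshold, and the classify helper are all hypothetical choices introduced here for exposition.

```python
# Minimal transfer-learning sketch: fine-tune a pretrained ResNet-50
# to classify camera trap images and flag low-confidence detections
# for manual review. NUM_SPECIES and CONFIDENCE_THRESHOLD are
# illustrative placeholders, not values from the study.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

NUM_SPECIES = 20              # hypothetical number of target species
CONFIDENCE_THRESHOLD = 0.80   # below this, route image to manual review

# Load an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                          # freeze backbone
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)  # trainable head
model.eval()

# Standard ImageNet preprocessing; camera trap frames are resized and
# normalized the same way the backbone was originally trained.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(image_path: str):
    """Return (predicted species index, confidence, needs_review flag)."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    confidence, species_idx = probs.max(dim=1)
    needs_review = confidence.item() < CONFIDENCE_THRESHOLD
    return species_idx.item(), confidence.item(), needs_review
```

In a deployment of this kind, images flagged as needs_review would be routed to human annotators, so expert oversight is preserved for exactly the low-light and occluded cases where automated classifiers are weakest.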