Visual monitoring of wild animals has been modernized over the years through technologies such as high-definition photography and camera trapping. Researchers can now document the populations, movement, and behavior of many species using large volumes of data collected over long periods [1,2]. These tools aid population research, but photo-identification still relies on the human ability to distinguish the features of individual animals. Because identification remains a manual task, extracting information from visual data can be expensive and time-consuming. (Read our previous blog on UWERP’s EARS – Elephant Attribute Recording System – IDs Database here).
To solve this predicament, advances in Artificial Intelligence, more specifically machine learning (ML) techniques, have enabled computer-mediated identification of animals. One of the best-known ML approaches to image classification is the Convolutional Neural Network (CNN). Loosely modeled on the function of neurons in the human brain, CNNs are powerful deep learning models used for tasks such as image classification and object detection. While this technology has proven useful for animals with unique skin or coat patterns, such as whale sharks and tigers, identification of species that lack such distinguishing features, including Asian elephants, remains a challenge.
Elgiriyage de Silva, an MSc student at the University of Colombo, is lead author of a recently published CNN study [3], alongside co-authors including Dr. Shermin de Silva and the Udawalawe Elephant Research Project’s (UWERP) Research Supervisor, T.V. Kumara. The study set out to determine whether CNNs are a feasible way to identify individual Asian elephants, drawing on 10 years of labeled photographs of wild Asian elephants collected by the UWERP study. The researchers considered three features for individual identification: full body, face, and ears. Two training techniques, Training from Scratch (TS) and Transfer Learning (TL), which starts from a pre-trained model, were applied to five CNN architectures: Xception, Inception V3, VGG16, ResNet50, and AlexNet. The models were evaluated on how often they identified the correct individual as the top candidate, and on how often they included the correct individual among the top five candidates.
For the full-body and face datasets, the VGG16 model trained with TL yielded the highest top-candidate accuracies, at 21.34% and 42.35% respectively. The best performer overall was the Xception model trained on ears with the TS technique, which identified the correct individual 89.02% of the time and included it among the top five candidates 99.27% of the time. Interestingly, ears turned out to be the most identifiable features for the CNN algorithm, just as they are for humans. With these impressive results, de Silva and colleagues (2022) concluded that it is possible to accurately automate the identification of Asian elephants, but with certain caveats.
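The study’s two evaluation criteria, ranking the correct individual first and including it among the top five candidates, correspond to what is usually called top-1 and top-5 accuracy. A minimal sketch of how such a metric is computed (the elephant IDs and scores below are invented for illustration, not taken from the study):

```python
def top_k_accuracy(predictions, true_labels, k):
    """Fraction of test photos whose true individual appears
    among the k highest-scoring candidates."""
    hits = 0
    for scores, truth in zip(predictions, true_labels):
        # Rank candidate IDs by descending model score
        ranked = sorted(scores, key=scores.get, reverse=True)
        if truth in ranked[:k]:
            hits += 1
    return hits / len(true_labels)

# Hypothetical classifier scores for two test photos over four known elephants
predictions = [
    {"T001": 0.70, "T002": 0.15, "T003": 0.10, "T004": 0.05},
    {"T001": 0.20, "T002": 0.35, "T003": 0.40, "T004": 0.05},
]
true_labels = ["T001", "T002"]

print(top_k_accuracy(predictions, true_labels, k=1))  # 0.5: only photo 1 is correct at rank 1
print(top_k_accuracy(predictions, true_labels, k=2))  # 1.0: photo 2's true ID is ranked second
```

A top-5 score of 99.27% thus means that for nearly every test photo, the correct elephant appeared somewhere in the model’s five best guesses.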
Since automated identification requires training data, the study notes that the first limit on the system’s efficiency is the number of photos available per individual for model training. Because the target populations are wild, building an extensive library of labeled photographs takes time and would still require initial identification of each individual by a human, partly defeating the purpose of automation. (See our new C.O.O.’s blog here and here from her PhD fieldwork back in 2011 and 2013, where she continued to unravel identity puzzles for the calves she was studying in Udawalawe NP!) Furthermore, field conditions may restrict the collection of full-body and face photos, with vegetation and foliage obstructing the view. Finally, Asian elephants’ features change over time, through aging or injuries, so the system would require periodic updating and re-training.
These limitations raise the question: is it practical to use CNN techniques for Asian elephants in the wild?
According to de Silva et al. (2022), this technique is more feasible for long-term monitoring, given the amount of time and resources needed to train, test, and maintain the system. As long as a population is well cataloged with sufficient images, automated identification can save time by narrowing down potential candidates. Since the ideal scenario is one with a large number of high-quality photographs for each individual, this technique can aid wildlife stakeholders in activities such as tracking rehabilitated and translocated individuals and countering illegal trade involving falsified documentation [5,6].
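In practice, “narrowing down potential candidates” means the model hands a researcher a short list to verify rather than a definitive answer. A hedged sketch of that workflow (the catalogue IDs and scores are invented for illustration):

```python
def shortlist(scores, k=5):
    """Return the k catalogue IDs with the highest model scores,
    best first, for a human to verify against the photo."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Invented scores from a hypothetical classifier over a small catalogue
scores = {"UW-012": 0.41, "UW-087": 0.23, "UW-003": 0.14,
          "UW-150": 0.09, "UW-066": 0.08, "UW-021": 0.05}

print(shortlist(scores))  # ['UW-012', 'UW-087', 'UW-003', 'UW-150', 'UW-066']
```

Instead of scanning an entire catalogue, the researcher now compares the new photo against only five candidates, which is where the time savings come from.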
1Kays, R., Tilak, S., Kranstauber, B., Jansen, P. A., Carbone, C., Rowcliffe, M. J., Fountain, T., Eggert, J., & He, Z. (2010). Monitoring wild animal communities with arrays of motion sensitive camera traps. ArXiv:1009.5718 [Cs]. http://arxiv.org/abs/1009.5718
2O’Brien, T. (2011). Camera Traps in Animal Ecology (pp. 71–96). https://doi.org/10.1007/978-4-431-99495-4_6
3de Silva, E. M. K., Kumarasinghe, P., Indrajith, K. K. D. A. K., Pushpakumara, T. V., Vimukthi, R. D. Y., de Zoysa, K., Gunawardana, K., & de Silva, S. (2022). Feasibility of using convolutional neural networks for individual-identification of wild Asian elephants. Mammalian Biology. https://doi.org/10.1007/s42991-021-00206-2
4Norouzzadeh, M. S., Nguyen, A., Kosmala, M., Swanson, A., Palmer, M. S., Packer, C., & Clune, J. (2018). Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proceedings of the National Academy of Sciences, 115(25), E5716–E5725. https://doi.org/10.1073/pnas.1719367115
5Menon, V., & Tiwari, S. K. (2019). Population status of Asian elephants Elephas maximus and key threats. International Zoo Yearbook, 53(1), 17–30. https://doi.org/10.1111/izy.12247
6Prakash, T. S. L., Indrajith, W. U., Aththanayaka, A., Karunarathna, S., Botejue, M., Nijman, V., & Henkanaththegedara, S. (2020). Illegal capture and internal trade of wild Asian elephants (Elephas maximus) in Sri Lanka. Nature Conservation, 42, 51.