Visual monitoring of wild animals has been modernized over the years through technologies such as high-definition photography and camera trapping[1]. Researchers can now document the populations, movements, and behavior of certain species using large volumes of data collected over long periods[1,2]. These tools aid population research, but photo-identification still relies on our ability as humans to distinguish particular features of individuals[3]. Because identification remains a manual task, extracting information from visual data can be expensive and time-consuming[4]. (Read our previous blog on UWERP’s EARS – Elephant Attribute Recording System – IDs Database here).
MSc student Elgiriyage de Silva of the University of Colombo is lead author on the recently published CNN study, alongside co-authors including Dr. Shermin de Silva and the Udawalawe Elephant Research Project’s (UWERP) Research Supervisor, T.V. Kumara. The study used CNNs to assess the feasibility of this technology for identifying Asian elephants, drawing on 10 years of labeled photographs of wild Asian elephants collected by the UWERP study. The researchers considered three body regions for individual identification: full body, face, and ears. Two techniques, Training from Scratch (TS) and Transfer Learning (TL), the latter making use of a pre-trained model, were applied to five CNN architectures: Xception, Inception V3, VGG16, ResNet50, and AlexNet. These models were evaluated on how often they identified the correct individual as the top candidate, or included the correct individual among the top five candidates[3].
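To make the evaluation criterion concrete, here is a minimal sketch of how top-1 and top-5 identification accuracy can be computed from a model's candidate scores. The function name, scores, and labels below are illustrative placeholders, not the study's actual code or data.

```python
def top_k_accuracy(scores, true_labels, k):
    """Fraction of query photos whose true individual appears among
    the k highest-scoring candidate IDs."""
    hits = 0
    for row, truth in zip(scores, true_labels):
        # Rank candidate IDs by descending score and keep the top k.
        ranked = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        if truth in ranked:
            hits += 1
    return hits / len(true_labels)

# Toy example: 3 query photos scored against 6 known individuals.
scores = [
    [0.10, 0.70, 0.05, 0.05, 0.05, 0.05],  # true ID 1 ranked first
    [0.30, 0.10, 0.25, 0.20, 0.10, 0.05],  # true ID 2 ranked second
    [0.05, 0.05, 0.05, 0.05, 0.10, 0.70],  # true ID 0 ranked low
]
true_labels = [1, 2, 0]

print(top_k_accuracy(scores, true_labels, 1))  # top-1 accuracy
print(top_k_accuracy(scores, true_labels, 5))  # top-5 accuracy
```

A model can thus score well on the top-5 criterion even when its single best guess is often wrong, which is useful in practice: a researcher can quickly confirm the right elephant from a short candidate list.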
