Machine Learning for Conservation

Species Identification and Localization in Camera Trap Images

Project Goals

Why a standard dataset?

A characteristic of deep learning models is that they require large amounts of training data, and their performance continues to improve as more training data are provided. Numerous convolutional neural network (CNN) based implementations are being tested to automate the detection and labeling of animal species in camera trap photos (Castelblanco et al. 2017, Giraldo-Zuluaga et al. 2017, Norouzzadeh et al. 2017, Chen et al. 2014, Yu et al. 2013). However, this work is mostly being done independently, and training data are not broadly and openly shared, often due to privacy concerns and the inconvenience of preparing and distributing a massive dataset to other users. As a result, direct comparisons of model accuracy are impossible. The Animal Detection Network intends to alleviate these limitations.

Standard, labeled datasets have been an integral component of the remarkable advances in deep learning in recent years. For example, the ImageNet project (ImageNet 2016, Deng et al. 2009) has created a dataset of over 14 million images, which allows researchers to take advantage of a massive labeling effort, focus their time on developing new algorithms, and directly compare accuracy with other models trained on the same dataset. ImageNet also hosts an annual Large Scale Visual Recognition Challenge (ILSVRC) (Russakovsky et al. 2015) to evaluate and directly compare object detection and image classification algorithms at large scale. Competitions and standard datasets provide a platform to assess the progress of individual teams and, more generally, the state of the art.

Assembling the dataset

The Animal Detection Network intends to identify partners who will commit to providing a curated set of annotated images for a species to seed the main public dataset. The current focus of the Animal Detection Network is on camera trap (i.e., trail camera) images of nonthreatened and nonendangered species.
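As an illustration only, a single annotated image might contribute a record like the sketch below, loosely modeled on the bounding-box convention popularized by Microsoft COCO (Lin et al. 2014). The file name, field names, and schema here are assumptions for illustration, not the actual Andenet format:

```python
import json

# Hypothetical annotation record for one camera trap image.
# Boxes use the common [x, y, width, height] pixel convention;
# one image may contain several animals, hence a list.
annotation = {
    "image": "trap_042_20180206_0315.jpg",      # assumed file name
    "width": 1920,
    "height": 1080,
    "annotations": [
        {
            "label": "Odocoileus virginianus",  # species (white-tailed deer)
            "bbox": [412, 615, 380, 290],       # [x, y, width, height] in pixels
        }
    ],
}

print(json.dumps(annotation, indent=2))
```

Records in this shape serialize cleanly to JSON, so a curator's 5,000 images per species can be packaged as one annotation file alongside the image set.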

Curators will be responsible for annotating, verifying, and providing the data for their species of choice. The Andenet-Desktop software will be used by curators and/or their teams to annotate, review, and package their data. The packaging portion of the workflow removes all visible and embedded metadata so that curators will not compromise their project’s findings or ongoing research when contributing to an open dataset. The objective is to accumulate 5,000 annotated images for each species that will be included in the dataset.
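The metadata-stripping step can be sketched for the common JPEG case. The function below is a hypothetical, simplified stand-in, not the actual Andenet-Desktop implementation: it drops APP1 segments (EXIF/XMP, which can embed GPS coordinates and camera details) and comment segments from a well-formed JPEG byte stream while leaving the image data untouched.

```python
def strip_metadata(jpeg_bytes):
    """Drop APP1 (EXIF/XMP) and COM segments from a well-formed JPEG.

    Hypothetical sketch: assumes valid marker structure and does not
    handle malformed files or non-JPEG formats.
    """
    out = bytearray(jpeg_bytes[:2])          # keep the SOI marker (FF D8)
    i = 2
    while i < len(jpeg_bytes):
        marker = jpeg_bytes[i + 1]           # second byte of the FF xx marker
        if marker == 0xDA:                   # SOS: entropy-coded image data
            out.extend(jpeg_bytes[i:])       # copy everything that remains
            break
        # Each non-SOS segment carries a 2-byte big-endian length that
        # counts the length field itself plus the payload.
        seg_len = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker not in (0xE1, 0xFE):       # keep all but APP1 and COM
            out.extend(jpeg_bytes[i:i + 2 + seg_len])
        i += 2 + seg_len
    return bytes(out)
```

A real pipeline would also need to handle other file formats and any metadata burned into the image pixels themselves (time stamps, temperature, camera labels), which no byte-level filter can remove.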

Contact Us

If you have any questions or are interested in being a data curator, please contact Ned Horning (Director of Applied Biodiversity Informatics, Center for Biodiversity and Conservation) and Peter Ersts (Software Developer, Center for Biodiversity and Conservation).

2018 Objectives

Progress

References

Castelblanco LP, Narváez CL, Pulido AD. 2017. Methodology for mammal classification in camera trap images. Proc. SPIE 10341, Ninth International Conference on Machine Vision (ICMV 2016). doi: 10.1117/12.2268732

Chen G, Han TX, He Z, Kays R, Forrester T. 2014. Deep convolutional neural network based species recognition for wild animal monitoring. In: Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 858-862.

Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. 2009. ImageNet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, pp. 248-255. doi: 10.1109/CVPR.2009.5206848

Giraldo-Zuluaga JH, Gomez A, Salazar A, Diaz-Pulido A. 2017. Camera trap images segmentation using multi-layer robust principal component analysis. Submitted to ICIP 2017. https://arxiv.org/abs/1701.08180

ImageNet. 2016. http://www.image-net.org [Accessed 6 Feb. 2018]

Lin TY, Maire M, Belongie S, Bourdev L, Girshick R, Hays J, Perona P, Ramanan D, Zitnick CL, Dollar P. 2014. Microsoft COCO: Common Objects in Context. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T (eds) Computer Vision – ECCV 2014. Lecture Notes in Computer Science, vol 8693. Springer, Cham. doi: 10.1007/978-3-319-10602-1_48

Norouzzadeh MS, Nguyen A, Kosmala M, Swanson A, Packer C, Clune J. 2017. Automatically identifying wild animals in camera trap images with deep learning. https://arxiv.org/abs/1703.05830 [Accessed 6 Feb. 2018]

Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision. doi: 10.1007/s11263-015-0816-y

Yu X, Wang J, Kays R, Jansen PA, Wang T, Huang T. 2013. Automated identification of animal species in camera trap images. EURASIP Journal on Image and Video Processing 2013: 52. doi: 10.1186/1687-5281-2013-52
