Deep learning for all: managing and analyzing underwater and remote sensing imagery on the web using BisQue

Abstract

Logistical and financial limitations of underwater operations are inherent in marine science, including biodiversity observation. Imagery is a promising way to address these challenges, but the diversity of organisms thwarts simple automated analysis. Recent developments in computer vision methods, such as convolutional neural networks (CNNs), are promising for automated classification, detection and segmentation tasks, but they are typically computationally expensive and require extensive training on large datasets. Therefore, harnessing distributed computation, large storage and human annotations of diverse marine datasets is crucial for effective application of these methods. BisQue is a cloud-based system for management, annotation, visualization, analysis and data mining of complex multi-dimensional underwater and remote sensing imagery and associated data. It is designed to hide the complexity of distributed storage, large computational clusters, diverse data formats and heterogeneous computational environments behind a user-friendly web-based interface. BisQue is built around the idea of flexible, hierarchical annotations defined by the user. Such textual and graphical annotations can describe captured attributes and the relationships between data elements. Annotations are powerful enough to describe cells in fluorescent 4D images, fish species in underwater videos and kelp beds in aerial imagery. We are developing a deep learning service for automated image classification. The service allows training various models with a single click, validating their performance and providing several modes of classification: point classification for percent cover, image partitioning for substrate description and object detection for counting organisms. Semantic segmentation can be used to classify all pixels in an image, allowing estimation of organism size and species interactions.
Our experiments on identification of 11 benthic marine organisms within a dataset of 2K images with 200K annotations demonstrate good performance, with an overall accuracy of 86% and an error rate of 4%. We are now constructing a hierarchical model of more than 300 species on 6K images with 1M annotations.
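To illustrate the point-classification mode described above, the sketch below shows the surrounding pipeline: crop a fixed-size patch around each annotated (x, y) point, label each patch, and aggregate the labels into percent cover. This is a minimal illustration, not BisQue's implementation; the patch size, class names and the brightness-based stand-in classifier (which in the real service would be a trained CNN) are all assumptions.

```python
import numpy as np

PATCH = 64  # assumed patch size (pixels) around each annotated point

def extract_patch(image, x, y, size=PATCH):
    """Crop a size x size patch roughly centred on (x, y), padding at borders."""
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    return padded[y:y + size, x:x + size]

def percent_cover(labels):
    """Fraction of annotated points assigned to each class label."""
    labels = list(labels)
    return {c: labels.count(c) / len(labels) for c in set(labels)}

# Example with synthetic data: a fake RGB image and three annotated points.
rng = np.random.default_rng(0)
image = rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)
points = [(100, 200), (320, 240), (10, 5)]  # (x, y) point annotations
patches = [extract_patch(image, x, y) for x, y in points]

# Stand-in classifier: label each patch by mean brightness, purely to
# complete the pipeline; BisQue would apply a trained CNN here instead.
labels = ["kelp" if p.mean() > 127 else "substrate" for p in patches]
cover = percent_cover(labels)  # e.g. {"kelp": ..., "substrate": ...}
```

With per-point predictions in hand, percent cover for a transect is simply the fraction of points assigned to each class, which is why point annotation is such an efficient labeling strategy for benthic surveys.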

D.V. Fedorov, K.G. Kvilekval, B.S. Manjunath, B. Doheny, S. Sampson, R.J. Miller,
3rd Workshop on Automated Analysis of Video Data for Wildlife Surveillance, Santa Rosa, CA, Mar. 2017.
Node ID: 704, Lab: VRL, Target: Workshop
Subject: [Bio-Image Informatics]