UAV Sensor Fusion with Latent Dynamic Conditional Random Fields in Coronal Plane Estimation

Abstract

We present a real-time body orientation estimation method for a micro-Unmanned Air Vehicle (UAV) video stream. This work is part of a fully autonomous UAV system that can maneuver to face a single individual in challenging outdoor environments. Our body orientation estimation consists of the following steps: (a) obtaining a set of visual appearance models for each body orientation, where each model is tagged with a set of scene information (obtained from sensors); (b) exploiting the mutual information of on-board sensors using latent-dynamic conditional random fields (LDCRF); (c) characterizing each visual appearance model with the most discriminative sensor information; (d) fast estimation of body orientation during the test flights given the LDCRF parameters and the corresponding sensor readings. The key aspect of our approach is to add sparsity to the sensor readings with latent variables, followed by long-range dependency analysis. Experimental results obtained over real-time video streams demonstrate a significant improvement in both speed (15 fps) and accuracy (72%) compared to state-of-the-art techniques that rely only on visual data. Video demonstrations of our autonomous flights (from both ground view and aerial view) are included in the supplementary material.
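To make step (d) concrete, the sketch below shows how an LDCRF-style model can estimate a per-frame label by marginalizing over latent states. In an LDCRF, each label (here, a body orientation) owns a disjoint set of hidden states, and inference sums over those states. This is a hypothetical minimal illustration, not the paper's implementation: the number of orientation bins, latent states per label, and the forward-filtering recursion (in place of full CRF inference with learned feature weights) are all assumptions for the sketch.

```python
import numpy as np

# Assumed toy dimensions (not from the paper):
N_LABELS = 8          # discrete body-orientation bins
STATES_PER_LABEL = 3  # latent states owned by each label (LDCRF constraint)
N_STATES = N_LABELS * STATES_PER_LABEL

def filter_labels(emission, transition, prior):
    """Forward filtering over hidden states, then marginalization to labels.

    emission:   (T, N_STATES) per-frame scores from appearance + sensor cues
    transition: (N_STATES, N_STATES) row-stochastic hidden-state transitions
    prior:      (N_STATES,) initial hidden-state distribution
    Returns:    (T, N_LABELS) per-frame label posteriors.
    """
    T = emission.shape[0]
    out = np.empty((T, N_LABELS))
    alpha = prior * emission[0]
    alpha /= alpha.sum()
    for t in range(T):
        if t > 0:
            # Propagate belief one frame, weight by the new frame's evidence.
            alpha = (alpha @ transition) * emission[t]
            alpha /= alpha.sum()
        # Sum the hidden-state marginals belonging to each orientation label.
        out[t] = alpha.reshape(N_LABELS, STATES_PER_LABEL).sum(axis=1)
    return out
```

The disjoint state-to-label mapping is what lets the latent layer absorb sensor sparsity and sub-structure within each orientation while keeping the final estimate a simple sum per label; at 24 hidden states per frame, this recursion is cheap enough for the real-time setting described above.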

[PDF] [BibTex]
Amir M. Rahimi, R. Ruschel, B. S. Manjunath,
IEEE, Las Vegas, NV, Jun. 2016.
Lab: VRL, Target: Proceedings
Subject: Image and Video Segmentation