KESTREL: Video Analytics for Augmented Multi-Camera Vehicle Tracking

Abstract

As mobile devices have become more powerful and GPU-equipped, vision-based applications are becoming feasible on them. Mobile devices can augment fixed-camera surveillance systems to improve coverage and accuracy, but it is unclear how to leverage these mobile devices while respecting their processing, communication, and energy constraints. As a first step toward answering this question, this paper discusses Kestrel, a system that tracks vehicles across a hybrid camera network. In Kestrel, fixed camera feeds are processed on the cloud, and mobile devices are invoked to resolve ambiguities in vehicle tracks. Kestrel’s mobile pipeline detects objects using a deep neural network, extracts attributes using cheap visual features, and resolves path ambiguities by carefully associating vehicle visual descriptors, while using several optimizations to conserve energy and reduce latency. We evaluate Kestrel on a heterogeneous dataset including both mobile and static cameras from a campus surveillance network. The results demonstrate that Kestrel on a hybrid network can achieve precision and recall comparable to a fixed-camera network of the same size and topology.
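To make the pipeline concrete, the sketch below illustrates the general idea of cross-camera association using cheap visual descriptors, as described in the abstract. It is not Kestrel's actual code: the detector output is stubbed with random crops, and the choice of color histograms plus Hungarian matching is an assumption made for illustration.

```python
# Illustrative sketch (not Kestrel's implementation): associate vehicle
# detections from two cameras by comparing cheap visual descriptors.
import numpy as np
from scipy.optimize import linear_sum_assignment


def color_histogram(patch, bins=8):
    """Cheap visual descriptor: normalized per-channel color histogram of a vehicle crop."""
    hist = [np.histogram(patch[..., c], bins=bins, range=(0, 255))[0]
            for c in range(patch.shape[-1])]
    hist = np.concatenate(hist).astype(np.float64)
    return hist / (hist.sum() + 1e-9)


def associate(descs_a, descs_b):
    """Match descriptors from camera A to camera B with minimum total L1 distance (Hungarian assignment)."""
    cost = np.array([[np.abs(a - b).sum() for b in descs_b] for a in descs_a])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))


# Random crops stand in for the bounding boxes a DNN detector would produce.
rng = np.random.default_rng(0)
cam_a = [rng.integers(0, 256, (64, 64, 3)) for _ in range(3)]
cam_b = [rng.integers(0, 256, (64, 64, 3)) for _ in range(3)]
matches = associate([color_histogram(p) for p in cam_a],
                    [color_histogram(p) for p in cam_b])
print(matches)  # list of (index in camera A, index in camera B) pairs
```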

Hang Qiu, Xiaochen Liu, Swati Rallapalli, Archith J. Bency, Rahul Urgaonkar, B.S. Manjunath, Ramesh Govindan, Kevin Chan.
In Proceedings of the ACM/IEEE International Conference on Internet-of-Things Design and Implementation (IoTDI), Orlando, Florida, April 2018.