KESTREL: Video Analytics for Augmented Multi-Camera Vehicle Tracking
Abstract
As mobile devices have become more powerful and GPU-equipped, vision-based applications are becoming feasible on them. Mobile devices can augment fixed-camera surveillance systems to improve coverage and accuracy, but it is unclear how to leverage these devices while respecting their processing, communication, and energy constraints. As a first step towards answering this question, this paper discusses Kestrel, a system that tracks vehicles across a hybrid camera network. In Kestrel, fixed camera feeds are processed on the cloud, and mobile devices are invoked to resolve ambiguities in vehicle tracks. Kestrel’s mobile pipeline detects objects using a deep neural network, extracts attributes using cheap visual features, and resolves path ambiguities by careful association of vehicle visual descriptors, while using several optimizations to conserve energy and reduce latency. We evaluate Kestrel on a heterogeneous dataset including both mobile and static cameras from a campus surveillance network. The results demonstrate that Kestrel on a hybrid network can achieve precision and recall comparable to those of a fixed camera network of the same size and topology.
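The mobile-pipeline stages named above (detection, cheap attribute extraction, and cross-camera descriptor association) can be sketched as follows. This is a minimal illustration, not Kestrel's implementation: the deep-network detector is omitted, a per-channel color histogram stands in for the paper's unspecified cheap visual features, and `extract_descriptor`, `associate`, and the greedy nearest-neighbor matching rule are all hypothetical choices made here for concreteness.

```python
import numpy as np

def extract_descriptor(patch, bins=8):
    """Cheap visual feature for a detected vehicle patch: a normalized
    per-channel color histogram (a stand-in for Kestrel's attributes)."""
    hist = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    v = np.concatenate(hist).astype(float)
    return v / (v.sum() + 1e-9)  # normalize so descriptors are comparable

def associate(track_descriptors, candidate_descriptors, max_dist=0.5):
    """Greedy nearest-neighbor association of vehicle descriptors seen
    at one camera (tracks) with detections at another (candidates).
    Returns (track_idx, candidate_idx) pairs whose L1 distance is small."""
    matches, used = [], set()
    for i, t in enumerate(track_descriptors):
        dists = [np.abs(t - c).sum() if j not in used else np.inf
                 for j, c in enumerate(candidate_descriptors)]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j))
            used.add(j)
    return matches
```

For example, two patches of the same dominant color produce near-identical histograms (L1 distance near 0) and are associated, while differently colored vehicles exceed `max_dist` and are left unmatched, which is where the paper's ambiguity-resolution step would come in.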