Multiple View Discriminative Appearance Modeling with IMCMC for Distributed Tracking
This paper proposes a distributed multi-camera tracking algorithm based on interacting particle filters. We introduce a novel approach to multi-view appearance modeling that shares training samples between camera views. Motivated by incremental learning, we build an intermediate data representation between two camera views: generative subspaces are treated as points on a Grassmann manifold, and we sample along the geodesic connecting the subspaces learned from the two views to capture the appearance variations caused by viewpoint change. We then train a boosted appearance model on the training samples projected onto these generative subspaces. For each object we maintain two particle filters: a local filter that models the object's motion in the image plane, and a global filter that models its motion on the ground plane. The two filters are integrated into a unified Interacting Markov Chain Monte Carlo (IMCMC) framework. We also show how priors derived from scene-specific information are induced in the global particle filter to improve tracking accuracy. The proposed algorithm is validated through extensive experiments on challenging camera-network data and compares favorably with state-of-the-art object trackers.
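The geodesic sampling between two generative subspaces can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the two views each yield an orthonormal subspace basis (e.g., from incremental PCA of that view's training samples), computes the geodesic on the Grassmann manifold via principal angles, and is valid only when all principal angles are nonzero (the generic case for subspaces learned from distinct views).

```python
import numpy as np

def grassmann_geodesic(U1, U2, t):
    """Point at parameter t on the Grassmann geodesic from span(U1) to span(U2).

    U1, U2: (n, k) matrices with orthonormal columns (subspace bases from
    the two camera views). Assumes all principal angles are nonzero.
    Returns an (n, k) orthonormal basis of the intermediate subspace.
    """
    # Principal angles between the subspaces via the SVD of U1^T U2.
    V, cos_theta, Wt = np.linalg.svd(U1.T @ U2)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    A = U1 @ V        # basis of span(U1) aligned with principal directions
    B = U2 @ Wt.T     # basis of span(U2) aligned with principal directions
    # Unit directions orthogonal to A along which the geodesic moves.
    G = (B - A * cos_theta) / np.sin(theta)
    return A * np.cos(t * theta) + G * np.sin(t * theta)

# Illustrative use: sample intermediate subspaces and project training data.
rng = np.random.default_rng(0)
n, k = 20, 3
U1, _ = np.linalg.qr(rng.standard_normal((n, k)))   # stand-ins for the
U2, _ = np.linalg.qr(rng.standard_normal((n, k)))   # two views' subspaces
X = rng.standard_normal((50, n))                    # stand-in training samples
projections = [X @ grassmann_geodesic(U1, U2, t)
               for t in np.linspace(0.0, 1.0, 5)]   # features for boosting
```

At t = 0 and t = 1 the returned basis spans exactly span(U1) and span(U2), so the sampled subspaces interpolate between the two views' representations; the projected features `projections` are what a boosted appearance model would then be trained on.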
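The interaction between the local and global chains can be illustrated with a toy IMCMC sketch. This is a hypothetical, simplified version: it replaces the image-plane and ground-plane state spaces with scalar states, and shows only the generic IMCMC mechanism in which each Metropolis chain occasionally proposes adopting another chain's current state, accepted under its own target density.

```python
import numpy as np

def imcmc(log_targets, x0, n_steps, p_interact=0.2, step=0.5, rng=None):
    """Toy Interacting MCMC over several Metropolis chains.

    log_targets: one log-density per chain (in the paper, the local chain's
    target lives in the image plane and the global chain's on the ground
    plane; here they are arbitrary scalar densities for illustration).
    With probability p_interact a chain proposes the state of another chain
    (interacting mode); otherwise it makes a random-walk proposal
    (parallel mode). Returns the per-chain sample paths.
    """
    rng = np.random.default_rng() if rng is None else rng
    states = list(x0)
    samples = [[] for _ in states]
    for _ in range(n_steps):
        for i, logp in enumerate(log_targets):
            if len(states) > 1 and rng.random() < p_interact:
                # Interacting mode: propose another chain's current state.
                j = rng.choice([k for k in range(len(states)) if k != i])
                prop = states[j]
            else:
                # Parallel mode: ordinary random-walk proposal.
                prop = states[i] + step * rng.standard_normal()
            # Metropolis acceptance under this chain's own target.
            if np.log(rng.random()) < logp(prop) - logp(states[i]):
                states[i] = prop
            samples[i].append(states[i])
    return np.array(samples)
```

In the tracking setting, the interaction step is what lets the global (ground-plane) chain pull the local (image-plane) chain toward consistent hypotheses and vice versa, while each chain still evaluates proposals under its own likelihood.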