Object Browsing and Searching in a Camera Network Using Graph Models
Abstract
This paper proposes a novel system to assist human image analysts in effectively browsing and searching for objects in a camera network. In contrast to existing approaches that focus on finding global trajectories across cameras, the proposed approach directly models the relationships among raw camera observations. A graph model is proposed to represent detected/tracked objects, their appearance, and their spatio-temporal relationships. To minimize communication requirements, we assume that raw video is processed at each camera node independently to compute object identities and trajectories at video rate. However, this independent per-camera processing can result in unreliable object locations and/or trajectories. The proposed graph structure captures the uncertainty in these camera observations by effectively modeling their global relationships, and it enables a human analyst to query, browse, and search the data collected from the camera network. A novel graph ranking framework is proposed for the search and retrieval task, and the absorbing random walk algorithm is adapted to retrieve a representative and diverse set of video frames from the cameras in response to a user query. Preliminary results on a wide-area camera network are presented.
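As a rough illustration of the retrieval step summarized above, the sketch below shows one common way an absorbing random walk can be used to rank graph nodes for both relevance and diversity (in the spirit of GRASSHOPPER-style ranking): the top-ranked node is taken from the stationary distribution, previously selected nodes are turned into absorbing states, and the next pick is the node with the highest expected number of visits before absorption. This is a minimal sketch, not the paper's implementation; the similarity matrix `W`, the teleport weight `alpha`, and the function name are assumptions for illustration only.

```python
import numpy as np

def absorbing_random_walk_rank(W, k, alpha=0.25):
    """Rank k graph nodes for relevance and diversity (illustrative sketch).

    W     : (n, n) nonnegative similarity matrix between camera observations (assumed input)
    k     : number of nodes to return
    alpha : teleport weight toward a uniform prior (hypothetical parameter)
    """
    n = W.shape[0]
    # Row-normalize similarities into a transition matrix, blended with a uniform prior.
    P = W / W.sum(axis=1, keepdims=True)
    P = (1 - alpha) * P + alpha * np.full((n, n), 1.0 / n)

    # First pick: node with the largest stationary probability of P.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.abs(np.real(evecs[:, np.argmax(np.real(evals))]))
    pi /= pi.sum()
    ranked = [int(np.argmax(pi))]

    # Subsequent picks: make already-selected nodes absorbing, then select the
    # remaining node with the highest expected number of visits before absorption.
    while len(ranked) < k:
        remaining = [i for i in range(n) if i not in ranked]
        Q = P[np.ix_(remaining, remaining)]
        N = np.linalg.inv(np.eye(len(remaining)) - Q)  # fundamental matrix
        visits = N.mean(axis=0)                        # averaged over start nodes
        ranked.append(remaining[int(np.argmax(visits))])
    return ranked
```

Turning selected nodes into absorbing states penalizes candidates close to what has already been retrieved, which is why this family of methods yields a representative yet diverse result set rather than many near-duplicate frames of the same observation.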