Multimodal Photo Annotation and Retrieval on a Mobile Phone

Xavier Anguera
Telefonica Research
Via Augusta 177,
08021 Barcelona, Spain
xanguera [at] tid.es

JieJun Xu
Vision Research Lab
University of California at Santa Barbara
Santa Barbara, CA 93106, USA
jiejun [at] cs.ucsb.edu

Nuria Oliver
Telefonica Research
Via Augusta 177,
08021 Barcelona, Spain
nuriao [at] tid.es

Abstract

Mobile phones are becoming multimedia devices: users routinely capture photos and videos on their phones. As the amount of digital multimedia content grows, it becomes increasingly difficult to find specific images on the device. In this paper, we present a multimodal, mobile image retrieval prototype named MAMI (Multimodal Automatic Mobile Indexing). It allows users to annotate, index and search for digital photos on their phones via speech or image input. Speech annotations can be added at the time of capture or at a later time. Additional metadata, such as location, user identification, and date and time of capture, is stored on the phone automatically. A key advantage of MAMI is that it is implemented as a stand-alone application that runs in real time on the phone; users can therefore search their personal photo archives without requiring connectivity to a server. In this paper, we compare multimodal and monomodal approaches to image retrieval and propose a novel algorithm named the Multimodal Redundancy Reduction (MR2) Algorithm. In addition to describing the proposed approaches in detail, we present experimental results comparing the retrieval accuracy of monomodal and multimodal algorithms.
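The abstract describes per-photo metadata (location, user identification, date and time of capture) that the phone records automatically, together with speech annotations added at capture time or later. The paper's actual on-phone storage format is not reproduced here; as a rough sketch only, such a record could look like the following Python, where every class, field, and value is hypothetical:

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional, Tuple

    @dataclass
    class PhotoAnnotation:
        """One photo entry in a hypothetical on-phone index (illustrative only)."""
        photo_path: str                      # path to the image file on the phone
        user_id: str                         # recorded automatically
        captured_at: datetime                # date and time of capture
        location: Optional[Tuple[float, float]] = None  # (lat, lon), if available
        speech_notes: List[str] = field(default_factory=list)

        def add_speech_note(self, note: str) -> None:
            # Speech annotations can be attached at capture time or later.
            self.speech_notes.append(note)

    # Example: annotate a photo at capture time, then add a note afterwards.
    photo = PhotoAnnotation("img_0042.jpg", "user01",
                            datetime(2008, 10, 27, 14, 30),
                            location=(49.28, -123.12))
    photo.add_speech_note("group dinner in Vancouver")

A retrieval query would then match speech or image input against such records entirely on the device, consistent with MAMI running as a stand-alone application without a server.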
Xavier Anguera, JieJun Xu, and Nuria Oliver. In Proceedings of the ACM International Conference on Multimedia Information Retrieval (MIR 2008), Vancouver, Canada, October 2008.