Multi-label Learning with Fused Multimodal Bi-relational Graph

Abstract

The problem of multi-label image classification using multiple feature modalities is considered in this work. Given a collection of images with partial labels, we first model the association between different feature modalities and the image labels. These associations are then propagated with a graph diffusion kernel to classify the unlabeled images. Towards this objective, a novel Fused Multimodal Bi-relational Graph representation is proposed, with multiple graphs corresponding to different feature modalities and one graph corresponding to the image labels. Such a representation allows for effective exploitation of both feature complementarity and label correlation, in contrast with previous work where these two factors are considered in isolation. Furthermore, we provide a solution to learn the weight of each image graph by estimating the discriminative power of the corresponding feature modality. Experimental results with our proposed method on two standard multi-label image datasets are very promising.
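The abstract's core mechanism, spreading partial label associations over a graph with a diffusion kernel, can be illustrated with a minimal sketch. This is not the paper's fused multimodal formulation; it shows only single-graph heat-kernel label propagation, with a toy adjacency matrix `W` and label matrix `Y` chosen purely for illustration.

```python
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(W, beta=1.0):
    """Heat diffusion kernel K = exp(-beta * L) for a graph with adjacency W."""
    L = np.diag(W.sum(axis=1)) - W   # unnormalized graph Laplacian
    return expm(-beta * L)

def propagate_labels(W, Y, beta=1.0):
    """Spread partial label scores Y (n x c, zero rows for unlabeled nodes)."""
    return diffusion_kernel(W, beta) @ Y

# Toy graph: nodes 0-1 strongly connected, nodes 2-3 strongly connected,
# with a weak bridge between nodes 1 and 2.
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.1, 0.0],
              [0.0, 0.1, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
Y = np.array([[1.0, 0.0],   # node 0 labeled with class A
              [0.0, 0.0],   # node 1 unlabeled
              [0.0, 0.0],   # node 2 unlabeled
              [0.0, 1.0]])  # node 3 labeled with class B
F = propagate_labels(W, Y, beta=0.5)
# Each unlabeled node inherits the label of its nearer labeled neighbor.
assert F[1, 0] > F[1, 1]
assert F[2, 1] > F[2, 0]
```

In the paper's setting, one such graph exists per feature modality plus one label graph, and the graphs are fused with learned modality weights before propagation.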
Jiejun Xu, Vignesh Jagadeesh and B. S. Manjunath,
IEEE Transactions on Multimedia, vol. 16, no. 2, pp. 403-412, Feb. 2014.