Robot Control Gestures (RoCoG)

Abstract

Building successful collaboration between humans and robots requires efficient, effective, and natural communication. This dataset supports the study of RGB-based deep learning models for controlling robots through gestures (e.g., “follow me”). To address the challenge of collecting high-quality annotated data from human subjects, we turned to synthetic data for this domain: the dataset includes both real videos of human subjects and synthetic videos generated with our custom simulator. It can serve as a benchmark for studying how synthetic data can improve machine learning models for activity perception.
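As a rough illustration of benchmark use, the sketch below pools real and synthetic gesture clips into a single PyTorch training set. The directory layout (data/real and data/syn, one subfolder per gesture class) and the frame-sampling details are assumptions for illustration, not the dataset's actual structure.

    import torch
    from torch.utils.data import ConcatDataset, DataLoader
    from torchvision.datasets import DatasetFolder
    from torchvision.io import read_video
    from torchvision.transforms.functional import resize

    def load_clip(path, num_frames=16, size=112):
        # Decode a video and subsample a fixed number of RGB frames.
        frames, _, _ = read_video(path, pts_unit="sec", output_format="TCHW")
        idx = torch.linspace(0, frames.shape[0] - 1, num_frames).long()
        clip = frames[idx].float() / 255.0   # (T, C, H, W), values in [0, 1]
        return resize(clip, [size, size])    # fixed spatial size so clips batch cleanly

    # Assumed layout: data/{real,syn}/<gesture_class>/<clip>.mp4
    real = DatasetFolder("data/real", loader=load_clip, extensions=(".mp4",))
    syn = DatasetFolder("data/syn", loader=load_clip, extensions=(".mp4",))

    # Simple union of the two domains; per-domain reweighting is a common alternative.
    train = ConcatDataset([real, syn])
    loader = DataLoader(train, batch_size=4, shuffle=True)

Class labels are inferred from the per-gesture folder names by DatasetFolder, so real and synthetic clips of the same gesture share a label.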

Celso de Melo, Brandon Rothrock, Prudhvi Gurram, Oytun Ulutan, B.S. Manjunath,
Sep. 2020.
Lab: VRL, Target: Conference