
Integrating visual perception and manipulation for autonomous learning of object representations


Adaptive Behavior: Animals, Animats, Software Agents, Robots, Adaptive Systems


Abstract

Humans can effortlessly perceive an object they encounter for the first time in a possibly cluttered scene and memorize its appearance for later recognition. Such performance is still difficult to achieve with artificial vision systems because it is not clear how to define the concept of objectness in its full generality. In this paper we propose a paradigm that integrates the robot’s manipulation and sensing capabilities to detect a new, previously unknown object and learn its visual appearance. By making use of the robot’s manipulation capabilities and force sensing, we introduce additional information that can be utilized to reliably separate unknown objects from the background. Once an object has been identified, the robot can continuously manipulate it to accumulate more information about it and learn its complete visual appearance. We demonstrate the feasibility of the proposed approach by applying it to the problem of autonomous learning of visual representations for viewpoint-independent object recognition on a humanoid robot.
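The core idea — that self-induced object motion separates an unknown object from a static background — can be illustrated with a minimal sketch. The function and synthetic data below are illustrative assumptions, not the paper's implementation: it segments the changed region between a frame captured before a push and one captured after, the simplest form of motion-based figure–ground separation.

```python
import numpy as np

def segment_moved_object(before, after, threshold=25):
    """Segment the region that changed between two grayscale frames.

    A pushed object moves while the background stays still, so the
    per-pixel absolute difference highlights the object region.
    Returns a boolean change mask and the bounding box of the change
    (row_min, col_min, row_max, col_max), or None if nothing moved.
    """
    diff = np.abs(before.astype(np.int16) - after.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return mask, None
    ys, xs = np.nonzero(mask)
    bbox = (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))
    return mask, bbox

# Synthetic example: a bright 10x10 "object" is pushed 5 pixels to the right.
before = np.zeros((64, 64), dtype=np.uint8)
after = np.zeros((64, 64), dtype=np.uint8)
before[20:30, 20:30] = 200
after[20:30, 25:35] = 200

mask, bbox = segment_moved_object(before, after)
print(bbox)
```

In practice the paper's pipeline would add force sensing to decide when contact has occurred and accumulate segmented views over repeated manipulations; this sketch covers only the visual change-detection step.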