A Learning from Demonstration Framework for Manipulation Tasks

Conference: ISR/Robotik 2014 - 45th International Symposium on Robotics; 8th German Conference on Robotics
02.06.2014 - 03.06.2014 in Munich, Germany

Proceedings: ISR/Robotik 2014

Pages: 7
Language: English
Type: PDF


Authors:
Tosello, Elisa; Michieletto, Stefano; Bisson, Andrea; Pagello, Enrico; Menegatti, Emanuele (Department of Information Engineering (DEI), University of Padova, Via Gradenigo 6/B, 35131 Padova, Italy)

Abstract:
This paper presents a Robot Learning from Demonstration (RLfD) framework for teaching manipulation tasks in an industrial environment: the system learns a task performed by a human demonstrator and reproduces it through a manipulator robot. An RGB-D sensor acquires the scene (the human in action); a skeleton tracking algorithm extracts the useful information from the acquired images (positions and orientations of the skeleton joints); and this information is given as input to a motion re-targeting system that remaps the skeleton joints onto those of the manipulator. After the remapping, a model for the robot motion controller is retrieved by applying first a Gaussian Mixture Model (GMM) and then Gaussian Mixture Regression (GMR) to the collected data. Two types of controller are modeled: a position controller and a velocity controller. The former was presented in [10] together with simulation tests, and here it is extended with experiments on a real robot. The latter is proposed for the first time in this work and tested both in simulation and on the real robot. Experiments were performed with a Comau Smart5 SiX manipulator and provide a comparison between the two controllers starting from natural human demonstrations.
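To illustrate the GMM/GMR step described above, the following is a minimal sketch, not the authors' implementation: a GMM is fit over (time, joint-value) pairs from several demonstrations, and GMR then conditions each Gaussian on the time input to produce a smooth reference trajectory for a controller. The synthetic single-joint data and all names here are illustrative assumptions; the paper's pipeline would instead use the retargeted manipulator joints.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical demonstration data: five noisy repetitions of a
# single-joint trajectory, stacked as (time, joint value) samples.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
demos = [np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
         for _ in range(5)]
data = np.column_stack([np.tile(t, 5), np.concatenate(demos)])  # shape (N, 2)

# Step 1: fit a GMM over the joint (input, output) space.
gmm = GaussianMixture(n_components=6, covariance_type="full").fit(data)

# Step 2: GMR -- condition each Gaussian on the time input and mix the
# per-component conditional means with the posterior responsibilities.
def gmr(gmm, t_query):
    means, covs, priors = gmm.means_, gmm.covariances_, gmm.weights_
    out = np.zeros_like(t_query)
    for i, tq in enumerate(t_query):
        # responsibility of each component for this input value
        lik = np.array([
            priors[k]
            * np.exp(-0.5 * (tq - means[k, 0]) ** 2 / covs[k, 0, 0])
            / np.sqrt(2 * np.pi * covs[k, 0, 0])
            for k in range(gmm.n_components)])
        h = lik / lik.sum()
        # conditional mean of the output given the input, per component
        cond = [means[k, 1]
                + covs[k, 1, 0] / covs[k, 0, 0] * (tq - means[k, 0])
                for k in range(gmm.n_components)]
        out[i] = np.dot(h, cond)
    return out

reference = gmr(gmm, t)  # trajectory to feed a position controller
```

A velocity controller along the lines the abstract mentions could be modeled the same way by fitting the GMM over (time, joint velocity) pairs instead, with the GMR output fed to the robot's velocity interface.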