A Learning from Demonstration Framework for Manipulation Tasks

Conference: ISR/Robotik 2014 - 45th International Symposium on Robotics; 8th German Conference on Robotics
06/02/2014 - 06/03/2014 in München, Germany

Proceedings: ISR/Robotik 2014

Pages: 7
Language: English
Type: PDF


Authors:
Tosello, Elisa; Michieletto, Stefano; Bisson, Andrea; Pagello, Enrico; Menegatti, Emanuele (Department of Information Engineering (DEI), University of Padova, Via Gradenigo 6/B, 35131 Padova, Italy)

Abstract:
This paper presents a Robot Learning from Demonstration (RLfD) framework for teaching manipulation tasks in an industrial environment: the system learns a task performed by a human demonstrator and reproduces it with a robot manipulator. An RGB-D sensor acquires the scene (the human in action); a skeleton tracking algorithm extracts the useful information (positions and orientations of the skeleton joints) from the acquired images; and this information is given as input to the motion re-targeting system, which remaps the skeleton joints onto the manipulator joints. After the remapping, a model for the robot motion controller is retrieved by applying first a Gaussian Mixture Model (GMM) and then Gaussian Mixture Regression (GMR) to the collected data. Two types of controller are modeled: a position controller and a velocity controller. The former was presented in [10] together with simulation tests, and is extended here with tests on a real robot. The latter is proposed for the first time in this work and is tested both in simulation and on the real robot. Experiments were performed using a Comau Smart5 SiX manipulator and allow a comparison of the two controllers starting from natural human demonstrations.
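
As a rough illustration of the GMM + GMR step described in the abstract, the following Python sketch fits a joint Gaussian Mixture Model over (time, joint position) samples from demonstrations and then performs Gaussian Mixture Regression to recover a smooth reference trajectory for the motion controller. This is a minimal sketch under assumed conventions (1-D time input, scikit-learn for the GMM fit, hand-rolled regression step); names such as fit_gmm and gmr are illustrative and not taken from the paper.

# Minimal GMM + GMR sketch (illustrative, not the paper's implementation).
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_gmm(time, joints, n_components=5):
    # Fit a joint GMM over [time | joint positions] stacked column-wise.
    data = np.hstack([time.reshape(-1, 1), joints])
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(data)
    return gmm

def gmr(gmm, t_query):
    # Gaussian Mixture Regression: E[joints | time] under the fitted GMM.
    d_in = 1  # the input (time) occupies the first dimension
    out = np.zeros((len(t_query), gmm.means_.shape[1] - d_in))
    for i, t in enumerate(t_query):
        # Responsibility of each component for this time instant.
        h = np.array([
            w * norm.pdf(t, m[0], np.sqrt(c[0, 0]))
            for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)
        ])
        h /= h.sum()
        # Blend the component-wise conditional means by responsibility.
        for k, (m, c) in enumerate(zip(gmm.means_, gmm.covariances_)):
            cond_mean = m[d_in:] + c[d_in:, :d_in] @ np.linalg.inv(
                c[:d_in, :d_in]) @ (np.atleast_1d(t) - m[:d_in])
            out[i] += h[k] * cond_mean
    return out

# Example: learn from a noisy two-joint demonstration, then query a
# reference trajectory that a position controller could track.
t = np.linspace(0.0, 1.0, 200)
demo = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
model = fit_gmm(t, demo + 0.01 * np.random.randn(*demo.shape))
trajectory = gmr(model, np.linspace(0.0, 1.0, 50))

A velocity controller of the kind the abstract mentions could, under the same assumptions, be fed with finite differences of the regressed trajectory rather than the positions themselves.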