Statistical Generalization of Robot Movement in Autoencoder Latent Space

author: Rok Pahič, Department for Automation, Biocybernetics and Robotics, Jožef Stefan Institute
published: May 23, 2017, recorded: April 2017

Description

Robots have started to emerge in new environments, such as households and medical care. In such environments they cannot be preprogrammed for all possible tasks. One solution is to equip robots with the ability to learn, so they can adapt their behaviour to ever-changing environments. One such learning approach is imitation learning, where a robot is given human demonstrations of the desired task execution. Typically, from several demonstrated trajectories, the robot extracts suitable parameters to generate movement commands for its own motion. We used imitation learning to teach a robot to throw a ball accurately. In this case the robot is given a set of example throws together with the positions where the ball landed. These landing positions are used as task parameters. For every new target position, the robot should compute a new arm movement that is similar to the example movements and results in the ball hitting the target. Robot movement trajectories are described in joint coordinates.

In our work we applied deep autoencoders to learn a representation of robot movements in latent coordinates. Latent spaces are useful because they represent robot movement in a lower-dimensional space. An autoencoder is a neural network made of an encoder and a decoder part. The encoder takes as input a configuration along the robot trajectory in joint space. The data are encoded through the layers of the encoder network, where each layer has fewer neurons than the previous one. From the latent space, the decoder network transforms latent coordinates back to the original joint space. An autoencoder is usually trained with backpropagation.
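
The encoder-decoder structure can be illustrated with a short sketch. The code below is a minimal, illustrative implementation rather than the one used in the lecture; it assumes PyTorch, and the layer sizes, the 3-dimensional joint space and the 2-dimensional latent space are assumptions chosen only to show each encoder layer having fewer neurons than the previous one and training by backpropagation on the reconstruction error.

```python
# Minimal sketch (not the authors' implementation) of an autoencoder that
# compresses robot joint configurations into a lower-dimensional latent space.
# Assumes PyTorch; layer sizes and dimensions are illustrative.
import torch
import torch.nn as nn

class TrajectoryAutoencoder(nn.Module):
    def __init__(self, joint_dim=3, latent_dim=2):
        super().__init__()
        # Encoder: each layer has fewer neurons than the previous one.
        self.encoder = nn.Sequential(
            nn.Linear(joint_dim, 16), nn.Tanh(),
            nn.Linear(16, 8), nn.Tanh(),
            nn.Linear(8, latent_dim),
        )
        # Decoder: maps latent coordinates back to joint space.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 8), nn.Tanh(),
            nn.Linear(8, 16), nn.Tanh(),
            nn.Linear(16, joint_dim),
        )

    def forward(self, q):
        z = self.encoder(q)          # joint configuration -> latent coordinates
        return self.decoder(z), z    # latent coordinates -> reconstructed joints

# Training by backpropagation on the reconstruction error.
model = TrajectoryAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

joint_samples = torch.randn(256, 3)  # placeholder for demonstrated configurations
for _ in range(100):
    reconstruction, _ = model(joint_samples)
    loss = loss_fn(reconstruction, joint_samples)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```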

In our research, we applied a deep autoencoder to lower the dimensionality of the robot joint space. Statistical learning is then applied to the latent-space representations to compute an optimal trajectory for throwing a ball at a given target. In the first step, the example trajectories are transformed to their latent-space representations and encoded with dynamic movement primitives (DMPs). For every new throw, new DMP parameters in latent space are computed with Gaussian Process Regression (GPR) or Locally Weighted Regression (LWR). From the computed DMP parameters, a trajectory that the robot can execute is obtained using the decoder part of the autoencoder.
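
As an illustration of the generalization step, the following sketch maps a new task parameter (the desired landing position) to DMP parameters with Gaussian Process Regression. It is not the authors' code: it assumes scikit-learn, the arrays of landing positions and latent-space DMP weights are random placeholders standing in for data extracted from the example throws, and integrating the resulting DMP and decoding the latent trajectory back to joint space are omitted.

```python
# Sketch of statistical generalization: predict latent-space DMP parameters for
# a new ball-landing position with Gaussian Process Regression (scikit-learn).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Placeholders for data obtained from the example throws:
#   targets: landing positions (horizontal, vertical), shape (n_demos, 2)
#   latent_dmp_params: DMP parameters of the latent-space trajectories,
#                      shape (n_demos, n_params)
targets = np.random.rand(20, 2)
latent_dmp_params = np.random.rand(20, 10)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
gpr.fit(targets, latent_dmp_params)

# For a new target, predict the DMP parameters in latent space ...
new_target = np.array([[0.8, 0.3]])
predicted_params = gpr.predict(new_target)

# ... then integrate the latent-space DMP with these parameters and map the
# resulting latent trajectory back to joint space with the trained decoder
# (both steps omitted in this sketch).
```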

The proposed method was evaluated in a simulation of a three-degree-of-freedom planar robot throwing a ball at targets located at different horizontal and vertical positions. Generalization of robotic throws was performed using LWR and GPR. We compared the performance of statistical learning in the original robot joint space and in the latent space. Compared with learning in latent space, statistical learning in the original joint space was computationally slower but more accurate. The best accuracy was achieved with GPR in joint space.


Download slides: ipssc2017_pahic_robot_movement_01.pdf (690.2 KB)

