IntellAct (2011-14): Intelligent observation and execution of Actions and manipulations

author: Norbert Krüger, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark
published: March 14, 2012,   recorded: February 2012,   views: 3154

Description

IntellAct addresses the problem of understanding and exploiting the meaning (semantics) of manipulations, in terms of objects, actions and their consequences, in order to reproduce human actions with machines. This is required in particular for interaction between humans and robots, where the robot has to understand a human action and then transfer it to its own embodiment. IntellAct will provide the means for this transfer not by copying the human's movements but by transferring the action on a semantic level. IntellAct will demonstrate the ability to understand scene and action semantics and to execute actions with a robot in two domains: first, a laboratory environment (exemplified by a lab on the International Space Station (ISS)), and second, an assembly process in an industrial context.

IntellAct consists of three building blocks:

Learning - Abstract, semantic descriptions of manipulations are extracted from video sequences showing a human demonstrating the manipulations (a minimal sketch of this step follows the list);
Monitoring - Observed manipulations are evaluated against the learned semantic models;
Execution - Based on the learned semantic models, equivalent manipulations are executed by a robot.
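
As a purely illustrative sketch of the Learning step (Python; object segmentation and contact detection are assumed to be available, and all names are hypothetical rather than IntellAct's actual interfaces), a demonstration can be condensed into the sequence of contact-relation states at which the relational structure of the scene changes:

```python
# Illustrative sketch only: condensing a demonstrated manipulation into a
# symbolic description, in the spirit of semantic event chains.
# All names (RelationState, key_frames, the toy demonstration) are hypothetical.

from typing import FrozenSet, List, Tuple

# A relation state: the set of object pairs that are in contact in one frame.
RelationState = FrozenSet[Tuple[str, str]]

def key_frames(per_frame_relations: List[RelationState]) -> List[RelationState]:
    """Keep only the frames where the contact structure changes.

    The resulting sequence of relation states is a compact, embodiment-free
    description of the demonstrated manipulation.
    """
    chain: List[RelationState] = []
    for state in per_frame_relations:
        if not chain or state != chain[-1]:
            chain.append(state)
    return chain

# Toy demonstration: a hand grasps a cup and places it on a tray.
frames = [
    frozenset(),                                    # nothing in contact
    frozenset({("hand", "cup")}),                   # hand grasps cup
    frozenset({("hand", "cup")}),                   # unchanged frame, dropped
    frozenset({("hand", "cup"), ("cup", "tray")}),  # cup placed on tray
    frozenset({("cup", "tray")}),                   # hand withdraws
]

learned_model = key_frames(frames)
print(learned_model)
```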

The analysis of low-level observation data for semantic content (Learning) and the synthesis of concrete behaviour (Execution) constitute the major scientific challenge of IntellAct. Based on the semantic interpretation and description, enhanced with low-level trajectory data for grounding, IntellAct addresses two major application areas: first, the monitoring of human manipulations for correctness (e.g., for training or in high-risk scenarios), and second, the efficient teaching of cognitive robots to perform manipulations in a wide variety of applications.
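
To make the monitoring idea concrete, the sketch below checks an observed relation sequence against a learned one and reports the first deviation; a real system would use probabilistic matching rather than this exact comparison, and all names are again hypothetical:

```python
# Illustrative sketch only: monitoring an observed manipulation against a
# learned symbolic model. The exact-match comparison stands in for the
# probabilistic matching a real system would use.

from typing import List, Optional

def monitor(observed: List[frozenset], learned: List[frozenset]) -> Optional[int]:
    """Return the index of the first step at which the observation deviates
    from the learned model, or None if the observation is consistent so far."""
    for i, (obs, exp) in enumerate(zip(observed, learned)):
        if obs != exp:
            return i
    return None

# Learned model: hand grasps cup, places it on the tray, then withdraws.
learned = [
    frozenset(),
    frozenset({("hand", "cup")}),
    frozenset({("hand", "cup"), ("cup", "tray")}),
    frozenset({("cup", "tray")}),
]

# Observation in which the cup is released without ever touching the tray.
observed = [
    frozenset(),
    frozenset({("hand", "cup")}),
    frozenset(),   # cup released in mid-air: deviates from the model
]

print("deviation at step", monitor(observed, learned))
```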

To achieve these goals, IntellAct brings together recent methods for:

parsing scenes into spatio-temporal graphs and so-called "Semantic Event Chains";
probabilistic models of objects and their manipulation;
probabilistic rule learning; and
dynamic motion primitives for trainable and flexible descriptions of robotic motor behaviour (a minimal one-dimensional sketch follows this list).
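
The dynamic motion primitives in the last item can be pictured as a damped spring-like system pulled toward a goal and shaped by a learned forcing term. The following is a minimal one-dimensional sketch in the standard formulation from the motor-primitive literature, not IntellAct's specific implementation; gains, basis functions and weights are illustrative:

```python
# Minimal one-dimensional dynamic movement primitive, shown only to illustrate
# the idea of a trainable, goal-directed motion representation.
import numpy as np

def rollout(x0, goal, weights, centers, widths, tau=1.0, dt=0.01,
            alpha_s=4.0, K=100.0, D=20.0):
    """Integrate a discrete DMP and return the generated trajectory."""
    x, v, s = x0, 0.0, 1.0
    trajectory = [x]
    for _ in range(int(tau / dt)):
        # Canonical system: phase variable s decays from 1 towards 0.
        s += (-alpha_s * s / tau) * dt
        # Forcing term: weighted Gaussian basis functions of the phase.
        psi = np.exp(-widths * (s - centers) ** 2)
        f = s * (goal - x0) * np.dot(psi, weights) / (psi.sum() + 1e-10)
        # Transformation system: spring-damper toward the goal plus forcing term.
        v += ((K * (goal - x) - D * v + f) / tau) * dt
        x += (v / tau) * dt
        trajectory.append(x)
    return np.array(trajectory)

# Example: reach from 0.0 to 0.5 with an (arbitrary) bump shaped by the weights.
centers = np.linspace(0.0, 1.0, 10)
widths = np.full(10, 25.0)
weights = np.linspace(50.0, -50.0, 10)
path = rollout(x0=0.0, goal=0.5, weights=weights, centers=centers, widths=widths)
print(path[-1])  # ends close to the goal
```

Because the forcing term vanishes as the phase decays, the generated motion always converges to the goal, while the learned weights determine its shape along the way.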

Its implementation employs a concurrent-engineering approach that includes virtual-reality-enhanced simulation as well as physical robots. The project culminates in the demonstration of a robot understanding, monitoring and reproducing human action.

See Also:

Download slides: cogsys2012_krueger_intelligent_01.pdf (1.8 MB)
