The New Techniques That Will Power Robot Intelligence and Control

Tuesday, April 28, 2015


Videos of robots moving slowly, or played back at high speed, are common. That may soon be a thing of the past, as researchers like Sergey Levine develop neural networks to control robots. In this impressive work, learned perception and control let the robots operate in real time, and quickly.

A remarkable feature of human and animal intelligence is the ability to autonomously acquire new behaviors. Sergey Levine, a researcher at UC Berkeley, designs algorithms that aim to bring this ability to robots and simulated characters.

Readers of this website may be familiar with videos of robots like the PR2 above, and may also have noticed that the footage is often played back at high speed to show what is going on; the actual robot moves very slowly.

In the video above and in Levine's lecture below, the researchers have managed to get the robot to perform complicated tasks in real time.

This is perhaps some of the most impressive footage of a robot handling real-world manipulation tasks released to date.


"The reason it is fast is because it is optimized on the real physical system."


"The reason it is fast," says Levin of the robot's motion, "is because it is optimized on the real physical system."

According to Levine, a central challenge in this field is to learn behaviors with representations that are sufficiently general and expressive to handle the wide range of motion skills that are necessary for real-world applications, such as general-purpose household robots.


These representations must also be able to operate on raw, high-dimensional inputs and outputs, such as camera images, joint torques, and muscle activations. In the lecture below, Levine describes a class of guided policy search algorithms that tackle this challenge by transforming the task of learning control policies into a supervised learning problem, with supervision provided by simple, efficient trajectory-centric methods.
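
To make the idea concrete, here is a toy, runnable sketch in the spirit of that setup, not Levine's actual algorithm: a stand-in linear feedback controller plays the role of the trajectory-centric teacher, and a small neural-network policy is fit to its state-action pairs by ordinary supervised regression. The dimensions, gains, and training settings are all illustrative assumptions.

```python
# Toy sketch of the guided-policy-search idea: a trajectory-centric
# "teacher" supervises a neural-network policy via plain regression.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 4, 2          # e.g. [position, velocity] -> force

policy = nn.Sequential(               # global policy: state -> action
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, ACTION_DIM),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def local_controller(states):
    """Stand-in for a trajectory-optimized local controller (e.g. from
    iLQR): a fixed linear feedback law u = -K x serving as supervision."""
    K = torch.tensor([[1.0, 0.0, 0.5, 0.0],
                      [0.0, 1.0, 0.0, 0.5]])
    return -states @ K.T

for step in range(500):
    states = torch.randn(128, STATE_DIM)        # sampled visited states
    actions = local_controller(states)          # trajectory-centric labels
    loss = ((policy(states) - actions) ** 2).mean()  # supervised objective
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design point this illustrates is that the hard optimal-control problem is solved locally, per trajectory, while the neural network only has to imitate the resulting controllers, which is a much easier supervised problem.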



Levine shows how this approach can be applied to a wide range of tasks, from locomotion and push recovery to robotic manipulation. This includes generalization beyond the exact tasks the robot was trained on.

For instance, the robot is trained to hang a coat hanger on a rack without clothes on the hanger, but is then tested with clothes on the hanger. In another test, the robot learns how to screw a cap onto one bottle, then successfully transfers that skill to other bottles.

The researchers achieved over 50 percent success on the coat-hanger task, and nearly 90 percent on the bottle task.

He also presents new results on using deep convolutional neural networks to directly learn policies that combine visual perception and control, learning the entire mapping from rich visual stimuli to motor torques on a real robot.
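
As a rough illustration of what such an end-to-end visuomotor policy looks like, here is a minimal sketch, not the authors' actual architecture: a small convolutional network that maps a camera image directly to a vector of joint torques. The image size, layer widths, and seven-joint output are assumptions chosen for a PR2-like arm.

```python
# Minimal sketch of an end-to-end visuomotor policy: camera image in,
# joint torques out, trained as a single network.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, num_joints: int = 7):
        super().__init__()
        self.conv = nn.Sequential(              # visual perception layers
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),            # pool to a fixed 4x4 map
        )
        self.head = nn.Sequential(              # control layers
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, num_joints),          # one torque per joint
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(image))

policy = VisuomotorPolicy()
frame = torch.randn(1, 3, 120, 160)    # one RGB camera frame (toy size)
torques = policy(frame)                # shape (1, 7): motor commands
```

Because perception and control share one network, gradients from the control objective can shape the visual features themselves, rather than relying on a separately engineered vision pipeline.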

Levine concludes his talk below by discussing future directions in deep sensorimotor learning and how advances in this emerging field can be applied to a range of other areas, including the potential of combining big data with reinforcement learning to further improve robot control and movement.

Sergey Levine is a postdoctoral researcher working with Professor Pieter Abbeel at UC Berkeley. He completed his PhD in 2014 with Vladlen Koltun at Stanford University. His research focuses on robotics, machine learning, and computer graphics. In his PhD thesis, he developed a novel guided policy search algorithm for learning rich, expressive locomotion policies. In later work, this method enabled learning a range of robotic manipulation tasks, as well as end-to-end training of policies for perception and control. He has also developed algorithms for learning from demonstration, inverse reinforcement learning, and data-driven character animation.

The lecture below is highly technical, covering some of the foundational mathematics behind the robot control algorithms, but at around the 25-minute mark the impressive PR2 robot demonstrations are shown.

SOURCE: University of Washington

By 33rd Square
