As Robots Become Smarter, They Will Walk More Like Animals

Thursday, February 7, 2013

Image: Cornell robot
Researchers Jeffrey Clune, Hod Lipson, and others at the University of Wyoming and Cornell are developing systems that let robots teach themselves how to walk. At the root of the problem is a need for smarter artificial intelligence, not better servos and gears.
As far as the uncanny valley is concerned, one of the dead giveaways that something is not real is how it moves. The boxy, clunky movements that used to characterize most CGI, and that still pervade robotics, are more a result of the "smarts," or intelligence, behind the movements than of the mechanical systems executing them.

"If you look humanlike but your motion is jerky or you can't make proper eye contact, those are the things that make them uncanny," said Ayse Saygin, a cognitive scientist at the University of California. "I think the key is that when you make appearances humanlike, you raise expectations for the brain. When those expectations are not met, then you have the problem in the brain."

But just as computer animation has progressed dramatically from Pixar's early Tin Toy short film to the very lifelike characters in Tintin and Avatar, so too is robotics progressing toward more lifelike motions and behaviors.

According to an article by Lakshmi Sandhana at Fast Company, this development is mainly due to better machine intelligence.

“We are working on evolving brains that can be downloaded onto a robot, wake up, and begin exploring their environment to figure out how to accomplish the high-level objectives we give them (e.g. avoid getting damaged, find recharging stations, locate survivors, pick up trash, etc.),” says Jeffrey Clune, Assistant Professor of Computer Science at the University of Wyoming.

Clune's research group initially explored the idea by evolving gaits for robots, in an effort to reduce the time it takes to get them operational. To date, getting any robot to walk or perform other behaviors has required a lot of programming and engineering time.

Every motion must be manually programmed, and engineers have to reprogram everything for new robots or for different versions of the same robot. “The manual approach is too expensive and will not scale to produce many different types of robots,” explains Clune.

The resulting clunky and intimidating robotic motion might be acceptable in some situations, but it’s not practical for the robots that will increasingly be part of our homes and businesses every day. The unnatural motion, part of the uncanny valley effect, has been found to interfere with human-machine interaction and the user experience.

For Clune and others, the challenge is to get all sorts of robots to somehow learn to walk by themselves.

Clune and fellow team members Hod Lipson, Cornell Associate Professor of Mechanical and Aerospace Engineering, and Cornell students Sean Lee and Jason Yosinski started their work by combining neural networks with evolutionary concepts from developmental biology.

Using this new approach, they began growing artificial digital brains that could take a simulated or physical robot body, recognize the type of body (two-legged, four-legged, etc.), and evolve the neural patterns needed to control it. In its first test, the software evolved digital brains with neural patterns that made a four-legged robot walk within a few hours. What’s more, instead of each leg doing its own thing, the walking patterns it came up with were coordinated and natural. Once you have the tricky learning code, Clune says, you can reuse it for as many robots as you like.
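
To make the idea concrete, here is a minimal Python sketch of that kind of evolutionary loop. It is not the team's actual system, which builds on far richer generative encodings; the brain representation, mutation scheme, and stand-in fitness function are all illustrative assumptions.

import random

# Hypothetical sketch: a population of small neural "brains" is mutated
# and selected on how far the simulated robot walks. A brain here is just
# a weight matrix from sensor inputs to motor outputs.
N_INPUTS, N_OUTPUTS = 8, 4        # e.g. joint sensors in, motor commands out
POP_SIZE, GENERATIONS = 100, 200

def random_brain():
    return [[random.uniform(-1, 1) for _ in range(N_INPUTS)]
            for _ in range(N_OUTPUTS)]

def mutate(brain, rate=0.1, scale=0.2):
    # Perturb a random fraction of the connection weights.
    return [[w + random.gauss(0, scale) if random.random() < rate else w
             for w in row] for row in brain]

def walking_distance(brain):
    # Stand-in for a physics simulation that would return how far the
    # robot's body travelled under this controller.
    return sum(sum(row) for row in brain)  # placeholder fitness

population = [random_brain() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    scored = sorted(population, key=walking_distance, reverse=True)
    elite = scored[:POP_SIZE // 5]               # keep the best 20%
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

best = max(population, key=walking_distance)

In a real system, the placeholder fitness would be replaced by a physics simulator or hardware trial, and the flat weight matrix by an encoding that can grow and reuse structure across different body types.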

Currently, the brains the team can evolve are very small, with perhaps hundreds of neural connections focused only on locomotion. In nature, that corresponds to the brain size of simple worms.

AI modularity

However, a recent breakthrough in understanding why biological brains are organized into modules could prove to be a game changer in the field of artificial intelligence, allowing computer brains to scale to millions or billions of connections. For that discovery, Clune and Lipson teamed up with Jean-Baptiste Mouret, a robotics and computer science professor at the Université Pierre et Marie Curie in Paris.

"Being able to evolve modularity will let us create more complex, sophisticated computational brains," says Clune.

The breakthrough in understanding modularity will allow them and others to start evolving digital brains that are, for the first time, structurally organized like biological brains, composed of many neural modules. According to Mouret, the merging of modularity with AI “makes a powerful tool to give robots the adaptation abilities of animal species.” If a robot breaks a leg or loses a part, it will learn to compensate and still operate effectively, just like animals do.
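
In the published work, modularity emerged from a multi-objective search that rewards task performance while also penalizing the network's connection cost; selection under that pressure tends to produce sparse, modular wiring. Here is a hedged sketch of the idea, reusing the toy brain representation and walking_distance stub from the earlier example (the cost measure and weighting below are illustrative, not the paper's):

def connection_cost(brain):
    # Stand-in wiring cost: the summed magnitude of all connection weights.
    # (The published experiments counted connections and their lengths.)
    return sum(abs(w) for row in brain for w in row)

def fitness(brain, cost_weight=0.05):
    # Reward walking performance, penalize wiring cost. The paper treats
    # these as two separate objectives in a multi-objective search; a
    # weighted sum keeps this sketch short.
    return walking_distance(brain) - cost_weight * connection_cost(brain)

Swapping this fitness function into the earlier evolutionary loop would bias the search toward sparser networks, which is the pressure the researchers found drives the emergence of modular structure.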




SOURCE: Fast Company

By 33rd Square


