Robo Brain is Learning from the Internet

Monday, August 25, 2014

Robo Brain is now at work examining images and concepts available on the Internet so that it can teach robots how to recognize, grasp and manipulate objects and predict human behavior in the environment.




Hey there! I'm a robot brain. I learn concepts by searching the Internet. I can interpret natural language text, images, and videos. I watch humans with my sensors and learn things from interacting with them. Here are a few things I've learned recently...

And so Robo Brain introduces itself. The project, based at Cornell University, was switched on last month.

The AI project, which is led by professor Ashutosh Saxena, is described as "a large-scale computational system that learns from publicly available internet resources, computer simulations, and real-life robot trials".

The open-source effort, which includes Brown, Cornell and Stanford Universities as well as the University of California, Berkeley, is addressing research challenges in several domains:

  • Large-Scale Data Processing
  • Language and Dialog
  • Perception
  • AI and Reasoning Systems
  • Robotics and Automation

The system is continuously downloading images, YouTube videos, how-to documents and appliance manuals, along with the training that Cornell researchers have given to robots in their own laboratories.

By reviewing these materials, Robo Brain is intended to learn how to recognize objects and how they are used, as well as human language and behavior, in order to train robots to function in the human-built physical world.

"Our laptops and cell phones have access to all the information we want," explains Saxena. "If a robot encounters a situation it hasn't seen before it can query Robo Brain in the cloud."

For instance, Robo Brain can learn from the Internet that the knobs on a microwave oven are turned to set the time for reheating a cup of coffee. Combined with other how-to information it finds, it could then instruct a robot, such as a Willow Garage PR2 or a Rethink Robotics Baxter research robot, how long to heat the beverage.

Robo Brain learning
Other online learning could fill in the remaining steps, so that the robot could eventually complete everything required to get you a hot cup of coffee.

The system can also contain layers of abstraction, an approach the researchers call "structured deep learning". For example, if the robot sees an armchair, it knows that it is a type of furniture -- and, more specifically, that it is furniture used for sitting, a sub-class that contains a wide range of chairs, stools, benches and couches.
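To make the idea concrete, here is a minimal sketch of such a concept hierarchy in Python. This is purely illustrative -- the class and concept names are invented for this example, not taken from Robo Brain's actual codebase -- but it shows how "armchair is a kind of sitting furniture, which is a kind of furniture" can be encoded as layers of abstraction in a graph.

```python
class ConceptGraph:
    """Toy is-a hierarchy: each concept points to its more abstract parents."""

    def __init__(self):
        self.parents = {}  # concept -> set of more abstract concepts

    def add_is_a(self, concept, parent):
        self.parents.setdefault(concept, set()).add(parent)

    def ancestors(self, concept):
        """Return every abstraction layer above a concept."""
        seen = set()
        stack = [concept]
        while stack:
            for parent in self.parents.get(stack.pop(), ()):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen


g = ConceptGraph()
g.add_is_a("armchair", "sitting_furniture")
g.add_is_a("stool", "sitting_furniture")
g.add_is_a("couch", "sitting_furniture")
g.add_is_a("sitting_furniture", "furniture")

# Seeing an armchair tells the robot it is sitting furniture, and furniture.
print(g.ancestors("armchair"))  # {'sitting_furniture', 'furniture'}
```

A robot querying such a graph can fall back to the more abstract layer when the specific object is unfamiliar: anything under "sitting_furniture" affords sitting, even if the exact chair has never been seen before.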

“The Robo Brain will look like a gigantic, branching graph with abilities for multi-dimensional queries,” said Aditya Jami, a visiting researcher at Cornell, who designed the large-scale database for the brain.

Robotic Planning

This information will then be stored in what mathematicians call a Markov model, represented as a series of points ("nodes") connected by lines ("edges"), like a giant branching graph, where each state depends only on the state immediately before it.



The nodes could be actions, objects, or parts of an image, and each one is assigned a probability, or a level of variance while remaining correct. A key, for example, can vary in form, but still usually consists of a handle, a shaft and teeth. The robot can then follow a chain and look for nodes that match within probability limits.
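The matching idea above can be sketched in a few lines of Python. This is a hedged illustration, not Robo Brain's implementation: the part names and probability values for the "key" node are invented for the example, and the scoring rule (average the probabilities of the expected parts actually observed, then compare against a threshold) is one simple way to realize "matching within probability limits".

```python
# A concept node lists its expected parts with probabilities
# (illustrative numbers, not real Robo Brain data).
KEY_NODE = {
    "handle": 0.95,  # nearly every key has a handle (the bow)
    "shaft": 0.90,
    "teeth": 0.85,
}


def match_score(node, observed_parts):
    """Average the probabilities of the expected parts we actually saw."""
    hits = [p for part, p in node.items() if part in observed_parts]
    return sum(hits) / len(node)


def matches(node, observed_parts, threshold=0.6):
    """Does the observation fall within the node's probability limits?"""
    return match_score(node, observed_parts) >= threshold


# A key whose teeth are hidden can still match within the limits:
print(matches(KEY_NODE, {"handle", "shaft"}))  # True
# A stray detection of "teeth" alone is not enough:
print(matches(KEY_NODE, {"teeth"}))            # False
```

Following a chain then amounts to walking the graph's edges and applying this kind of probabilistic check at each node along the way.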

Moreover, by learning to recognize objects in the human environment, Robo Brain is also learning about human behavior. By finding out what humans use objects for, the system can also be used to anticipate the actions of the people it is looking at.

Robo Brain is very similar to the European RoboEarth project. Like Robo Brain, RoboEarth provides cloud storage and computing for robots, with an ever-expanding database intended to store knowledge created by both humans and robots in a robot-readable open format.

One area where Robo Brain is new is its use of crowdsourcing to feed information into the graph. Complementing Robo Brain's object detection system are PlanIt, a simulation through which users can teach robots how to grasp objects or move about a room, and Tell Me Dave, a crowd-sourced project that teaches robots how to understand language.

A major challenge for the system at present is that it lacks a good source of data for haptics, which would be useful for teaching robots how to touch and feel.

As the researchers continue to add other learning models and data sources, such as ImageNet and 3D Warehouse, along with the knowledge of the crowd, the team expects the system to enter a positive feedback loop. Moreover, each robot in the world that uses Robo Brain will feed its own experience back into the system, teaching other robots in turn, so the feedback is expected to compound. Already at this early stage, the researchers are pleased with the results.

By merging all this software and data, the researchers hope to create a system with a primitive sense of perception, one that can “discover most of the common sense knowledge of the world,” says Bart Selman, a Robo Brain collaborator at Cornell.




SOURCES  Wired, Popular Science, Engadget, TechCrunch

By 33rd Square
