33rd Square Business Tools: pr2 - All Posts
Showing posts with label pr2. Show all posts

Monday, May 25, 2015

Robots Learn on Their Own Through Trial and Error

 Artificial Intelligence
Robotics researchers have engineered new algorithms that enable robots to learn motor tasks by trial and error, using a process that more closely approximates the way people learn.





Researchers have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence.

They demonstrated their technique, a type of reinforcement learning, by having a robot complete various tasks — putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more — without pre-programmed details about its surroundings.

“What we’re reporting on here is a new approach to empowering a robot to learn,” said Professor Pieter Abbeel of UC Berkeley’s Department of Electrical Engineering and Computer Sciences. “The key is that when a robot is faced with something new, we won’t have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it.”


This advance was presented at the International Conference on Robotics and Automation (ICRA). Abbeel is leading the project with fellow UC Berkeley faculty member Trevor Darrell, director of the Berkeley Vision and Learning Center. Other members of the research team are postdoctoral researcher Sergey Levine and Ph.D. student Chelsea Finn.

The work is part of a new People and Robots Initiative at UC’s Center for Information Technology Research in the Interest of Society (CITRIS). The new multi-campus, multidisciplinary research initiative seeks to keep the dizzying advances in artificial intelligence, robotics and automation aligned to human needs.


“Most robotic applications are in controlled environments where objects are in predictable positions,” said Darrell. “The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings.”

Conventional, but impractical, approaches to helping a robot make its way through a 3D world include pre-programming it to handle the vast range of possible scenarios or creating simulated environments within which the robot operates.

The UC Berkeley researchers instead turned to a new branch of artificial intelligence known as deep learning, which is loosely inspired by the neural circuitry of the human brain when it perceives and interacts with the world.

"The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings."


“For all our versatility, humans are not born with a repertoire of behaviors that can be deployed like a Swiss army knife, and we do not need to be programmed,” said Levine. “Instead, we learn new skills over the course of our life from experience and from other humans. This learning process is so deeply rooted in our nervous system, that we cannot even communicate to another person precisely how the resulting skill should be executed. We can at best hope to offer pointers and guidance as they learn it on their own.”

In the experiments, the UC Berkeley researchers worked with a Willow Garage Personal Robot 2 (PR2), which they nicknamed BRETT, or Berkeley Robot for the Elimination of Tedious Tasks.

They presented BRETT with a series of motor tasks, such as placing blocks into matching openings or stacking Lego blocks. The algorithm controlling BRETT’s learning included a reward function that provided a score based upon how well the robot was doing with the task (see video below).

BRETT takes in the scene, including the position of its own arms and hands, as viewed by the camera. The algorithm provides real-time feedback via the score based upon the robot’s movements. Movements that bring the robot closer to completing the task will score higher than those that do not. The score feeds back through the neural net, so the robot can learn which movements are better for the task at hand.

This end-to-end training process underlies the robot’s ability to learn on its own. As the PR2 moves its joints and manipulates objects, the algorithm calculates good values for the 92,000 parameters of the neural net it needs to learn.
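The paper's actual training machinery is far richer than this, but the score-driven trial-and-error loop can be illustrated with a toy sketch (all names and numbers here are hypothetical, not from the research): a single parameter stands in for the network's 92,000, and a perturbed movement is kept only when the reward function scores it higher.

```python
import random

def reward(position, target):
    """Score a movement: higher when the gripper ends up closer to the target."""
    return -abs(target - position)

def train(target=5.0, steps=200, seed=0):
    """Trial-and-error learning: keep parameter changes that raise the score."""
    rng = random.Random(seed)
    param = 0.0                               # stand-in for the net's parameters
    best = reward(param, target)
    for _ in range(steps):
        candidate = param + rng.gauss(0, 0.5) # try a slightly perturbed movement
        score = reward(candidate, target)
        if score > best:                      # feedback: better-scoring moves survive
            param, best = candidate, score
    return param

print(round(train(), 1))                      # converges near the target of 5.0
```

The real system replaces this random search with gradient-based updates through a neural network, but the feedback structure is the same: movements that raise the score shape the parameters.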

With this approach, when given the relevant coordinates for the beginning and end of the task, the PR2 could master a typical assignment in about 10 minutes. When the robot is not given the location for the objects in the scene and needs to learn vision and control together, the learning process takes about three hours.

Abbeel says the field will likely see significant improvements as the ability to process vast amounts of data improves.

“With more data, you can start learning more complex things,” he said. “We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch. In the next five to 10 years, we may see significant advances in robot learning capabilities through this line of work.”

In the world of artificial intelligence, deep learning programs create “neural nets” in which layers of artificial neurons process overlapping raw sensory data, whether it be sound waves or image pixels. This helps the robot recognize patterns and categories among the data it is receiving. People who use Siri on their iPhones, Google’s speech-to-text program or Google Street View might already have benefited from the significant advances deep learning has provided in speech and vision recognition.
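The layered structure described above can be sketched in miniature (an illustrative toy, not the researchers' actual network): each layer combines its inputs with weights, passes the result through a nonlinearity, and feeds the next layer, so later layers build on patterns found by earlier ones.

```python
import math

def layer(inputs, weights, biases):
    """One layer of artificial neurons: weighted sum, then a nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Raw "pixel" values flow through two stacked layers.
pixels = [0.2, 0.8, 0.5]
hidden = layer(pixels, [[0.1, -0.3, 0.5], [0.4, 0.2, -0.1]], [0.0, 0.1])
output = layer(hidden, [[0.7, -0.2]], [0.0])
print(len(hidden), len(output))  # 2 hidden neurons feed 1 output neuron
```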

Applying deep reinforcement learning to motor tasks has been far more challenging, however, since the task goes beyond the passive recognition of images and sounds.

“Moving about in an unstructured 3D environment is a whole different ballgame,” said Finn. “There are no labeled directions, no examples of how to solve the problem in advance. There are no examples of the correct solution like one would have in speech and vision recognition programs.”




SOURCE  UC Berkeley

By 33rd Square

Monday, October 21, 2013

Unbounded Robotics UBR-1

 Robotics
Willow Garage has spun off a new company, Unbounded Robotics, which today released its first robot, the one-armed UBR-1. The robot was built both to aid academic researchers and to make business automation more affordable.




The team at Unbounded Robotics today introduced UBR-1, a state-of-the-art ROS-based robot that is a little brother to Willow Garage's PR2 research robot.

UBR-1 aims to do something the high-priced PR2 never could: become a widely adopted platform across both academia and business.

With decades of combined robotic hardware and software experience, the team at Unbounded Robotics, made up largely of Willow Garage alumni, has developed a mobile manipulation platform that pairs advanced software with a sophisticated hardware design.

The one-armed orange and white robot is designed for human-scale tasks and comes pre-installed with Ubuntu Linux LTS and ROS, along with applications such as MoveIt!, navigation, calibration, and joystick teleoperation. The robot offers mobility, dexterity, manipulation, and navigation in a human-scale, ADA-compliant model.

Unbounded Robotics
The Unbounded Robotics team

A spin-off from Willow Garage, the Unbounded Robotics founding team consists of Eric Diehr, Lead Mechanical Engineer; Michael Ferguson, CTO; Derek King, Lead Systems Engineer; and Melonee Wise, CEO.

UBR-1 is priced at $35,000 USD. Unbounded will begin taking orders for the robot soon and plans to ship in summer 2014.

The team has done extensive software integration to improve the user experience; MoveIt! being the highlight of that integration. On the hardware front, UBR-1 requires no calibration at start-up, has a workspace large enough for the robot to reach the ground as well as countertops, and was designed with extensibility in mind so that users can easily develop custom applications.

As for the single arm, Wise told IEEE Spectrum, "If you look at the type of research being done today and the applications that people are using robots for, a lot of them only use one arm. When we talk to professors that are using two arms with their robots, they came back and said, 'When I really think about it, I don't really need that second arm, and I could just buy two if I really needed a second arm.'"

"As Willow Garage alumni, we realize that UBR-1 will undoubtedly be compared to the PR2 robot from Willow Garage. The comparison is logical in some ways: the team's prior experience with ROS and the PR2, along with expertise in advanced mobile manipulation platforms," states the company on its blog.

"As a platform for robotics, we are looking forward to seeing how UBR-1 is put to use in both R&D and commercial markets. Similar to an iPhone without any third-party apps, the greatest contribution of UBR-1 will be the output from the robotics community that is able to take advantage of this sophisticated mobile manipulation platform."



SOURCE  Unbounded Robotics

By 33rd Square

Monday, June 3, 2013


 Robotics
Researcher Scott Niekum has taught Willow Garage's PR2 robot the fine art of IKEA furniture assembly. Using a show-and-tell-style programming method, Niekum successfully built algorithmic task trees to help the robot perform.


During his internship at Willow Garage, Scott Niekum from the University of Massachusetts Amherst developed a learning-from-demonstration system that allows users to show the PR2 robot how to perform complex, multi-step tasks, which the robot can then generalize to new situations. The main test application was the autonomous assembly of simple IKEA flat-pack furniture.

PR2 Robot from Willow Garage


By having a user provide several kinesthetic demonstrations of the task in various situations, demonstrations in which the user physically moves the compliant arms of the robot to complete tasks, the PR2 learns the steps required. A series of algorithms for the ROS operating system are then used to discover repeated structure across the training sessions, resulting in the creation of reusable skills that can be used to reproduce the tasks.
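The structure-discovery step can be caricatured in a few lines (a deliberately simplified sketch with hypothetical step names, not Niekum's actual algorithm): steps that recur across every demonstration become candidate reusable skills.

```python
def repeated_steps(demonstrations):
    """Find steps that recur across every kinesthetic demonstration;
    these shared segments become candidate reusable skills."""
    common = set(demonstrations[0])
    for demo in demonstrations[1:]:
        common &= set(demo)
    # Preserve the order in which the shared steps appear in the first demo.
    return [step for step in demonstrations[0] if step in common]

demos = [
    ["grasp_leg", "align_leg", "insert_peg", "screw"],
    ["grasp_leg", "reposition", "align_leg", "insert_peg", "screw"],
    ["grasp_leg", "align_leg", "insert_peg", "screw", "inspect"],
]
print(repeated_steps(demos))  # ['grasp_leg', 'align_leg', 'insert_peg', 'screw']
```

The real system works on continuous arm trajectories rather than labeled steps, but the idea is the same: repeated structure across sessions marks the task's reusable building blocks.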

The robot is then able to sequence these skills in an intelligent, adaptive way by using classifiers learned from the demonstration data. If the robot happens to make a mistake during execution of the task, the user can stop the robot at any time and provide an interactive correction, showing the robot how to fix the mistake. Impressively, this information is then integrated into the robot's knowledge base, so that it can deal with similar situations in the future.
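The classifier-based sequencing and interactive correction can likewise be sketched in toy form (hypothetical names and a one-dimensional "state", not the published method): a nearest-neighbor classifier picks the next skill, and a correction is simply another labeled example added to the robot's knowledge base.

```python
def nearest_skill(state, examples):
    """Pick the next skill whose demonstrated state is closest (1-nearest-neighbor)."""
    return min(examples, key=lambda ex: abs(ex[0] - state))[1]

# (state, skill) pairs extracted from the demonstrations.
examples = [(0.0, "grasp"), (1.0, "align"), (2.0, "screw")]
assert nearest_skill(0.9, examples) == "align"

# An interactive correction is just another labeled example:
examples.append((1.5, "regrasp"))
assert nearest_skill(1.4, examples) == "regrasp"
```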

PR2 Robot Building Furniture


Niekum's research is available here.



SOURCE  Willow Garage

By 33rd Square

Friday, February 22, 2013


 Robotics
MIT Researcher Annie Holladay has taught her PR2 robot to use both hands when dealing with complicated objects.  Such advanced robotic programming will be necessary if we are ever to have household robots.
Most commercial robotic arms perform what roboticists call "pick and place" tasks: The arm picks up an object in one location and places it in another. General-purpose household robots, however, would have to be able to manipulate objects of any shape, left in any location. Today, commercially available robots don't have anything like the dexterity of the human hand.

At this year's IEEE International Conference on Robotics and Automation, the premier robotics conference, students in the Learning and Intelligent Systems Group at MIT's Computer Science and Artificial Intelligence Laboratory will present a pair of papers showing how household robots could use a little lateral thinking to compensate for their physical shortcomings.

In the video above, MIT senior Annie Holladay demonstrates and describes how her algorithm helps the Willow Garage PR2 robot adapt by using both of its arms instead of just one.




SOURCE  MIT News Office

By 33rd Square



Saturday, November 17, 2012

PR2 Robot
 
Robotics
Robots only do what we program them to do, and therein lies a huge obstacle to the dream of practical robot helpers in the home and workplace. Now, the field of user experience is increasingly being applied to teaching robots and AI to perform tasks with a hands-on method. New research from Willow Garage points to the success of this methodology in making robots easier to use and more effective.
One of the key features of Rethink Robotics' recently released Baxter robot is the way it is programmed. Baxter was designed with the user in mind from the beginning, and that is reflected in the ease with which the light industrial robot can be trained (or programmed).

Now, researchers at Willow Garage are also developing the user experience of training robots. Specifically, they are performing user studies on how to train a PR2 mobile manipulator with little to no instruction.
Maya Cakmak of Georgia Tech envisions robots that can be programmed by their end-users for their own specific needs. This past summer, Cakmak worked on developing a spoken dialog interface that allows users to program new skills by physically moving PR2's two arms and using simple speech commands.
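The flavor of such an interface can be sketched as a tiny state machine (hypothetical commands and poses, not Cakmak's actual system): speech commands bracket a recording session, and each "save pose" snapshots wherever the user has physically placed the arm.

```python
def programming_session(commands):
    """Toy sketch of kinesthetic teaching driven by speech commands."""
    skill, recording = [], False
    for cmd, pose in commands:
        if cmd == "start recording":
            recording, skill = True, []
        elif cmd == "save pose" and recording:
            skill.append(pose)        # snapshot where the user moved the arm
        elif cmd == "done":
            recording = False
    return skill

session = [
    ("start recording", None),
    ("save pose", (0.1, 0.4)),        # user places the arm, then speaks
    ("save pose", (0.3, 0.2)),
    ("done", None),
]
print(programming_session(session))   # → [(0.1, 0.4), (0.3, 0.2)]
```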

Cakmak wants the user experience of someone using a robot for the first time to be like using an appliance. When was the last time you had to read a manual or attend a training course to use a toaster?
Cakmak and her colleagues conducted a user study that replicates this scenario. Participants (15 men and 15 women, ages 19-70) with no prior knowledge of how to program the robot were left alone with it and a combination of supplementary materials. They had to figure out on their own how to program different skills, such as picking up medicine from a cabinet or folding a towel.

Robot Folding laundry
The user study revealed that information presented in the user manual easily gets overlooked and instructional videos are most useful in jump starting the interaction. In addition, trial-and-error plays a crucial role especially for achieving a certain proficiency level.
User studies like this provide important insights into how the interface and supplementary materials should be designed to improve the learnability of end-user-programmable robots.

SOURCE  Willow Garage

By 33rd Square