33rd Square Business Tools: Carnegie Mellon University
Showing posts with label Carnegie Mellon University.

Friday, July 24, 2015

Softer Materials Could Lead to More Human Robots


Robotics


Robots made entirely out of soft materials could be real game-changers. They could integrate more easily with human activities ranging from the ordinary to the exceptional. A group of engineers at Carnegie Mellon University is working to make such soft robots a hard reality.

 



SOURCE  NOVA PBS


By 33rd Square


Monday, February 2, 2015

Uber Starting A Robotics Research Center to Develop Self-Driving Cars

 Self-Driving Cars
Driver-on-demand service Uber is building a robotics research lab in Pittsburgh, near Carnegie Mellon University, to “kickstart autonomous taxi fleet development.”




Uber is reportedly starting up a research center in partnership with Carnegie Mellon University to “kickstart autonomous taxi fleet development,” sources close to the decision have informed TechCrunch. The strategic partnership includes the creation of the Uber Advanced Technologies Center, near the CMU campus.

"The center will focus on the development of key long-term technologies that advance Uber’s mission of bringing safe, reliable transportation to everyone, everywhere," the company posted on its blog.

Uber is hiring more than fifty senior scientists from Carnegie Mellon as well as from the National Robotics Engineering Center, a CMU-affiliated research entity, say insiders.

Carnegie Mellon is well known for its robotics research. According to one source, Uber has “cleaned out” the Robotics Institute, reports TechCrunch.

"When there is no other dude in the car, the cost of taking an Uber anywhere is cheaper. Even on a road trip."


"Uber is a rapidly growing company known for its innovative technology that is radically improving access to transportation for millions of global citizens," stated Andrew Moore, Dean of the School of Computer Science, Carnegie Mellon University. "CMU is renowned for innovations that transform lives. We look forward to partnering with Uber as they build out the Advanced Technologies Center and to working together on real-world applications, which offer very interesting new challenges at the intersections of technology, mobility, and human interactions."

Uber, the driver-on-demand service, has raised more than $4 billion since its 2010 launch and has already announced that it plans to replace its drivers with autonomous vehicles.

“The reason Uber could be expensive is you’re paying for the other dude in the car,” Uber CEO Travis Kalanick told Business Insider. “When there is no other dude in the car, the cost of taking an Uber anywhere is cheaper. Even on a road trip.”


SOURCE  TechCrunch


By 33rd Square

Friday, July 18, 2014

Who Wants A Few Extra Robot Fingers?

 Bionics
Researchers at MIT have developed a device, worn around the wrist, that enhances the grasping motion of the human hand with two robotic fingers.




Researchers at MIT have developed a robot that enhances the grasping motion of the human hand. Like another recent project at the university that gives the user an extra set of arms, the new wrist-mounted robot can help you twist a screwdriver, remove a bottle cap, and peel a banana single-handedly.

The device, worn around one’s wrist, works like two extra fingers adjacent to the pinky and thumb. The device's algorithm enables it to move synchronously with the user’s fingers for grasping objects of various shapes and sizes.

“This is a completely intuitive and natural way to move your robotic fingers,” says Harry Asada, the Ford Professor of Engineering in MIT’s Department of Mechanical Engineering. “You do not need to command the robot, but simply move your fingers naturally. Then the robotic fingers react and assist your fingers.”

"You do not need to command the robot, but simply move your fingers naturally. Then the robotic fingers react and assist your fingers."


Asada says that, with some training, people may come to perceive the extra bionic fingers as part of their body — “like a tool you have been using for a long time, you feel the robot as an extension of your hand.” He hopes that the two-fingered robot may assist people with limited dexterity in performing routine household tasks, such as opening jars and lifting heavy objects. He and graduate student Faye Wu presented a paper on the robot this week at the Robotics: Science and Systems conference in Berkeley, California.

The robot, which the researchers have dubbed “supernumerary robotic fingers,” consists of actuators linked together to exert forces as strong as those of human fingers during a grasping motion.

To develop an algorithm to coordinate the robotic fingers with a human hand, the researchers first looked to the physiology of hand gestures, learning that a hand’s five fingers are highly coordinated.

The researchers hypothesized that a similar “biomechanical synergy” may exist not only among the five human fingers, but also among seven. To test the hypothesis, Wu wore a glove outfitted with multiple position-recording sensors, and attached to her wrist via a light brace. She then scavenged the lab for common objects, such as a box of cookies, a soda bottle, and a football.

Wu grasped each object with her hand, then manually positioned the robotic fingers to support the object. She recorded both hand and robotic joint angles multiple times with various objects, then analyzed the data, and found that every grasp could be explained by a combination of two or three general patterns among all seven fingers.

The researchers used this information to develop a control algorithm to correlate the postures of the two robotic fingers with those of the five human fingers. Asada explains that the algorithm essentially “teaches” the robot to assume a certain posture that the human expects the robot to take.
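The article doesn't give the team's actual implementation, but the pipeline it describes (record seven-finger joint angles, find the two or three dominant grasp patterns, then map live human posture to robotic posture) matches a principal-component "synergy" model. A minimal sketch in Python, with invented degree-of-freedom counts and random placeholder data standing in for the glove recordings:

```python
import numpy as np

# Placeholder for the glove recordings described above: each row is one
# grasp, holding joint angles for the five human fingers (15 values here,
# an assumed DOF count) followed by the two robotic fingers (4 values).
rng = np.random.default_rng(0)
demos = rng.standard_normal((200, 19))

mean = demos.mean(axis=0)

# Principal components of the combined seven-finger posture; per the
# finding above, two or three "synergy" patterns explain the grasps.
_, _, vt = np.linalg.svd(demos - mean, full_matrices=False)
S = vt[:3].T                      # 19 x 3 synergy basis
S_human, S_robot = S[:15], S[15:]

def robot_pose(human_angles):
    """Infer robotic finger angles from live human joint angles: solve
    for the synergy coefficients that best explain the human posture,
    then reconstruct the robotic portion of the full posture."""
    coeffs, *_ = np.linalg.lstsq(S_human, human_angles - mean[:15], rcond=None)
    return S_robot @ coeffs + mean[15:]
```

With real recordings in place of the random data, this is one plausible way the robot could "assume a certain posture that the human expects," though the team's published method may differ in its details.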

Down the road, Asada says the robot may also be scaled down to a less bulky form. “This is a prototype, but we can shrink it down to one-third its size, and make it foldable,” Asada says. “We could make this into a watch or a bracelet where the fingers pop up, and when the job is done, they come back into the watch. Wearable robots are a way to bring the robot closer to our daily life.”




SOURCE  MIT

By 33rd Square

Sunday, November 24, 2013

Never Ending Image Learner (NEIL)


 Artificial Intelligence
Running since July of this year, Carnegie Mellon University's computer vision system NEIL, the Never Ending Image Learner, has analyzed over five million images, labeled half a million of them and learned 3,000 common-sense relationships.




The Never Ending Image Learner (NEIL) at Carnegie Mellon University is running 24 hours a day, searching the internet for images and doing its best to understand them on its own and, as it builds a growing visual database, gathering common sense on a massive scale.

NEIL, which is partially funded by Google, leverages recent advances in computer vision that enable computer programs to identify and label objects in images, to characterize scenes and to recognize attributes, such as colors, lighting and materials, all with a minimum of human supervision. In turn, the data it generates will further enhance the ability of computers to understand the visual world.

But NEIL also makes associations between these things to obtain common sense information that people just seem to know without ever saying — that cars often are found on roads, that buildings tend to be vertical and that ducks look sort of like geese. Based on text references, it might seem that the color associated with sheep is black, but people — and NEIL — nevertheless know that sheep typically are white.
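The post doesn't spell out how NEIL extracts these relationships, but the simplest version of this kind of association mining is co-occurrence statistics over the labels its detectors assign. A toy sketch under that assumption (the label data is invented for illustration):

```python
from collections import Counter
from itertools import combinations

# Invented stand-in for NEIL's output: for each analyzed image, the set
# of object/scene labels its detectors assigned.
image_labels = [
    {"car", "road"},
    {"car", "road", "building"},
    {"duck", "water"},
    {"zebra", "savannah"},
    # ... millions more in the real system
]

pair_counts = Counter()
label_counts = Counter()
for labels in image_labels:
    label_counts.update(labels)
    pair_counts.update(combinations(sorted(labels), 2))

def association_score(a, b):
    """How often two labels co-occur relative to how often each appears.
    A consistently high score for ("car", "road") is the kind of
    statistical regularity that becomes a common-sense relationship."""
    pair = tuple(sorted((a, b)))
    return pair_counts[pair] / min(label_counts[a], label_counts[b])
```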

"Images are the best way to learn visual properties," said Abhinav Gupta, assistant research professor in Carnegie Mellon's Robotics Institute. "Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well."

A computer cluster has been running the NEIL program since late July and already has analyzed three million images, identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images. It has connected the dots to learn 2,500 associations from thousands of instances.


The public can now view NEIL's findings at the project website, www.neil-kb.com.

The research team, including Xinlei Chen, a Ph.D. student in CMU's Language Technologies Institute, and Abhinav Shrivastava, a Ph.D. student in robotics, will present its findings on Dec. 4 at the IEEE International Conference on Computer Vision in Sydney, Australia.

One motivation for the NEIL project is to create the world's largest visual structured knowledge base, where objects, scenes, actions, attributes and contextual relationships are labeled and catalogued.

"What we have learned in the last 5-10 years of computer vision research is that the more data you have, the better computer vision becomes," Gupta said.

Some projects, such as ImageNet and Visipedia, have tried to compile this structured data with human assistance. But the scale of the Internet is so vast — Facebook alone holds more than 200 billion images — that the only hope to analyze it all is to teach computers to do it largely by themselves.

Shrivastava said NEIL can sometimes make erroneous assumptions that compound mistakes, so people need to be part of the process. A Google Image search, for instance, might convince NEIL that "pink" is just the name of a singer, rather than a color.

"People don't always know how or what to teach computers," he observed. "But humans are good at telling computers when they are wrong."

People also tell NEIL what categories of objects, scenes, etc., to search and analyze. But sometimes, what NEIL finds can surprise even the researchers. It can be anticipated, for instance, that a search for "apple" might return images of fruit as well as laptop computers. But Gupta and his landlubbing team had no idea that a search for F-18 would identify not only images of a fighter jet, but also of F18-class catamarans.

As its search proceeds, NEIL develops subcategories of objects — tricycles can be for kids, for adults and can be motorized, or cars come in a variety of brands and models. And it begins to notice associations — that zebras tend to be found in savannahs, for instance, and that stock trading floors are typically crowded.

NEIL is computationally intensive, the research team noted. The program runs on two clusters of computers that include 200 processing cores.



SOURCE  Carnegie Mellon University

By 33rd Square

Friday, October 4, 2013

IBM Cognitive Computing

 Artificial Intelligence
IBM is working to extend the capabilities of cognitive computing systems like Watson, and is engaging the support of four university research programs to help. The project partners include Carnegie Mellon University, MIT, New York University and Rensselaer Polytechnic Institute.




Computer maker IBM has announced a collaborative research initiative with four leading universities to advance the development and deployment of cognitive computing systems – systems like IBM Watson that can learn, reason and help human experts make complex decisions involving extraordinary volumes of fast-moving data.

Faculty at the four schools -- Carnegie Mellon University, the Massachusetts Institute of Technology, New York University and Rensselaer Polytechnic Institute -- will study enabling technologies and methods for building a new class of systems that better enable people to interact with Big Data in what IBM has identified as a new era of computing.

"IBM has demonstrated with Watson that cognitive computing is real and delivering value today," said Zachary Lemnios, vice president of strategy for IBM Research. "It is already starting to transform the ways clients navigate big data and is creating new insights in healthcare, how research can be conducted and how companies can support their customers. But much additional research is needed to identify the systems, architectures and process technologies to support a new computing model that enables systems and people to work together across any domain of expertise."



The research initiative was announced at a colloquium held at the Thomas J. Watson Research Center attended by nearly 200 leading academics, IBM clients and IBM researchers to begin a dialog that deepens the understanding of cognitive systems and identifies additional areas of research to pursue. These initial university collaborators will help lay the foundation for a Cognitive Systems Institute that IBM envisions will comprise universities, research institutes and IBM clients.

The initial research topics for exploration are:

- MIT - How socio-technical tools and applications can boost the collective performance of moderate-sized groups of humans engaged in collaborative tasks such as decision making. 
- RPI - How advances in processing power, data availability, and algorithmic techniques can enable the practical application of a variety of artificial intelligence techniques. 
- CMU - How systems should be architected to support intelligent, natural interaction with all kinds of information in support of complex human tasks (it follows that this initiative will involve aspects of Carnegie Mellon's advanced robotics research). 
- NYU - How deep learning is impacting many areas of science where automated pattern recognition is essential.

"I believe that cognitive systems technologies will make it possible to connect people and computers in new ways so that--collectively--they can act more intelligently than any person, group, or computer has ever done before," said Thomas Malone, Director of the MIT Center for Collective Intelligence and the Patrick J. McGovern Professor of Management, MIT Sloan School of Management. "I am excited to be working with IBM and these other universities to understand better how to harness these new forms of collective intelligence."

"With the explosion of information and the advances in semantic data tools, we are excited to participate in this collaboration to bring the best of human and computing capabilities together in this new era of cognitive systems," said Selmer Bringsjord, Professor and Head of the Department of Cognitive Science at Rensselaer Polytechnic Institute.

"The cost-effective creation of cognitive systems for complex analytic tasks will require fundamental advances in the rapid construction, optimization, and constant adaptation of large ensembles of analytic components. Personalized information agents will rapidly adapt and optimize their task performance based on direct interaction with the end user. I am excited that CMU will be teaming with IBM, MIT, RPI and NYU to explore the future of software architecture for cognitive systems," said Eric Nyberg, Professor at the Language Technologies Institute at Carnegie Mellon University.

"NYU's research into neural networks has the potential to revolutionize how we think about machines and the role they play in our everyday lives. NYU has a long history of helping create some of the work's most important technological breakthroughs, so we are honored to be among the universities collaborating on this research initiative into cognitive computing systems," saidPaul Horn, Senior Vice Provost for Research at New York University. "As a research university at the forefront of technology and innovation, we look forward to working with IBM and our fellow institutions to promote basic research into the next era of computing."





SOURCE  IBM

By 33rd Square

Friday, June 14, 2013

Carnegie Mellon's Real-Life Marauder's Map

 Computer Vision
Researchers at Carnegie Mellon University have developed a method for tracking the locations of multiple individuals in complex, indoor settings using a network of video cameras, creating something similar to the fictional Marauder’s Map used by Harry Potter to track comings and goings at the Hogwarts School.




Researchers at Carnegie Mellon University have developed a method for tracking the locations of multiple individuals in complex, indoor settings using a network of video cameras, creating something similar to the fictional Marauder's Map used by Harry Potter to track comings and goings at the Hogwarts School.

The method used in the research was able to automatically follow the movements of 13 people within a nursing home, even though individuals sometimes slipped out of view of the cameras. None of Potter's magic was needed to track them for prolonged periods; rather, the researchers made use of multiple cues from the video feed: apparel color, person detection, trajectory and, perhaps most significantly, facial recognition.

Multi-camera, multi-object tracking has been an active field of research for a decade, but automated techniques have focused only on well-controlled lab environments. The Carnegie Mellon team, by contrast, proved their technique with actual residents and employees in a nursing facility—with camera views compromised by long hallways, doorways, people mingling in the hallways, variations in lighting and too few cameras to provide comprehensive, overlapping views.

The Carnegie Mellon algorithm significantly outperformed two of the leading algorithms in multi-camera, multi-object tracking: it located individuals within one meter of their actual position 88 percent of the time, compared with 35 percent and 56 percent for the other algorithms.

The researchers—Alexander Hauptmann, principal systems scientist in the Computer Science Department (CSD); Shoou-I Yu, a Ph.D. student in the Language Technologies Institute; and Yi Yang, a CSD post-doctoral researcher—will present their findings June 27 at the Computer Vision and Pattern Recognition conference in Portland, Ore.

Though Harry Potter could activate the Marauder's Map only by first solemnly swearing "I am up to no good," the Carnegie Mellon researchers developed their tracking technique as part of an effort to monitor the health of nursing home residents.

"The goal is not to be Big Brother, but to alert the caregivers of subtle changes in activity levels or behaviors that indicate a change of health status," Hauptmann said. All of the people in this study consented to being tracked.

These automated tracking techniques also would be useful in airports, public facilities and other areas where security is a concern. Despite the importance of cameras in identifying perpetrators following this spring's Boston Marathon bombing and the 2005 London bombings, much of the video analysis necessary for tracking people continues to be done manually, Hauptmann noted.

The CMU work on monitoring nursing home residents began in 2005 as part of a National Institutes of Health-sponsored project called CareMedia, which is now associated with the Quality of Life Technology Center, a National Science Foundation engineering research center at CMU and the University of Pittsburgh.

"We thought it would be easy," Hauptmann said of multi-camera tracking, "but it turned out to be incredibly challenging."

Something as simple as tracking based on color of clothing proved difficult, for instance, because the same color apparel can appear different to cameras in different locations, depending on variations in lighting. Likewise, a camera's view of an individual can often be blocked by other people passing in hallways, by furniture and when an individual enters a room or other area not covered by cameras, so individuals must be regularly re-identified by the system.

Face detection helps immensely in re-identifying individuals on different cameras. But Yang noted that faces can be recognized in less than 10 percent of the video frames. So the researchers developed mathematical models that enabled them to combine information, such as appearance, facial recognition and motion trajectories.
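The researchers' actual models aren't detailed here, but the fusion idea can be sketched as a weighted match score between an existing track and a new detection, with the face cue used only when a face is visible. All weights, distance functions and data layouts below are invented for illustration:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class Track:
    person_id: str
    color_hist: np.ndarray  # appearance: normalized clothing-color histogram
    position: np.ndarray    # trajectory: last known (x, y) in floor coordinates
    face_vec: np.ndarray    # identity: reference face embedding

def match_score(track, det_color, det_pos, det_face=None):
    """Weighted combination of cues for assigning a new detection to a
    track. The heavy face weight reflects the finding that facial
    recognition helped most; faces are recognizable in under 10% of
    frames, so the face term is skipped when no face is detected."""
    appearance = 1.0 - 0.5 * np.abs(track.color_hist - det_color).sum()
    motion = float(np.exp(-np.linalg.norm(track.position - det_pos) / 5.0))
    face = max(float(det_face @ track.face_vec), 0.0) if det_face is not None else 0.0
    return 0.25 * appearance + 0.25 * motion + 0.5 * face
```

A detection would then be assigned to the track with the highest score, subject to a minimum threshold so that people newly entering the scene start fresh tracks.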

Using all of the information is key to the tracking process, but Yu said facial recognition proved to be the greatest help. When the researchers removed facial recognition information from the mix, their tracking performance on the nursing home data dropped from 88 percent to 58 percent, not much better than one of the existing tracking algorithms.

The nursing home video analyzed by the researchers was recorded in 2005 using 15 cameras; the recordings are just more than six minutes long.

Further work will be necessary to extend the technique during longer periods of time and enable real-time monitoring. The researchers also are looking at additional ways to use video to monitor resident activity while preserving privacy, such as by only recording the outlines of people together with distance information from depth cameras similar to the Microsoft Kinect.



SOURCE  Carnegie Mellon University

By 33rd Square

Monday, October 15, 2012

Baxter introduced by Rodney Brooks
 
Robotics
Recently, Rethink Robotics founder Rodney Brooks visited Carnegie Mellon University to discuss his company and its first product, Baxter.
Rethink Robotics has been developing a new class of industrial robot called Baxter. Recently, Rethink Robotics founder Rodney Brooks visited Carnegie Mellon University to discuss his company and its first product.

The transition from mainframes to PCs completely transformed office work, and then transformed how we access information in our daily lives. With mainframes, only specialists had direct access to computation. With the PC, ordinary people were empowered to control computation and to use it for their own purposes. The Rethink Robotics Baxter robot is aimed at an analogous transformation: from current industrial robots, which are installed, integrated and controlled by specialists, to a situation where anybody who can work on a factory floor can install a robot and have it doing useful work within an hour.

For Baxter, the important metrics are adaptability, flexibility, ease of use and low cost. This talk shows how Brooks and his team at Rethink defined these metrics and drove the design of the robot, and of its own manufacturing process, to meet them.

"We may end up being the Commodore 64 of these types of robots, but I am confident in 25 years these robots will be everywhere." says Brooks.

Rodney Brooks and Baxter
Image Source:  David Yellen for IEEE Spectrum

Rodney Brooks is the Panasonic Professor of Robotics (emeritus) at MIT. He is a robotics entrepreneur and Founder, Chairman and CTO of Rethink Robotics (formerly Heartland Robotics). He is also a Founder, former Board Member (1990 - 2011) and former CTO (1990 - 2008) of iRobot Corp. Dr. Brooks is the former Director (1997 - 2007) of the MIT Artificial Intelligence Laboratory and then the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL). He received degrees in pure mathematics from the Flinders University of South Australia and a Ph.D. in Computer Science from Stanford University in 1981. He held research positions at Carnegie Mellon University and MIT, and a faculty position at Stanford before joining the faculty of MIT in 1984. He has published many papers in computer vision, artificial intelligence, robotics, and artificial life.

Dr. Brooks served for many years as a member of the International Scientific Advisory Group (ISAG) of National Information and Communication Technology Australia (NICTA), and on the Global Innovation and Technology Advisory Council of John Deere & Co. He is currently an Xconomist at Xconomy and a regular contributor to the Edge. Dr. Brooks is a Member of the National Academy of Engineering (NAE), a Founding Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Fellow of the American Academy of Arts & Sciences (AAAS), a Fellow of the American Association for the Advancement of Science (the other AAAS), a Fellow of the Association for Computing Machinery (ACM), a Corresponding Member of the Australian Academy of Science (AAS) and a Foreign Fellow of the Australian Academy of Technological Sciences and Engineering (ATSE). He won the Computers and Thought Award at the 1991 IJCAI (International Joint Conference on Artificial Intelligence). He has been the Cray lecturer at the University of Minnesota, the Mellon lecturer at Dartmouth College, and the Forsythe lecturer at Stanford University. He was co-founding editor of the International Journal of Computer Vision and is a member of the editorial boards of various journals including Adaptive Behavior, Artificial Life, Applied Artificial Intelligence, Autonomous Robots and New Generation Computing.




SOURCE  CMU Robotics

By 33rd Square