What Can Smart Machines Still Learn From People

Wednesday, May 21, 2014

Gary Marcus spoke recently at the Allen Institute for Artificial Intelligence on developments in artificial intelligence, presenting a breakdown of some of the failures of deep learning and of how Watson really won on Jeopardy!




For nearly half a century, artificial intelligence has always seemed as if it were just beyond reach, rarely more, and rarely less, than two decades away. Between Watson, Deep Blue, and Siri, there can be little doubt that progress in AI has been immense, yet "strong AI" in some ways still seems elusive.

In this talk at the Allen Institute for Artificial Intelligence, Gary Marcus gives a cognitive scientist's perspective on AI. What have we learned, and what are we still struggling with? Is there anything that programmers of AI can still learn from studying the science of human cognition?

In the talk, Marcus looks at the history of artificial intelligence and touches on the topics of behaviorism, whole brain emulation, and more.

For Marcus, the reason artificial general intelligence (AGI) has so far failed to materialize is that the brain is so complex that no 'silver bullet' algorithm exists that will duplicate our thinking. Marcus also thinks that the current enthusiasm for deep learning, championed by Andrew Ng, will soon be dampened by the realization that it "is not the panacea that everyone seems to think."

The hierarchical feature detectors used by deep learning algorithms do not go far enough, according to Marcus. Jeff Hawkins' Numenta applies such features in its software. Deep learning is poor at natural language understanding, says Marcus: "Using classifier models to capture language is like forcing a square peg into a round hole." The present models, he claims, classify words individually but fail to associate them in their context.
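Marcus's point about classifiers losing context can be illustrated with a minimal sketch (not from the talk): a bag-of-words representation, the kind of feature set many simple text classifiers are built on, counts words individually and so cannot tell two sentences apart when they differ only in word order.

```python
from collections import Counter

def bag_of_words(sentence):
    """Represent a sentence as unordered word counts, discarding word order."""
    return Counter(sentence.lower().split())

a = bag_of_words("man bites dog")
b = bag_of_words("dog bites man")

# The two sentences mean very different things, but a classifier fed
# bag-of-words features sees identical inputs for both:
print(a == b)  # True
```

Any model built on such features, however sophisticated, inherits this blindness to how words relate to one another in context.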

"Intelligent creatures often deal with novel cases they haven't encountered before, and not every answer is spelled out on the Web."


Even IBM's Watson, now touted as one of the pinnacles of AI, is in Marcus's view highly limited. Watson does a lot of machine learning, says Marcus, but "94.7% of Jeopardy! answers are titles of Wikipedia pages." In Marcus's opinion, Watson is a very good information retrieval system, and that retrieval was the secret to the system's victory on the television game show, but it is not the cognitive computing system the company claims it to be.
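To make the information-retrieval point concrete, here is a deliberately toy sketch (my illustration, not Watson's actual architecture): if nearly every answer is a Wikipedia page title, a crude baseline can simply rank candidate titles by how many words they share with the clue, with no understanding involved.

```python
def overlap_score(clue, title):
    """Count words the clue and a candidate title have in common."""
    clue_words = set(clue.lower().split())
    title_words = set(title.lower().split())
    return len(clue_words & title_words)

def retrieve_answer(clue, candidate_titles):
    """Pick the candidate page title with the highest word overlap."""
    return max(candidate_titles, key=lambda t: overlap_score(clue, t))

titles = ["Abraham Lincoln", "Lincoln, Nebraska", "Gettysburg Address"]
clue = "Abraham was this president's first name"
best = retrieve_answer(clue, titles)
print(best)  # Abraham Lincoln
```

Watson's pipeline is of course vastly more elaborate, but Marcus's claim is that it remains this kind of retrieval and ranking at heart, rather than reasoning over what the clue means.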

"Watson can't understand a textbook, because it doesn't have inferential abilities," says Marcus.  It can literally bring up the words on a page, but it does not understand them.  "Intelligent creatures often deal with novel cases they haven't encountered before, and not every answer is spelled out on the Web."

"I think 85% of what's going on in AI today are vulnerable to this issue," claims Marcus.


For the conclusion of his talk, Marcus points to some directions for AI to follow, based on the human mind and cognitive science. The human ability for inference is the first trait Marcus suggests AI could benefit from. Others are: the human ability to cope with inconsistency; our ability to infer based on relevance; our ability to use generic data; our handling of causal principles; our knack for approximation and shortcuts; our ability to deal with incomplete information; our use of multiple learning mechanisms, not just one; and common sense. Also important for Marcus, as a consideration for AI development, is the notion that brains aren't blank slates. He cites Elizabeth Spelke:
If children are endowed [innately] with abilities to perceive objects, persons, sets, and places, then they may use their perceptual experience to learn about the properties and behaviors of such entities...It is far from clear how children could learn anything about the entities in a domain, however, if they could not single out those entities in their surroundings.  
Robotics is another area where starting from a developmental blank slate has fallen flat in research. Marcus suggests researchers look to Kant's The Critique of Pure Reason for foundational bases of AI and robotics.

Marcus also touches on the need for a re-examination of Cyc, the artificial intelligence project that attempts to assemble a comprehensive ontology and knowledge base of everyday common sense knowledge, with the goal of enabling AI applications to perform human-like reasoning.

Marcus is the author of Kluge: The Haphazard Construction of the Human Mind, Director of the NYU Center for Language and Music, and Professor of Psychology at New York University. He is also the editor of The Norton Psychology Reader. Marcus's research on developmental cognitive neuroscience has been published in over forty articles in leading journals such as Science, Nature, Cognition, Cognitive Psychology, and the Monographs of the Society for Research in Child Development.

In 1996 he won the Robert L. Fantz award for new investigators in cognitive development, and in 2002-2003 he was a Fellow of the Center for Advanced Study in the Behavioral Sciences at Stanford.


SOURCE  Allen Institute for Artificial Intelligence

By 33rd Square
