Artificial Intelligence
Gary Marcus spoke recently at the Allen Institute for Artificial Intelligence on developments in artificial intelligence, offering a breakdown of some of the failures of deep learning and an account of how Watson really won on Jeopardy.
For nearly half a century, artificial intelligence has seemed as if it were just beyond reach, rarely more, and rarely less, than two decades away. Between Watson, Deep Blue, and Siri, there can be little doubt that progress in AI has been immense, yet "strong AI" in some ways still seems elusive.
In this talk at the Allen Institute for Artificial Intelligence, Gary Marcus gives a cognitive scientist's perspective on AI. What have we learned, and what are we still struggling with? Is there anything that programmers of AI can still learn from studying the science of human cognition?
In the talk, Marcus looks at the history of artificial intelligence and touches on the topics of behaviorism, whole brain emulation, and more.
For Marcus, the failure so far to achieve artificial general intelligence (AGI) stems from the fact that the brain is so complex that no single "silver bullet" algorithm exists to duplicate our thinking. Marcus also believes that the current enthusiasm for deep learning, championed by Andrew Ng, will soon be dampened by the realization that it "is not the panacea that everyone seems to think."
"Intelligent creatures often deal with novel cases they haven't encountered before, and not every answer is spelled out on the Web." |
"Watson can't understand a textbook, because it doesn't have inferential abilities," says Marcus. It can literally bring up the words on a page, but it does not understand them. "Intelligent creatures often deal with novel cases they haven't encountered before, and not every answer is spelled out on the Web."
"I think 85% of what's going on in AI today is vulnerable to this issue," claims Marcus.
For the conclusion of his talk, Marcus points to some directions for AI to follow, based on the human mind and cognitive science. The human ability for inference is the first trait Marcus suggests AI could benefit from. Others are the human ability to cope with inconsistency; our ability to infer based on relevance; our ability to use generic data; our handling of causal principles; our knack for approximation and shortcuts; our ability to deal with incomplete information; our use of multiple learning mechanisms, not just one; and common sense. Also important for Marcus is the notion that brains aren't blank slates, a key consideration for AI development. He cites Elizabeth Spelke:
If children are endowed [innately] with abilities to perceive objects, persons, sets, and places, then they may use their perceptual experience to learn about the properties and behaviors of such entities... It is far from clear how children could learn anything about the entities in a domain, however, if they could not single out those entities in their surroundings.
Marcus also touches on the need for a re-examination of Cyc, the artificial intelligence project that attempts to assemble a comprehensive ontology and knowledge base of everyday common sense knowledge, with the goal of enabling AI applications to perform human-like reasoning.
Marcus is the author of Kluge: The Haphazard Construction of the Human Mind, Director of the NYU Center for Language and Music, and Professor of Psychology at New York University. He is also the editor of The Norton Psychology Reader. Marcus's research on developmental cognitive neuroscience has been published in over forty articles in leading journals such as Science, Nature, Cognition, Cognitive Psychology, and the Monographs of the Society for Research in Child Development.
In 1996 he won the Robert L. Fantz award for new investigators in cognitive development, and in 2002-2003 he was a Fellow of the Center for Advanced Study in the Behavioral Sciences at Stanford.
SOURCE Allen Institute for Artificial Intelligence
By 33rd Square