Purdue's Jennifer Neville Discusses The Easy and Hard Problems of AI

Tuesday, January 3, 2017




Jennifer Neville, associate professor and Purdue's Miller Family Chair of Computer Science and Statistics, discussed the recent dramatic progress of artificial intelligence research at the Dawn or Doom symposium late last year.


Late last year, Jennifer Neville, associate professor and Purdue's Miller Family Chair of Computer Science and Statistics, presented a lecture, "AI Easy vs. AI Hard," on the recent dramatic progress of artificial intelligence research at the Dawn or Doom symposium.

"It is really using many of the simple foundational methods that we've built up in the area of machine learning and AI."
Dawn or Doom explores the societal risks and rewards of emerging technologies such as robotics, artificial intelligence, the 'gig economy,' and virtual reality.

Neville's research focuses on data mining and machine learning techniques for relational data. In relational domains such as social network analysis, citation analysis, epidemiology, fraud detection, and web analytics, there is often limited information about any one entity in isolation; instead, it is the connections among entities that are of crucial importance to pattern discovery.

Relational data mining techniques move beyond the conventional analysis of entities in isolation to analyze networks of interconnected entities, exploiting the connections among entities to improve both descriptive and predictive models. Neville's research interests lie in the development and analysis of relational learning algorithms and the application of those algorithms to real-world tasks.

Game Tree Sizes

In her lecture, Neville traces the scholarly foundation of artificial intelligence, including the historical developments that have led to current exponential breakthroughs in the field, from early perceptrons to the deep neural networks used today.
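That starting point, the perceptron, is simple enough to fit in a few lines. As an illustrative sketch (the data and parameters here are hypothetical, not from Neville's lecture), a single perceptron can learn a linearly separable function such as logical AND using the classic error-correction update rule:

```python
# Minimal perceptron learning the AND function (illustrative sketch).

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: w += lr * (target - output) * x."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            output = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            error = target - output
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

# AND truth table: only input (1, 1) maps to 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
               for x, _ in data]
print(predictions)  # [0, 0, 0, 1]
```

A single perceptron famously cannot learn non-separable functions like XOR, which is part of why the field moved on to the multi-layer deep networks Neville describes.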

She traces how game playing has been a hallmark of AI success: from the University of Alberta's Chinook checkers program in 1994, to the mastery of backgammon by IBM's TD-Gammon in 1995, to IBM Deep Blue's historic 1997 win over Garry Kasparov in chess, through to last year's stunning defeat of 9-dan professional Lee Sedol by Google's AlphaGo at the ultra-complex game of Go.

Using reinforcement learning and other methods, the artificial intelligence systems of today are achieving stunning results. "It is not a simple approach," Neville states when describing the neural network used by AlphaGo. "It is a very complicated system used to solve this problem, but, it is really using many of the simple foundational methods that we've built up in the area of machine learning and AI."
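The reinforcement learning Neville refers to can be illustrated far more simply than AlphaGo's deep networks. The following toy sketch (a hypothetical five-cell corridor, not anything from the lecture) uses the same core idea, tabular Q-learning, where an agent improves its value estimates from trial-and-error reward:

```python
import random

# Tabular Q-learning on a toy 5-cell corridor: the agent starts at
# cell 0 and earns +1 for reaching cell 4. Core update rule:
#   Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
random.seed(0)
N_STATES = 5
ACTIONS = [-1, +1]  # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy should be "always move right" (+1).
policy = [max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)]
print(policy)
```

AlphaGo replaces this small lookup table with deep neural networks and combines the learned values with tree search, but the underlying learn-from-reward loop is the same foundational method.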

Tay.AI


Interestingly, Neville contrasts the success of AlphaGo with Microsoft's embarrassing Twitter chatbot, Tay.AI, which was released to the public shortly afterward last year but had to be withdrawn when the bot quickly became misogynistic, lewd, and racist. Tay's rapid degradation turned into a bit of a joke, but it raises serious questions about the safety of AI.

Neville contrasts the open-ended conversations and subtle feedback needed to train Tay with the more structured interactions of the gaming systems. Other dialog systems, such as 1995's ALICE conversational program, used natural language processing and heuristic pattern matching effectively, and IBM's Watson proved adept at understanding many of the subtleties of language. Even the Eugene Goostman chatbot arguably passed the Turing Test, although it relied on perceived cheats to do so.
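The heuristic pattern matching behind ALICE-style chatbots can be sketched in a few lines. The rules below are hypothetical stand-ins, not ALICE's actual AIML categories, but they show the rule-plus-template mechanism: match an input pattern, capture a fragment, and slot it into a canned response.

```python
import re

# ALICE-style heuristic pattern matching (hypothetical rules).
# Each rule pairs a regex with a response template; captured groups
# are substituted into the template.
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bi feel (\w+)", re.I),     "Why do you feel {0}?"),
    (re.compile(r"\b(hello|hi)\b", re.I),     "Hello! How are you today?"),
]
FALLBACK = "Tell me more."

def respond(utterance):
    """Return the first matching rule's template, filled with captures."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("my name is Ada"))   # Nice to meet you, Ada.
print(respond("I feel happy"))     # Why do you feel happy?
print(respond("quantum physics"))  # Tell me more.
```

Such closed rule sets are exactly what makes these systems safe but brittle: every possible behavior is bounded by the rules the author wrote, in contrast to the open learning loop that left Tay exposed.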

What Tay suffered from, argues Neville, was susceptibility to a coordinated attack by a subset of users. "The big difference is this chatbot system is really an open system, not a closed system: there are no clear bounds to the types of interactions, or the types of behaviors, that they might see from people, and this makes them very vulnerable to attack."

Likewise, Neville attributes Microsoft's success with XiaoIce in China, and its failure with the very similarly programmed Tay.AI, to their environments: the bounded space of China's regulated Internet taught XiaoIce to be a pleasant system, whereas exposing a childlike brain to an uncontrolled Twitter led Tay to become a horrible conversationalist. "This is very complicated to detect algorithmically," notes Neville.

For the future development of artificial general intelligence, research will need to keep pushing into open, unbounded spaces, states Neville. She projects that social science, and even improvisational theatre, will therefore become a big part of creating algorithms.





SOURCE  Dawn or Doom


By 33rd Square


