The Long-Term Future of Artificial Intelligence

Friday, July 10, 2015


Long-time AI researcher Stuart Russell says that for most of the history of artificial intelligence, the existential threat it presents has been largely ignored by serious researchers, but developments like Watson and DeepMind have led more and more people to raise warnings.





In 1965, I. J. Good's article Speculations Concerning the First Ultraintelligent Machine included the following remark:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

According to long-time AI researcher Stuart Russell, for most of the history of artificial intelligence, this issue has been ignored. Indeed, Good himself continues, "It is curious that this point is made so seldom outside of science fiction."

As the capabilities of AI systems improve, however, and as the transition of AI into broad areas of human life leads to huge increases in research investment, it is inevitable that the field will have to begin to take itself seriously. As Russell observes, the field has operated for over 50 years on one simple assumption: the more intelligent, the better. To this must be conjoined an overriding concern for the benefit of humanity.

The Long-Term Future of Artificial Intelligence

In the talk above, Russell enumerates some of the recent advances in artificial intelligence, such as self-driving cars, robots that fold clothes, and systems that can automatically caption images. "When you see this kind of thing happening in systems that have essentially been trained with very little manual structuring or programming, then this is the kind of thing that causes people to have the 'holy cow!' moment," states Russell.



For Russell, the argument is simple: AI research is likely to succeed, and unconstrained superintelligent AI would bring both huge benefits and huge risks.

"What can we do now to improve the chances of reaping the benefits and avoiding the risks?" asks Russell.

Organizations are already considering these questions, including the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge, the Machine Intelligence Research Institute in Berkeley, and the Future of Life Institute at Harvard/MIT.

Russell serves on the Advisory Boards of CSER and FLI.

Stuart Russell hates when AI is likened to the Terminator
He likens the situation to nuclear fusion: just as fusion researchers had to solve the problem of containing fusion reactions, AI researchers must solve the problem of control as the field matures. The research questions are beginning to be formulated, and they range from the highly technical (foundational issues of rationality and utility, provable properties of agents, etc.) to the broadly philosophical.

In the abstract for his talk, Russell writes: "The news media in recent months have been full of dire warnings about the risk that AI poses to the human race, coming from well-known figures such as Stephen Hawking, Elon Musk, and Bill Gates. Should we be concerned? If so, what can we do about it? While some in the mainstream AI community dismiss these concerns, I will argue instead that a fundamental reorientation of the field is required."

Russell is one of the leading figures in modern artificial intelligence. He is a professor of computer science and founder of the Center for Intelligent Systems at the University of California, Berkeley, and the author of Artificial Intelligence: A Modern Approach, widely regarded as one of the standard textbooks in the field. Russell is on the Scientific Advisory Board of the Future of Life Institute and the Advisory Board of the Centre for the Study of Existential Risk.


SOURCE  CRASSH Cambridge

By 33rd Square
