Intelligence Explosion
For Daniel Dewey, the possibility of an intelligence explosion, the rapid advance of recursively self-improving artificial intelligence, is one of the most important yet understudied phenomena in our potential futures.
In Dewey's view, the long-term future of AI almost certainly points to an intelligence explosion. An intelligence explosion is a process in which an intelligent machine devises improvements to itself, then the improved machine improves itself, and so on in a chain reaction, or "explosion".
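As a purely illustrative sketch (not something from Dewey's talk), the feedback loop can be caricatured in a few lines of Python: if each generation's improvement to its successor scales with its own current capability, growth is slow at first and then runs away.

```python
# Toy model of recursive self-improvement (illustrative only; the
# starting capability and feedback constant are assumptions, not
# figures Dewey gives).

def explosion(capability=1.0, feedback=0.1, generations=20):
    """Each generation builds a successor whose capability is
    multiplied by (1 + feedback * its own current capability)."""
    history = [capability]
    for _ in range(generations):
        capability *= 1 + feedback * capability
        history.append(capability)
    return history

for gen, cap in enumerate(explosion()):
    print(f"generation {gen:2d}: capability {cap:.3g}")
```

Under these assumed numbers the first dozen generations barely move, and the last few leap by many orders of magnitude, which is the qualitative shape the term "explosion" is meant to capture.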
It is plausible that such a process could occur very rapidly, and could continue until a machine much more intelligent than any human is created. If an intelligence explosion is possible, many interesting problems gain practical importance:
Powerful computers will be able to perform accurate inferential and decision-theoretic calculations, and so will be able to choose effective courses of action toward whatever ends they are designed to pursue. Yet most ends that are easy to specify are not compatible, in their fullest realizations, with valuable futures. Are there ways, Dewey asks, to manage such a potentially harmful capability?
The concept of the intelligence explosion originated with I.J. Good in his 1965 paper "Speculations Concerning the First Ultraintelligent Machine".
In his TEDx talk (video above), Dewey points out that an intelligence explosion may not involve any noticeable physical change to the computer systems at first.
"What that means is that our outside observer might not see physical changes during an intelligence explosion; they might just see a series of programs writing more capable programs," he says. "Now, we don't yet have a good enough theory to know exactly how quickly such programs could progress, but this does mean that an intelligence explosion could happen at software speed, and in a self-contained way, and without needing new hardware."
Looking back at the history of algorithmic improvement, just as much improvement tends to come from new software as from new hardware. This is true in many areas, including physics simulation, game playing, image recognition, and many problems in machine learning.
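A small illustration of the software side of that observation (my example, not one Dewey gives): on the same machine, switching from a quadratic-time algorithm to a linear-time one for a simple membership task yields a speedup that no single hardware generation would match.

```python
# Illustration (not from the talk): the same task, the same machine,
# two algorithms. The speedup here comes purely from software.
import time

def common_items_naive(a, b):
    # O(len(a) * len(b)): scans list b for every element of a
    return [x for x in a if x in b]

def common_items_fast(a, b):
    # O(len(a) + len(b)): one pass to build a hash set, one to query it
    b_set = set(b)
    return [x for x in a if x in b_set]

a = list(range(10_000))
b = list(range(5_000, 15_000))

start = time.perf_counter()
common_items_naive(a, b)
naive_time = time.perf_counter() - start

start = time.perf_counter()
common_items_fast(a, b)
fast_time = time.perf_counter() - start

print(f"naive: {naive_time:.3f}s  fast: {fast_time:.5f}s  "
      f"speedup: {naive_time / fast_time:.0f}x")
```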
As a computer self-improves, it may make mistakes; even if the first computer is programmed to pursue valuable ends, later ones may not be. Designing a stable self-improvement process involves some open problems in logic and decision theory.
Intelligence of the kind needed for an explosion seems to lie along most developmental paths that we could pursue. As Dewey points out, it would require significant coordination to avoid intelligence explosions.
As Dewey shows, little is yet known about the possibility of an intelligence explosion, and about the rest of the risk and strategic landscapes of the long-term future of artificial intelligence. This is an area that is very much in need of foundational research, and can benefit strongly from determined researchers and visionary funders.
"Whether it happens soon or in the long-term future, I believe that understanding and managing the phenomenon of intelligence explosion will be a critical task— for theories of intelligence, for safe use of AI, and possibly for humanity, as a whole," concludes Dewey.
Dewey is a research fellow in the Oxford Martin Programme on the Impacts of Future Technology at the Future of Humanity Institute, University of Oxford. His research includes paths and timelines to machine superintelligence, the possibility of intelligence explosion, and the strategic and technical challenges arising from these possibilities. Previously, Dewey worked as a software engineer at Google, did research at Intel Research Pittsburgh, and studied computer science and philosophy at Carnegie Mellon University. He is also a research associate at the Machine Intelligence Research Institute (MIRI).
SOURCE TEDx Talks
"What that means is that our outside observer might not see physical changes during an intelligence explosion; they might just see a series of programs writing more capable programs," he says. "Now, we don't yet have a good enough theory to know exactly how quickly such programs could progress, but this does mean that an intelligence explosion could happen at software speed, and in a self-contained way, and without needing new hardware."
As a computer self-improves, it may make mistakes; even if the first computer is programmed to pursue valuable ends, later ones may not be. Designing a stable self-improvement process involves some open problems in logic and decision theory.
As Dewey shows, little is yet known about the possibility of an intelligence explosion, and about the rest of the risk and strategic landscapes of the long-term future of artificial intelligence. This is an area that is very much in need of foundational research, and can benefit strongly from determined researchers and visionary funders.
"Whether it happens soon or in the long-term future, I believe that understanding and managing the phenomenon of intelligence explosion will be a critical task— for theories of intelligence, for safe use of AI, and possibly for humanity, as a whole," concludes Dewey.
Dewey is a research fellow in the Oxford Martin Programme on the Impacts of Future Technology at the Future of Humanity Institute, University of Oxford. His research includes paths and timelines to machine superintelligence, the possibility of intelligence explosion, and the strategic and technical challenges arising from these possibilities. Previously, Dewey worked as a software engineer at Google, did research at Intel Research Pittsburgh, and studied computer science and philosophy at Carnegie Mellon University. He is also a research associate at the Machine Intelligence Research Institute (MIRI).
SOURCE TEDx Talks