Nick Bostrom Looks At The Intelligence Explosion Hypothesis

Wednesday, November 21, 2012

Nick Bostrom on the intelligence explosion
In thinking and writing about the future, Nick Bostrom has established himself as a prominent figure in Singularity thought, though he prefers the term "intelligence explosion" to "Singularity." In a recent talk (embedded below) at the Emerce EDay, he explained why.
Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk and the anthropic principle, covered in books such as Global Catastrophic Risks, Anthropic Bias and Human Enhancement. He holds a PhD from the London School of Economics. He is currently the director of both the Future of Humanity Institute and the Programme on the Impacts of Future Technology, part of the Oxford Martin School at Oxford University.

One of the earliest incarnations of the contemporary Singularity concept was I.J. Good's notion of the "intelligence explosion," articulated in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Good’s original idea had more to do with the explosion’s beginning than its end, or its extent, or the speed of its middle or later phases. His point was that in a short space of time a human-level artificial general intelligence (AGI) would probably explode into a significantly transhuman AGI, but he did not try to argue that subsequent improvements would continue without limit. Bostrom, like Good, is mainly interested in the explosion from human-level AGI to an AGI with, very loosely speaking, a level of general intelligence 2-3 orders of magnitude greater than the human level.

According to Bostrom, there is one absolute prerequisite for an intelligence explosion to occur: an artificial general intelligence must become smart enough to understand its own design. That understanding would allow the agent to reverse-engineer itself and apply recursive self-improvement.

The implications of an intelligence explosion are worth considering deeply, but, as Bostrom suggests, the first step is to understand clearly that an intelligence explosion is very probably coming, just as I.J. Good foresaw. Moreover, once human-level artificial intelligence is achieved, exponential growth curves suggest that such an AI would very rapidly overtake humans.
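To make the shape of that claim concrete, here is a minimal toy sketch of recursive self-improvement. It is purely illustrative and not a model Bostrom or Good proposed: it simply assumes each design cycle improves the agent's intelligence in proportion to its current level, and the constants, names, and growth rule are all invented for illustration.

```python
# Toy model of recursive self-improvement. Purely illustrative: the growth
# rule and the constants below are assumptions, not claims about real systems.

HUMAN_LEVEL = 1.0        # intelligence normalized so that human level = 1.0
TARGET_LEVEL = 100.0     # roughly two orders of magnitude above human level
GAIN_PER_CYCLE = 0.2     # assumed fractional improvement per design cycle

def recursive_self_improvement(level=HUMAN_LEVEL,
                               gain=GAIN_PER_CYCLE,
                               target=TARGET_LEVEL):
    """Repeat 'design a slightly better successor' until the target is reached."""
    cycles = 0
    while level < target:
        level += gain * level   # a smarter designer makes a bigger improvement
        cycles += 1
    return cycles, level

if __name__ == "__main__":
    cycles, level = recursive_self_improvement()
    print(f"{level:.1f}x human level reached after {cycles} design cycles")
```

With these invented numbers, intelligence grows roughly as 1.2 to the power of the cycle count, so the jump from human level to a hundred times human level takes only about 26 design cycles. The speed of that jump is the intuition behind Good's word "explosion."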

Bostrom has been awarded the Eugene R. Gannon Award and has been listed among Foreign Policy's Top 100 Global Thinkers. His work has been translated into more than 20 languages, and there have been some 100 translations or reprints of his works. In addition to his writing for the academic and popular press, Bostrom makes frequent media appearances in which he talks about transhumanism-related topics such as cloning, artificial intelligence, superintelligence, mind uploading, cryonics, nanotechnology, and the simulation argument.




SOURCE: Emerce

By 33rd Square

