Nick Bostrom on Superintelligence: Paths, Dangers and Strategies

Friday, September 12, 2014


How should we prepare for the time when machines surpass humans in intelligence? Professor Nick Bostrom explores the frontiers of thinking about the human condition and the future of intelligent life.




In a recent talk at the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA), Nick Bostrom examines how we should prepare for a possible coming intelligence explosion, the subject of his latest book, Superintelligence: Paths, Dangers, Strategies.

The lecture presents a journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life.

According to Bostrom, the human brain has capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position.

He points out that a number of routes to beyond-human-level intelligence are currently in development.

Bostrom thinks that whole brain emulation may be one possible route to superintelligence, but we are very far from this technologically.


Despite the likely long road to human-level intelligence through AI, and the conceptual leaps required to make whole brain emulation and other intelligence-boosting technologies work, Bostrom argues the risk is substantial enough to warrant concern.

"There is a much greater probability that if and when we do get human-level machine intelligence, we will have superintelligence shortly after," he says. "I think we should take seriously scenarios when this could happen within hours, minutes or days, as well as ones where it takes a few years. I don't see it taking decades."

How far to superintelligence
As Bostrom's graph illustrates, a fast takeoff scenario, or intelligence explosion, is highly probable once machines reach human-level intelligence.

If machine brains were to surpass humans in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. But we have one advantage: we get to make the first move.


Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School.

He is the author of some 200 publications, including 'Anthropic Bias' (Routledge, 2002), 'Global Catastrophic Risks' (ed., OUP, 2008), and 'Human Enhancement' (ed., OUP, 2009), and the forthcoming book 'Superintelligence' (OUP, 2014). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

In 2009, he was awarded the Eugene R. Gannon Award, given annually to one person worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences. He has also been named to Foreign Policy magazine's FP Top 100 Global Thinkers, its list of the world's top 100 minds.


SOURCE  The RSA

By 33rd Square
