Nick Bostrom Asks What Will Happen When Computers Are Smarter Than We Are

Monday, April 27, 2015

In this century, we may unlock the potential of artificial intelligence, triggering an intelligence explosion. Philosopher Nick Bostrom spoke about this prospect at this year's TED Conference, and about the threat such a superintelligent AI might pose to humanity.

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make."

A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?

“We should not be confident in our ability to keep a superintelligent genie locked up in its bottle,” he says. In any event, the time to think about instilling human values in artificial intelligence is now, not later. As for evolution itself? “The train doesn’t stop at Humanville Station. It’s more likely to swoosh right by.”

Image: Nick Bostrom at TED. Source: Bret Hartman/TED
Bostrom envisioned a future full of human enhancement, nanotechnology and machine intelligence long before they became mainstream concerns. From his famous simulation argument — which identified some striking implications of rejecting the Matrix-like idea that humans are living in a computer simulation — to his work on existential risk, Bostrom approaches both the inevitable and the speculative using the tools of philosophy, probability theory, and scientific analysis.

"What we do know is that the ultimate limits to information processing in machine substrate lie far outside the limits in biological tissue," states Bostrom. The limits of neural processing are no match for the speed of computers he suggests. "The potential for superintelligence lies dormant in matter."

For Bostrom, the threat posed by an intelligence explosion in this century is as great as, or greater than, the threat posed by the development of the atomic bomb in the 20th century.

"The train doesn’t stop at Humanville Station. It’s more likely to swoosh right by."

Since 2005, Bostrom has led the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists at Oxford University tasked with investigating the big picture for the human condition and its future. He has been referred to as one of the most important thinkers of our age.

His recent book Superintelligence advances the ominous idea, first articulated by the mathematician I. J. Good, that “the first ultraintelligent machine is the last invention that man need ever make.”

SOURCE: TED

By 33rd Square
