Michael Vassar on the Threat of AI

Tuesday, February 24, 2015


In a new video, futurist Michael Vassar explains why greater-than-human artificial intelligence would be the end of humanity. The only thing that could save us, he says, is if due caution were observed and a framework were put in place to prevent such a thing from happening.





In this video from Big Think, futurist Michael Vassar explains why it makes perfect sense to conclude that the creation of greater-than-human AI would doom humanity.



Yet Vassar notes that AI itself isn't the greatest risk to humanity.

"I conclude that the major global catastrophic threat to humanity is not AI but rather the absence of social, intellectual frameworks for people quickly and easily converging on analytically compelling conclusions," he says.


Greater-than-human artificial intelligence is a specific threat to humanity because of what Steve Omohundro has called basic AI drives. (For a brief description by Omohundro, see the video embedded below.)

As Vassar suggests, we should expect a superintelligent AI to reconfigure the universe in a manner that does not necessarily preserve human values. "As far as I can tell this position is analytically compelling. It's not a position that a person can intelligently, honestly, and reasonably be uncertain about," he says.

Nick Bostrom, who recently wrote the book Superintelligence, was aware of the basic concerns associated with AI risk 20 years ago, according to Vassar, and "wrote about them intelligently in a manner that ought to be sufficiently compelling to convince any thoughtful and open-minded person." Vassar laments that Bostrom had to spend a decade becoming the director of an incredibly prestigious institute and writing an incredibly rigorous, meticulous book in order to get a still tiny number of people, and still a minority of the world, to recognize the threat of AI.

Vassar is an American futurist, activist, and entrepreneur. He is the co-founder and Chief Science Officer of MetaMed Research. He was president of the Machine Intelligence Research Institute (then the Singularity Institute) until January 2012. Vassar advocates safe development of new technologies for the benefit of humankind. He has co-authored papers on the risks of advanced molecular manufacturing with Robert Freitas, and has written the special report "Corporate Cornucopia: Examining the Special Implications of Commercial MNT Development" for the Center for Responsible Nanotechnology Task Force.




SOURCE: Big Think

By 33rd Square
