Artificial Intelligence
In a new video, futurist Michael Vassar explains why greater-than-human artificial intelligence would be the end of humanity. The only thing that could save us, he says, is if due caution were observed and a framework were installed to prevent such a thing from happening.
In the video from Big Think, Vassar explains why it makes perfect sense to conclude that the creation of greater-than-human AI would doom humanity.
"I conclude that the major global catastrophic threat to humanity is not AI but rather the absence of social, intellectual frameworks for people quickly and easily converging on analytically compelling conclusions," he says.
As Vassar suggests, we should expect a superintelligent AI to reconfigure the universe in a manner that does not necessarily preserve human values. "As far as I can tell this position is analytically compelling. It's not a position that a person can intelligently, honestly, and reasonably be uncertain about," he says.
Vassar is an American futurist, activist, and entrepreneur. He is the co-founder and Chief Science Officer of MetaMed Research. He was president of the Machine Intelligence Research Institute (then the Singularity Institute) until January 2012. Vassar advocates safe development of new technologies for the benefit of humankind. He has co-authored papers on the risks of advanced molecular manufacturing with Robert Freitas, and has written the special report "Corporate Cornucopia: Examining the Special Implications of Commercial MNT Development" for the Center for Responsible Nanotechnology Task Force.
SOURCE Big Think
By 33rd Square