Will The Will of Superintelligence Overshadow Any Attempts To Create "Friendly AI"?

Monday, September 29, 2014

What will the morality of a superintelligent AI be like? Will it be friendly to humanity? By the time of the Singularity, will the need to consider humanity even be relevant?




Essential to many transhumanist thinkers is a techno-optimistic perspective. If and when a Singularity arrives, bringing ultraintelligent artificial intelligence with it, can we really feel secure that it will be benevolent, even loving, towards humankind? According to the author of The Transhumanist Wager, Zoltan Istvan:
Within a few months of the launch of artificial intelligence, expect nearly every science and technology book to be completely rewritten with new ideas--better and far more complex ideas. Expect a new era of learning and advanced life for our species. The key, of course, is not to let artificial intelligence run wild and out of sight, but to already be cyborgs and part machines ourselves, so that we can plug right into it wherever it leads. Then no matter what happens, we are along for the ride. After all, we don't want to miss the Singularity.
Surviving the Singularity, then, means that humans must become part machine. Ray Kurzweil has also presented this idea. "Computers started out as large remote machines in air-conditioned rooms tended by white-coated technicians," he writes. "Subsequently they moved onto our desks, then under our arms, and now in our pockets. Soon, we’ll routinely put them inside our bodies and brains. Ultimately we will become more nonbiological than biological."

Will the rise of superintelligence mean the human race is entirely supplanted? No one knows for sure. That many people alive today will be around to find out is both unsettling and exciting for anyone aware of where the exponential growth of information technologies, and their encroachment into nearly every other field, is heading. As Elon Musk famously tweeted recently, AI is "potentially more dangerous than nukes." He followed up the next day: "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable."


These developments will touch everything from medicine to robotics and, if nanotechnology fulfills its promise, potentially all physical matter in the universe. Controlling all of their facets will not be done by mere human intelligence, nor by an AI alone. Working together, a new species, what Ted Chu has named the CoBe (Cosmic Being), will be in charge. It, or they, will have wills of their own, probably making the programming of so-called "friendly AI," or Asimov's Three Laws, irrelevant.

MIRI's Michael Anissimov has clear ideas about programming superintelligent AIs. Image Source - Facebook
Istvan hopes that his novel will be the first one read by a superintelligence. "A sophisticated artificial intelligence will be able to upgrade its 'will.' Its plasticity will know no bounds, as our brains do. In my philosophical novel The Transhumanist Wager, I put forth the idea that all humans desire to reach a state of perfect personal power--to be omnipotent in the universe. I call this a Will to Evolution."

"Programming AI with mammalian ideas, modern-day philosophies, and the fallibilities of the human spirit is dangerous and will possibly lead to total chaos. We're just not that noble or wise, yet," Istevan writes.

He goes on to say, "I expect AI to eventually embrace my laws, and all the challenging, coldly rational ideas in TEF. Those ideas do not reflect politically correct modern-day thinking and the society our species has built." TEF refers to Istvan's philosophy of Teleological Egocentric Functionalism, whose Three Laws are:
1) A transhumanist must safeguard one's own existence above all else. 
2) A transhumanist must strive to achieve omnipotence as expediently as possible--so long as one's actions do not conflict with the First Law. 
3) A transhumanist must safeguard value in the universe--so long as one's actions do not conflict with the First and Second Laws.
Joshua Fox and Carl Shulman wrote a paper on the morality of superintelligence for MIRI (the Machine Intelligence Research Institute), in which they argue that the notion, drawn from Earth's history, that increasing intelligence, knowledge, and rationality results in more cooperative and benevolent behavior misses how essentially different superintelligent AI will be. They conclude, "we have reason for pessimism regarding the values of intelligent machines not carefully engineered to be altruistic."

While engineering in altruism and friendliness, imprisoning the AI in a box, or installing fail-safe off switches may seem like good ideas for safeguarding humanity, the transhuman perspective may, in fact, render them moot. What if superintelligence arises not from artificial intelligence software, but through genetic manipulation, or via smart drugs that actually work? What is the off switch for a superintelligent person? In Singularity Rising, James D. Miller makes substantive arguments that the Singularity might be biological before it is algorithmic.

With continued accelerating development across genetics, nanotechnology, AI/robotics, and medical life extension, the coming decades will look increasingly different, and so will we.

By 2045, AI may be our Final Invention.  Our transition to a new level of evolution through transhumanism will leave behind the human being so that we may race with the machines, to the stars and beyond.


By 33rd Square
