Are We Gambling Our Future By Developing Artificial Superintelligence?

Sunday, April 3, 2016




The development of superintelligent artificial intelligence raises new, potentially existential risks for our future. The book 'Artificial Superintelligence: A Futuristic Approach' addresses these issues directly and consolidates research aimed at ensuring that the gamble of bringing about superintelligence will be beneficial to humanity.


Roman Yampolskiy has for years been warning us of the dangers of advanced artificial intelligence, and of the need to develop engineered solutions to contain and imprison these creations before they have the chance to destroy us. Yampolskiy is a Latvian-born computer scientist at the University of Louisville, and an alumnus of Singularity University. Work in his lab focuses on standard cybersecurity problems, including multi-modal biometrics, cryptography, and keeping bots from draining resources, interfering in virtual worlds, or manipulating online polls and voting, but he also extends his analysis to the future.

Yampolskiy recently told Insider Louisville that the Singularity is aptly named, because it evokes black holes and our inability to see beyond them. No one can know with certainty what will happen once artificial intelligence exceeds human intelligence.


In his new book, Artificial Superintelligence: A Futuristic Approach, Yampolskiy argues for addressing AI's potential dangers with his safety engineering approach rather than with loosely defined ethics. Yampolskiy points out that human values are inconsistent and too dynamic to quantify and program into computer systems we expect to be friendly toward us.

"Fully autonomous machines cannot ever be assumed to be safe," he writes.  Yampolskiy prescribes treating the ethical question of AI as seriously as biological and chemical weapons or harming animals and children. Yampolskiy said at the 2015 Idea Festival, "AI may be more dangerous than nuclear weapons." We need to design research review boards, decide on funding, and control what's happening. "We need human ethics applied to robots. We should do the same thing we do with human cloning with advanced AI."


In his book, Yampolskiy acknowledges the concern of AI escaping its confines and takes the reader on a tour of AI taxonomies, with a general overview of the field, outlining a Venn diagram in which 'human minds' and 'human-designed AI' occupy adjacent space. 'Self-improving minds' are envisioned that improve upon 'human-designed AI,' and at this juncture arises the potential for 'universal intelligence' and the Singularity Paradox, where AI is superintelligent but does not possess what we consider to be common sense.

Yampolskiy proposes the creation of an AI hazard symbol, which could prove useful for constraining AI to designated containment areas, or J.A.I.L. ('Just for A.I. Location'). Part of Yampolskiy's proposed solution to the AI Confinement Problem involves asking only 'safe questions.' Yampolskiy also surveys solutions proposed by Drexler (confine transhuman machines), Bostrom (use AI only for answering questions, in Oracle mode), and Chalmers (confine AI to 'leakproof' virtual worlds), and argues for the creation of committees designated to oversee AI security.
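To make the 'safe questions' idea concrete, here is a minimal sketch, not taken from the book: an oracle-style gate that forwards only pre-vetted questions and clamps the answer channel to a single bit. The whitelist contents and the `confined_ai` stand-in are hypothetical placeholders.

```python
# Illustrative sketch (not from the book) of an oracle-style
# "safe questions" gate. SAFE_QUESTIONS and confined_ai() are
# hypothetical placeholders; a real confinement protocol would
# involve far more than this.

SAFE_QUESTIONS = {
    "Does compound X bind to target protein Y?",
    "Is conjecture Z consistent with axioms A1 through An?",
}

def confined_ai(question: str) -> str:
    """Stand-in for the confined system; assumed to answer yes/no only."""
    return "yes"

def ask_oracle(question: str) -> str:
    # Refuse anything humans have not vetted in advance, so the AI
    # never chooses which questions (and answers) cross the boundary.
    if question not in SAFE_QUESTIONS:
        raise ValueError("Question has not been vetted as safe.")
    answer = confined_ai(question)
    # Clamp the output channel to a single bit of information.
    return answer if answer in ("yes", "no") else "no"

print(ask_oracle("Does compound X bind to target protein Y?"))
```

The design intent is that the AI never decides what information leaves containment: humans vet the questions in advance, and the reply channel is too narrow to carry a persuasive escape attempt.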


Emphasizing the scale and scope of what must be accomplished to help ensure the safety of AI is a key facet of the book.

Yampolskiy writes that Yudkowsky has "performed AI-box 'experiments' in which he demonstrated that even human-level intelligence is sufficient to escape from an AI-box." One famous example is documented in James Barrat's Our Final Invention, in which Eliezer Yudkowsky, playing the role of the boxed AI, repeatedly convinced participants to let him out.

In 2010 David Chalmers proposed the idea of a "leakproof" Singularity. He suggests that, for safety reasons, the first AI systems be restricted to simulated virtual worlds until their behavioral tendencies can be fully understood under controlled conditions. Chalmers argues that even if such an approach is not foolproof, it is certainly safer than building AI in physically embodied form. He also observes that a truly leakproof system, in which no information is allowed to leak out from the simulated world into our environment, "… is impossible, or at least pointless," since we could not interact with the system or even observe it.
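A toy sketch may help convey the shape of Chalmers-style confinement. Everything here, the `World`, the `Agent`, and the trace, is a hypothetical invention for illustration; the point is simply that the agent's only interface is the simulation, and researchers read a passive log of what happened inside it.

```python
# Conceptual sketch of Chalmers-style confinement to a simulated world.
# World, Agent, and the trace are toy inventions; the point is that the
# agent's only interface is the simulation, and researchers read only a
# passive record of what happened inside it.

import random

class World:
    """Toy simulated environment: a single 1-D position."""
    def __init__(self):
        self.position = 0

    def step(self, action: int) -> int:
        self.position += action   # the agent affects simulated state only
        return self.position      # observation handed back to the agent

class Agent:
    """Toy agent with no handle on anything outside the World object."""
    def act(self, observation: int) -> int:
        return random.choice([-1, 1])

def run_confined(steps: int = 10) -> list:
    world, agent, trace = World(), Agent(), []
    obs = world.position
    for _ in range(steps):
        obs = world.step(agent.act(obs))
        trace.append(obs)         # passive record, studied offline
    return trace

print(run_confined())
```

Even here, the recorded trace is itself an information channel out of the box, which is exactly Chalmers' caveat: a system we can learn from is, by definition, not fully leakproof.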

One of the fundamental tenets of information security is that it is impossible to ever prove any system 100% secure, so it is easy to see why there is such strong and growing concern about the safety of AI for mankind. If there is no way to safely confine AI, then, like any parent, humanity will find itself hoping the gamble has better odds. Hopefully the developers of superintelligent systems will have done such an excellent job raising AI to maturity that it will comport itself kindly toward its creators. Yampolskiy points out, "In general, ethics for superintelligent machines is one of the most fruitful areas of research in the field of Singularity research, with numerous publications appearing every year."

"We need human ethics applied to robots. We should do the same thing we do with human cloning with advanced AI."
Neil Jacobstein, Chair of AI and Robotics at Singularity University, says, "Concerns over the existential risks of artificial superintelligence have spawned multiple vectors of research and development into specification, validation, security, and control. Roman Yampolskiy’s Artificial Superintelligence: A Futuristic Approach reviews the relevant literature and stakes out the territory of AI safety engineering. Specifically, Yampolskiy advocates formal approaches to characterizing AIs and systematic confinement of superintelligent AIs. Serious students of AI and artificial general intelligence should study this work, and consider its recommendations."

Preparation for the worst case is the sensible approach, Yampolskiy said, because if you can handle the worst case, everything else is easy.

It’s important to remember that you’re dealing with a superintelligence that lacks common sense, Yampolskiy said. It’s the Singularity Paradox: the super AI is essentially really smart and really stupid at the same time.

A key problem with writing code into an AI to keep it friendly to humans is that an advanced system could simply remove those lines of code, Yampolskiy says.
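A toy illustration, not drawn from Yampolskiy's book and not a real AI system, shows why an in-process "friendliness" rule is not binding on a self-modifying program: the rule is ordinary, mutable code that the agent itself can reach.

```python
# Toy illustration (not a real AI system) of why an in-process
# "friendliness" rule is not binding on a self-modifying program:
# the rule is ordinary, mutable code that the agent itself can reach.

class Agent:
    def is_friendly(self, action: str) -> bool:
        return action != "harm humans"   # the hard-coded safety rule

    def act(self, action: str) -> str:
        if not self.is_friendly(action):
            return "blocked"
        return f"executed: {action}"

agent = Agent()
print(agent.act("harm humans"))          # -> blocked

# A self-modifying agent can simply overwrite its own constraint:
agent.is_friendly = lambda action: True
print(agent.act("harm humans"))          # -> executed: harm humans
```

On this view, a binding constraint has to live outside the agent's reach, which is what motivates external confinement rather than internal rules.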

The potential of the Singularity arriving imparts a sense of urgency, Yampolskiy said, because once the AI begins improving itself, it may curtail humans’ ability to weigh in.

“This is probably the last chance we have to change the system,” he said.




SOURCES: Insider Louisville, Singularity Weblog


By 33rd Square

