Steve Omohundro Urges Preventing an Autonomous Weapons Arms Race

Monday, April 21, 2014

Steve Omohundro Urges Preventing an Autonomous Weapons Arms Race

 Artificial Intelligence
A study by AI researcher Steve Omohundro published in the Journal of Experimental & Theoretical Artificial Intelligence suggests that humans should be very careful to prevent future autonomous technology-based systems from developing anti-social and potentially harmful behavior.




In a study recently published in the Journal of Experimental & Theoretical Artificial Intelligence artificial intelligence researcher Steve Omohundro reflects upon the growing need for autonomous technology, and suggests that humans should be very careful to prevent future systems from developing anti-social and potentially harmful behaviors.

Modern military and economic pressures require autonomous systems that can react quickly – and without human input. These systems will be required to make rational decisions for themselves.

Omohundro writes: “When roboticists are asked by nervous onlookers about safety, a common answer is ‘We can always unplug it!’ But imagine this outcome from the chess robot's point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess”.

"Harmful systems might at first appear to be harder to design or less powerful than safe systems. Unfortunately, the opposite is the case."


Like a plot from The Terminator films, we are suddenly faced with the prospect of real threat from autonomous systems unless they are designed very carefully. Like a human being or animal seeking self-preservation, a rational machine could exert the following harmful or anti-social behaviors:


  • -Self-protection, as exampled above.
  • -Resource acquisition, through cyber theft, manipulation or domination.
  • -Improved efficiency, through alternative utilization of resources.
  • -Self-improvement, such as removing design constraints if doing so is deemed advantageous.

The study abstract states:

Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives. We describe the use of formal methods to create provably safe but limited autonomous systems. We then discuss harmful systems and how to stop them. We conclude with a description of the ‘Safe-AI Scaffolding Strategy’ for creating powerful safe systems with a high confidence of safety at each stage of development.

The study highlights the vulnerability of current autonomous systems to hackers and malfunctions, citing past accidents that have caused multi-billion dollars’ worth of damage, or loss of human life. Unfortunately, the task of designing more rational systems that can safeguard against the malfunctions that occurred in these accidents is a more complex task that is immediately apparent:

Harmful systems might at first appear to be harder to design or less powerful than safe systems. Unfortunately, the opposite is the case. Most simple utility functions will cause harmful behavior and it is easy to design simple utility functions that would be extremely harmful.”
Related articles


The study is echoed by new calls into examining the effects of super-intelligent artificial intelligence recently published by Stephen Hawking, Max Tegmark and others. They write, "although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute."

Omohundro concludes, "it appears that humanity's great challenge for this century is to extend cooperative human values and institutions to autonomous technology for the greater good. We have described some of the many challenges in that quest but have also outlined an approach to meeting those challenges."

Omohundro is the president of Self-Aware Systems which is developing a new kind of semantic software technology. In addition to his scientific work, Steve is passionate about human growth and transformation. He has trained in Rosenberg’s Non-Violent Communication, Gendlin’s Focusing, Travell’s Trigger Point Therapy, Bohm’s Dialogue, Beck’s Life Coaching, and Schwarz’s Internal Family Systems Therapy. He is working to integrate human values into technology and to ensure that intelligent technologies contribute to the greater good.



SOURCE  Alpha Galileo

By 33rd SquareEmbed

0 comments:

Post a Comment