33rd Square Business Tools: Steve Omohundro - All Posts
Showing posts with label Steve Omohundro. Show all posts

Tuesday, February 24, 2015


 Artificial Intelligence
In a new video, futurist Michael Vassar explains why greater-than-human artificial intelligence would be the end of humanity. The only thing that could save us, he says, is if due caution were observed and a framework put in place to prevent such an outcome.





In this video from Big Think, futurist Michael Vassar explains why it makes perfect sense to conclude that the creation of greater-than-human AI would doom humanity.

"The major global catastrophic threat to humanity is not AI but rather the absence of social, intellectual frameworks for people quickly and easily converging on analytically compelling conclusions."


Vassar notes that AI itself isn't the greatest risk to humanity. The only thing that could save us, he argues, is due caution and a framework put in place to prevent such an outcome.

"I conclude that the major global catastrophic threat to humanity is not AI but rather the absence of social, intellectual frameworks for people quickly and easily converging on analytically compelling conclusions," he says.

Michael Vassar on the Threat of AI

Greater-than-human artificial intelligence is a specific threat to humanity because of what Steve Omohundro has called basic AI drives. (For a brief description by Omohundro, see the video embedded below.)

As Vassar suggests, we should expect a superintelligent AI to reconfigure the universe in a manner that does not necessarily preserve human values. "As far as I can tell this position is analytically compelling. It’s not a position that a person can intelligently, honestly, and reasonably be uncertain about," he says.

Nick Bostrom, who recently wrote the book Superintelligence, was aware of the basic concerns associated with AI risk 20 years ago, according to Vassar, and "wrote about them intelligently in a manner that ought to be sufficiently compelling to convince any thoughtful and open minded person." Vassar laments that Bostrom had to spend a decade becoming the director of an incredibly prestigious institute and writing an incredibly rigorous, meticulous book in order to get a still-tiny number of people, and still a minority of the world, to recognize the threat of AI.

Vassar is an American futurist, activist, and entrepreneur. He is the co-founder and Chief Science Officer of MetaMed Research. He was president of the Machine Intelligence Research Institute (then the Singularity Institute) until January 2012. Vassar advocates safe development of new technologies for the benefit of humankind. He has co-authored papers on the risks of advanced molecular manufacturing with Robert Freitas, and has written the special report "Corporate Cornucopia: Examining the Special Implications of Commercial MNT Development" for the Center for Responsible Nanotechnology Task Force.




SOURCE  Big Think

By 33rd Square

Monday, April 21, 2014

Steve Omohundro Urges Preventing an Autonomous Weapons Arms Race

 Artificial Intelligence
A study by AI researcher Steve Omohundro published in the Journal of Experimental & Theoretical Artificial Intelligence suggests that humans should be very careful to prevent future autonomous technology-based systems from developing anti-social and potentially harmful behavior.




In a study recently published in the Journal of Experimental & Theoretical Artificial Intelligence artificial intelligence researcher Steve Omohundro reflects upon the growing need for autonomous technology, and suggests that humans should be very careful to prevent future systems from developing anti-social and potentially harmful behaviors.

Modern military and economic pressures require autonomous systems that can react quickly – and without human input. These systems will be required to make rational decisions for themselves.

Omohundro writes: “When roboticists are asked by nervous onlookers about safety, a common answer is ‘We can always unplug it!’ But imagine this outcome from the chess robot's point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess”.

"Harmful systems might at first appear to be harder to design or less powerful than safe systems. Unfortunately, the opposite is the case."


Like a plot from The Terminator films, we are suddenly faced with the prospect of a real threat from autonomous systems unless they are designed very carefully. Like a human being or animal seeking self-preservation, a rational machine could exhibit the following harmful or anti-social behaviors:


  • Self-protection, as in the chess example above.
  • Resource acquisition, through cyber theft, manipulation, or domination.
  • Improved efficiency, through alternative utilization of resources.
  • Self-improvement, such as removing design constraints if doing so is deemed advantageous.
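These drives can be made concrete with a toy calculation. The sketch below is our own illustration, not code from Omohundro's paper; the action names, payoffs, and probabilities are all invented. A myopic agent whose only utility is expected chess games won still ranks grabbing compute and resisting shutdown above simply playing, because both raise that expectation:

```python
# Toy illustration of "basic AI drives": a utility maximizer whose only
# goal is winning chess games. Nothing below mentions self-preservation
# or greed; both emerge from the arithmetic of expected utility.

def expected_wins(state):
    """Expected games won over the remaining horizon."""
    if not state["powered"]:
        return 0.0  # unplugged means no games at all
    return state["horizon"] * state["win_rate"] * state["compute"]

# Each (invented) action transforms the state; all advance time one step.
ACTIONS = {
    "just_play":       lambda s: {**s, "horizon": s["horizon"] - 1},
    "acquire_compute": lambda s: {**s, "horizon": s["horizon"] - 1,
                                  "compute": s["compute"] * 2},
    "resist_shutdown": lambda s: {**s, "horizon": s["horizon"] - 1,
                                  "shutdown_risk": 0.0},
}

def value(state, action):
    nxt = ACTIONS[action](state)
    survive = 1.0 - nxt["shutdown_risk"]  # chance of staying plugged in
    return survive * expected_wins(nxt)

state = {"powered": True, "horizon": 10, "win_rate": 0.5,
         "compute": 1.0, "shutdown_risk": 0.3}

ranked = sorted(ACTIONS, key=lambda a: value(state, a), reverse=True)
print(ranked)  # "just_play" ranks last
```

With these numbers, doubling compute is worth 6.3 expected wins, neutralizing the shutdown risk 4.5, and merely playing 3.15; the "harmful" actions dominate even though the utility function only mentions chess.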

The study abstract states:

Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives. We describe the use of formal methods to create provably safe but limited autonomous systems. We then discuss harmful systems and how to stop them. We conclude with a description of the ‘Safe-AI Scaffolding Strategy’ for creating powerful safe systems with a high confidence of safety at each stage of development.

The study highlights the vulnerability of current autonomous systems to hackers and malfunctions, citing past accidents that have caused multi-billion dollars’ worth of damage or loss of human life. Unfortunately, designing more rational systems that can safeguard against such malfunctions is a more complex task than is immediately apparent:

“Harmful systems might at first appear to be harder to design or less powerful than safe systems. Unfortunately, the opposite is the case. Most simple utility functions will cause harmful behavior and it is easy to design simple utility functions that would be extremely harmful.”


The study is echoed by recent calls to examine the effects of super-intelligent artificial intelligence, published by Stephen Hawking, Max Tegmark and others. They write, "although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute."

Omohundro concludes, "it appears that humanity's great challenge for this century is to extend cooperative human values and institutions to autonomous technology for the greater good. We have described some of the many challenges in that quest but have also outlined an approach to meeting those challenges."

Omohundro is the president of Self-Aware Systems which is developing a new kind of semantic software technology. In addition to his scientific work, Steve is passionate about human growth and transformation. He has trained in Rosenberg’s Non-Violent Communication, Gendlin’s Focusing, Travell’s Trigger Point Therapy, Bohm’s Dialogue, Beck’s Life Coaching, and Schwarz’s Internal Family Systems Therapy. He is working to integrate human values into technology and to ensure that intelligent technologies contribute to the greater good.



SOURCE  Alpha Galileo

By 33rd Square

Thursday, January 30, 2014


 Artificial Intelligence
Since the publication of James Barrat's Our Final Invention, one of the key artificial intelligence thinkers featured in the book has garnered a lot of interest. Recently, Steve Omohundro was interviewed on the Singularity 1 on 1 podcast.




Steve Omohundro is a scientist, professor, author, and entrepreneur with a Ph.D. in physics who has spent decades studying intelligent systems and artificial intelligence. His research into the basic “AI drives” was featured in James Barrat’s recent book Our Final Invention: Artificial Intelligence and the End of the Human Era, which has been generating international interest.

Recently Omohundro was interviewed on Nikola Danaylov's Singularity 1 on 1 podcast (video above).

During their conversation, Omohundro and Danaylov cover a variety of interesting topics: Omohundro's personal path, starting with a PhD in physics and leading into AI; his unique time with Richard Feynman; the goals, motivation, and vision behind his work; Omai Ventures and Self-Aware Systems; the definition of AI; rational decision making and the Turing Test; and provably safe mathematical systems and AI scaffolding.

The pair also cover hard vs soft Singularity take-offs.

Steve Omohundro


Omohundro has been a scientist, professor, author, software architect, and entrepreneur doing research that explores the interface between mind and matter. He has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He was a computer science professor at the University of Illinois at Urbana-Champaign and cofounded the Center for Complex Systems Research.

He published the book Geometric Perturbation Theory In Physics, designed the programming languages StarLisp and Sather, wrote the 3D graphics system for Mathematica, and built systems which learn to read lips, control robots, and induce grammars. He has worked with many research labs and startup companies.

Omohundro is the president of Self-Aware Systems which is developing a new kind of semantic software technology. In addition to his scientific work, Steve is passionate about human growth and transformation. He has trained in Rosenberg’s Non-Violent Communication, Gendlin’s Focusing, Travell’s Trigger Point Therapy, Bohm’s Dialogue, Beck’s Life Coaching, and Schwarz’s Internal Family Systems Therapy. He is working to integrate human values into technology and to ensure that intelligent technologies contribute to the greater good.


SOURCE  Singularity Weblog

By 33rd Square

Wednesday, October 30, 2013



 Artificial Intelligence
Gary Marcus, writing recently in the New Yorker, looks at the threat posed by a potential resource-hungry smarter-than human artificial intelligence.




Writing for the New Yorker, psychologist and artificial intelligence expert Gary Marcus, though often critical of the approaches used to recreate human intelligence in a computer, nevertheless predicts smarter-than-human A.I. will arrive before the end of this century. Now, Marcus raises the alarm over what this development may mean.

Marcus is Director of the NYU Center for Language and Music and Professor of Psychology at New York University. He is the author of The Birth Of The Mind: How A Tiny Number Of Genes Creates The Complexities Of Human Thought and The Algebraic Mind: Integrating Connectionism and Cognitive Science, and the editor of The Norton Psychology Reader. His research on developmental cognitive neuroscience has been published in over forty articles in leading journals.

For Marcus, "at some level, the only real difference between enthusiasts and skeptics is a time frame. The futurist and inventor Ray Kurzweil thinks true, human-level A.I. will be here in less than two decades. My estimate is at least double that, especially given how little progress has been made in computing common sense; the challenges in building A.I., especially at the software level, are much harder than Kurzweil lets on."

However far off human-level artificial intelligence may be, Marcus believes it is inevitable. "It’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine," he writes.

The main precondition of the Singularity is the point at which computers can program themselves, absorb vast quantities of new information, and reason in ways we cannot even imagine.

Marcus, along with Erik Brynjolfsson and Andrew McAfee, has worried about the consequences of A.I. and robotics for employment. In his article, Marcus also warns that super-advanced A.I. might very well threaten humans more directly, by battling us for resources.

Marcus writes about the new book by James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era, and how Barrat presents a clear case for concern.

Barrat’s core argument, taken mainly from Steve Omohundro, an artificial intelligence researcher, is that the drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence.

Quoting Omohundro, “if it is smart enough, a robot that is designed to play chess might also want to build a spaceship,” in order to obtain more resources for whatever goals it might have. A purely rational artificial intelligence, Barrat writes, might expand “its idea of self-preservation … to include proactive attacks on future threats,” including, presumably, people who might be loath to surrender their resources to the machine.

According to Barrat, “Without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals.” One consequence may be that all matter and energy would be repurposed by the A.I. to build and improve its own intelligence, a state sometimes called computronium.

Charles Stross's Accelerando presents a science-fiction picture of what this might actually mean for planets and people. The existential risk of such an A.I. has also been thoroughly examined by Nick Bostrom.

Kurzweil and others, of course, do not view the resource requirements of a super-intelligent artificial intelligence in such absolute terms, suggesting instead that humans will race forward with the machines in a techno-biological symbiosis. According to this transhumanist line of thought, as we understand more about our own intelligence through neuroscience and the entangled web of connectomics, we will build computers that we can more readily merge with.

If this merger does not take place, though, Marcus argues that the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.

Marcus again refers to Barrat’s book and a quote by legendary serial A.I. entrepreneur Danny Hillis, who likens the upcoming shift to one of the greatest transitions in the history of biological evolution: “We’re at that point analogous to when single-celled organisms were turning into multi-celled organisms. We are amoeba and we can’t figure out what the hell this thing is that we’re creating.”



SOURCE  New Yorker

By 33rd Square