Artificial Intelligence
Gary Marcus, writing recently in the New Yorker, looks at the threat posed by a potential resource-hungry, smarter-than-human artificial intelligence.
Writing for the New Yorker, psychologist and artificial intelligence expert Gary Marcus is often critical of the approaches used to recreate human intelligence in a computer; nevertheless, he predicts that smarter-than-human A.I. will arrive before the end of this century. Now, Marcus is raising the alarm over what this development may mean.
Marcus is Director of the NYU Center for Language and Music and Professor of Psychology at New York University. He is the author of The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought and The Algebraic Mind: Integrating Connectionism and Cognitive Science, and the editor of The Norton Psychology Reader. His research on developmental cognitive neuroscience has been published in over forty articles in leading journals.
For Marcus, "at some level, the only real difference between enthusiasts and skeptics is a time frame. The futurist and inventor Ray Kurzweil thinks true, human-level A.I. will be here in less than two decades. My estimate is at least double that, especially given how little progress has been made in computing common sense; the challenges in building A.I., especially at the software level, are much harder than Kurzweil lets on."
However much the time-frame for human-level artificial intelligence is off, Marcus believes it is inevitable. "It’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine," he writes.
The main precondition of the Singularity is the moment when computers can program themselves, absorb vast quantities of new information, and reason in ways we cannot even imagine.
Marcus, along with Erik Brynjolfsson and Andrew McAfee, has worried about the consequences of A.I. and robotics for employment. In his article, Marcus also warns that super-advanced A.I. might threaten humans more directly, by battling us for resources.
Marcus writes about the new book by James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era, and how Barrat presents a clear case for concern.
Barrat’s core argument, taken mainly from Steve Omohundro, an artificial intelligence researcher, is that the drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence.
Quoting Omohundro, “if it is smart enough, a robot that is designed to play chess might also want to build a spaceship,” in order to obtain more resources for whatever goals it might have. A purely rational artificial intelligence, Barrat writes, might expand “its idea of self-preservation … to include proactive attacks on future threats,” including, presumably, people who might be loath to surrender their resources to the machine.
According to Barrat, “Without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals.” One consequence may be that all matter and energy would be repurposed by the A.I. to build and improve its own intelligence - computronium.
Charles Stross's Accelerando presents a science fiction picture of what this might actually mean for planets and people. The existential risk of such an A.I. has also been thoroughly examined by Nick Bostrom.
Kurzweil and others, of course, do not view the resource requirements of a super-intelligent artificial intelligence in such absolute terms, suggesting instead that humans will race forward with the machines in a symbiotic, technological-biological arrangement. According to this transhumanist line of thought, as we understand more about our own intelligence through our exploration of neuroscience and the entangled web of connectomics, we will build computers that we can more readily merge with.
If this merger does not take place, though, Marcus argues that the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.
Marcus again refers to Barrat’s book, and a quote by legendary serial A.I. entrepreneur Danny Hillis, who likens the upcoming shift to one of the greatest transitions in the history of biological evolution: “We’re at that point analogous to when single-celled organisms were turning into multi-celled organisms. We are amoeba and we can’t figure out what the hell this thing is that we’re creating.”
SOURCE New Yorker
By 33rd Square