How Dangerous is Artificial Intelligence?

Monday, April 27, 2015


In Avengers: Age of Ultron, the villain starts out as an artificial intelligence experiment gone wrong. So is AI really the biggest threat to humanity?
Artificial intelligence development is proceeding at an accelerating rate, but will it reach human-level intelligence (strong AI), and does that pose a threat to humanity itself? A growing number of researchers, scientists and concerned observers are raising the alarm about the existential risk of AI.

Much of the recent dialogue about the threat of artificial intelligence follows the release of Nick Bostrom's book, Superintelligence. The book explores intellectual models of the sometimes dire possibilities that arise when machines exceed human intelligence. For Bostrom, superintelligence is not the point at which AI passes the Turing Test; it is what comes after that. Once we build a system as smart as a human, that machine will try to improve itself, and each improvement makes it better at improving itself, recursively and exponentially.


Speaking at the MIT Aeronautics and Astronautics department’s Centennial Symposium in October, SpaceX and Tesla head Elon Musk referred to artificial intelligence as "summoning the demon."
I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn't work out.
Physicist Stephen Hawking told the BBC in December he believes future developments in artificial intelligence have the potential to eradicate mankind. The Cambridge professor, who relies on a form of artificial intelligence to communicate, said if technology could match human capabilities “it would take off on its own, and re-design itself at an ever increasing rate.”



He also said that, due to biological limitations, humans would have no way to match the pace of technological development.

"Humans, who are limited by slow biological evolution, couldn't compete and would be superseded. The development of full artificial intelligence could spell the end of the human race."


“Humans, who are limited by slow biological evolution, couldn't compete and would be superseded,” he said. “The development of full artificial intelligence could spell the end of the human race.”

An open letter, drafted by the Future of Life Institute and signed by hundreds of academics and technologists, calls on the artificial intelligence research community not only to invest in making AI systems that make good decisions and plans, but also to thoroughly examine how those advances might affect society.

The letter’s authors recognize the remarkable successes in “speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion and question-answering systems,” and argue that it is not unfathomable that the research may lead to the eradication of disease and poverty. But they insisted that “our AI systems must do what we want them to do” and laid out research objectives that will “help maximize the societal benefit of AI.”

The letter reads, in part:

"As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."
"I am in the camp that is concerned about super intelligence," Bill Gates recently wrote in a Reddit AMA. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

What do you think? Are you concerned about the threat of artificial intelligence?

SOURCE: FW: Thinking

By 33rd Square
