Should We Allow Artificial Intelligence to Happen?

Wednesday, October 22, 2014

With the race for ever-improving AI ramping up, the potential benefits are huge, but as Stephen Hawking and others have warned, AI may also be the riskiest technology ever created.




According to renowned physicist Stephen Hawking, "Success in creating Artificial Intelligence would be the biggest event in human history." In a recent, widely publicized editorial in the Independent, co-written with Max Tegmark, Stuart Russell and Frank Wilczek, he also warned that AI is "potentially our worst mistake in history."

As the race for ever-improving AI ramps up, the potential benefits are huge. However, as the very definition of the Singularity implies, we cannot predict what we might actually achieve once this technology meets and exceeds human capabilities, a level otherwise known as strong AI.

Strong artificial intelligence, also called artificial general intelligence (AGI), is defined as intelligence that can successfully perform any intellectual task a human being can. By this standard, all AI progress so far has been non-general, narrow, or weak.

Will AI destroy humanity?

A key element of the Singularity is that once such strong AI exists, recursive self-improvement will kick in: the software will begin rewriting its own code to make itself better. Running on digital (or quantum) computers, these improvements could take place over a very short period of time, leading to an intelligence explosion, or hard take-off scenario. This possibility continues to be hotly debated.

"There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains."


"There are no fundamental limits to what can be achieved," wrote Hawking and his co-authors, "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains."

Clearly a hard take-off is the most worrying scenario for strong AI development.
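To make the hard take-off idea concrete, here is a purely illustrative toy model, not taken from Hawking's editorial or any real AI system: before reaching a hypothetical human-level threshold, capability grows by a fixed amount per cycle because human researchers drive the progress; after the threshold, the system improves itself, so each cycle's gain is proportional to its current capability. All the constants (the threshold, the gains, the starting level) are arbitrary assumptions chosen only to show how compounding self-improvement can produce a sudden runaway.

```python
# Toy illustration of a "hard take-off". All numbers are arbitrary
# assumptions for illustration only; this is not a model of any real AI.

HUMAN_LEVEL = 100.0        # hypothetical "human-equivalent" capability threshold
HUMAN_DRIVEN_GAIN = 5.0    # fixed gain per cycle while humans drive progress
SELF_IMPROVE_RATE = 0.5    # fractional gain per cycle once the system improves itself


def capability_over_time(cycles: int, start: float = 10.0) -> list[float]:
    """Return the capability level after each improvement cycle."""
    history = [start]
    capability = start
    for _ in range(cycles):
        if capability < HUMAN_LEVEL:
            # Pre-threshold: linear progress, limited by human researchers.
            capability += HUMAN_DRIVEN_GAIN
        else:
            # Post-threshold: recursive self-improvement compounds (exponential).
            capability *= 1.0 + SELF_IMPROVE_RATE
        history.append(capability)
    return history


if __name__ == "__main__":
    for cycle, level in enumerate(capability_over_time(30)):
        marker = "  <-- self-improving" if level >= HUMAN_LEVEL else ""
        print(f"cycle {cycle:2d}: capability {level:10.1f}{marker}")
```

Run for 30 cycles, the model crawls along linearly for 18 cycles and then multiplies its capability roughly 130-fold in the remaining 12. That abrupt transition, rather than any of the specific numbers, is what the term "hard take-off" is meant to evoke.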

In the video above, produced by Bahrain-based YouTuber Sharkee, a poll at the end asks viewers whether they would allow strong artificial intelligence to happen. So far, the results strongly favor pursuing strong AI despite the risks.

As Hawking and his co-writers warn, "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

Hawking's warning is not necessarily a call to forbid AI development. Rather, he suggests we explore the implications now, to improve the chances of reaping the benefits and avoiding the risks.

Supporting organizations devoted to these issues, such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute, is one way. Another key goal should be educating yourself and others about the risks and rewards presented by the development of strong AI.


SOURCE  Sharkee

By 33rd Square
