33rd Square Business Tools: existential threat
Showing posts with label existential threat.

Friday, October 23, 2015

the Threat of artificial intelligence


Existential Threats


At an event organized by the Permanent Mission of Georgia, in collaboration with the United Nations Interregional Crime and Justice Research Institute, Max Tegmark and Nick Bostrom discussed existential threats to humanity, including that of artificial intelligence.  



(The embedded video is quite long; if you want to skip ahead, Max Tegmark's portion starts at 1:55:31 and Nick Bostrom's at 2:14:48.)

Max Tegmark, known as "Mad Max" for his unorthodox ideas and passion for adventure, has scientific interests ranging from precision cosmology to the ultimate nature of reality, all explored in his new popular book Our Mathematical Universe. He is an MIT physics professor with more than two hundred technical papers and has been featured in dozens of science documentaries. His work with the SDSS collaboration on galaxy clustering shared the first prize in Science magazine's "Breakthrough of the Year: 2003." He is founder (with Anthony Aguirre) of the Foundational Questions Institute.

Tegmark is also one of the co-founders of the Future of Life Institute.

Nick Bostrom is a Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and, most recently, the book Superintelligence: Paths, Dangers, Strategies. He is known for his pioneering work on existential risk, the simulation argument, anthropics, AI safety, and global consequentialism. He has received the Eugene R. Gannon Award for the Continued Pursuit of Human Advancement and been named One of the Top 100 Global Thinkers by Foreign Policy Magazine.




By 33rd Square



Friday, May 15, 2015

Stephen Hawking Continues To Warn Against Artificial Intelligence

Artificial Intelligence
Stephen Hawking, the renowned theoretical physicist and cosmologist, has reiterated his warnings about artificial intelligence at a recent conference in London.





Speaking at the Zeitgeist 2015 conference in London, Stephen Hawking warned that smart computers will overtake human intelligence at some point in the next century.

The internationally renowned cosmologist and Cambridge University professor said, “Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.”

Hawking, who signed an open letter alongside Max Tegmark, Elon Musk, Demis Hassabis, Yann LeCun, Geoffrey Hinton, Ben Goertzel and other experts in the field, also said, “Our future is a race between the growing power of technology and the wisdom with which we use it.”

"Computers will overtake humans with AI at some within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours."


In the short term, people are concerned about who controls AI, but in the long term, the concern will be whether AI can be controlled at all, he said.

Hawking, the author of A Brief History of Time, believes that scientists and technologists need to safely and carefully coordinate and communicate advancements in AI to ensure it does not grow beyond humanity's control.

The existential risk posed by artificial intelligence has also been brought to a wider audience than just technologists by Nick Bostrom's recent book, Superintelligence.

Bostrom, an Oxford professor and Director of the Future of Humanity Institute, seeks in his book to replace the prediction of a Singularity—the idea that there will be a crossover point where society becomes unrecognizable due to superior machine intelligence—with that of an "intelligence explosion."

Some critics liken today's predictions of the threat of superintelligence to past warnings about global warming and acid rain. What do you think? Is this all just hype, or is the threat of super-smart AIs real?


SOURCE  Tech World

By 33rd Square

Monday, April 6, 2015

Is AI Truly the Future? What Real Engineers Think About Artificial Intelligence

 Artificial Intelligence
What do computer engineers and industry experts think of artificial intelligence? Do the recent depictions of the technology in science fiction match up with reality?





Artificial intelligence offers the future world the hope of automation built on an intelligent foundation. In this light, AI is not employed simply to provide automation, but to imbue it with a kind of artificial awareness that simulates how humans track and monitor events as they unfold. With such digital insight, the AI of the future is hoped to reason much as a human can—artificial general intelligence. Moreover, computer simulations of human thinking may ultimately operate at a much higher level than humans themselves.

What are computer engineers and industry experts really saying about AI?


Something as simple as a greater capacity to store vast quantities of information in short-term memory buffers would give an AI system a major advantage over the rather limited capacity of human short-term memory storage and manipulation. But what are engineers and experts really saying? Is AI a viable product for the world of the future, or is man simply dreaming?

Ray Kurzweil Weighs in on the Future

According to one source, AI enthusiast and futurist Ray Kurzweil has recently weighed in on the reality of AI as a viable technology of the future, in light of the recent movie Her, in which an AI system falls in love with a human.

Although Kurzweil is optimistic that the near future will give rise to the technology depicted in the movie, he does not believe AI will overtake the human capacity to function. One reason for this belief is that he envisions humans and AI systems integrating symbiotically.


Did Transcendence Get It Right?

According to another review, a second recent movie, Transcendence, showcases the latest in AI technology. Computer engineers have shed light on issues raised in the movie, noting that the technology already exists for vital AI tasks such as image reconstruction and image recognition. As the technology improves, a computer's ability to unite images, much as the eye and the brain coordinate, is expected to improve the speed and accuracy of 3D object and facial recognition.

The Dark Side of AI

It is important to remember that AI carries many potential dangers. Rather than doubting the viability of AI as a shaper of the future, the greater fear among engineers and experts is that AI will increasingly be used in dangerous ways. Ultimately, the danger of AI is summed up in the idea that the machines will become smarter than the humans, giving rise to the notion that humans are irrelevant to the future of machines. In the meantime, reports of AI-guided bombs that can identify their intended targets have become the talk of future technological terrorism. In this light, it is difficult to ignore the impact AI is already having on the future of mankind, or at the very least on our perception of how beneficial AI is to our species.

Artificial intelligence is in many ways already a reality. Many of the components needed for a working AI system exist and produce functional results. From visual and audio recognition to data recall and memory buffering, the pieces of the digital mind of the future have been thoroughly researched and tested, giving rise to learning mechanisms that simulate, within a limited capacity, the human mind and even some emotion-like responses. From this point, time will only bring more advanced mechanisms that do the job more accurately and with a greater range of success. However, it is always worth paying attention to what real engineers are saying on the matter—don’t believe everything you see in the movies.


The information for this article was provided by professionals who offer online civil engineering degrees.



By Dixie Somers

Author Bio - Dixie is a freelance writer who loves to write about business, finance and technology. She lives in Arizona with her husband and three beautiful daughters.

Tuesday, February 24, 2015

Michael Vassar Explains Why Greater-Than-Human AI Could Doom Humanity

 Artificial Intelligence
In a new video, futurist Michael Vassar explains why greater-than-human artificial intelligence would be the end of humanity. The only thing that could save us, he says, is if due caution were observed and a framework installed to prevent such a thing from happening.





Futurist Michael Vassar explains why it makes perfect sense to conclude that the creation of greater-than-human AI would doom humanity in this video from Big Think.

"The major global catastrophic threat to humanity is not AI but rather the absence of social, intellectual frameworks for people quickly and easily converging on analytically compelling conclusions."


The only thing that could save us, he argues, is due caution and a framework installed to prevent such an outcome. Yet Vassar notes that AI itself isn't the greatest risk to humanity.

"I conclude that the major global catastrophic threat to humanity is not AI but rather the absence of social, intellectual frameworks for people quickly and easily converging on analytically compelling conclusions," he says.

Michael Vassar on the Threat of AI

Greater than human artificial intelligence is a specific threat to humanity because of what Steve Omohundro has called basic AI drives.  (For a brief description by Omohundro, see the video embedded below.)

As Vassar suggests, we should expect a superintelligent AI to reconfigure the universe in a manner that does not necessarily preserve human values. "As far as I can tell this position is analytically compelling. It’s not a position that a person can intelligently, honestly, and reasonably be uncertain about," he says.

Nick Bostrom, who recently wrote the book Superintelligence, was aware of the basic concerns associated with AI risk 20 years ago, according to Vassar, and "wrote about them intelligently in a manner that ought to be sufficiently compelling to convince any thoughtful and open minded person." Vassar laments that Bostrom had to spend a decade becoming the director of an incredibly prestigious institute and writing an incredibly rigorous, meticulous book in order to get a still-tiny number of people, still a minority of the world, to recognize the threat of AI.

Vassar is an American futurist, activist, and entrepreneur. He is the co-founder and Chief Science Officer of MetaMed Research. He was president of the Machine Intelligence Research Institute (then the Singularity Institute) until January 2012. Vassar advocates safe development of new technologies for the benefit of humankind. He has co-authored papers on the risks of advanced molecular manufacturing with Robert Freitas, and has written the special report "Corporate Cornucopia: Examining the Special Implications of Commercial MNT Development" for the Center for Responsible Nanotechnology Task Force.




SOURCE  Big Think

By 33rd SquareEmbed

Wednesday, May 9, 2012

Existential Risk Of Artificial Intelligence

 Existential Risk
Existential threats form a sub-category of global catastrophic risks, in which an adverse outcome would either cause the extinction of Earth-originating intelligent life or permanently and drastically destroy its future potential. One such form of existential risk is the rise of smarter-than-human artificial intelligence.

Global catastrophic risks are those that pose serious threats to human well-being on a global scale. An immensely diverse collection of events could constitute global catastrophes: they range from volcanic eruptions to pandemic infections, nuclear accidents to worldwide tyrannies, out-of-control scientific experiments to climatic changes, and cosmic hazards to economic collapse.

Existential threats form a sub-category of global catastrophic risks, in which an adverse outcome would either cause the extinction of Earth-originating intelligent life or permanently and drastically destroy its future potential. It would spell an end to the human story.

One such form of existential risk is the rise of smarter-than-human artificial intelligence.

Because of their extreme severity, existential risks deserve careful attention even if their probability can confidently be assessed to be very small. Reducing existential risk is therefore important.
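A back-of-the-envelope calculation shows why a very small probability can still dominate in expectation. The Python sketch below is purely illustrative; every probability and population figure in it is a hypothetical assumption, not an estimate from this article.

# Hypothetical expected-loss comparison (all figures are illustrative
# assumptions, not estimates drawn from this article).
p_ordinary, loss_ordinary = 0.10, 1_000_000             # 10% chance of losing a million lives
p_existential, loss_existential = 0.001, 7_000_000_000  # 0.1% chance of extinction (~world population)

print(f"Ordinary catastrophe, expected loss:    {p_ordinary * loss_ordinary:,.0f} lives")
print(f"Existential catastrophe, expected loss: {p_existential * loss_existential:,.0f} lives")
# 100,000 vs. 7,000,000: the existential case dominates despite its far
# smaller probability, and this ignores the loss of all future generations.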

In 1965, I.J. Good proposed that machines would one day be smart enough to make themselves smarter. Having made themselves smarter, they would spot still further opportunities for improvement, quickly leaving human intelligence far behind. The pace of an intelligence explosion depends on two conflicting pressures. Each improvement in AI technology increases the ability of AIs to research more improvements, but an AI may also face the problem of diminishing returns as the easiest improvements are achieved first.

The rate of improvement is hard to estimate, but several factors suggest it would be high. The predominant view in the AI field is that the bottleneck for powerful AI is software, not hardware. Continued rapid hardware progress is expected in the coming decades. If and when powerful AI software is developed, there may by that time be a glut of hardware available to run many copies of AIs, and to run them at high speeds. This could amplify the effects of AI improvements.
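Good's two conflicting pressures can be made concrete with a toy growth model: suppose capability grows each step in proportion to its current level raised to an exponent r, where r above 1 means each gain speeds up the next and r below 1 means diminishing returns win out. The Python sketch below is only an illustration; the exponent r, the rate, and the step count are assumptions, not figures from Good or anyone cited here.

# Toy model of an intelligence explosion (illustrative only; the exponent
# r, the rate, and the step count are assumed, not taken from the article).
# Each step, self-improvement adds rate * capability**r:
#   r > 1 -> each gain accelerates the next (explosive growth)
#   r = 1 -> steady exponential growth
#   r < 1 -> diminishing returns dominate (growth levels off)
def simulate(r, steps=30, capability=1.0, rate=0.1):
    """Return the capability level after `steps` rounds of self-improvement."""
    for _ in range(steps):
        capability += rate * capability ** r
    return capability

for r in (0.5, 1.0, 1.5):
    print(f"r = {r}: capability after 30 steps = {simulate(r):.3g}")

Under these assumed numbers, r = 0.5 creeps from 1 to about 6, r = 1 compounds to roughly 17, and r = 1.5 blows up by many orders of magnitude, which is the qualitative difference between the two pressures described above.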

Humans are not optimized for intelligence. Rather, we are the first and possibly dumbest species capable of producing a technological civilization. The first AI with humanlike AI research abilities might be able to reach superintelligence rapidly, in particular more rapidly than researchers and policy-makers can develop adequate safety measures.

Ted Bell, author of Phantom: An Alex Hawke Novel, which features an 'evil' artificial intelligence character, warns, “We’re going to have to be really careful with these machines. We’re ultimately going to have a war with these machines and we might not win it.”


Keeping the artificial intelligence genie trapped in the proverbial bottle could turn an apocalyptic threat into a powerful oracle that solves humanity's problems, said Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky.  Yampolskiy argues that AI agents should essentially be imprisoned to protect humanity.

In this short talk, given at the 25th Oxford Geek Night, Future of Humanity Institute researcher Stuart Armstrong has five minutes to lay out the reasons why artificial intelligence, if it is possible at all, will likely be extremely powerful and considerably dangerous (though not at all in the ways movies imply).




SOURCE  FHIOxford

By 33rd Square