33rd Square Business Tools: Jaan Tallinn
Showing posts with label Jaan Tallinn.

Wednesday, February 1, 2017

A Super-Powered Panel Discusses Superintelligence


Artificial Intelligence

At the recent Beneficial AI Conference, Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discussed with moderator Max Tegmark what the likely outcomes might be if we succeed in building human-level AGI, and what we would like to happen.


Last month a panel of experts gathered at the Beneficial AI Conference in Asilomar, California, organized by the Future of Life Institute, to discuss the most important issue of this century. This amazing group of AI researchers from academia and industry, along with thought leaders in economics, law, ethics, and philosophy, spent five days dedicated to beneficial AI.


Beneficial AI Conference


Below, Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with moderator Max Tegmark what the likely outcomes might be if we succeed in building human-level AGI, and what we would like to happen.

When Tegmark polls the panel about when this may happen, the consensus (apart from Musk, who plays to the crowd) is that it will happen on a timescale of years. Tegmark comments that the timescale makes a huge difference: hard takeoff versus soft takeoff.

"We're talking about human AI. Human AI is by definition at human levels, therefore is human."
Kurzweil argued that the development of AGI might go better in a slow-takeoff scenario. "As technologists we should do everything we can to keep the technology safe and beneficial. As we do each specific application, like self-driving cars, there's a whole host of ethical issues to address, but I don't think we can solve the problem just technologically." Kurzweil projects that even if the most perfect and safe AI is created, if it comes at the expense of the political system or other factors, it won't be an ideal outcome.

"We're talking about human AI. Human AI is by definition at human levels, therefore is human," he states. According to the futurist, the issue of how we make humans ethical is the same issue as how we make AIs human-level ethical.

In conjunction with the AI conference in Asilomar, a large group of leaders in AI and related fields teamed up and extended the earlier open letter into a set of 23 principles for AI research, design, and use, intended to ensure that AI lives up to its great potential to help and empower people in the decades and centuries ahead.

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. According to the conference participants, continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.


The Asilomar Principles 

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence. 
2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as: 
  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?
3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers. 
4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI. 
5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards. 

Ethics and Values

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible. 
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why. 
8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority. 
9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications. 
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation. 
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity. 
12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data. 
13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty. 
14) Shared Benefit: AI technologies should benefit and empower as many people as possible. 
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity. 
16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives. 
17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends. 
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities. 
20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. 
21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact. 
22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures. 
23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

If you'd like to join Demis Hassabis, Yann LeCun, Yoshua Bengio, Stuart Russell, Peter Norvig, Ray Kurzweil, Jeff Dean, Tom Gruber, Francesca Rossi, Bart Selman, Leslie Kaelbling, Guru Banavar and others as a signatory, you'll find the principles and a signature form here.

Learn more about the Asilomar AI Principles that resulted from the conference and the process involved in developing them.





SOURCE  Future of Life Institute


By 33rd Square



Tuesday, January 28, 2014

Could a Hardware Overhang Lead To An Intelligence Explosion?

 The Singularity
With Moore's Law continuing to expand computing power, and artificial intelligence software algorithms lagging behind, will the eventual loading of an AGI system onto a future supercomputer pose too grave a risk to humanity?




Moore's Law operates on the hardware side of computer development, exponentially ramping up computing power each year.  Specifically, the law refers to the number of transistors on integrated circuits doubling approximately every two years.

This doubling has now brought us to the cusp of exascale computing, and with it machines that may match the raw processing ability of the human brain. The power consumption and other factors are in no way human-like yet; however, the processing capability may meet the projected memory and processing requirements for duplicating human brain activity.
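A rough back-of-the-envelope sketch in Python may make the scale of that comparison concrete. The exaFLOP figure and the brain-compute estimates below are illustrative assumptions drawn from commonly cited (and widely disputed) ranges, not figures from this article:

# Back-of-the-envelope comparison of an exascale machine with rough
# estimates of the computation needed to emulate a human brain.
# All figures are illustrative assumptions, not measurements.

EXASCALE_FLOPS = 1e18  # one exaFLOP per second (assumed machine)

# Commonly cited, widely disputed, estimates of brain-equivalent compute,
# in operations per second.
brain_estimates = {
    "low estimate  (~1e16 ops/s)": 1e16,
    "mid estimate  (~1e17 ops/s)": 1e17,
    "high estimate (~1e18 ops/s)": 1e18,
}

for label, ops in brain_estimates.items():
    ratio = EXASCALE_FLOPS / ops
    print(f"{label}: exascale machine is ~{ratio:,.0f}x that estimate")

Depending on which estimate one believes, a single exascale machine already sits anywhere from parity to a hundred-fold ahead of the brain on raw throughput.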



Many supercomputers may already be at human levels in these terms; however, the algorithmic methods used do not yet come close to those of the human brain. Projects like the Human Brain Project and the US BRAIN Initiative aim to close the software gap.

In the meantime, Moore's Law is continuing, so that when (and if, admittedly) software becomes capable of mimicking or actually duplicating human intelligence, it may be loaded onto machines that have far more processing power and memory than a human brain.

Jaan Tallinn commented a few years ago at a Humanity+ UK event (from which the top image is taken),
It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang.  The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence.  Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
Such a hardware, or computing, overhang refers to a situation in which new algorithms can exploit existing computing power far more efficiently than before. This can happen when previously used algorithms were suboptimal.
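To make Tallinn's "orders of magnitude" point concrete, here is a minimal Python sketch, assuming that hardware available for a fixed budget doubles roughly every two years (the Moore's Law figure cited above) and that today's hardware is just sufficient for human-level AI; the specific delay values are illustrative assumptions:

# Minimal sketch of a "hardware overhang": if hardware doubles every
# DOUBLING_PERIOD_YEARS while the AGI algorithm remains unsolved, the first
# working AGI finds itself with far more compute than it strictly needs.
# The doubling period and delay values are illustrative assumptions.

DOUBLING_PERIOD_YEARS = 2.0  # Moore's Law figure cited above

def overhang_factor(years_of_delay):
    """Surplus compute factor after the algorithm lags hardware by N years."""
    return 2 ** (years_of_delay / DOUBLING_PERIOD_YEARS)

for delay in (5, 10, 20, 30):
    print(f"{delay:2d} years of algorithmic delay -> "
          f"~{overhang_factor(delay):,.0f}x more compute than needed")

Under these assumptions, a lag of two to three decades is what turns into the "several orders of magnitude" of surplus compute Tallinn describes.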

In the context of Artificial General Intelligence (AGI), this signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an intelligence explosion, or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers.

Anders Sandberg writes, "when you run an AI, its effective intelligence will be proportional to how much fast hardware you can give it (e.g. it might run faster, have greater intellectual power or just be able to exist in more separate copies doing intellectual work). More effective intelligence, bigger and faster intelligence explosion."

As some would argue, this hard take-off scenario could make AGIs much more powerful than anticipated, and present an existential risk.

As futurist David Wood points out, an example of a hardware overhang occurred when thermonuclear bombs were being tested at Bikini Atoll in the Marshall Islands. The projected explosive yield was expected to be four to six megatons, but when the device was detonated, the yield was 15 megatons, two and a half times the expected maximum. Had the scientists of the time been wrong in their estimates by a greater amount, the consequences could have been far worse.

With the risk of AGI, are we likewise looking at a development that may threaten millions of people if it is unleashed? Will putting an AGI onto a zettascale or yottascale computer in the coming years produce a Singularity like the one in Wood's graph below? Moreover, would such a technology spell the end of humanity?



SOURCE  David Wood

By 33rd Square



Sunday, January 26, 2014

Google Makes Another Major AI Acquisition


 Artificial Intelligence
Google has made another key acquisition in artificial intelligence, gobbling up DeepMind, a young company started by Demis Hassabis, Mustafa Suleyman, Jaan Tallinn and Shane Legg.




Another day, and another major acquisition by Google. The company has announced it will buy London-based artificial intelligence company DeepMind. The deal was confirmed to Re/code by Google. There is speculation on the price for DeepMind, but it is rumoured to be in the $400 million range.

DeepMind was founded by well-known AI researchers: neuroscientist Demis Hassabis, a former child chess prodigy; Mustafa Suleyman; Jaan Tallinn, a developer of Skype and Kazaa; and researcher Shane Legg.

Tallinn is also a board member of the Lifeboat Foundation, an organization with the tagline "safeguarding humanity," and at university he majored in theoretical physics.

Legg, a former speaker at the Singularity Summit, is a long-time collaborator with Marcus Hutter.

This latest move expands Google's already impressive roster of artificial intelligence experts, and the acquisition was reportedly led by Google CEO Larry Page. In December 2012, Google hired inventor, entrepreneur, author, and futurist Ray Kurzweil as a director of engineering focused on machine learning and language processing. Kurzweil has said that he wants to build a search engine so advanced that it could act like a “cybernetic friend.”

Not to mention that robotic bodies for the AIs may also be well underway at Google, thanks to its recent string of acquisitions in that area.

Google’s acquisition of DeepMind will help it compete against other major tech companies as they all try to gain business advantages by focusing on deep learning. For example, Facebook recently hired NYU professor Yann LeCun to lead its new artificial intelligence lab, IBM’s Watson supercomputer is now working on deep learning, and Yahoo recently acquired photo analysis startup LookFlow to lead its new deep learning group.

DeepMind’s site currently has only a landing page, which says that it is “a cutting edge artificial intelligence company” building general-purpose learning algorithms for simulations, e-commerce, and games.

Ben Goertzel, Marcus Hutter, Alex Wissner-Gross, and other AI experts may want to keep their voicemail free in the next few weeks.


SOURCE  re/code

By 33rd Square

Monday, August 12, 2013


 
Artificial Intelligence
James Barrat's new book, Our Final Invention: Artificial Intelligence and the End of the Human Era explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own?




Artificial Intelligence helps choose what books you buy, what movies you see, and even who you date. It puts the “smart” in your smartphone, it has the run of your house, and soon it will drive your car. It makes most of the trades on Wall Street, and controls vital energy, water, and transportation infrastructure. But Artificial Intelligence can also threaten our existence.

Though primitive today, AI cognitive systems double in speed and power each year. In as little as a decade, AI could match and then surpass human intelligence. Corporations and government agencies are pouring billions into achieving AI’s Holy Grail—human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine.

Through profiles of tech visionaries, industry watchdogs, and groundbreaking AI systems, James Barrat's Our Final Invention: Artificial Intelligence and the End of the Human Era explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own? And more to the point: Will they allow us to?

For about 20 years Barrat has written and produced documentaries.  A long-standing fascination with Artificial Intelligence came to a head in 2000, when he interviewed inventor Ray Kurzweil, roboticist Rodney Brooks, and sci-fi legend Arthur C. Clarke. Kurzweil and Brooks were casually optimistic about a future they considered inevitable - a time when we will share the planet with intelligent machines.

"It won't be some alien invasion of robots coming over the hill," Kurzweil told Barrat, "because they'll be made by us." In his compound in Sri Lanka, Clarke wasn't so sure. "I think it's just a matter of time before machines dominate mankind," he said. "Intelligence will win out."

Intelligence, not charm or beauty, is the special power that enables humans to dominate Earth. That dominance wasn't won by a huge intellectual margin either, but by a relatively small one. It doesn't take much to take it all. Now, propelled by a powerful economic wind, scientists are developing intelligent machines. Each year intelligence grows closer to shuffling off its biological coil and taking on an infinitely faster and more powerful synthetic one. But before machine intelligence matches our own, we have a chance. We must develop a science for understanding and coexisting with smart, even superintelligent machines. If we fail, we'll be stuck in an unwinnable dilemma. We'll have to rely on the kindness of machines to survive. Will machines naturally love us and protect us? 

Barrat asks, "Should we bet our existence on it?"

Our Final Invention

Our Final Invention is about what can go wrong with the development and application of advanced AI. It's about AI's catastrophic downside, one you'll never hear about from Google, Apple, IBM, and DARPA. It is growing into the most important conversation of our time.
Some advance praise for the book:
"A hard-hitting book about the most important topic of this century and possibly beyond — the issue of whether our species can survive. I wish it was science fiction but I know it’s not.”
Jaan Tallinn, co-founder of Skype
"The compelling story of humanity’s most critical challenge. A Silent Spring for the 21st Century.”
Michael Vassar, former President of the Singularity Institute
"Barrat’s book is excellently written and deeply researched. It does a great job of communicating to general readers the danger of mistakes in AI design and implementation.”
Bill Hibbard, author of Super-Intelligent Machines 
"Our Final Invention is a thrilling detective story, and also the best book yet written on the most important problem of the 21st century.”
Luke Muehlhauser, Executive Director of the Machine Intelligence Research Institute 
"An important and disturbing book.”
Huw Price, co-founder, Cambridge University Center for the Study of Existential Risk



SOURCE  James Barrat

By 33rd Square


Monday, September 10, 2012

Greater-than-human artificial intelligence a threat, says Jaan Tallinn
 
Artificial Intelligence
Jaan Tallinn argues that evolution has outsmarted itself by creating an intelligent species smart enough to understand and control it. Now, with the increasing possibility of smarter-than-human artificial intelligence, humanity, Tallinn argues, is about to commit a similar blunder.
Skype's founding programmer and AI philosopher at Cambridge University Jaan Tallinn explains how AI is taking over from evolution. Tallinn, a philosopher of modern technology, believes the impact of artificial intelligence has reached a crucial threshold.

The Estonian-born Tallinn is a board member of the Lifeboat Foundation, an organization with the tagline "safeguarding humanity," and at university he majored in theoretical physics.

"Evolution actually made a sort of mistake in a sense that it actually created primates with optimisation ability and that optimisation ability got powerful enough to actually understand evolution. It actually created something that was more powerful than itself."

Tallinn's point is that humans are on the verge of potentially repeating that mistake. "I’m giving about 50 per cent probability of this thing (technological singularity) happening this century," he said. "If this thing is going to be really slow, then people have time to turn this thing into policy and then because this is a really contentious issue, it might end up doing a lot of damage," he said.

He also warns of the dangers of only seeing AI as Hollywood's humanoid robots. "I mean intelligence is really about planning and protection and you don’t really need arms and legs to do that. For example, Google is a very famous application of AI."




SOURCE  World News Australia

By 33rd Square