33rd Square Business Tools: Bill Gates

Monday, April 27, 2015


Artificial Intelligence
In Avengers: Age of Ultron, the villain starts out as an artificial intelligence experiment gone wrong. So is AI really the biggest threat to humanity? 





Artificial intelligence development is proceeding at an accelerating rate, but will it reach human-level intelligence (strong AI), and is this a threat to humanity itself? A growing number of researchers, scientists and concerned citizens are raising the alarm about the existential threat of AI.

Much of the recent dialogue about the threat of artificial intelligence follows the release of Nick Bostrom's book, Superintelligence. The book explores intellectual models of the sometimes dire possibilities when machines exceed human intelligence. For Bostrom, superintelligence is not the point when AI conquers the Turing Test; it is what comes after that. Once we build a system as smart as a human, that machine will try to improve itself, and each improvement makes it better at improving itself further, recursively and exponentially.
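To make the logic of that takeoff concrete, here is a deliberately simplistic sketch (our illustration, not Bostrom's; the starting point and the 5% gain per cycle are arbitrary assumptions) of how compounding self-improvement behaves, in Python:

    # Toy model of recursive self-improvement (illustrative only).
    # Assumption: each cycle, a system's capability grows in proportion
    # to its current capability, because a smarter system is better at
    # improving itself. Human-level capability is normalized to 1.0.
    capability = 1.0          # start at roughly human level
    improvement_rate = 0.05   # hypothetical 5% gain per cycle

    for cycle in range(1, 101):
        # Gains compound: the improver itself keeps getting better.
        capability *= 1 + improvement_rate
        if cycle % 20 == 0:
            print(f"cycle {cycle:3d}: capability {capability:6.1f}x human level")

The specific numbers mean nothing; the point is that once improvement feeds back into the improver, growth is exponential, so the distance from roughly human to vastly superhuman can be covered in comparatively few cycles.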


Speaking at the MIT Aeronautics and Astronautics department’s Centennial Symposium in October, SpaceX and Tesla head Elon Musk referred to artificial intelligence as "summoning the demon."
I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn't work out.
Physicist Stephen Hawking told the BBC in December he believes future developments in artificial intelligence have the potential to eradicate mankind. The Cambridge professor, who relies on a form of artificial intelligence to communicate, said if technology could match human capabilities “it would take off on its own, and re-design itself at an ever increasing rate.”



He also said that due to biological limitations, there would be no way humans could match the speed of development of technology.

"Humans, who are limited by slow biological evolution, couldn't compete and would be superseded. The development of full artificial intelligence could spell the end of the human race."


An open letter, drafted by the Future of Life Institute and signed by hundreds of academics and technologists, calls on the artificial intelligence research community to not only invest in research into making good decisions and plans for the future, but also to thoroughly examine how those advances might affect society.

The letter’s authors recognize the remarkable successes in “speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion and question-answering systems,” and argue that it is not unfathomable that the research may lead to the eradication of disease and poverty. But they insist that “our AI systems must do what we want them to do” and lay out research objectives that will “help maximize the societal benefit of AI.”

How Dangerous is Artificial Intelligence?

As the letter puts it: “As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”
"I am in the camp that is concerned about super intelligence," Bill Gates recently wrote in a Reddit AMA. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

What do you think? Are you concerned about the threat of artificial intelligence?

SOURCE  FW: Thinking

By 33rd Square

Monday, April 20, 2015


Superintelligence
Baidu CEO and chairman Robin Li interviewed Bill Gates and Elon Musk at the Boao Forum last month. The discussion ranged from entrepreneurship to philanthropy, from "summoning the demon" of artificial intelligence to the pros and cons of technocratic government, to automated driving, and more.





In the session "Dialogue: Technology & Innovation for a Sustainable Future" at the 2015 Boao Forum for Asia (BFA) in Boao, China, Microsoft co-founder Bill Gates and SpaceX and Tesla founder Elon Musk were jointly interviewed by Baidu CEO Robin Li.

In the video above (it is not the greatest quality, but content-wise it is very good), Gates and Musk each discuss some of their early challenges in creating their companies and gaining incredible wealth.


When asked how he manages his multiple companies, Musk replies, "I wouldn't really recommend this. It's not good for quality of life."


In terms of artificial intelligence, Gates stated that he would highly recommend Nick Bostrom's Superintelligence. During the interview, Gates echoed Musk’s concerns about the safety of future superintelligence, with Musk noting that a good analogy would be “if you consider nuclear research, with its potential for a very dangerous weapon: releasing the energy is easy; containing that energy safely is very difficult. And so I think the right emphasis for AI research is on AI safety.”

"We've essentially been building the content base for the superintelligence."


"We have a general purpose learning algorithm, that evolution has endowed us with," comments Gates. "It's running in an extremely slow computer," he states while pointing at his own head. "It has very limited memory size, [no] ability to send data to other computers—we have to use this funny mouth thing here, whenever we build a new one, it starts over, it doesn't know how to walk...."


"As soon as this algorithm—of taking experience and turning it into knowledge—which is so amazing, and we have not done in software—as soon as you do that, it's not clear you'll even know when you're just at the human level," he continues. "You'll be at the superhuman level almost as soon as that algorithm is implanted in silicon."

As Watson demonstrated, such an intelligence can then go on to read all the content of the internet. "We've essentially been building the content base for the superintelligence." Gates wonders how other people cannot see this as a huge challenge for humanity in the not-too-distant future.
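A rough back-of-envelope comparison helps explain why Gates expects the jump past human level to be so abrupt (this arithmetic is our illustration, not Gates's, and the figures are order-of-magnitude estimates):

    # Back-of-envelope: biological vs. silicon substrate speed.
    # Both figures are rough order-of-magnitude estimates.
    neuron_firing_rate_hz = 200      # neurons spike at ~100-1000 Hz at most
    silicon_clock_rate_hz = 2e9      # a modest 2 GHz processor clock

    speedup = silicon_clock_rate_hz / neuron_firing_rate_hz
    print(f"raw serial speed advantage of silicon: ~{speedup:,.0f}x")
    # prints: raw serial speed advantage of silicon: ~10,000,000x

And unlike the brain, a silicon mind could copy its learned state directly to other machines, sidestepping the "funny mouth thing" bottleneck Gates jokes about.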


SOURCE  Adam Ford

By 33rd Square