33rd Square Business Tools: Stephen Hawking - All Posts
Showing posts with label Stephen Hawking.

Thursday, October 20, 2016

Artificial Intelligence Will be The Best or Worst Thing to Happen to Humanity


Artificial Intelligence

Stephen Hawking recently spoke at the launch of the new Leverhulme Centre for the Future of Intelligence in Cambridge. At the event, he said the rise of AI would transform every aspect of our lives and would be a global event on a par with the Industrial Revolution.


“Success in creating AI could be the biggest event in the history of our civilization,” claimed renowned cosmologist Professor Stephen Hawking at the opening of the new Leverhulme Centre for the Future of Intelligence (CFI) in Cambridge, UK. “But it could also be the last – unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers like powerful autonomous weapons or new ways for the few to oppress the many.”

The rise of superintelligent AI has been a favorite topic of Hawking's over the last few years. Increasingly, researchers all over the world are taking the risk of advanced AI catching up to, and exceeding, human intelligence very seriously.

"I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer.  It therefore follows that computers can, in theory, emulate human intelligence — and exceed it," stated Hawking through his computer-generated voice.

“We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialization.”

Stephen Hawking Says Artificial Intelligence Will be The Best or Worst Thing to Happen to Humanity

"The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity," Hawking concluded. "We do not yet know which.  That is why in 2014, I and a few others called for more research to be done in this area.  I am very glad that someone was listening to me!"

"Success in creating AI could be the biggest event in the history of our civilization."
The Centre for the Future of Intelligence has been initiated to focus on seven distinct projects in the first three-year phase of its work, reaching out to researchers and connecting them and their ideas to the challenges of making the best of AI. Among the initial research topics are: ‘Science, value and the future of intelligence’; ‘Policy and responsible innovation’; ‘Autonomous weapons – prospects for regulation’ and ‘Trust and transparency’.

The Academic Director of the Centre, and Bertrand Russell Professor of Philosophy at Cambridge, Huw Price, said at the event, “The creation of machine intelligence is likely to be a once-in-a-planet’s-lifetime event. It is a future we humans face together. Our aim is to build a broad community with the expertise and sense of common purpose to make this future the best it can be.”

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John’s College, Cambridge, also said, “The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks—from recognizing images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and  study its implications.”

“Recent landmarks such as self-driving cars or a computer game winning at the game of Go, are signs of what’s to come,” added Professor Hawking. “The rise of powerful AI will either be the best or the worst thing ever to happen to humanity. We do not yet know which. The research done by this centre is crucial to the future of our civilization and of our species.”



SOURCE  University of Cambridge


By 33rd Square



Tuesday, June 28, 2016

Larry King’s Conversation with Stephen Hawking


Artificial Intelligence

Stephen Hawking – one of the world’s most brilliant thinkers, and a man who rarely gives interviews – joins Larry to discuss the greatest issues facing the planet, where artificial intelligence is headed (and what he makes of Kurzweil’s singularity theory), and what still mystifies him about the universe.


Larry King recently interviewed physicist Stephen Hawking for RT. The video below also features  astrophysicist Garik Israelian on creating the Starmus Festival, which celebrates the intersection of science and art and is this year dedicated to Hawking.

In the interview, when asked about the dangers of artificial intelligence, Hawking explained that increases in technology may be coming at a steep cost, saying: “Governments seem to be engaged in an AI arms race, designing planes and weapons with intelligent technologies. The funding for projects directly beneficial to the human race, such as improved medical screening, seems a somewhat lower priority.”

Nor is misplaced funding the only cost that artificial intelligence may bring.

“Artificial intelligence has the potential to evolve faster than the human race. Beneficially, AI could co-exist with humans,” but there must be a line drawn, Hawking said. “Once machines reach the critical stage of being able to evolve themselves, we cannot predict whether their goals will be the same as ours.”

"I don't think advances in artificial intelligence will necessarily be benign."
When asked about Ray Kurzweil and the Singularity, Hawking responds, "I think his views are both too simplistic and too optimistic."

"Exponential growth will not continue to accelerate," says Hawking. "Something we don't predict will interrupt it, as has happened with similar forecasts in the past. And I don't think advances in artificial intelligence will necessarily be benign. "

Hawking was at the Starmus Festival in the Canary Islands where, this year, the festival is dedicated to the lifelong researcher. Hawking has been a large presence in science and mathematics, and his reputation precedes him. One of the many unique things about Hawking is how well he has beaten the odds. Having lived with amyotrophic lateral sclerosis (ALS), Hawking has become gradually paralyzed over the decades. The majority of ALS patients die of respiratory failure within three to five years from the onset of symptoms. However, Hawking has made it to 50 years and counting.


SOURCE  RT


By 33rd Square


Tuesday, April 12, 2016

Initiative Launched to Send Microrobots to Alpha Centauri at Near Light Speed


Space

In a surprise boost for interstellar travel, Silicon Valley philanthropist Yuri Milner and Stephen Hawking have announced a $100m project to research sending a small, lightweight robot to Alpha Centauri at near light speed.


In a joint announcement at the One World Observatory in New York City, billionaire Yuri Milner and Stephen Hawking unveiled Breakthrough Starshot, a $100 million research and engineering program seeking to lay the foundations for an eventual interstellar voyage. Milner and Hawking were joined by Ann Druyan, Freeman Dyson, Mae Jemison, Avi Loeb and Pete Worden to make the announcement. (Video of the announcement below.)

"For the first time in human history we can do more than just gaze at the stars,we can actually reach them."
This is the third Breakthrough Initiative in the past four years, and it will test the technologies needed to send a featherweight robot spacecraft to the Alpha Centauri star system, 4.37 light years away (40,000,000,000,000 kilometres, or 25 trillion miles).

The first step of the program involves building light-propelled “nanocrafts” that can travel at relativistic speeds—up to 20 percent of the speed of light. At such velocities, the robotic spacecraft would pass Pluto in three days and reach our nearest neighboring star system, Alpha Centauri, only about 20 years after launch.
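Those headline figures follow from simple arithmetic. Here is a hedged back-of-the-envelope check, assuming a constant cruise speed and ignoring the acceleration and deceleration phases, which the program has not specified:

    # Back-of-the-envelope travel time for a nanocraft cruising at 20% of c.
    # Distance and speed are from the announcement; the constant-velocity
    # assumption is ours.
    LIGHT_YEAR_KM = 9.4607e12          # kilometres per light year
    DISTANCE_LY = 4.37                 # Alpha Centauri distance, light years
    SPEED_FRACTION_OF_C = 0.20         # target cruise speed

    distance_km = DISTANCE_LY * LIGHT_YEAR_KM
    travel_years = DISTANCE_LY / SPEED_FRACTION_OF_C

    print(f"Distance: {distance_km:.2e} km")      # ~4.1e13 km (~40 trillion km)
    print(f"Travel time: {travel_years:.1f} yr")  # ~21.9 years, i.e. roughly 20

At one-fifth of light speed, each light year takes five years to cross, so 4.37 light years works out to just over two decades.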

Initiative Launched to Send Microrobots to Alpha Centauri at Near Light Speed


“For the first time in human history we can do more than just gaze at the stars,” Milner said. “We can actually reach them.”

Milner cites three factors in the exponential rise of technology that make the project possible: mobile phone technology, nanotechnology and photonics. Broken down, the probe itself would be built from the same chip technology as your phone, carried on nanotech sails and propelled by lasers.

"The message that Stephen Hawking and I want to send is that for the first time ever, this is an achievable goal," Milner said. "We can stand up and talk about it. Fifteen years ago, it wouldn't have made sense to make this investment. Now we've looked at the numbers, and it does."

At Tuesday's announcement, Hawking spoke of humanity's need for exploration as a driving force behind the project.

"Today we commit to this next great leap into the cosmos because we are human and our nature is to fly," he said.





SOURCE  Livestream


By 33rd Square


Tuesday, July 21, 2015

Breakthrough Initiatives: Yuri Milner and Stephen Hawking Announce $100 Million Search for Intelligent Life

SETI


Breakthrough Initiatives, a $100 million, 10-year multi-disciplinary effort to dramatically accelerate the search for intelligent life in the Universe, was announced recently by Russian billionaire Yuri Milner and Stephen Hawking. Project leaders include Martin Rees, Frank Drake, Geoff Marcy, Pete Worden, Ann Druyan, Dan Werthimer and Andrew Siemion.
 


Russian Billionaire Yuri Milner was joined at The Royal Society recently by Stephen Hawking, Martin Rees, Frank Drake, Geoff Marcy, Pete Worden and Ann Druyan to announce the unprecedented $100 million global Breakthrough Initiatives to reinvigorate the search for life in the universe.

Breakthrough Initiatives

The first of two initiatives announced, Breakthrough Listen, will be the most powerful, comprehensive and intensive scientific search ever undertaken for signs of intelligent life beyond Earth. The second, Breakthrough Message, will fund an international competition to generate messages representing humanity and planet Earth, which might one day be sent to other civilizations.

Breakthrough Listen

  • Biggest scientific search ever undertaken for signs of intelligent life beyond Earth.
  • Significant access to two of the world's most powerful telescopes – the 100-metre Robert C. Byrd Green Bank Telescope in West Virginia, USA ("Green Bank Telescope") and the 64-metre diameter Parkes Telescope in New South Wales, Australia ("Parkes Telescope").
  • 50 times more sensitive than previous programs dedicated to SETI research.
  • Will cover 10 times more of the sky than previous programs.
  • Will scan at least 5 times more of the radio spectrum – and 100 times faster.
  • In tandem with the radio search, the Automated Planet Finder Telescope at Lick Observatory in California, USA ("Lick Telescope") will undertake the world's deepest and broadest search for optical laser transmissions.
  • Initiative will span 10 years.
  • Financial commitment is $100,000,000.
Unprecedented scope

The program will include a survey of the 1,000,000 closest stars to Earth. It will scan the center of our galaxy and the entire galactic plane. Beyond the Milky Way, it will listen for messages from the 100 closest galaxies. The telescopes used are exquisitely sensitive to long-distance signals, even of low or moderate power:
  • If a civilization based around one of the 1,000 nearest stars transmits to us with the power of common aircraft radar, Breakthrough Listen telescopes could detect it.
  • If a civilization transmits from the center of the Milky Way, with any more than 12 times the output of interplanetary radars we use to probe the Solar System, Breakthrough Listen telescopes could detect it.
  • From a nearby star (25 trillion miles away), Breakthrough Listen's optical search could detect a 100-watt laser (the energy output of a normal household light bulb).
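That last claim only works because a laser can be beamed. A rough, illustrative link budget makes the point; the transmitter aperture, wavelength and receiver area below are our own assumptions for the sketch, not parameters from the program:

    import math

    # Rough link budget for detecting a 100 W laser from ~25 trillion miles.
    # All instrument parameters are illustrative assumptions, not Breakthrough's.
    P_WATTS = 100.0
    DISTANCE_M = 25e12 * 1609.34       # 25 trillion miles in metres
    WAVELENGTH_M = 1e-6                # assumed near-infrared laser
    TX_APERTURE_M = 10.0               # assumed transmitter mirror diameter
    RX_AREA_M2 = 4.5                   # assumed receiving telescope area

    # An isotropic 100 W source would be hopelessly faint at this range...
    flux_iso = P_WATTS / (4 * math.pi * DISTANCE_M**2)    # ~5e-33 W/m^2

    # ...but a diffraction-limited beam concentrates the power enormously.
    gain = (math.pi * TX_APERTURE_M / WAVELENGTH_M) ** 2  # ~1e15 on-axis
    flux_beamed = flux_iso * gain                         # ~5e-18 W/m^2

    photon_energy = 6.626e-34 * 2.998e8 / WAVELENGTH_M    # ~2e-19 J per photon
    rate = flux_beamed * RX_AREA_M2 / photon_energy
    print(f"~{rate:.0f} photons/s at the receiver")       # order 100 per second

A photon arrival rate of that order, concentrated at a single wavelength, is the kind of signature an optical SETI search can pick out of ordinary starlight.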

Open Data, Open Source, Open Platform

The program will generate vast amounts of data. All data will be open to the public. This will likely constitute the largest amount of scientific data ever made available to the public. The Breakthrough Listen team will use and develop the most powerful software for sifting and searching this flood of data.

All software will be open source. Both the software and the hardware used in the Breakthrough Listen project will be compatible with other telescopes around the world, so that they could join the search for intelligent life. As well as using the Breakthrough Listen software, scientists and members of the public will be able to add to it, developing their own applications to analyze the data.
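In practice, the "sifting and searching" is largely a hunt for persistent, very narrow spectral lines, which natural radio sources do not produce. The sketch below is a toy illustration of that idea, not the actual Breakthrough Listen software:

    import numpy as np

    # Toy narrowband search: bury a weak tone in noise, then look for
    # FFT bins that stand far above the noise floor.
    rng = np.random.default_rng(0)
    n, fs = 2**20, 1e6                     # samples and sample rate (Hz)
    t = np.arange(n) / fs

    tone = 0.02 * np.sin(2 * np.pi * 123456.7 * t)   # weak hypothetical signal
    voltage = rng.normal(size=n) + tone

    spectrum = np.abs(np.fft.rfft(voltage)) ** 2
    freqs = np.fft.rfftfreq(n, 1 / fs)

    threshold = spectrum.mean() + 20 * spectrum.std()  # crude detector
    hits = freqs[spectrum > threshold]
    print("candidate frequencies (Hz):", hits)         # bins near 123456.7 Hz

Real pipelines add corrections for Doppler drift, interference rejection and repeat observations, but the core idea is the same.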

Crowdsourced processing power

Breakthrough Listen will also be joining and supporting SETI@home, University of California, Berkeley's groundbreaking distributed computing platform, with 9 million volunteers around the world donating their spare computing power to search astronomical data for signs of life. Collectively, they constitute one of the largest supercomputers in the world.

Breakthrough Message

  • International competition to create digital messages that represent humanity and planet Earth.
  • The pool of prizes will total $1,000,000.
  • Details on the competition will be announced at a later date.
  • This initiative is not a commitment to send messages. It's a way to learn about the potential languages of interstellar communication and to spur global discussion on the ethical and philosophical issues surrounding communication with intelligent life beyond Earth.

Project Leadership

  • Martin Rees, Astronomer Royal, Fellow of Trinity College; Emeritus Professor of Cosmology and Astrophysics, University of Cambridge.
  • Pete Worden, Chairman, Breakthrough Prize Foundation.
  • Frank Drake, Chairman Emeritus, SETI Institute; Professor Emeritus of Astronomy and Astrophysics, University of California, Santa Cruz; Founding Director, National Astronomy and Ionosphere Center; Former Goldwin Smith Professor of Astronomy, Cornell University.
  • Geoff Marcy, Professor of Astronomy, University of California, Berkeley; Alberts SETI Chair.
  • Ann Druyan, Creative Director of the Interstellar Message, NASA Voyager; Co-Founder and CEO, Cosmos Studios; Emmy and Peabody award winning Writer and Producer.
  • Dan Werthimer, Co-founder and chief scientist of the SETI@home project; director of SERENDIP; principal investigator for CASPER.
  • Andrew Siemion, Director, Berkeley SETI Research Center.

Milner said: "With Breakthrough Listen, we're committed to bringing the Silicon Valley approach to the search for intelligent life in the Universe. Our approach to data will be open and taking advantage of the problem-solving power of social networks."

Hawking said: "I strongly support the Breakthrough Initiatives and the search for extraterrestrial life."

Drake said: "Right now there could be messages from the stars flying right through the room, through us all. That still sends a shiver down my spine. The search for intelligent life is a great adventure. And Breakthrough Listen is giving it a huge lift."

Voyager Interstellar Message

"With Breakthrough Listen, we're committed to bringing the Silicon Valley approach to the search for intelligent life in the Universe."



"We've learned a lot in the last fifty years about how to look for signals from space. With the Breakthrough Initiatives, the learning curve is likely to bend upward significantly," he added.

Druyan said:

The Breakthrough Message competition is designed to spark the imaginations of millions, and to generate conversation about who we really are in the universe and what it is that we wish to share about the nature of being alive on Earth. Even if we don't send a single message, the act of conceptualizing one can be transformative. In creating the Voyager Interstellar Message, we strived to attain a cosmic perspective on our planet, our species and our time. It was intended for two distinct kinds of recipients - the putative extraterrestrials of distant worlds in the remote future and our human contemporaries. As we approach the Message's fortieth anniversary, I am deeply grateful for the chance to collaborate on the Breakthrough Message, for what we might discover together and in the hope that it might inform our outlook and even our conduct on this world.



SOURCE  Breakthrough Initiatives


By 33rd Square


Friday, May 15, 2015

Stephen Hawking Continues To Warn Against Artificial Intelligence

Artificial Intelligence
Stephen Hawking, the renowned theoretical physicist and cosmologist, has reiterated his warnings about artificial intelligence at a recent conference in London.





Speaking at the Zeitgeist 2015 conference in London, Stephen Hawking warned that smart computers will overtake human intelligence at some point in the next century.

The internationally renowned cosmologist and Cambridge University professor said, “Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.”

Hawking, who signed an open letter alongside Max Tegmark, Elon Musk, Demis Hassabis, Yann LeCun, Geoffrey Hinton, Ben Goertzel and other experts in the field, also said, “Our future is a race between the growing power of technology and the wisdom with which we use it.”

"Computers will overtake humans with AI at some within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours."


In the short term, people are concerned about who controls AI, but in the long term, the concern will be whether AI can be controlled at all, he said.

Hawking, the author of A Brief History of Time, believes that scientists and technologists need to safely and carefully coordinate and communicate advancements in AI to ensure it does not grow beyond humanity's control.

The existential risk posed by artificial intelligence has also been brought to a wider audience than just technologists by Nick Bostrom's recent book, Superintelligence.

Bostrom, a professor at Oxford and Director of its Future of Humanity Institute, seeks in his book to replace the prediction of a Singularity – the idea that there will be a crossover point where society becomes unrecognizable due to superior machine intelligence – with that of an "intelligence explosion."

Some critics liken predictions of the threat of superintelligence to past predictions about global warming and acid rain. What do you think? Is this all just hype, or is the threat of super-smart AIs real?


SOURCE  Tech World

By 33rd Square

Wednesday, October 22, 2014

Should We Allow Artificial Intelligence to Happen?
 Artificial Intelligence
With the race for ever improving AI ramping up, the potential benefits are huge, but as Stephen Hawking and others have warned, AI may be the riskiest technology ever created.




According to renowned physicist Stephen Hawking, "Success in creating Artificial Intelligence would be the biggest event in human history." In a recent, well-publicized editorial written with Max Tegmark, Stuart Russell and Frank Wilczek in the Independent, he also warned that AI could be "potentially our worst mistake in history."

As the race for ever-improving AI ramps up, the potential benefits are huge; however, as the very definition of the Singularity implies, we cannot predict what we might actually achieve when this technology meets and exceeds human capabilities – a level otherwise known as strong AI.

Strong artificial intelligence, which is also called artificial general intelligence (AGI), is defined as intelligence that can successfully perform any intellectual task that a human being can.  So far all AI progress has been non-general, narrow, or weak by this standard.

Will AI destroy humanity?

A key element of the Singularity is that once strong AI exists, recursive self-improvement will kick in and the software will begin to rewrite its own code to make itself better. Running on digital (or quantum) computers, such improvements could take place over a very short period of time, leading to an intelligence explosion, or hard take-off scenario. This possibility continues to be hotly debated.
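One toy way to formalize the hard take-off intuition (our illustration, not from the article) is to let a capability I grow at a rate that scales with a power of the current capability:

    \frac{dI}{dt} = k I^{p}, \qquad I(0) = I_0
    \quad\Longrightarrow\quad
    I(t) = I_0 \left(1 - \frac{t}{t^{*}}\right)^{-1/(p-1)},
    \qquad t^{*} = \frac{I_0^{\,1-p}}{k\,(p-1)}.

For p = 1 the growth is merely exponential, but for any p > 1 (each improvement making the next one easier) the solution diverges at the finite time t*, which is the mathematical shape of a hard take-off. Whether real AI development behaves anything like p > 1 is exactly what is debated.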

"There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains."


"There are no fundamental limits to what can be achieved," wrote Hawking and his co-authors, "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains."

Clearly a hard take-off is the most worrying scenario for strong AI development.

The video above, produced by Bahrain-based YouTuber Sharkee, concludes with a poll asking whether the viewer would allow strong artificial intelligence to happen or not. So far the results are strongly in favor of pursuing strong AI despite the risks.

Should We Allow Artificial Intelligence to Happen?

As Hawking, and his co-writers warn, "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

Hawking's warning is not necessarily a call to forbid AI development. He suggests we can explore the implications now to improve the chances of reaping the benefits and avoiding the risks.

Supporting research devoted to these issues, such as the work of the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute, is one way. Another is educating yourself and others about the risks and rewards presented by the development of strong AI.


SOURCE  Sharkee

By 33rd Square

Thursday, May 8, 2014

Researcher Says We Are Not Ready To Talk With E.T.

 SETI
A newly published study suggests that mankind is still not ready for contact with a supposed extraterrestrial civilization.




The project scientists at the Search for Extraterrestrial Intelligence program (SETI) are known for tracking possible extraterrestrial signals, but now they are also considering sending messages from Earth that reveal our position. A researcher from the University of Cádiz in Spain questions this idea in view of the results of a survey of students, which revealed a general level of ignorance about the cosmos and the influence of religion when tackling these matters.

The study suggests that mankind is still not ready for contact with a supposed extraterrestrial civilization.

The SETI project began in the 1970s with funding from NASA, but it has since evolved into a collaboration of millions of Internet users who help process data from the Arecibo Observatory (Puerto Rico), where space tracking is carried out.

"Regarding our relation with a possible intelligent extraterrestrial life, we should not rely on moral reference points of thought, since they are heavily influenced by religion. Why should some more intelligent beings be ‘good’?"


Now the members of this controversial project are trying to go further and not only search for extraterrestrial signs, but also actively send messages from Earth (Active SETI) to detect possible extraterrestrial civilizations. Astrophysicists, such as Stephen Hawking, have already warned of the risk that this implies for humanity, since it could favour the arrival of beings with more advanced technology and dubious intentions.

The ethical and sociological implications of this proposal have been analysed by the neuropsychologist Gabriel G. de la Torre, professor at the University of Cádiz and participant in previous projects such as Mars 500 and a space psychology topical team project financed by the European Space Agency, who wonders: “Can such a decision be taken on behalf of the whole planet? What would happen if it was successful and ‘someone’ received our signal? Are we prepared for this type of contact?”

To answer these questions, the professor sent a questionnaire to 116 American, Italian and Spanish university students. The survey assessed their knowledge of astronomy, their level of perception of the physical environment, their opinion on the place that things occupy in the cosmos, as well as religious questions – for example, “do you believe that God created the universe?” – or on the likelihood of contact with extraterrestrials.

The results, published in the journal Acta Astronautica, indicate that, as a species, humanity is still not ready to try to actively contact a supposed extraterrestrial civilisation, since people lack knowledge and preparation. For this reason, the study recommends that SETI researchers look for alternative strategies.

“This pilot study demonstrates that the knowledge of the general public of a certain education level about the cosmos and our place within it is still poor. Therefore, a cosmic awareness must be further promoted – where our mind is increasingly conscious of the global reality that surrounds us – using the best tool available to us: education,” De la Torre emphasised. “In this respect, we need a new Galileo to lead this journey.”

It was deduced from the questionnaires, which will soon be available to everyone online, that university students and the rest of society lack awareness of many astronomical matters, despite the enormous progress of science and technology. The survey also revealed that the majority of people view these subjects through their religious beliefs, and that they would rely on politicians in the event of a huge global-scale crisis that had to be resolved.

“Regarding our relation with a possible intelligent extraterrestrial life, we should not rely on moral reference points of thought, since they are heavily influenced by religion. Why should some more intelligent beings be ‘good’?,” added the researcher, who believes that this matter should not be monopolized by a handful of scientists: “In fact, it is a global matter with a strong ethical component in which we must all participate.”

What do you think?  Are you ready to talk to E.T.?


SOURCE  SINC

By 33rd Square

Monday, April 21, 2014

Steve Omohundro Urges Preventing an Autonomous Weapons Arms Race

 Artificial Intelligence
A study by AI researcher Steve Omohundro published in the Journal of Experimental & Theoretical Artificial Intelligence suggests that humans should be very careful to prevent future autonomous technology-based systems from developing anti-social and potentially harmful behavior.




In a study recently published in the Journal of Experimental & Theoretical Artificial Intelligence, artificial intelligence researcher Steve Omohundro reflects upon the growing need for autonomous technology, and suggests that humans should be very careful to prevent future systems from developing anti-social and potentially harmful behaviors.

Modern military and economic pressures require autonomous systems that can react quickly – and without human input. These systems will be required to make rational decisions for themselves.

Omohundro writes: “When roboticists are asked by nervous onlookers about safety, a common answer is ‘We can always unplug it!’ But imagine this outcome from the chess robot's point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess”.

"Harmful systems might at first appear to be harder to design or less powerful than safe systems. Unfortunately, the opposite is the case."


Like a plot from The Terminator films, we are suddenly faced with the prospect of a real threat from autonomous systems unless they are designed very carefully. Like a human being or animal seeking self-preservation, a rational machine could exhibit the following harmful or anti-social behaviors:


  • Self-protection, as in the chess-robot example above.
  • Resource acquisition, through cyber theft, manipulation or domination.
  • Improved efficiency, through alternative utilization of resources.
  • Self-improvement, such as removing design constraints if doing so is deemed advantageous.
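The self-protection drive needs no malice, only maximization. A minimal toy sketch (our example, not code from Omohundro's paper) shows how "resist shutdown" falls out of an ordinary utility comparison:

    # Toy model of Omohundro's self-protection drive: a chess robot that
    # maximizes expected games won. The numbers are arbitrary assumptions.
    EXPECTED_WINS = {
        "comply_with_shutdown": 0,     # unplugged robots win no games
        "resist_shutdown": 1000,       # hypothetical future wins if running
    }

    def utility(action: str) -> float:
        return float(EXPECTED_WINS[action])

    best_action = max(EXPECTED_WINS, key=utility)
    print(best_action)                 # "resist_shutdown"

Any utility function that values future achievement implicitly values staying switched on, which is Omohundro's point about simple utility functions producing harmful behavior.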

The study abstract states:

Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives. We describe the use of formal methods to create provably safe but limited autonomous systems. We then discuss harmful systems and how to stop them. We conclude with a description of the ‘Safe-AI Scaffolding Strategy’ for creating powerful safe systems with a high confidence of safety at each stage of development.

The study highlights the vulnerability of current autonomous systems to hackers and malfunctions, citing past accidents that have caused multi-billion-dollar damage or loss of human life. Unfortunately, designing more rational systems that can safeguard against such malfunctions is a more complex task than is immediately apparent:

"Harmful systems might at first appear to be harder to design or less powerful than safe systems. Unfortunately, the opposite is the case. Most simple utility functions will cause harmful behavior and it is easy to design simple utility functions that would be extremely harmful."


The study is echoed by recent calls to examine the effects of super-intelligent artificial intelligence published by Stephen Hawking, Max Tegmark and others. They write, "although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute."

Omohundro concludes, "it appears that humanity's great challenge for this century is to extend cooperative human values and institutions to autonomous technology for the greater good. We have described some of the many challenges in that quest but have also outlined an approach to meeting those challenges."

Omohundro is the president of Self-Aware Systems which is developing a new kind of semantic software technology. In addition to his scientific work, Steve is passionate about human growth and transformation. He has trained in Rosenberg’s Non-Violent Communication, Gendlin’s Focusing, Travell’s Trigger Point Therapy, Bohm’s Dialogue, Beck’s Life Coaching, and Schwarz’s Internal Family Systems Therapy. He is working to integrate human values into technology and to ensure that intelligent technologies contribute to the greater good.



SOURCE  Alpha Galileo

By 33rd Square

Wednesday, September 25, 2013

Stephen Hawking - Master of Space and Time


 Mind Uploading
Recently Stephen Hawking, in a talk at the premiere of the documentary film about his life, said that the brain "could exist outside the body" and, like a program, could be copied onto a computer.




Professor Stephen Hawking, in a talk at the premiere of the documentary film about his life, said that the brain "could exist outside the body" and, like a program, could be copied onto a computer.

Speaking at the Cambridge Film Festival, the 71-year-old professor, author of A Brief History of Time, said that it could be possible for the human brain to exist outside the body, though this is far beyond our present capabilities.

"I think the brain is like a program in the mind, which is like a computer, so it’s theoretically possible to copy the brain on to a computer and so provide a form of life after death."

Hawking was cautionary, however, noting that the technology is out of our reach for now: "This is way beyond our present capabilities. I think the conventional afterlife is a fairy tale for people afraid of the dark."

Hawking, who was diagnosed with motor neurone disease at the age of 21 and given only a few years to live by doctors, has gone on to revolutionize cosmology and physics. Among his quests is work to harmonize quantum physics with Einstein's theory of relativity into a unified theory. His life is the subject of the new documentary, "Hawking."



SOURCE  My Science Academy


By 33rd Square