What Will Computers Look Like By 2029?

Wednesday, September 4, 2013

Future of Computers

With advances in artificial intelligence, the continued progress of Moore's Law, and other factors, computer technology has undoubtedly changed dramatically. With wearable computers and other emerging technologies, where will our computers be in the next fifteen years?




Looking back at the past fifteen years of computer technology, we have seen dominance shift from desktop PCs to laptops to tablets and smartphones. Now, as Google Glass and other wearable devices pack even more powerful technology into smaller packages, we are entering a new phase, and a new level of acceleration.

What will computer technology look like in another 15 years?


Ray Kurzweil, Google's director of engineering, calls Google Glass a "solid first step" along the road to computers that rival and then exceed human intelligence.

Kurzweil, who is also an accomplished inventor and futurist, predicts that by 2029 computers will match human intelligence, and nanobots inhabiting our brains will create immersive virtual reality environments from within our nervous systems.

Recently, Dan Farber at CNET examined how likely it is that the technology will shrink by another order of magnitude while gaining a similar factor in performance. Quoting Kurzweil,
If you want to go into virtual reality the nanobots shut down the signals coming from your real senses and replace them with the signals that your brain would be receiving if you were actually in the virtual environment. So this will provide full-immersion virtual reality incorporating all of the senses. You will have a body in these virtual-reality environments that you can control just like your real body, but it does not need to be the same body that you have in real reality. We'll be able to interact with people in any way in these virtual-reality environments. That will replace most travel, but we'll also have new travel technologies for our real bodies using nanotechnology.
At the recent CONNECT 2013 Telstra Enterprise conference in Australia, Kurzweil said he is now working at Google on creating "a synthetic neocortex - to use biologically-inspired methods to try and understand how the human brain works, and use similar ideas to try and create intelligent computers" (see video below).

Ray Kurzweil

Kurzweil took his first job at Google late last year with the task of helping improve Google computers' understanding of natural language, part of the company's initiative to make "conversational speech" a main form of human-computer interaction. This development, it is thought, will lead to artificially intelligent computers that pass for human.

Going further, it wouldn't be out of character for Google co-founders Larry Page and Sergey Brin to consider moon shots like connecting Google's servers directly to your brain, as they have with self-driving cars. It's mind-bending to think about the implications, but it seems possible that Google could monetize your brain instantaneously, as it thinks.

Kurzweil's brief tenure at Google hasn't yet succeeded in merging the human brain with the Google cloud or creating a future version of Glass the size of a blood cell that runs through your brain capillaries, but are these ideas really that far off?

Google is a world-leading developer of artificial intelligence. Recently, applying design principles from neural networks, Google engineers realized significant improvements in the quality of speech recognition. Google has also built a large data repository, the Knowledge Graph, with nearly a billion objects and billions of relationships among them, as a foundation for understanding the semantic content and context of queries.
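To make the Knowledge Graph idea concrete, here is a minimal sketch in Python of how a repository of objects and relationships can be modeled as subject-relation-object facts. The class, entity names and relation labels below are illustrative assumptions for this article, not Google's actual schema or API.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy triple store: (subject, relation, object) facts indexed by subject."""

    def __init__(self):
        self.facts = defaultdict(list)  # subject -> list of (relation, object)

    def add(self, subject, relation, obj):
        self.facts[subject].append((relation, obj))

    def query(self, subject, relation=None):
        """Return objects linked to `subject`, optionally filtered by relation."""
        return [o for r, o in self.facts[subject] if relation is None or r == relation]

# Illustrative facts only; not Google's actual data.
kg = KnowledgeGraph()
kg.add("Ray Kurzweil", "works_at", "Google")
kg.add("Google Glass", "made_by", "Google")
kg.add("Google", "headquartered_in", "Mountain View")

print(kg.query("Ray Kurzweil", "works_at"))  # ['Google']
print(kg.query("Google"))                    # ['Mountain View']
```

Answering a query like "where is the company Ray Kurzweil works for based?" then becomes a matter of chaining such relationship lookups, which is the kind of semantic context the article describes.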

Following the well-known path of Moore's Law, smartphones, tablets and other mobile devices have become far more powerful and feature-rich with each new release over the last five years; however, many would argue we haven't really seen a marked leap in the technology. Even the introduction of Siri and other semi-intelligent, voice-activated assistants has not had truly dramatic effects.
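As a back-of-the-envelope illustration of that Moore's Law path, the sketch below assumes the classic doubling of transistor counts roughly every two years (an assumed cadence; the real-world rate has varied) and shows how large a factor that compounds to over the five-, fifteen- and thirty-year horizons discussed in this article.

```python
# Rough Moore's Law compounding, assuming a ~2-year doubling period.
DOUBLING_PERIOD_YEARS = 2.0  # assumption; the actual cadence has varied over time

def moores_law_factor(years, doubling_period=DOUBLING_PERIOD_YEARS):
    """Approximate growth factor in transistor count after `years` years."""
    return 2 ** (years / doubling_period)

for years in (5, 15, 30):
    print(f"{years:>2} years -> roughly {moores_law_factor(years):,.0f}x")
# Prints roughly 6x after 5 years, 181x after 15 years, 32,768x after 30 years.
```

Even if the doubling period stretches, that compounding is what makes fifteen-year predictions like Kurzweil's hard to dismiss on raw hardware grounds alone.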

Wearable computer technology like Google Glass faces a tougher adoption curve than smartphones, which are more essential to users than a wearable accessory. Farber argues that for Glass to break through, natural language input and conversational search need to make quantum leaps.

Google insiders have said that voice search and image recognition will improve substantially over the next five years. For instance, Geordie Rose recently commented that there will be big changes within five years, due in part to the introduction of quantum computers to Google's mix. "Some of the things are going to make some people very uncomfortable within five years. Some of the things that we are working on, with our partners, are near that time frame. There will be things that we can do that will make some people very uncomfortable about humanity's special place in the universe."

Google Glass and other products will eventually bring augmented reality to the masses. At present, Glass can take videos and pictures, send a tweet and provide notifications, but its augmented reality app ecosystem will likely build up rapidly within the next five years, especially as the cost and size of processors, sensors and other components come down and their power increases.

Farber concludes,
In a 30-year span, computing has progressed from the Macintosh, which launched in 1984, to Google Glass. A moon shot traversing from today's Google Glass to nanobots communicating between your brain and a Google cloud that is indistinguishable from a human in the next 15 to 30 years is difficult to digest, but not that far fetched.



SOURCE  CNET

By 33rd Square

