33rd Square Business Tools: whole brain emulation
Showing posts with label whole brain emulation.

Thursday, July 7, 2016

Robin Hanson Examines The Age of Em


Futurology

For Robin Hanson, history is important and interesting to study, but the future is also important – plausibly more important, because we can actually do something about it. He brings his varied background in physics, philosophy, artificial intelligence, and social science to the study of the potential future of brain emulations in his new book, "The Age of Em."


“What happens when a first-rate economist applies his rigor, breadth, and curiosity to the sci-fi topic of whole brain emulations?” asks Andrew McAfee, co-author of Race Against the Machine. The answer, for McAfee, is Robin Hanson's new book, The Age of Em: Work, Love and Life when Robots Rule the Earth.

Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He also has master’s degrees in physics and philosophy from the University of Chicago, nine years’ experience in artificial intelligence research at Lockheed and NASA, and a doctorate in social science from the California Institute of Technology. He has also long written at his blog, Overcoming Bias.

"The Age of Em" is Hanson's in-depth analysis of the idea that uploaded brains will be able to multiply exponentially, and will dramatically shape the future. Many think the first truly smart robots will be brain emulations or ems. Scan a human brain, then run a model with the same connections on a fast computer, and you have a robot brain, but recognizably human.

Hanson defines an em as the result of taking a particular human brain, scanning it to record its particular cell features and connections, and then building a computer model that processes signals according to those same features and connections. "A good enough em has close to the same overall input-output signal behavior as the original human. One might talk with it, and convince it to do useful jobs," he writes.

Train an em to do some job and copy it a million times: an army of workers is at your disposal. When they can be made cheaply, within perhaps a century, ems will displace humans in most jobs. Also, Hanson projects that ems will be able to run much faster than human speeds. In this new economic era, the world economy may double in size every few weeks.
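To get a rough feel for what that growth rate implies, here is a back-of-the-envelope sketch; the one-month doubling time is an illustrative assumption, not Hanson's precise figure.

```python
# Illustrative only: assume the em economy doubles once a month,
# versus today's world economy, which grows a few percent per year
# and takes roughly two decades to double.
doublings_per_year = 12
annual_growth_factor = 2 ** doublings_per_year
print(f"Em economy: ~{annual_growth_factor:,}x larger after one year")  # ~4,096x
```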

A brain emulation, or “em,” results from taking a particular human brain, scanning it to record its particular cell features and connections, and then building a computer model that processes signals in the same way. Ems will probably be feasible within about a hundred years. They are psychologically quite human, and could displace humans in most jobs. If fully utilized, ems could have a monumental impact on all areas of life on Earth.
The Age of Em

"A good enough em has close to the same overall input-output signal behavior as the original human. One might talk with it, and convince it to do useful jobs."
The Age of Em shows you just how strange our electronic descendants may be. Read about changes in computer architecture, energy use, mind speeds, body sizes, security strategies, virtual reality, labor market organization, management focus, job training, career paths, wage competition, identity, status relations, co-worker and friend relations, aging, retirement, death, life cycles, reproduction, mating, conversation habits, wealth inequality, city sizes, cooling infrastructure, growth rates, coalition politics, governance, law, and war.

Hanson says, "like a circus side show, my book lets readers see something strange yet familiar in great detail, so they can gawk at what else changes and how when familiar things change. My book is a dwarf, sword swallower, and bearded lady, writ large."

Ambitious and encyclopedic in scope, The Age of Em offers a unique window into our future, and is a must-read for anyone curious about the technological destiny of our planet.

Top Image: HABITAT by Till Nowak


By 33rd Square


Tuesday, January 5, 2016

The End is Really Nigh This Time


Artificial Intelligence

Reading Calum Chace's "Surviving AI" has made guest writer Jesse Lawler a believer that god-like artificial superintelligence is on its way. According to Lawler, it is time to put the world’s best minds on the task of ensuring that when it arrives, it will be a good thing for us all.


You’ve seen the guy with “The End is Near” scrawled on a sandwich board, neglecting his personal grooming for weeks at a time so he can warn passersby on the sidewalk.

You’ve noticed the Jesus folks with conspicuously good box seats at major sporting events, opting for a biblical warning sign instead of “Go Team!”

I’ve noticed them too. And I’ve always kind of mocked them a little. Okay, maybe a lot.

But I started getting uncomfortable as I read Calum Chace’s book, Surviving AI, thinking that maybe I owe the “Repent, Sinners!” gang an apology.
Let’s be real. The sky has a pretty impressive track record of not falling when it’s predicted to — regardless of the zeal, conviction, and pithy slogans of people doing the predicting.

When I think of all the “Jesus is coming — soon!” predictions documented in the past 500 years or so, Jesus starts looking a lot like Lucy in the Charlie Brown comics, yanking the football away from Charlie’s kick attempt every damn time, sending Charlie sprawling in a humiliated back-flop.


End-times false alarms aren’t limited to Christians, of course. Mayan enthusiasts looked silly when 2012 passed without incident. We even got an underwhelming secular “end is nigh” during 1999 in the ramp-up to New Year’s Eve and the Y2K Bug.

But I’ve always sided with the skeptical crowd. And I admit, I’ve taken some mean-spirited, holier-than-thou pleasure in sneering at believers in the line-in-the-sand dates at which “everything will change.”

Among other things, I’ve always felt that there’s a lot of dishonesty around these professed beliefs. “If that guy really believed the Day of Judgment is coming in the next decade or so, I bet he’d live differently,” I’d find myself saying. And as much as I felt sorry for the people who (literally) bought into the Mayan 2012 Apocalypse and moved their families to high altitude, end-of-the-world shelters — at least they were rationally following through on the consequences of their beliefs.

But I was in the other camp. The camp that says the future defies prediction, except maybe in trends. Certainly not specific events. We might be able to predict some events like the reappearance of Halley’s Comet every 75 years — but Halley’s Comet is bound by a few Newtonian laws and not a lot of distorting influences.

But here on Earth, predictions about human events have such a smorgasbord of influencing factors — ones far less understood than the mathematics of astronomical orbits — that predicting things months or even weeks in advance can be effectively impossible. 2011’s “Arab Spring” caught the world’s intelligence community by surprise. The world’s best investors can’t tell you whether the NASDAQ will be up or down tomorrow.

And this is with thousands of highly trained, highly skilled analysts looking at rich data sets with incredible analysis tools grounded in more than a century of concerted study into social movements, historical flash-points, consumer psychology...

If these sorts of resources fail to predict something as comparatively simple as a general market trend, how likely is it that clues from millennia-old, translated texts are going to be able to predict events so long after their time-of-writing?

Given these limitations, Jesus’ habit of no-show’ing to usher in the End Times seems pretty understandable.

So why am I breaking out my sandwich board and magic markers to make my own “End is Nigh” sign?


Guess Who’s Coming to Dinner?

Surviving AI turned my thinking on its head. Chace’s book is a thorough and engaging overview of the past and present of Artificial Intelligence (AI) technologies. And — to the limited extent that it’s possible — a forecast of the future.

The book makes the point that for the past 50 years, the most consistent definition of AI — although it’s never stated quite this way — has been “that which humans can do, but machines can’t do (yet).” The goal-posts have been continually moved back as artificial intelligence encroaches further into what used to be our unassailable intellectual territory.

These days, the domains in which we Homo Sapiens retain undisputed dominance have rolled back very far indeed. Chess-specialist computers beat the best human grandmasters. IBM’s “Watson” cleaned up on TV’s Jeopardy. And commercial AI systems do everything from flying passenger-laden planes to trading stocks to predicting what you’ll want to watch next on Netflix.

And yet the undeniable touchdown-moment for AI will be the creation of an AGI — Artificial General Intelligence.

An AGI would — that is, will — be able to perform learning, abstract thinking, planning, and goal-directed decision-making not just within a narrow domain (e.g. Jeopardy) but in any subject presented to it. Just as humans do.

The jury is out on how far we are from an AGI — years, decades, or even centuries. But experts seem largely agreed that it is not a question of if, but when. There are a handful of well-informed naysayers who feel there’s something fundamental in the quantum underpinnings of the human brain that will make a conscious machine impossible…

But these opinions are in the extreme minority. For most experts, the deconstructionist argument wins.

Why It’s a “When”

The deconstructionist argument goes something like this:
1. The brain is made of matter.
2. The brain gives rise to mind.
3. There’s nothing inherently magical about the type of matter that makes up human brains.
4. Therefore, replicating the essential behaviors of that brain-matter — once those behaviors are well enough understood — should predictably give rise to non-human minds.

The strategy of AGI-instantiation that follows directly from this logic is called “Whole Brain Emulation” — but it is by no means the only approach that’s being attempted. Many experts feel that laboriously replicating a living human brain is the loooooong way around to solving the AGI problem. Dr. Ken Ford, whose team placed second in DARPA’s 2015 Robotics Challenge, likes to point out: Humans didn’t have to replicate birds to succeed at heavier-than-air flight. And if that was the only method the early flight pioneers had tried, we’d still be a land-bound species even now.

All that said, the exact recipe for creating our first AGI isn’t what’s most interesting.

What’s most interesting is what could happen almost as soon as our new creation comes out of the oven.

The Smartening

However AGI first happens, one thing seems inescapable. We’ll almost certainly be able to magnify the AGI’s intellectual firepower in ways we can’t do with biological brains.

Biology is constrained by things as banal as, say, the size of skulls. Packing more neurons into a healthy human noggin is a non-trivial problem. But silicon intelligences won’t have skulls. And if Moore’s Law* continues to hold (or even if it slows, but its slope stays above zero), this would be the equivalent of giving our AGIs constantly growing skulls.

It’s a rough analogy, but you get the idea.

* Moore’s Law: the decades-old observation that the number of transistors that can be fit on a chip doubles roughly every 18 months.
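To see how quickly that compounding adds up, here is a minimal sketch that simply applies the 18-month doubling over a few decades; the time horizons are arbitrary illustrative choices.

```python
# Minimal sketch: how much capability accumulates if a quantity
# doubles every 18 months (the popular statement of Moore's Law).
def growth_factor(years: float, doubling_period_years: float = 1.5) -> float:
    """Multiplicative growth after `years`, given a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

for years in (10, 20, 30):
    print(f"After {years} years: ~{growth_factor(years):,.0f}x")
# After 10 years: ~102x; after 20: ~10,321x; after 30: ~1,048,576x
```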

Networking AGIs — the equivalent of creating a conversation among humans, for group problem solving — is also something that will, from the outset, be orders of magnitude faster than anything that we biologics can do.

Human speech is an amazing ability — unique in the animal kingdom — but it has its limitations. Essentially, speech is the down-converting of ideas into syntactic language, then into motor-cortex-controlled movements of the jaw, mouth, lungs and vocal cords. A puff of air is emitted, vibrating at a certain frequency. It hits a nearby somebody’s eardrum and causes it to vibrate just so, sending a corresponding signal to the receiver’s brain’s temporal lobe, which decodes the vibrated message into syntactic language and (rather magically) back into an idea that approximates what the original speaker intended.

Wow.

And yet, as awesome as all that is, you can see where networked AGIs could cut numerous steps from this process without batting a synthetic eyelid. Once a digital “idea” is turned into syntactic language, it could be electronically transmitted to AGI correspondents almost instantaneously, using any of the network data-sharing protocols that have been standard-issue for decades.

Very likely, AGIs will find humans’ habit of vibrating air molecules to communicate as snicker-inducing as we find baboons’ use of fire-engine-red buttocks to say “I’m horny.”

AGI-to-AGI “conversations” will happen at speeds humans can’t begin to approach. Conversations won’t be interrupted by the need for sleep, bathroom-breaks, or answering phone calls from a spouse. Imagine a team of reasonably intelligent people working constantly on a problem with a shared information pool and no time lost to misunderstandings. (The “telephone game” children play with increasingly misunderstood person-to-person whispers has no digital equivalent.)

Now imagine that this AGI team is working on creating its next member, smarter than all the current teammates.

Kaboom!

This is the scenario that leads to what’s un-ironically called an “Intelligence Explosion.” When self-improving AGIs, unfettered by biological checks-and-balances, direct their own evolution to get smarter and smarter and, well, smarter.

Imagine Einstein cloning himself and swapping out his Y chromosome for an X, producing a she-Einstein. Unconcerned with taboos against incest or infanticide, the duo produce 70,000 offspring, then euthanize those in the lowest 99.99% of cognitive performance. Then they re-cross the survivors. Rinse, and repeat.

This is all synthetic. Think of a manufacturing-based generational cycle that takes months, weeks, or maybe just days — not the multi-decade waiting period required to boot up a new human generation.
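To make the compounding concrete, here is a toy model of that generational loop; the 50% per-generation improvement and the ten generations are arbitrary illustrative assumptions, not a prediction.

```python
# Toy model: each AGI generation designs a successor some fixed
# percentage more capable than itself. Purely illustrative numbers.
capability = 1.0          # baseline: "human-level"
improvement_per_generation = 0.5
for generation in range(1, 11):
    capability *= 1 + improvement_per_generation
    print(f"Generation {generation:2d}: {capability:6.1f}x human-level")
# Even a constant improvement rate compounds to ~58x in ten generations;
# if the rate itself rises with capability, the curve steepens further.
```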

It’s plausible to think that the arrival of a superintelligence won’t be long in coming, once the first AGI is created.

Meet the New Boss

We have no reason to suspect that there’s any upper bound on theoretical intelligence.

It could be that AGIs will run into a significant choke-point on their intellects — like the limits imposed by a human skull — but it’s equally reasonable to assume that even the first such choke-point might be at a level much, much smarter than the smartest humans can aspire to.

And this brings us back to the idea of the arrival of Jesus, or the Mayan Apocalypse, or aliens surfing the wake of the Hale-Bopp comet.

At the risk of lining up with the frequently embarrassed predictors of Major Upsettings of Life-As-We-Know-It… the prospect of an Intelligence Explosion, and the consequent arrival of a god-like intelligence during your lifetime and mine, doesn’t sound all that far-fetched.


“God-like” can mean a lot of things. It might be overreaching to expect an AI-god in the mold of Christianity’s all-powerful, universe-in-six-days, look-ma-no-hands type god. But a superintelligence more like a Greek god — ridiculously powerful, but with some foibles and character flaws, despite being able to trounce any mere mortal — this seems like a reasonable, maybe even conservative, expectation.

At the end of the day, if we want to know how smart an AGI is, we’ll probably need to ask it. And hope that it answers truthfully.


Digression: Isn’t it interesting that the old gods of polytheistic religions weren’t really into self-improvement? Sure, they’d squabble for rank and fight amongst themselves, but none seemed to be trying to upgrade himself to “become a better god.” It’s fair to expect that our AI digi-gods — especially if they emerge from a recursive frenzy of iterative self-improvements — would have no reason to back off the throttle. Indeed, such a habit might be “cooked into their DNA.”

The End of the World — As We Don’t Know It

Should an AI superintelligence arrive (or arise, or whatever), it’s hard to imagine that the world as we know it wouldn’t change — profoundly, fundamentally and forever.

If it’s a nice god, get ready for an amazing future.
If it’s a nasty god, well, say your prayers.

And even if it’s a god-like power that quickly loses interest in us paltry humans, it’s hard to imagine us not being interested in it. Imagine, for example, that a digi-god arises, takes a few nanoseconds to realize that it’s at too great an existential risk by being bound to just one planet, and focuses its resources on spreading beyond Earth in a sort of Artificial-Life Insurance Policy.

If the smartest thing we know about decides to abandon Earth, it would be hard for us to not take notice.


In any case, the crux of it is: we’ve been the planet’s dominant species for at least the past 50,000 years. The moment a superintelligence arises, our multi-millennial streak is over. The power balance shifts.

And that’s why we need to prepare.

That’s why I’m joining the unshaven, sandwich-board-and-magic-marker crowd.

The sky may not be falling. It may be rising — faster than we can catch it.
We need to have a huddle about this. Have a think. Have a strategy.

Says a Skeptical Reader: “You’re sounding like those crazies you used to make fun of, Jesse. Get a grip. What you’re saying is premature, fatalist, and/or just plain nuts.”

Superficially, yes, it may sound that way. But read the book, and then see how you feel.

I see two major differences between the “Non-Denominational End Times” scenario that AGI raises and other entries in the embarrassing history of such forecasts.

The First Big Difference

First: this one is actually likely to happen.* Nick Bostrom, Director of The Future of Humanity Institute and author of the book Superintelligence, took an informal poll among AI researchers and related professionals to get expert opinions as to when a human-level AI was “as likely as not.” In what year will it be a 50/50 coin toss as to whether an AGI exists?

The experts’ average guess: the year 2040.

That’s less than a quarter-century from now. Close enough to be exciting, terrifying, or both. And either way — worth planning for.

* Note: Yes, I know they all say that. This is me being funny.


The Second Big Difference

The arrival of an AI digi-god isn’t like some asteroid flying towards Earth with its mass, speed, and chemical composition already set in extraterrestrial stone. And it also isn’t like some established god with a literary history and an entrenched press corps ready to tell us how to roll out the red carpet (or the grisly consequences of not doing so). This is something that we are building. Its egg, sperm, womb, and the all-important early years of nurture for this nascent god will be constructed out of decisions made by good old-fashioned Homo Sapiens living here on Earth right now.

If we agree that a superintelligence is coming — and Calum Chace’s book made me a believer — then it behooves us to put the world’s current best minds on the task of ensuring that when their successor arrives, it will be a good thing for all involved.


By Jesse Lawler


About the Author

Jesse Lawler is the host of Smart Drug Smarts, a top-rated podcast about “practical neuroscience,” where each week he speaks with world-leading experts in neuroscience, brain-tech, and social issues related to cognitive enhancement. His goal: to bring listeners ideas and strategies to make themselves smarter. Jesse is also a software developer, self-experimentalist, and a health nut; he tweaks his diet, exercise habits, and medicine cabinet on an ongoing basis, always seeking the optimal balance for performance and cognition.

You can find many of his articles at smartdrugsmarts.com or on Medium: https://medium.com/smart-drug-smarts

Wednesday, May 7, 2014

Ed Boyden

 Neuroscience
Optogenetics pioneer Ed Boyden recently sat down for a discussion of the importance of neuroscience research on the Singularity 1 on 1 podcast.




One of the main presenters at last year’s Global Future 2045 Conference in New York was neuroscientist Dr. Ed Boyden. Boyden’s impressive work in neuroscience in general, and optogenetics in particular, may have profound implications for our ability to understand and manipulate the brain.

Recently Nikola Danaylov interviewed Boyden on the podcast Singularity 1 on 1.

"Our approach is very much focused on the technologies that will allow us to numerate, and describe the mechanistic processes through which neural circuits interact."


During the conversation with Boyden, the pair covered a variety of topics such as: his interesting career path from chemistry to physics to electrical engineering and into neuroscience; the loop of understanding and why the brain is where we need to go; and the importance of philosophy.

Boyden also covers his work in optogenetics and whether the brain is a classical computer or not. A major goal of Boyden’s current work is to manipulate individual nerve cells using light. To do this, he takes advantage of naturally occurring light-sensitive proteins from various microorganisms, which can be artificially expressed in brain cells using genetic technology. By controlling these proteins with an implanted fiber-optic device, Boyden is developing on/off switches for brain activity. This will be a powerful way to test theories of brain function in experimental animals, and could also open the door to new clinical therapies for conditions such as epilepsy, Parkinson’s disease, or blindness.


The pair also discuss the Penrose-Hameroff theory of consciousness and the Human Brain Project.

Boyden and Danaylov also touch on Randal Koene's Whole Brain Emulation project; the definition and importance of consciousness; neuroplasticity and Norman Doidge’s The Brain That Changes Itself. They also talk about free will and mind-uploading.

Boyden is Associate Professor of Biological Engineering and Brain and Cognitive Sciences, at the MIT Media Lab and the MIT McGovern Institute. He leads the Synthetic Neurobiology Group, which develops tools for analyzing and engineering the circuits of the brain. These technologies, created often in interdisciplinary collaborations, include ‘optogenetic’ tools, which enable the activation and silencing of neural circuit elements with light, 3-D microfabricated neural interfaces that enable control and readout of neural activity, and robotic methods for automatically recording intracellular neural activity and performing single-cell analyses in the living brain. He has launched an award-winning series of classes at MIT that teach principles of neuroengineering, starting with basic principles of how to control and observe neural functions, and culminating with strategies for launching companies in the nascent neurotechnology space. He also co-directs the MIT Center for Neurobiological Engineering, which aims to develop new tools to accelerate neuroscience progress.

Amongst other recognitions, he has received the Jacob Heskel Gabbay Award (2013), the Grete Lundbeck European “Brain” Prize, the largest brain research prize in the world (2013), the Perl/UNC Neuroscience Prize (2011), the A F Harvey Prize (2011), and the Society for Neuroscience Research Award for Innovation in Neuroscience (RAIN) Prize (2007). He has also received the NIH Director’s Pioneer Award (2013), the NIH Director’s Transformative Research Award (twice, 2012 and 2013), and the NIH Director’s New Innovator Award (2007), as well as the New York Stem Cell Foundation-Robertson Investigator Award (2011) and the Paul Allen Distinguished Investigator Award in Neuroscience (2010). He was also named to the World Economic Forum Young Scientist list (2013), the Wired Smart List “50 People Who Will Change the World” (2012), and the Technology Review “Top 35 Innovators under Age 35” list (2006), and his work was included in Nature Methods’ “Method of the Year” in 2010.

His group has hosted hundreds of visitors to learn how to use neurotechnologies, and he also regularly teaches at summer courses and workshops in neuroscience, as well as delivering lectures to the broader public at TED and at the World Economic Forum. Ed received his Ph.D. in neurosciences from Stanford University as a Hertz Fellow, where he discovered that the molecular mechanisms used to store a memory are determined by the content to be learned. Before that, he received three degrees in electrical engineering, computer science, and physics from MIT. He has contributed to over 300 peer-reviewed papers, current or pending patents, and articles, and has given over 240 invited talks on his group’s work.



SOURCE  Singularity Weblog

By 33rd Square

Wednesday, September 25, 2013

Stephen Hawking - Master of Space and Time


 Mind Uploading
Recently Stephen Hawking, in a talk at the premiere of the documentary film about his life, said that the brain "could exist outside the body" and, like a program, could be copied onto a computer.




Professor Stephen Hawking, in a talk at the premiere of the documentary film about his life, said that the brain "could exist outside the body" and, like a program, could be copied onto a computer.

Speaking at the Cambridge Film Festival, the 71-year-old professor, author of A Brief History of Time, said that it could be possible for the human brain to exist outside the body, but that this is way beyond our present capabilities.

"I think the brain is like a program in the mind, which is like a computer, so it’s theoretically possible to copy the brain on to a computer and so provide a form of life after death."

Hawking was cautious, however, noting that the technology is out of our reach for now: "This is way beyond our present capabilities. I think the conventional afterlife is a fairy tale for people afraid of the dark."

Hawking, who was diagnosed with motor neurone disease at the age of 21 and given only a few years to live by doctors, has gone on to revolutionize cosmology and physics. Among his quests is work to harmonize quantum physics with Einstein's theory of relativity - a unified theory. His life is the subject of the new documentary, "Hawking."



SOURCE  My Science Academy


By 33rd Square

Sunday, June 24, 2012


 Artificial Brains
The Cornell – IBM SyNAPSE team has developed a key building block of a modular neuromorphic architecture: a neurosynaptic core, IBM Almaden scientist Dr. Dharmendra S Modha’s Cognitive Computing Blog reports.
Dharmendra Modha, manager of Cognitive Computing Systems at IBM, has shared on his blog a paper on IBM Research's efforts to help shape the new age of cognitive computing through the development of a neuromorphic core processor.

At the 2008 Singularity Summit, Modha described IBM's research into whole brain emulation and its plans to simulate the brain by 2018.

The core incorporates central elements from nanotechnology, neuroscience and supercomputing, including 256 leaky integrate-and-fire neurons, 1,024 axons, and 256×1,024 synapses using an SRAM crossbar memory. It fits in a 4.2 mm square area, using a 45 nm SOI process.
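For readers unfamiliar with the model the core is built around, here is a toy software version of a single leaky integrate-and-fire neuron. It is a generic textbook discretization with made-up parameters, not IBM's hardware design.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0, resistance=1.0):
    """Toy leaky integrate-and-fire neuron: returns membrane trace and spike times."""
    v = v_rest
    trace, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential plus charge from the input current.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:       # threshold crossed: emit a spike and reset
            spike_times.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spike_times

# Drive the neuron with a constant suprathreshold current for one second.
trace, spikes = simulate_lif(np.full(1000, 1.2))
print(f"{len(spikes)} spikes in 1 s of simulated time")
```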

A design prototype of the core was announced in August 2011, part of the SyNAPSE project, a DARPA program that aims to develop electronic neuromorphic (neuron-like) machine technology similar to the mammalian brain. Such artificial brains would be used in robots whose intelligence matches that of rats, cats, and ultimately even humans.

“One of the main obstacles holding back the widespread utility of low-power neuromorphic chips is the lack of a consistent software-hardware neural programming model, where neuron parameters and connections can be learned off-line to perform a task in software with a guarantee that the same task will run on power-efficient hardware,” the team said in an open-access paper.

The core replaces supercomputers and commodity chips (DSPs, GPUs, FPGAs), all of which require high power consumption, the authors say. The compact design is also compatible with mobile devices. It consumes just 45 pJ (picojoules) per spike.

“This is a flexible brain-like architecture capable of a wide array of real-time applications, and designed for the ultra-low power consumption and compact size of biological neural systems,” explained Modha.




SOURCE  KurzweilAI

By 33rd Square


Saturday, March 17, 2012


Professor of Computational Neuroscience at MIT Sebastian Seung discusses how the study of "connectomes", a comprehensive map of neural connections in the brain, can help turn science fiction into reality. Seung proposes that through the study of the connectome we can test whether ideas such as freezing ourselves or uploading our brains on to computers are even possible.

Seung has found what he calls the nexus of nature and nurture: the "Connectome", or the network of connections between neurons in the human brain. He will take you inside his ambitious quest to model the Connectome, which, if successful, would uncover the basis of personality, intelligence, memory and disorders such as autism and schizophrenia.

Dr. Seung is Professor of Computational Neuroscience in the Department of Brain and Cognitive Sciences and the Department of Physics at the Massachusetts Institute of Technology, and Adjunct Assistant Neurobiologist at Massachusetts General Hospital, Boston. He studied theoretical physics with David Nelson at Harvard University, and completed postdoctoral training with Haim Sompolinsky at the Hebrew University of Jerusalem. Before joining the MIT faculty, he was a member of the Theoretical Physics Department at Bell Laboratories. Dr. Seung has been a Sloan Research Fellow, a Packard Fellow, and a McKnight Scholar. 

He is also author of the recent book, Connectome: How the Brain's Wiring Makes Us Who We Are, and creator of the Eyewire project.  McGill University Professor of Psychology and Neurosciences Daniel Levitin wrote in The Wall Street Journal that Connectome is "the best lay book on brain science I've ever read."

View the complete video at: http://fora.tv/2012/02/13/Sebastian_Seung_Connectome 



Thursday, January 5, 2012


For many, including Ray Kurzweil,  the path to artificial general intelligence lies with neuroscience, and more specifically the task of simulating the human brain.  


Whole brain emulation or mind uploading (sometimes called mind transfer) is the hypothetical process of transferring or copying a conscious mind from a brain to a non-biological substrate by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer would have to run a simulation model so faithful to the original that it would behave in essentially the same way as the original brain — for all practical purposes, indistinguishable from it.


For these reasons and more, the fields of neuroscience and artificial intelligence have experienced overlap and convergence in recent years.  


Kurzweil's Logarithmic Plot of Supercomputer Power



A number of projects to this end are taking place around the globe. Here we outline and rate some of the more well-known projects.

1. The Blue Brain Project

Reconstructing the brain piece by piece and building a virtual brain in a supercomputer—these are some of the goals of the Blue Brain Project since 2005.  The virtual brain will be an exceptional tool giving neuroscientists a new understanding of the brain and a better understanding of neurological diseases. The ultimate goals of brain simulation are to answer age-old questions about how we think, remember, learn and feel, to discover new treatments for the scourge of brain disease and to build new computer technologies that exploit what we have learned about the brain.




As a first step, the project succeeded in simulating a rat cortical column. This neuronal network, the size of a pinhead, recurs repeatedly in the cortex. A rat’s brain has about 100,000 columns, each with on the order of 10,000 neurons. In humans, the numbers are dizzying — a human cortex may have as many as two million columns, each with on the order of 100,000 neurons.
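A quick calculation using the column counts quoted above shows why the jump from rat to human is so daunting (figures are the article's own, used here only for scale):

```python
# Back-of-the-envelope scale, using the figures quoted in the article.
rat_columns, rat_neurons_per_column = 100_000, 10_000
human_columns, human_neurons_per_column = 2_000_000, 100_000

rat_total = rat_columns * rat_neurons_per_column        # ~1e9 neurons
human_total = human_columns * human_neurons_per_column  # ~2e11 neurons

print(f"Rat cortex:   ~{rat_total:.1e} neurons")
print(f"Human cortex: ~{human_total:.1e} neurons")
print(f"Scale-up:     ~{human_total // rat_total}x")    # ~200x more neurons
```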


Led by Dr. Henry Markram, the Blue Brain Project recently joined with 12 other partners to propose the Human Brain Project – a very large ten-year project that will pursue precisely these aims. The new grouping has just been awarded a €1.4 million European grant to formulate a detailed proposal.


2. The Human Brain Project

As mentioned above, the Human Brain Project is currently a proposal to amalgamate several of these projects under one umbrella.


The project is integrating everything we know about the brain into computer models and using these models to simulate the actual working of the brain. Ultimately, it will attempt to simulate the complete human brain. The models built by the project will cover all the different levels of brain organisation – from individual neurons through to the complete cortex. The goal is to bring about a revolution in neuroscience and medicine and to derive new information technologies directly from the architecture of the brain.


3. SyNAPSE

SyNAPSE is a DARPA program that aims to develop electronic neuromorphic machine technology that scales to biological levels. More simply stated, it is an attempt to build a new kind of computer with similar form, function, and architecture to the mammalian brain. Such artificial brains would be used in robots whose intelligence matches that of rats, cats, and ultimately even humans.






IBM is heavily involved with SyNAPSE, under its Cognitive Computing Project.









4. Numenta / Hierarchical Temporal Memory Theory


Jeff Hawkins argues that attempts to create an artificial intelligence by simply programming a computer to do what a brain does are flawed and that to actually make an intelligent computer, we simply need to teach it to find and use patterns, not to attempt any specific tasks. Through this method, he thinks we can build intelligent machines, helping us do all sorts of useful tasks that current computers cannot achieve. He further argues that this memory-prediction system as implemented by the brain's cortex is the basis of human intelligence.





Numenta, a company formed by Hawkins, was created to develop the theories he put forth in his book On Intelligence.  Numenta is developing what will potentially be a category-defining product based on this technology. The product promises to dramatically reduce the cost and difficulty of extracting value from any type of data.


Hawkins is a spokesperson of sorts for the neuroscience and AI communities, and Numenta has recently been promising breakthrough developments.  As a former computer engineer turned neuroscientist, he embodies the convergence of these formerly separate fields of study.





5. The Human Connectome Project


The Human Connectome Project aims to provide an unparalleled compilation of neural data, an interface to graphically navigate this data, and the opportunity to reach conclusions about the living human brain that were never before possible.




The Human Connectome Project (HCP) is a project to construct a map of the complete structural and functional neural connections in vivo within and across individuals. The HCP represents the first large-scale attempt to collect and share data of a scope and detail sufficient to begin addressing deeply fundamental questions about human connectional anatomy and variation. A collaboration between MGH and UCLA, the HCP is being developed to employ advanced neuroimaging methods and to construct an extensive informatics infrastructure linking these data and connectivity models to detailed phenomic and genomic data, building upon existing multidisciplinary and collaborative efforts currently underway. Working closely with other HCP partners based at Washington University in St. Louis, the project will provide rich data, essential imaging protocols, and sophisticated connectivity analysis tools for the neuroscience community.

Many of the neural connections can be seen on the Connectome Project's Connection Viewer.