Machines May Learn Like Us

Monday, August 26, 2013

Studies have found that neural network computer models, which are used in a growing number of applications, may learn to recognize patterns in data using the same algorithms as the human brain.




A growing number of experiments with neural networks are revealing that these models behave strikingly like actual brains when performing certain tasks. Researchers say the similarities suggest a basic correspondence between the brains’ and computers’ underlying learning algorithms.

The algorithm used by a computer model called the Boltzmann machine, invented by Geoffrey Hinton and Terry Sejnowski in 1983, appears particularly promising as a simple theoretical explanation of a number of brain processes, including development, memory formation, object and sound recognition, and the sleep-wake cycle.

Hinton — the great-great-grandson of the 19th-century logician George Boole, whose work is the foundation of modern computer science — has always wanted to understand the rules governing when the brain beefs up a connection and when it whittles one down — in short, the algorithm for how we learn. “It seemed to me if you want to understand something, you need to be able to build one,” he said. Following the reductionist approach of physics, his plan was to construct simple computer models of the brain that employed a variety of learning algorithms and “see which ones work,” said Hinton, who splits his time between the University of Toronto, where he is a professor of computer science, and Google.


During the 1980s and 1990s, Hinton invented or co-invented a collection of machine learning algorithms. The algorithms, which tell computers how to learn from data, are used in computer models called artificial neural networks — webs of interconnected virtual neurons that transmit signals to their neighbors by switching on and off, or “firing.” When data are fed into the network, setting off a cascade of firing activity, the algorithm determines, based on the firing patterns, whether to increase or decrease the weight of the connection, or synapse, between each pair of neurons.
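To make that idea concrete, the toy sketch below applies one of the simplest such rules, a Hebbian-style “fire together, wire together” update, to random firing patterns. It is an illustration of the general principle described above, not any particular algorithm of Hinton’s; the network size, learning rate, and decay term are all arbitrary choices.

```python
import numpy as np

# Toy sketch: adjust synapse weights based on firing patterns.
# A Hebbian-style rule, purely illustrative; not a specific Hinton algorithm.

rng = np.random.default_rng(0)
n_neurons = 8
weights = np.zeros((n_neurons, n_neurons))  # synapse strengths between neuron pairs
learning_rate = 0.1

def update_weights(weights, firing):
    """Strengthen the synapse between two neurons when both fire together,
    weaken it slightly otherwise (a crude 'fire together, wire together' rule)."""
    coactivity = np.outer(firing, firing)   # 1 where both neurons fired, 0 elsewhere
    weights += learning_rate * (coactivity - 0.05)  # small decay when not co-firing
    np.fill_diagonal(weights, 0.0)          # no self-connections
    return weights

# Feed in a cascade of random binary firing patterns (the "data").
for _ in range(100):
    firing = (rng.random(n_neurons) > 0.5).astype(float)
    weights = update_weights(weights, firing)
```

Running the loop, pairs of neurons that happen to fire together often end up with strong positive weights, while rarely co-active pairs drift toward zero — the basic dynamic the paragraph above describes.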

For decades, many of Hinton’s computer models languished. But thanks to advances in computing power, in scientists’ understanding of the brain, and in the algorithms themselves, neural networks are playing an increasingly important role in neuroscience.

Sejnowski, now head of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies in La Jolla, Calif., said: “Thirty years ago, we had very crude ideas; now we are beginning to test some of those ideas.”

Early on, Hinton’s attempts at replicating the brain were limited. Computers could run his learning algorithms on small neural networks, but scaling the models up quickly overwhelmed the available hardware. However, in 2005, Hinton discovered that if he sectioned his neural networks into layers and ran the algorithms on them one layer at a time, which approximates the brain’s structure and development, the process became more efficient.
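The sketch below illustrates what that layer-by-layer approach can look like: each layer is a small restricted Boltzmann machine (RBM) trained on the outputs of the layer beneath it, one layer at a time. This is a stripped-down illustration — a single contrastive-divergence step per update, no bias terms, toy random data — not Hinton’s exact published procedure.

```python
import numpy as np

# Greedy layer-by-layer training sketch: stack small RBMs, training each one
# on the (frozen) hidden activity of the layer below. Simplified illustration.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm_layer(data, n_hidden, steps=200, lr=0.05):
    """Train one RBM layer with a single contrastive-divergence step (CD-1)."""
    W = rng.normal(0.0, 0.1, (data.shape[1], n_hidden))
    for _ in range(steps):
        h_prob = sigmoid(data @ W)                     # data-driven hidden activity
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        v_recon = sigmoid(h_sample @ W.T)              # reconstruction of the input
        h_recon = sigmoid(v_recon @ W)                 # hidden activity on reconstruction
        # Strengthen data-driven correlations, weaken reconstruction-driven ones.
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
    return W

# Toy binary "data"; real applications would use images or audio features.
data = (rng.random((100, 32)) > 0.5).astype(float)

# Train layers one at a time: freeze each layer and pass its hidden
# activity upward as the next layer's training data.
representation, layers = data, []
for n_hidden in (16, 8):
    W = train_rbm_layer(representation, n_hidden)
    layers.append(W)
    representation = sigmoid(representation @ W)
```

The efficiency gain comes from never having to optimize the whole deep network at once: each layer solves a small, local learning problem on the representation handed up from below.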

Although Hinton published his discovery in two top journals, neural networks had fallen out of favor with researchers. But in the years since, the theoretical learning algorithms have been put to practical use in a surging number of applications, such as the Google Now personal assistant and the voice search feature on Microsoft Windows phones.

Neural networks have recently hit their stride thanks to Hinton’s layer-by-layer training method, the use of high-speed computer chips called graphics processing units (GPUs), and an explosive rise in the volume of images and recorded speech available for training. The networks can now correctly recognize about 88 percent of the words spoken in normal human English-language conversations, compared with about 96 percent for an average human listener. They can identify cats and thousands of other objects in images with similar accuracy, and in the past three years they have come to dominate machine learning competitions.

Researchers are also finding that the Boltzmann machine algorithm seems to have a biological analogy in the sleeping human brain. Sejnowski, who earlier this year became an adviser on the Obama administration’s new BRAIN Initiative — a $100 million research effort to develop new techniques for studying the brain — says the easiest way for the brain to run the Boltzmann algorithm is to switch from building up synapses during the day to cleaning them up and organizing them during the night.
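That day/night division maps onto the two phases of the Boltzmann machine’s learning rule: connections strengthen in proportion to correlations measured while the network is clamped to data (a “wake” phase) and weaken in proportion to correlations measured while the network runs freely on its own activity (a “sleep” phase). The toy sketch below makes this explicit for a small, fully visible network; the single-sample sleep-phase estimate and all parameters are simplifications for illustration.

```python
import numpy as np

# Sketch of the two-phase Boltzmann machine learning rule:
#   delta w_ij  is proportional to  <s_i s_j>_data - <s_i s_j>_model
# Wake phase strengthens synapses; sleep phase prunes them. Illustrative only.

rng = np.random.default_rng(0)
n = 6
W = np.zeros((n, n))
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(state, W):
    """Resample each binary unit given the others (one sweep of Gibbs sampling)."""
    for i in range(len(state)):
        p_on = sigmoid(W[i] @ state)
        state[i] = 1.0 if rng.random() < p_on else 0.0
    return state

# Toy "daytime" data in which units 0 and 1 always fire together.
data = (rng.random((200, n)) > 0.5).astype(float)
data[:, 1] = data[:, 0]

for _ in range(100):
    # Wake phase: correlations measured with the network clamped to the data.
    wake = data.T @ data / len(data)
    # Sleep phase: correlations measured while the network runs freely.
    state = (rng.random(n) > 0.5).astype(float)
    for _ in range(20):
        state = gibbs_step(state, W)
    sleep = np.outer(state, state)
    W += lr * (wake - sleep)
    np.fill_diagonal(W, 0.0)

# After training, W[0, 1] should be strongly positive, reflecting the
# co-firing of units 0 and 1 in the daytime data.
```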

Giulio Tononi, head of the Center for Sleep and Consciousness at the University of Wisconsin-Madison, has found that gene expression inside synapses changes in a way that supports this hypothesis: Genes involved in synaptic growth are more active during the day, and those involved in synaptic pruning are more active during sleep.

Sparse coding is another method the brain may use to deal with information overload, filtering incoming data into manageable units. Bruno Olshausen, a computational neuroscientist and director of Jeff Hawkins’ Redwood Center for Theoretical Neuroscience at the University of California, Berkeley, helped develop the theory of sparse coding. “So it’s like you have a Boltzmann machine sitting there in the back of your head trying to learn the relationships between the elements of the sparse code,” he said.
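As a rough illustration of what sparse coding means computationally, the sketch below finds a code for an input signal that uses only a few elements of a fixed dictionary, via iterative soft-thresholding (ISTA). Olshausen’s actual work also learns the dictionary itself from natural images; here the dictionary is random, and the sizes and parameters are arbitrary choices for illustration.

```python
import numpy as np

# Sparse coding sketch: explain an input as a combination of a few
# dictionary elements, leaving most coefficients at exactly zero.

rng = np.random.default_rng(0)
n_inputs, n_features = 16, 64
D = rng.normal(0, 1, (n_inputs, n_features))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary elements

def sparse_code(x, D, lam=0.1, steps=100):
    """Find sparse coefficients phi minimizing
    0.5 * ||x - D @ phi||^2 + lam * ||phi||_1, via ISTA."""
    phi = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # safe step size (1 / Lipschitz constant)
    for _ in range(steps):
        grad = D.T @ (D @ phi - x)          # gradient of the reconstruction term
        phi = phi - step * grad
        # Soft-threshold: shrink small coefficients to zero, enforcing sparsity.
        phi = np.sign(phi) * np.maximum(np.abs(phi) - step * lam, 0.0)
    return phi

x = rng.normal(0, 1, n_inputs)
phi = sparse_code(x, D)
print(f"{np.count_nonzero(phi)} of {phi.size} coefficients active")
```

The point of the exercise is the output: only a handful of the 64 coefficients stay nonzero, so each input is summarized by a small, manageable set of active elements — the filtering the paragraph above describes.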

Olshausen and his research team recently used neural network models of higher layers of the visual cortex to show how brains are able to create stable perceptions of visual inputs in spite of image motion. In another recent study, they found that neuron firing activity throughout the visual cortex of cats watching a black-and-white movie was well described by a Boltzmann machine.

A potential application of that work is in building neural prostheses, such as an artificial retina. With an understanding of “the formatting of information in the brain, you would know how to stimulate the brain to make someone think they are seeing an image,” Olshausen said.


SOURCE: Quanta Magazine

By 33rd Square
