AI System Spontaneously Reproduces Aspects of Human Neurology

Thursday, December 1, 2016

Artificial Intelligence

Researchers at MIT and their colleagues have developed a new computational model of the human brain’s face-recognition system that seems to capture aspects of human neurology that previous models have missed.

The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face’s degree of rotation (say, 45 degrees from center) but not its direction (left or right).

This rotation property was not built into the system; it emerged spontaneously from the training process. But it duplicates an experimentally observed feature of the primate face-processing mechanism. The researchers consider this an indication that their system and the brain are doing something similar.

“This is not a proof that we understand what’s going on,” says Tomaso Poggio, CSAIL principal investigator and director of the Center for Brains, Minds, and Machines (CBMM), a multi-institution research consortium funded by the National Science Foundation and headquartered at MIT. “Models are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. But I think it’s strong evidence that we are on the right track.”

The researchers’ new paper, published in Current Biology, includes a mathematical proof that the particular type of machine-learning system they use, which was intended to offer what Poggio calls a “biologically plausible” model of the nervous system, will inevitably yield intermediate representations that register a face’s degree of rotation but are indifferent to its direction.
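The proof itself is in the paper, but the flavor of the argument can be sketched informally. The following is a paraphrase, not the paper’s exact statement, assuming that Hebbian learning of the Oja type drives a unit’s weights toward principal components of its inputs and that training views include each face together with its mirror image:

```latex
% Informal sketch (a paraphrase, not the paper's exact statement).
% Oja-type Hebbian plasticity drives a unit's weight vector toward
% eigenvectors v of the input covariance C = E[x x^T]. If the training
% set contains every face view together with its mirror image, C
% commutes with the (orthogonal, symmetric) reflection operator R, so
% the eigenvectors of C can be chosen even or odd under reflection:
\[
  R v = \pm v .
\]
% The unit's response magnitude to a mirrored input is then unchanged:
\[
  |\langle v, R x \rangle| = |\langle R v, x \rangle|
  = |{\pm}\langle v, x \rangle| = |\langle v, x \rangle| ,
\]
% i.e., the unit registers how far a face is rotated, but not which way.
```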

The new paper is “a nice illustration of what we want to do in [CBMM], which is this integration of machine learning and computer science on one hand, neurophysiology on the other, and aspects of human behavior,” Poggio says. “That means not only what algorithms does the brain use, but what are the circuits in the brain that implement these algorithms.”

Because neurophysiologists had already observed that different groups of neurons in the primate brain fire when faces are presented at different angles, the researchers could recognize the property their machine-learning system had reproduced. “It was not a model that was trying to explain mirror symmetry,” Poggio says. “This model was trying to explain invariance, and in the process, there is this other property that pops out.”

The researchers’ machine-learning system is a neural network, consisting of very simple processing units, arranged into layers, that are densely connected to the processing units — or nodes — in the layers above and below. Data are fed into the bottom layer of the network, which processes them in some way and feeds them to the next layer, and so on. During training, the output of the top layer is correlated with some classification criterion — say, correctly determining whether a given image depicts a particular person.
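As a rough illustration only (this is not the authors’ architecture; the layer sizes, ReLU activations, and random weights below are arbitrary choices for the sketch), such a layered network can be written in a few lines of Python:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class TinyFeedforwardNet:
    """A minimal layered network: each layer feeds the one above it."""
    def __init__(self, layer_sizes):
        # One weight matrix per pair of adjacent layers.
        self.weights = [rng.normal(0.0, 0.1, (m, n))
                        for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(self, x):
        """Return the activations of every layer, bottom to top."""
        activations = [x]
        for W in self.weights:
            x = relu(x @ W)
            activations.append(x)
        return activations

# e.g. a flattened 32x32 image -> two intermediate layers -> 10 identities
net = TinyFeedforwardNet([1024, 256, 64, 10])
image = rng.random(1024)          # stand-in for a flattened face image
layers = net.forward(image)
print([a.shape for a in layers])  # middle entries are the
                                  # "intermediate representations"
```

During training, the top layer’s output would be compared against the classification criterion and the connection weights adjusted accordingly; the update rule is where the new model departs from standard practice, as described below.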

[Image: Using angles in facial recognition]

An earlier version of the system produced invariant representations: a face’s signature turned out to be roughly the same no matter its orientation. But the mechanism it relied on, memorizing templates, was not biologically plausible, Poggio says.

Instead, the new network uses a variation on Hebb’s rule, which is often described in the neurological literature as “neurons that fire together wire together.” That means that during training, as the weights of the connections between nodes are being adjusted to produce more accurate outputs, nodes that react in concert to particular stimuli end up contributing more to the final output than nodes that react independently.
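In code, a Hebbian update is strikingly simple. The sketch below is a generic illustration, not the paper’s exact learning rule: it uses Oja’s stabilized variant of Hebb’s rule (the plain rule lets weights grow without bound), with a made-up learning rate and toy inputs:

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One Hebbian step (Oja's variant): the weight vector w is
    strengthened in proportion to how strongly the input x and the
    unit's output y fire together; the -y*w term keeps w bounded."""
    y = w @ x                       # the unit's output for this input
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
w /= np.linalg.norm(w)
for _ in range(5000):
    x = rng.normal(size=8)
    x[0] *= 3.0                     # one input direction varies most
    w = oja_update(w, x)
print(np.round(w, 2))  # w ends up aligned with the high-variance direction
```

Units trained this way come to respond to the dominant patterns of co-activation in their inputs, which is what “fire together, wire together” amounts to computationally.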

This approach, too, ended up yielding invariant representations. But the middle layers of the network also duplicated the mirror-symmetric responses of the intermediate visual-processing regions of the primate brain.
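A generic way to probe for this property in any trained network (our illustration, not the authors’ analysis) is to compare an intermediate layer’s response pattern to a face rotated +θ against its pattern at -θ; a mirror-symmetric layer gives nearly identical patterns. The toy layer below is hand-built to have the property the article describes, purely to show what the probe measures:

```python
import numpy as np

def mirror_symmetry_score(layer_response, angles):
    """Correlate a layer's response pattern at +theta with its pattern
    at -theta; scores near 1.0 indicate mirror-symmetric tuning."""
    scores = [np.corrcoef(layer_response(t), layer_response(-t))[0, 1]
              for t in angles]
    return float(np.mean(scores))

# Toy stand-in for an intermediate layer: each unit is tuned to the
# magnitude of rotation (|theta|) but not its sign.
rng = np.random.default_rng(0)
preferred = rng.uniform(0, 90, size=64)   # each unit's preferred |angle|

def toy_layer(theta):
    return np.exp(-((abs(theta) - preferred) ** 2) / 200.0)

print(mirror_symmetry_score(toy_layer, angles=[15, 30, 45, 60]))  # ~1.0
```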

The researchers conclude:
Our feedforward model, which succeeds in explaining the main tuning and invariance properties of the macaque face-processing system, may serve as a building block for future object-recognition models addressing brain areas such as prefrontal cortex, hippocampus and superior colliculus, integrating feed-forward processing with subsequent computational steps that involve eye-movements and their planning, together with task dependency and interactions with memory.




SOURCE: CSAIL


By 33rd Square


