Researchers Unlock Brain's Enigma Code for Processing Visual Images

Sunday, April 24, 2016



Cognitive Neuroscience

Researchers have discovered what two parts of the brain are saying to one another when processing visual images. This breakthrough research could lead to human-like machine vision and computer models of neurological diseases.


Until now, scientists have only been able to tell whether two parts of the brain are communicating with each other. Modern neuroscience has attempted to model the brain as a network of densely interconnected functional nodes, but the dynamic information-processing mechanisms of perception and cognition have been nearly impossible to translate into a core mathematical statement, or algorithm. The pursuit has been likened to Turing's quest to crack the Enigma machine during the Second World War.

Now, researchers at the University of Glasgow have discovered what two parts of the brain are saying to one another when processing visual images. The breakthrough study, 'Tracing the Flow of Perceptual Features in an Algorithmic Brain Network', has been published in Scientific Reports.

Using an innovative method called Directed Feature Information, the scientists reconstructed examples of possible algorithmic brain networks that code and communicate the specific features underlying two distinct perceptions of the same ambiguous picture.

In this case the scientists used a picture of Salvador Dali’s Slave Market with the Disappearing Bust of Voltaire, focusing on the face of Voltaire and on the two nuns surreally embedded within the image in typical Dali style.
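The paper defines Directed Feature Information information-theoretically on MEG recordings, and its exact estimator is beyond the scope of a news summary. The rough idea, however, is how much of the stimulus-feature information carried by a receiving brain region's current signal, over and above its own past, can be accounted for by a sending region's earlier activity. The toy Python below is only a sketch of that general idea under our own simplifying assumptions (discretized signals, a single time lag, naive plug-in estimates); none of the function names or definitions are taken from the study.

```python
import numpy as np

def discretize(x, bins=4):
    """Bin a continuous signal into roughly equiprobable levels (bin count is arbitrary)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(x, edges)

def mutual_info(x, y):
    """Plug-in estimate of discrete mutual information I(X; Y) in bits."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xs.size, ys.size))
    np.add.at(joint, (xi, yi), 1)                 # joint counts
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px * py)[nz])))

def conditional_mutual_info(x, y, z):
    """I(X; Y | Z), averaged over the discrete conditioning variable Z."""
    return sum((z == v).mean() * mutual_info(x[z == v], y[z == v]) for v in np.unique(z))

def combine(a, b):
    """Encode two discrete variables as one joint symbol for conditioning."""
    return a * (b.max() + 1) + b

def directed_feature_info(feature, sender_past, receiver_now, receiver_past):
    """Illustrative DFI-style quantity: the feature information in the receiver's
    current signal (beyond its own past) that disappears once the sender's past is known."""
    base = conditional_mutual_info(feature, receiver_now, receiver_past)
    conditioned = conditional_mutual_info(feature, receiver_now,
                                          combine(receiver_past, sender_past))
    return base - conditioned

# Toy check on synthetic trials: the stimulus feature drives the sender's earlier
# signal, which in turn drives the receiver's later signal, so some of the
# receiver's feature information should be traceable to the sender's past.
rng = np.random.default_rng(0)
n = 5000
feature = rng.integers(0, 2, n)                                        # per-trial stimulus feature
sender_past = discretize(feature + 0.5 * rng.standard_normal(n))       # sender, earlier in the trial
receiver_past = discretize(rng.standard_normal(n))                     # receiver before the message arrives
receiver_now = discretize(sender_past + 0.5 * rng.standard_normal(n))  # receiver after, inheriting feature info
print(directed_feature_info(feature, sender_past, receiver_now, receiver_past))
```

On these synthetic data the returned value is positive, reflecting feature information flowing from the hypothetical sender to the receiver; with the sender's influence removed it would fall toward zero.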


Dr Robin Ince, the lead author on the paper, explains: “By randomly showing different small sections of the image, we were able to see how each part of the image affected the recorded brain signals.”
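Ince's description matches a reverse-correlation style of experiment, in the spirit of the "Bubbles" sampling long associated with Schyns's lab, in which the stimulus is revealed through random apertures and each location's influence on the recorded signal is mapped out. The Python sketch below illustrates only that general idea; the Gaussian aperture shape, the sizes and the simple per-pixel correlation are our assumptions, not details taken from the paper.

```python
import numpy as np

def random_aperture_mask(shape, n_bubbles=10, sigma=8.0, rng=None):
    """Smooth mask revealing a few randomly placed Gaussian 'bubbles' of the image."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def classification_image(masks, responses):
    """Per-pixel correlation between how visible each location was on a trial and
    the recorded response on that trial (a basic reverse-correlation map)."""
    masks = np.asarray(masks, dtype=float)        # (n_trials, h, w)
    responses = np.asarray(responses, dtype=float)
    m = masks - masks.mean(axis=0)
    r = responses - responses.mean()
    num = np.tensordot(r, m, axes=1)              # sum over trials
    denom = len(r) * m.std(axis=0) * r.std()
    return num / np.where(denom == 0, 1.0, denom)

# Toy usage: a synthetic "brain response" that depends only on how much of one
# diagnostic region (say, Voltaire's face) each random mask happens to reveal.
rng = np.random.default_rng(1)
masks = [random_aperture_mask((64, 64), rng=rng) for _ in range(500)]
diagnostic = np.zeros((64, 64)); diagnostic[20:30, 20:30] = 1.0
responses = [(m * diagnostic).sum() + rng.standard_normal() for m in masks]
ci = classification_image(masks, responses)      # peaks over the diagnostic region
```

The resulting map lights up wherever revealing the image reliably changes the signal, which is one simple way to see "how each part of the image affected the recorded brain signals."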

In each observer, they identified a network architecture comprising one occipito-temporal hub where the features underlying both perceptual decisions dynamically converge. "Our focus on detailed information flow represents an important step towards a new brain algorithmics to model the mechanisms of perception and cognition," they write.

The research marks a huge development in interpreting brain activity, opening up a range of opportunities to study what happens to the brain’s network as it ages, or how a stroke disrupts brain processes. It also raises the possibility of future research into machine vision.

Philippe Schyns, professor of psychology at the university’s centre for cognitive neuroimaging, said: “With Enigma, we knew the Germans were communicating, but we didn’t know what they were saying. Just like if you’re walking down the street and you see two people talking in the distance: you know they are communicating with each other, but you don’t know what they are saying.

“Communication between brain regions has so far been like these examples: we know it’s happening, but we don’t know what it’s about. Through our research, we have been able to ‘break the code,’ so to speak, and therefore glean what two parts of the brain are saying to each other.”

The research will have valuable applications in other areas as well. Ince adds: “Being able to measure the content of communication between brain regions is crucial for studying the detailed function of brain networks and how, for example, that changes with aging or disease.”

Schyns added: “Through these discoveries, by knowing how to code and integrate these messages between different parts of the brain, we could one day give robots the same visual capabilities as people.”


SOURCE: The Scotsman


By 33rd Square

