Algorithms Can Now Read Your Mind Using Brain Scan Data

Wednesday, May 10, 2017

Mind Reading from fMRI

A newly developed neural network method makes decoding fMRI scans easier and more accurate. Using deep learning, researchers were able to reconstruct the images a viewer was looking at by analyzing brain scan data. Their 'deep generative multiview model' also learned how the data was correlated, so that the standard noise accompanying fMRI could be accounted for when generating the reconstructed images.


One of the far-reaching goals of neuroscience is to be able to read a person's thoughts. Such technology is, in part, the inspiration for Elon Musk's new company, Neuralink, along with other brain-machine interface ventures. So far, though, doing this with data from functional magnetic resonance imaging (fMRI) scans has proven very challenging.

fMRI scans are inherently noisy, and the activity in one voxel is well known to be influenced by activity in other voxels. This kind of correlation is computationally difficult and expensive to manage, so most work in this area has simply not dealt with it, which has significantly reduced the quality of the image reconstructions produced.

Now, Changde Du at the Research Center for Brain-Inspired Intelligence in Beijing, China, and his research team have developed a better way to process data from fMRI scans and produce more accurate brain-image reconstructions. The team's research has been published online.

Their method uses deep learning techniques that handle the nonlinear correlations between voxels more capably. The result is a much better way to reconstruct the images a brain perceives.
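
As a rough illustration of what handling nonlinear voxel correlations means in practice, here is a minimal sketch of a voxel-to-image decoder in PyTorch. This is not the authors' model; the voxel count, image size, and layer sizes are assumptions chosen purely for the example.

```python
# Minimal sketch of a nonlinear voxel-to-image decoder (illustrative
# only; not the authors' DGMM). All dimensions are assumptions.
import torch
import torch.nn as nn

class VoxelDecoder(nn.Module):
    def __init__(self, n_voxels=3092, img_pixels=28 * 28):
        super().__init__()
        # Stacked nonlinear layers let the model capture interactions
        # between voxels that a simple linear map would miss.
        self.net = nn.Sequential(
            nn.Linear(n_voxels, 512),
            nn.ReLU(),
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Linear(128, img_pixels),
            nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, voxels):
        return self.net(voxels)

decoder = VoxelDecoder()
fake_scan = torch.randn(1, 3092)        # stand-in for one fMRI sample
reconstruction = decoder(fake_scan)     # -> (1, 784) image vector
```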

Du's team used several data sets of fMRI scans of the visual cortex of a human subject looking at a simple image, such as a single digit or a single letter. Each data set consists of the scans paired with the original images, and the team used this mapping to learn how to reproduce the viewer's perceived image from the fMRI scans alone. In total, the team had access to over 1,800 fMRI scans and original images.

According to the researchers, it was a straightforward deep learning task. They used 90 percent of the data to train the network to learn the correlation between the brain scans and the original images. Next, they tested the neural network on the remaining data by feeding it the scans and asking it to reconstruct the viewed images.
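
A sketch of that 90/10 train-and-test procedure might look like the following. The data here is random stand-in tensors, and the network, loss, and optimizer are assumptions for illustration, not details taken from the paper.

```python
# Sketch of the 90/10 split and training loop described above
# (illustrative stand-ins throughout; not the authors' code).
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, random_split

scans = torch.randn(1800, 3092)      # stand-in for the fMRI scans
images = torch.rand(1800, 28 * 28)   # stand-in for the viewed images
data = TensorDataset(scans, images)

n_train = int(0.9 * len(data))       # 90% to train, 10% held out
train_set, test_set = random_split(data, [n_train, len(data) - n_train])

# Stand-in decoder network, same idea as the sketch above.
model = nn.Sequential(
    nn.Linear(3092, 512), nn.ReLU(),
    nn.Linear(512, 28 * 28), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()               # pixel-wise reconstruction error

for epoch in range(10):
    for voxels, target in DataLoader(train_set, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(voxels), target)
        loss.backward()
        optimizer.step()

# Evaluate on the held-out 10%: feed scans, compare reconstructions.
with torch.no_grad():
    voxels, target = next(iter(DataLoader(test_set, batch_size=len(test_set))))
    print("held-out reconstruction error:", loss_fn(model(voxels), target).item())
```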

This approach had the advantage that the network learned which voxels mattered for reconstructing the image, avoiding the need to process the data from all of them.

The neural network also learned how the fMRI data was correlated. This was an important part of the research, because if the correlations are ignored, they end up being treated like noise and discarded. The new approach, the so-called deep generative multiview model, or DGMM, instead exploits these correlations and distinguishes them from real noise.
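
The multiview idea can be sketched as two encoders and two decoders sharing one latent code, so that structure common to the scan and the image is captured by the latents rather than discarded as noise. This is only a schematic analogue: the paper's DGMM is a Bayesian generative model, and every name and dimension below is an assumption.

```python
# Schematic analogue of a two-view shared-latent model (illustrative
# only; the actual DGMM is a full Bayesian generative model).
import torch
import torch.nn as nn

LATENT_DIM = 32  # assumed size of the shared latent code

class TwoViewSketch(nn.Module):
    def __init__(self, n_voxels=3092, img_pixels=28 * 28):
        super().__init__()
        # One encoder per view maps into the *same* latent space, so
        # structure shared by scan and image ends up in the latents.
        self.encode_scan = nn.Linear(n_voxels, LATENT_DIM)
        self.encode_image = nn.Linear(img_pixels, LATENT_DIM)
        # One decoder per view regenerates that view from the latents;
        # whatever the latents cannot explain is treated as view noise.
        self.decode_scan = nn.Linear(LATENT_DIM, n_voxels)
        self.decode_image = nn.Linear(LATENT_DIM, img_pixels)

    def reconstruct_image(self, scan):
        # At test time only the scan is available: encode it, then
        # decode the image view from the shared latent code.
        return self.decode_image(self.encode_scan(scan))

model = TwoViewSketch()
image_guess = model.reconstruct_image(torch.randn(1, 3092))
```

During training, both views would be encoded and both reconstructions penalized, which pushes the correlated voxel structure into the shared latents instead of the noise term.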

The team compared their results with those of a number of other brain-image reconstruction techniques (see image at top). Generally, the reconstructed images are clear representations of the originals and are, for the most part, superior to those derived by other methods.

“Extensive experimental comparisons demonstrate that our approach can reconstruct visual images from fMRI measurements more accurately,” write the study authors.

The research may have implications beyond regenerating what a viewer sees by interpreting a brain scan. "Although we focused on visual image reconstruction problem in this paper, our framework can also deal with brain encoding tasks," write the study authors.

The next steps for the research will include finding ways to analyze scenes more complex than single digits and letters, and possibly moving images.

SOURCE  MIT Technology Review


By 33rd Square




