33rd Square Business Tools: cognitive neuroscience
Showing posts with label cognitive neuroscience. Show all posts

Monday, December 19, 2016

Image Processing Artificial Intelligence Learns Mostly On Its Own, Just Like a Human


Artificial Intelligence

Artificial intelligence and neuroscience researchers have taken inspiration from the human brain in creating a new deep learning system that enables computers to learn about the visual world largely on their own, just like human babies do.


Artificial intelligence and neuroscience experts from Rice University and Baylor College of Medicine, taking inspiration from the human brain, have developed a new deep learning method that lets computers learn about the visual world largely on their own, much the same way human babies do.

In tests, the group’s “deep rendering mixture model” (DRMM) largely taught itself how to distinguish handwritten digits using a standard dataset of 10,000 digits written by federal employees and high school students. In results presented this month at the Neural Information Processing Systems (NIPS) conference in Barcelona, the researchers described how they trained their algorithm by giving it just 10 correct examples of each handwritten digit between zero and nine, and then presenting it with several thousand more examples that it used to further teach itself.

The algorithm was more accurate at correctly distinguishing handwritten digits than almost all previous algorithms that were trained with thousands of correct examples of each digit.

"The DRMM is applicable to semi-supervised and unsupervised learning tasks, achieving results that are state-of-the-art in several categories on the MNIST benchmark and comparable to state of the art," conclude the authors.

“In deep learning parlance, our system uses a method known as semisupervised learning,” said lead researcher Ankit Patel, an assistant professor with joint appointments in neuroscience at Baylor and electrical and computer engineering at Rice. “The most successful efforts in this area have used a different technique called supervised learning, where the machine is trained with thousands of examples: This is a one. This is a two.”

“Humans don’t learn that way,” Patel said. “When babies learn to see during their first year, they get very little input about what things are. Parents may label a few things: ‘Bottle. Chair. Momma.’ But the baby can’t even understand spoken words at that point. It’s learning mostly unsupervised via some interaction with the world.”

Patel said he and graduate student Tan Nguyen, a co-author on the new study, set out to design a semisupervised learning system for visual data that didn’t require much “hand-holding” in the form of training examples. For instance, neural networks that use supervised learning would typically be given hundreds or even thousands of training examples of handwritten digits before they would be tested on the database of 10,000 handwritten digits in the Mixed National Institute of Standards and Technology (MNIST) database.
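
To make that setup concrete, here is a minimal sketch in Python (an illustration, not the authors’ code) of the data split the experiment describes: keep just 10 labeled examples of each digit and treat the rest of the training set as unlabeled.

```python
import numpy as np

def semi_supervised_split(images, labels, n_labeled_per_class=10, seed=0):
    """Keep n_labeled_per_class labeled examples per digit (10 x 10 = 100
    labels in total) and treat every remaining example as unlabeled."""
    rng = np.random.default_rng(seed)
    labeled_idx = []
    for digit in range(10):
        candidates = np.flatnonzero(labels == digit)
        labeled_idx.extend(rng.choice(candidates, size=n_labeled_per_class,
                                      replace=False))
    labeled_idx = np.asarray(labeled_idx)
    unlabeled_idx = np.setdiff1d(np.arange(len(labels)), labeled_idx)
    return (images[labeled_idx], labels[labeled_idx]), images[unlabeled_idx]

# Usage: (x_lab, y_lab), x_unlab = semi_supervised_split(train_x, train_y)
# A semi-supervised learner like the DRMM trains on the 100 labeled digits
# while also exploiting the structure of the thousands of unlabeled ones.
```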

DRMM

The semisupervised Rice-Baylor algorithm is a “convolutional neural network,” a piece of software made up of artificial neurons, or processing units, whose design was inspired by biological neurons. These units are organized in layers: the first layer scans an image and performs simple tasks like searching for edges and color changes, while the second layer examines the output of the first and searches for more complex patterns. Mathematically, this nested method of looking for patterns within patterns within patterns is a nonlinear process.
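
As an illustration of that layered, nonlinear structure, here is a toy convolutional network in PyTorch. It is a conventional convnet of the kind Patel describes, not the DRMM itself, and the layer sizes are arbitrary choices for this sketch.

```python
import torch
import torch.nn as nn

# Two convolutional layers mirror the description above: the first scans
# the image for simple features such as edges, the second looks for
# patterns in the first layer's output, and each ReLU supplies the
# nonlinearity that makes the nesting of patterns-within-patterns work.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, padding=2),   # layer 1: simple features
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=5, padding=2),  # layer 2: patterns of patterns
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # scores for the 10 digits
)

logits = model(torch.randn(1, 1, 28, 28))  # one fake 28x28 grayscale digit
print(logits.shape)                        # torch.Size([1, 10])
```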

“It’s essentially a very simple visual cortex,” Patel said of the convolutional neural net. “You give it an image, and each layer processes the image a little bit more and understands it in a deeper way, and by the last layer, you’ve got a really deep and abstract understanding of the image. Every self-driving car right now has convolutional neural nets in it because they are currently the best for vision.”

"The way the brain is doing it is far superior to any neural network that we’ve designed."
Like human brains, neural networks start out as blank slates and become fully formed as they interact with the world. For example, each processing unit in a convolutional net starts out the same and becomes specialized over time as it is exposed to visual stimuli.

“Edges are very important,” Nguyen said. “Many of the lower layer neurons tend to become edge detectors. They’re looking for patterns that are both very common and very important for visual interpretation, and each one trains itself to look for a specific pattern, like a 45-degree edge or a 30-degree red-to-blue transition.

“When they detect their particular pattern, they become excited and pass that on to the next layer up, which looks for patterns in their patterns, and so on,” he said. “The number of times you do a nonlinear transformation is essentially the depth of the network, and depth governs power. The deeper a network is, the more stuff it’s able to disentangle. At the deeper layers, units are looking for very abstract things like eyeballs or vertical grating patterns or a school bus.”
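
To make the idea of an edge-detecting unit concrete, here is a small NumPy sketch. A trained network learns such filters from data; the 45-degree kernel below is simply written by hand for illustration.

```python
import numpy as np
from scipy.ndimage import correlate

# A hand-written filter that responds to 45-degree edges, standing in for
# what a lower-layer unit learns on its own during training.
kernel_45 = np.array([[ 0.,  1.,  2.],
                      [-1.,  0.,  1.],
                      [-2., -1.,  0.]])

image = np.tril(np.ones((8, 8)))   # bright lower triangle: a 45-degree edge
response = correlate(image, kernel_45)

# |response| is large along the diagonal edge and zero in flat regions:
# the "neuron" becomes excited only where its preferred pattern appears.
print(np.abs(response).round(1))
```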

Patel said the theory of artificial neural networks, which was refined in the NIPS paper, could ultimately help neuroscientists better understand the workings of the human brain.

“There seem to be some similarities about how the visual cortex represents the world and how convolutional nets represent the world, but they also differ greatly,” Patel said. “What the brain is doing may be related, but it’s still very different. And the key thing we know about the brain is that it mostly learns unsupervised.

“What I and my neuroscientist colleagues are trying to figure out is, What is the semisupervised learning algorithm that’s being implemented by the neural circuits in the visual cortex? and How is that related to our theory of deep learning?” he said. “Can we use our theory to help elucidate what the brain is doing? Because the way the brain is doing it is far superior to any neural network that we’ve designed.”

SOURCE  Rice University


By 33rd Square



Sunday, April 24, 2016

Researchers Unlock Brain's Enigma Code for Processing Visual Images


Cognitive Neuroscience

Researchers have discovered what two parts of the brain are saying to one another when processing visual images. This breakthrough research could lead to human-like machine vision and computer models of neurological diseases.


Until now, scientists have only been able to tell whether two parts of the brain are communicating with each other. Modern neuroscience has attempted to model the brain as a network of densely interconnected functional nodes, but the dynamic information-processing mechanisms of perception and cognition have been nearly impossible to translate into a core mathematical statement, or algorithm. The pursuit has been likened to Turing's quest to crack the Enigma code during the Second World War.

Now, researchers at the University of Glasgow have discovered what two parts of the brain are saying to one another when processing visual images. The breakthrough study, 'Tracing the Flow of Perceptual Features in an Algorithmic Brain Network', has been published in Scientific Reports.

Using an innovative method called Directed Feature Information, the scientists reconstructed examples of possible algorithmic brain networks that code and communicate the specific features underlying two distinct perceptions of the same ambiguous picture.

In this case the scientists used a picture of Salvador Dali’s Slave Market with the Disappearing Bust of Voltaire, focusing on the face of Voltaire and the images of the two nuns surreally embedded in typical Dali style within the image.

Researchers Unlock Brain's Enigma Code for Processing Visual Images

Dr Robin Ince, the lead author on the paper, explains: “By randomly showing different small sections of the image, we were able to see how each part of the image affected the recorded brain signals.”
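
The toy sketch below conveys the spirit of that masking approach (it is an illustrative stand-in, not the paper's Directed Feature Information measure): show many trials with random parts of an image visible, and correlate each pixel's visibility with a recorded response.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, h, w = 2000, 16, 16
masks = rng.random((n_trials, h, w)) < 0.3   # True = pixel visible this trial

# Hypothetical ground truth for the demo: only a "face" region drives the signal.
face_region = np.zeros((h, w), dtype=bool)
face_region[4:9, 5:11] = True
response = masks[:, face_region].mean(axis=1) + 0.1 * rng.standard_normal(n_trials)

# Correlate "was this pixel visible?" with the response, pixel by pixel.
x = masks.reshape(n_trials, -1).astype(float)
x -= x.mean(axis=0)
y = response - response.mean()
corr = (x * y[:, None]).mean(axis=0) / (x.std(axis=0) * y.std() + 1e-12)
influence_map = corr.reshape(h, w)   # peaks over the region that mattered
```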

In each observer, they identified a network architecture comprising one occipito-temporal hub where the features underlying both perceptual decisions dynamically converge. "Our focus on detailed information flow represents an important step towards a new brain algorithmics to model the mechanisms of perception and cognition," they write.

The research marks a major development in interpreting brain activity, opening up a range of opportunities to study what happens to the brain’s network as it ages, or when its processes are disrupted by a stroke. It also raises the possibility of future research into machine vision.

Philippe Schyns, professor of psychology at the university’s centre for cognitive neuroimaging, said: “With Enigma, we knew the Germans were communicating, but we didn’t know what they were saying. Just like if you’re walking down the street and you see two people talking in the distance: you know they are communicating with each other, but you don’t know what they are saying.

“Communication between brain regions has so far been like these examples: we know it’s happening, but we don’t know what it’s about. Through our research, we have been able to ‘break the code,’ so to speak, and therefore glean what two parts of the brain are saying to each other.”

The research will have valuable applications in other areas as well. Ince adds: “Being able to measure the content of communication between brain regions is crucial for studying the detailed function of brain networks and how, for example, that changes with aging or disease.”

Schyns added: “Through these discoveries, by knowing how to code and integrate these messages between different parts of the brain, we could one day give robots the same visual capabilities as people.”


SOURCE  The Scotsman


By 33rd Square


Monday, August 19, 2013

Computer Programmed to Read Letters Directly from the Brain


 Mind Reading
Using a mathematical model, researchers in the Netherlands have reconstructed thoughts from data collected from fMRI test subjects, essentially reading the minds of the participants.




By analysing MRI images of the brain with an elegant mathematical model, researchers from Radboud University Nijmegen have reconstructed thoughts more accurately than ever before. In this way, they have succeeded in determining which letter a test subject was looking at.

The researchers' work has been published in the journal NeuroImage.

Functional MRI scanners have been used in cognition research primarily to determine which brain areas are active while test subjects perform a specific task. The question is simple: is a particular brain region on or off? A research group at the Donders Institute for Brain, Cognition and Behaviour at Radboud University has gone a step further: they have used data from the scanner to determine what a test subject is looking at.

The researchers 'taught' a model how small volumes of 2x2x2 mm from the brain scans -- known as voxels -- respond to individual pixels. By combining all the information about the pixels from the voxels, it became possible to reconstruct the image viewed by the subject. The result was not a clear image, but a somewhat fuzzy speckle pattern. In this study, the researchers used hand-written letters.

Computer reads fMRI Scans of letters
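
As a rough illustration of this first stage, the sketch below fits a linear decoder from voxel responses back to pixel intensities using ridge regression. The data are synthetic stand-ins; only the rough dimensions (about 1,200 voxels, and letters assumed here to be 28×28 pixels) echo the article.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train, n_voxels, n_pixels = 300, 1200, 28 * 28

# Simulated stand-in for fMRI data: voxels respond linearly to pixels, plus noise.
true_encoding = 0.1 * rng.standard_normal((n_pixels, n_voxels))
train_imgs = rng.random((n_train, n_pixels))
train_voxels = train_imgs @ true_encoding + 0.5 * rng.standard_normal((n_train, n_voxels))

# Learn the voxel -> pixel map, then reconstruct an unseen image from its voxels.
decoder = Ridge(alpha=10.0).fit(train_voxels, train_imgs)

test_img = rng.random((1, n_pixels))
test_voxels = test_img @ true_encoding + 0.5 * rng.standard_normal((1, n_voxels))
speckle = decoder.predict(test_voxels).reshape(28, 28)   # fuzzy reconstruction
```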

"After this we did something new", says lead researcher Marcel van Gerven. "We gave the model prior knowledge: we taught it what letters look like. This improved the recognition of the letters enormously. The model compares the letters to determine which one corresponds most exactly with the speckle image, and then pushes the results of the image towards that letter. The result was the actual letter, a true reconstruction."

"Our approach is similar to how we believe the brain itself combines prior knowledge with sensory information. For example, you can recognize the lines and curves in this article as letters only after you have learned to read. And this is exactly what we are looking for: models that show what is happening in the brain in a realistic fashion. We hope to improve the models to such an extent that we can also apply them to the working memory or to subjective experiences such as dreams or visualisations. Reconstructions indicate whether the model you have created approaches reality."

In other words, the researchers claim to be very close to being able to read your mind with their technique. Such an understanding may also open up the possibility of implanting thoughts or knowledge or, on a more sinister level, controlling the actions of individuals without their consent.

"In our further research we will be working with a more powerful MRI scanner," explains Sanne Schoenmakers, who is working on a thesis about decoding thoughts. "Due to the higher resolution of the scanner, we hope to be able to link the model to more detailed images. We are currently linking images of letters to 1200 voxels in the brain; with the more powerful scanner we will link images of faces to 15,000 voxels."


SOURCE  Radboud University Nijmegen

By 33rd Square

Tuesday, November 20, 2012

 How To Create A Mind
In How to Create a Mind: The Secret of Human Thought Revealed, the bold futurist and author of The New York Times bestseller The Singularity Is Near explores the limitless potential of reverse engineering the human brain.
In How to Create a Mind: The Secret of Human Thought Revealed, the bold futurist and author of The New York Times bestseller The Singularity Is Near explores the limitless potential of reverse engineering the human brain. Ray Kurzweil is arguably today's most influential—and often controversial—futurist.

In How to Create a Mind, Kurzweil presents a provocative exploration of the most important project in human-machine civilization—reverse engineering the brain to understand precisely how it works and using that knowledge to create even more intelligent machines.

Kurzweil discusses how the brain functions, how the mind emerges from the brain, and the implications of vastly increasing the powers of our intelligence in addressing the world's problems. He thoughtfully examines emotional and moral intelligence and the origins of consciousness and envisions the radical possibilities of our merging with the intelligent technology we are creating. Certain to be one of the most widely discussed and debated science books of the year, How to Create a Mind is sure to take its place alongside Kurzweil's previous classics.
It is rare to find a book that offers unique and inspiring content on every page. How To Create A Mind achieves that and more. Ray has a way of tackling seemingly overwhelming challenges with an army of reason, in the end convincing the reader that it is within our reach to create non-biological intelligence that will soar past our own. This is a visionary work that is also accessible and entertaining.
-Rafael Reif, President of MIT
Kurzweil's new book on the mind is magnificent, timely, and solidly argued! His best so far!
-Marvin Minsky, Co-founder of the MIT Artificial Intelligence Lab

One of the eminent AI pioneers, Ray Kurzweil, has created a new book to explain the true nature of intelligence, both biological and non-biological. The book describes the human brain as a machine that can understand hierarchical concepts ranging from the form of a chair to the nature of humor. His important insights emphasize the key role of learning, both in the brain and in AI. He provides a credible roadmap for achieving the goal of superhuman intelligence, which will be necessary to solve the grand challenges of humanity.

-Raj Reddy, founder, Robotics Institute, Carnegie Mellon University
If you have ever wondered about how your mind works, read this book. Kurzweil's insights reveal key secrets underlying human thought and our ability to recreate it. This is an eloquent and thought-provoking work.

-Dean Kamen, founder of FIRST
Ray Kurzweil - Author of How To Create A Mind


Ray Kurzweil has been described as "the restless genius" by the Wall Street Journal, and "the ultimate thinking machine" by Forbes. Inc. magazine ranked him #8 among entrepreneurs in the United States, calling him the "rightful heir to Thomas Edison," and PBS included Ray as one of 16 "revolutionaries who made America," along with other inventors of the past two centuries.

As one of the leading inventors of our time, Kurzweil was the principal developer of the first CCD flat-bed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition. His website KurzweilAI.net has more than one million readers.

Among Kurzweil's many honors, he is the recipient of the $500,000 MIT-Lemelson Prize, the world's largest for innovation. In 1999, he received the National Medal of Technology, the nation's highest honor in technology, from President Clinton in a White House ceremony. And in 2002, he was inducted into the National Inventors Hall of Fame, established by the US Patent Office. He has received 19 honorary doctorates and honors from three U.S. presidents. Kurzweil is the author of five books, four of which have been national bestsellers.

The Age of Spiritual Machines has been translated into nine languages. His previous book, The Singularity Is Near, was a New York Times bestseller and has been translated into eight languages.

This talk was hosted by Boris Debic on behalf of Authors at Google.



SOURCE  GoogleTalks

By 33rd Square


Sunday, June 24, 2012



 Neuroscience
Dr. Henry Markram of the Human Brain Project proposes building a platform to catalyze efforts, integrate knowledge, and use supercomputers to simulate what is known about the brain, to predict gaps in our knowledge of the brain, and to test hypotheses about how it works.
Knowledge of the brain is highly fragmented and we have no way to prioritize the many experiments needed to fill the gaps in our understanding. It is time for a strategy of global collaboration, where scientists of all disciplines work together to solve this problem.

Dr. Henry Markram of the Human Brain Project proposes building a platform to catalyze efforts, integrate knowledge, and use supercomputers to simulate what is known about the brain, to predict gaps in our knowledge of the brain, and to test hypotheses about how it works.

Markram is the Coordinator of the Human Brain Project, a proposed international effort to understand the human brain. His research career started in medicine and neuroscience in South Africa and continued at the Weizmann Institute in Israel, at the NIH and UCSF in the United States, and at the Max Planck Institute in Germany. In 2002, he joined the EPFL, where he founded the Brain Mind Institute.

His career has spanned a wide spectrum of neuroscience research, from whole animal studies to gene expression in single cells. He is best known for his work on synaptic plasticity. In the past 15 years he has focused on the structure and function of neural microcircuits -- the basic components in the architecture of the brain. In 2005, he launched the Blue Brain Project: the first attempt to begin a systematic integration of all biological knowledge of the brain into unifying brain models for simulation on supercomputers. The strategies, technologies and methods developed in this pioneering work lie at the heart of the Human Brain Project.



SOURCE  TEDx Talks

By 33rd Square



 Artificial Brains
The Cornell – IBM SyNAPSE team has developed a key building block of a modular neuromorphic architecture: a neurosynaptic core, IBM Almaden scientist Dr. Dharmendra S Modha’s Cognitive Computing Blog reports.
Dharmendra Modha, manager of Cognitive Computing Systems at IBM, has shared on his blog a paper describing IBM Research's efforts to help shape the new age of cognitive computing through the development of a neuromorphic core processor.

Modha described IBM's research into Whole Brain Emulation and their plans to simulate the brain by 2018 at the 2008 Singularity Summit.

The core incorporates central elements from nanotechnology, neuroscience and supercomputing, including 256 leaky integrate-and-fire neurons, 1,024 axons, and 256×1,024 synapses implemented with an SRAM crossbar memory. It fits in a 4.2 mm² area using a 45 nm SOI process.
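
For intuition, here is a small software model of such a core (an illustrative NumPy sketch, not IBM's design): input axons drive leaky integrate-and-fire neurons through a binary synaptic crossbar, with an ordinary array standing in for the SRAM.

```python
import numpy as np

rng = np.random.default_rng(42)
N_AXONS, N_NEURONS = 1024, 256

# Binary crossbar: synapse[i, j] = 1 means axon i connects to neuron j.
crossbar = (rng.random((N_AXONS, N_NEURONS)) < 0.05).astype(float)
v = np.zeros(N_NEURONS)        # membrane potentials
LEAK, THRESHOLD = 0.9, 5.0     # arbitrary demo values

for step in range(100):
    spikes_in = (rng.random(N_AXONS) < 0.02).astype(float)  # input spikes
    v = LEAK * v + spikes_in @ crossbar   # leak, then integrate via crossbar
    fired = v >= THRESHOLD                # neurons that cross threshold spike
    v[fired] = 0.0                        # and reset their membrane potential
```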

A design prototype of the core was announced in August 2011, part of the SyNAPSE project, a DARPA program that aims to develop electronic neuromorphic (neuron-like) machine technology similar to the mammalian brain. Such artificial brains would be used in robots whose intelligence matches that of rats, cats, and ultimately even humans.

“One of the main obstacles holding back the widespread utility of low-power neuromorphic chips is the lack of a consistent software-hardware neural programming model, where neuron parameters and connections can be learned off-line to perform a task in software with a guarantee that the same task will run on power-efficient hardware,” the team said in an open-access paper.

The core replaces supercomputers and commodity chips (DSPs, GPUs, FPGAs), all of which require high power consumption, the authors say. The compact design is also compatible with mobile devices, and the core consumes just 45 pJ (picojoules) per spike.

“This is a flexible brain-like architecture capable of a wide array of real-time applications, and designed for the ultra-low power consumption and compact size of biological neural systems,” explained Modha.




SOURCE  KurzweilAI

By 33rd Square