Neural Network Teaches Itself Speech By Talking to People

Saturday, November 14, 2015


Researchers have created a cognitive model, made up of two million interconnected artificial neurons, that can learn to communicate in human language starting from a blank slate. The system learns solely through communication with a person.


Researchers from the University of Sassari and the University of Plymouth have developed a cognitive model able to learn to communicate using human language, starting from a totally blank slate, solely by communicating with a person. The model is called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning).

The project has been published in the international scientific journal PLOS ONE. This research sheds light on the neural processes that underlie the development of language.

Scientists have long known that in the human brain there are about one hundred billion neurons that communicate by means of electrical signals. Over the years they have learned a lot about the mechanisms of production and transmission of electrical signals among neurons. There are also experimental techniques, such as functional magnetic resonance imaging, which have led to an understanding of which parts of the brain are most active when we are engaged in different cognitive activities.

Despite this work, detailed knowledge of how a single neuron works and of the functions of the various parts of the brain has not been enough to explain how the mass of neurons in our brains can work together to perform complex cognitive tasks.

In artificial intelligence research, there is no evidence that algorithmic programs with set rules of behavior exist in our brains. In fact, many researchers today believe that our brain develops higher cognitive skills simply by interacting with the environment, starting from very little innate knowledge. The ANNABELL model appears to confirm this perspective.

"Our work emphasizes that the decision processes operated by the central executive are not based on pre-coded rules," write the researchers. Instead, they found that these are statistical decision processes, learned through exploration-reward mechanisms.

"A neural architecture is suitable for modeling the development of the procedural knowledge that determines those decision processes," they conclude.

ANNABELL does not have pre-coded language knowledge; it learns only through communication with a human interlocutor, thanks to two fundamental mechanisms, which are also present in the biological brain: synaptic plasticity and neural gating.

Synaptic plasticity is the ability of the connection between two neurons to increase its efficiency when the two neurons are often active simultaneously, or nearly simultaneously. This mechanism is essential for learning and for long-term memory.
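The Hebbian principle described above — connections strengthen when two neurons fire together — can be sketched in a few lines. This is a toy illustration of the general mechanism, not ANNABELL's actual code; the function name and learning rate are assumptions for the example.

```python
# Toy sketch of Hebbian synaptic plasticity (an illustration,
# not ANNABELL's implementation): a connection weight grows when
# the pre- and post-synaptic neurons are active at the same time.
def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen the weight in proportion to coincident activity."""
    return w + lr * pre * post

w = 0.5
# Two neurons repeatedly active together: the synapse strengthens.
for _ in range(5):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))  # 1.0
```

When either neuron is silent (`pre` or `post` is 0), the weight is unchanged — capturing why this rule supports long-term memory of repeated co-occurrences.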

Neural gating mechanisms are based on the ability of certain neurons (called bistable neurons) to behave as switches that can be turned "on" or "off" by a control signal coming from other neurons. When turned on, the bistable neurons transmit the signal from one part of the brain to another; otherwise they block it. Thanks to synaptic plasticity, the model learns to control the signals that open and close these neural gates, and thereby to control the flow of information among different areas.
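The gating behavior can also be sketched minimally: a gate, switched by a control signal, either passes activity between two areas or blocks it. Again, this is a hedged toy example under assumed names, not the model's real architecture.

```python
import numpy as np

# Toy sketch of neural gating (an illustration, not ANNABELL's
# implementation): a bistable "gate" either transmits a signal
# from one area to another or blocks it, depending on a control input.
def gate(signal, control_on):
    """Pass the signal through only when the gate is switched on."""
    return signal if control_on else np.zeros_like(signal)

activity = np.array([0.2, 0.9, 0.4])
print(gate(activity, control_on=True))   # transmitted unchanged
print(gate(activity, control_on=False))  # blocked: all zeros
```

Learning which gates to open in which context — via the plasticity rule — is what lets such a system route information flexibly among its areas.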

The ANNABELL cognitive model has been validated using a database of about 1,500 input sentences based on the literature on early language development. The model responded by producing a total of about 500 output sentences containing nouns, verbs, adjectives, pronouns, and other word classes, demonstrating a wide range of capabilities in human language processing.

The current version of ANNABELL sets the stage for subsequent experiments on the fluidity of the brain and its robustness in responding to noisy or altered input signals, the researchers say. They suggest that adding sensorimotor knowledge to the system, through visual input and action capabilities, would extend the model to handle the developmental stages of the grounding and acquisition of language.

SOURCE  Neuroscience News


By 33rd Square

