
Wednesday, May 3, 2017



Artificial Intelligence

Researchers have created an artificial synapse capable of autonomous learning, a key component for artificial intelligence. The discovery opens the door to building large networks that operate in ways similar to the human brain.


Computer scientists often now take inspiration from the functioning of the brain in order to design increasingly intelligent machines. This principle is already at work in information technology, in the form of the algorithms used for completing certain tasks, such as image recognition; this, for instance, is what Facebook uses to identify photos.

Now, scientists from France and the United States have created an artificial synapse capable of autonomous learning, a component of artificial intelligence. They have also developed a physical model that explains this learning capacity. This discovery opens the way to creating a network of synapses and hence intelligent systems requiring less time and energy.

The results were published in the journal Nature Communications.

“People are interested in building artificial brain networks in the future,” said Bin Xu, a research associate in the University of Arkansas Department of Physics. “This research is a fundamental advance.”

The brain learns when synapses make connections among neurons. The connections vary in strength, with a strong connection correlating to a strong memory and improved learning. It is a concept called synaptic plasticity, and researchers see it as a model to advance machine learning.

A team of French scientists designed and built an artificial synapse, called a memristor, made of an ultrathin ferroelectric tunnel junction that can be tuned for conductivity by voltage pulses. The material is sandwiched between electrodes, and the variability in its conductivity determines whether a strong or weak connection is made between the electrodes.
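The conductance tuning described above can be pictured with a toy model: treat the ferroelectric layer as a population of domains, and mix an "on" and an "off" conductance according to what fraction of domains has been switched by voltage pulses. This is a minimal sketch under invented values for `G_ON`, `G_OFF`, and the per-pulse switching rate; it is not the published device model.

```python
# Toy model of a ferroelectric tunnel junction (FTJ) memristor synapse.
# G_ON, G_OFF, and the switching rate are illustrative values, not
# parameters of the device reported in the paper.

G_ON = 1.0e-3   # siemens: conductance if every domain is switched "on"
G_OFF = 1.0e-5  # siemens: conductance if every domain is "off"

class FTJSynapse:
    def __init__(self):
        self.s = 0.5  # fraction of domains in the "on" state (0..1)

    def conductance(self):
        # Parallel mixing of the two domain populations.
        return self.s * G_ON + (1.0 - self.s) * G_OFF

    def pulse(self, voltage, rate=0.2):
        # A positive pulse switches more domains "on" (potentiation);
        # a negative pulse switches them back "off" (depression).
        if voltage > 0:
            self.s += rate * (1.0 - self.s)
        else:
            self.s -= rate * self.s
        self.s = min(max(self.s, 0.0), 1.0)

syn = FTJSynapse()
for _ in range(5):
    syn.pulse(+2.0)  # repeated potentiating pulses strengthen the synapse
print(syn.conductance() > FTJSynapse().conductance())  # True
```

The analog, history-dependent value of `s` is what makes the device synapse-like: the conductance reached after a pulse train is the stored "weight".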

Xu and Laurent Bellaiche, distinguished professor in the University of Arkansas physics department, helped by providing a microscopic insight of how the device functions, which will enable future researchers to create larger, more powerful, self-learning networks.

Memristors are not new, but until now their working principles have not been well understood. The study provided a clear explanation of the physical mechanism underlying the artificial synapse. The University of Arkansas researchers conducted computer simulations that clarified the switching mechanism in the ferroelectric tunnel junctions, backing up the measurements conducted by the French scientists.

Research focused on artificial synapses is being conducted at laboratories across the globe. So far, the function of these devices is still not entirely understood. The researchers involved in this project have succeeded, for the first time, in developing a physical model able to predict how they function. This understanding of the process will make it possible to create more complex systems, such as a series of artificial neurons interconnected by memristors.

"Our simulations therefore emphasize the importance of a precise knowledge of the memristor dynamics, and therefore of its accurate description on the basis of a physical model," conclude the researchers.

These results could pave the way toward low-power hardware implementations of millions or billions of reliable and predictable artificial synapses, which could function in deep neural networks and other future brain-inspired computers.


SOURCE  University of Arkansas


By 33rd Square





Wednesday, April 5, 2017

Researchers Create Artificial Synapses that Learn


Neuromorphic Computing

Researchers have created artificial synapses, called memristors, capable of learning autonomously. This discovery potentially opens the way to creating a network of synapses, and hence intelligent systems requiring less time and energy than standard computers.


Researchers from the French National Center for Scientific Research (CNRS) and Thales, together with the Universities of Bordeaux, Paris-Sud, and Evry, have reportedly developed an artificial synapse capable of learning autonomously. They were also able to model the device, a step that is essential for developing more complex circuits.

The research has been published in Nature Communications.

In the artist's impression of the electronic synapse above, the particles represent electrons circulating through oxide, by analogy with neurotransmitters in biological synapses. The flow of electrons depends on the oxide's ferroelectric domain structure, which is controlled by electric voltage pulses.

One of the goals of biomimetics is to take inspiration from the functioning of the brain in order to design increasingly intelligent machines. This principle is already at work in information technology, in the form of the algorithms used for completing certain tasks, such as image recognition; this, for instance, is what Facebook uses to identify photos.

In today's standard computers, built on the von Neumann architecture, this procedure consumes a lot of energy. Vincent Garcia (Unité mixte de physique CNRS/Thales) and his colleagues have just taken a step forward in this area by creating, directly on a chip, an artificial synapse that is capable of learning. They have also developed a physical model that explains this learning capacity. This discovery opens the way to creating a network of synapses, and hence intelligent systems requiring less time and energy.

Our brain's learning process is linked to our synapses, which serve as connections between our neurons. The more the synapse is stimulated, the more the connection is reinforced and learning improved. Researchers took inspiration from this mechanism to design an artificial synapse, called a memristor.

This electronic nanocomponent consists of a thin ferroelectric layer sandwiched between two electrodes, and whose resistance can be tuned using voltage pulses similar to those in neurons. If the resistance is low the synaptic connection will be strong, and if the resistance is high the connection will be weak. This capacity to adapt its resistance enables the synapse to learn.
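The stimulate-to-strengthen behavior can be sketched as a toy Hebbian-style rule: whenever the two neurons fire together, a potentiation pulse lowers the synapse's resistance, strengthening the connection. The resistance range and update step are invented for illustration; the device's real dynamics are described by the physical model in the paper.

```python
# Illustrative Hebbian-style update for a memristive synapse:
# low resistance = strong connection, high resistance = weak connection.
# R_MIN, R_MAX, and the update step are made-up values for the sketch.

R_MIN, R_MAX = 1e3, 1e6  # ohms

def hebbian_step(resistance, pre_fired, post_fired, step=0.5):
    """Lower the resistance (strengthen the synapse) on coincident firing."""
    if pre_fired and post_fired:
        resistance *= step  # potentiation pulse halves the resistance
    return min(max(resistance, R_MIN), R_MAX)

r = 1e5
for _ in range(3):
    r = hebbian_step(r, pre_fired=True, post_fired=True)
print(r)  # 12500.0 -- repeated co-activation strengthened the synapse
```

In a real memristor the same effect is achieved physically, with voltage pulses nudging the device's resistance instead of a software update.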

Although research focusing on these artificial synapses is being developed at many other laboratories, the functioning of these devices remained largely unknown. The researchers have succeeded, for the first time, in developing a physical model able to predict how they function. This understanding of the process will make it possible to create more complex systems, such as a series of artificial neurons interconnected by these memristors.

The work has been part of the ULPEC H2020 European project, and this discovery will be used for real-time shape recognition using an innovative camera where the pixels remain inactive, except when they see a change in the angle of vision. The data processing procedure will require less energy, and will take less time to detect the selected objects.
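The camera behavior described above can be sketched as a simple event filter: compare successive frames and emit an event only for pixels whose values change noticeably, so downstream processing sees a sparse stream instead of full frames. The frames and threshold below are invented for illustration.

```python
# Minimal sketch of event-driven vision: pixels stay inactive and emit an
# "event" only when their value changes by more than a threshold.
# Frame contents and the threshold are illustrative, not ULPEC specifics.

def events_between(prev_frame, next_frame, threshold=10):
    """Yield (x, y, delta) only for pixels that changed noticeably."""
    for y, (row_a, row_b) in enumerate(zip(prev_frame, next_frame)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(b - a) >= threshold:
                yield (x, y, b - a)

frame1 = [[0, 0, 0],
          [0, 0, 0]]
frame2 = [[0, 50, 0],
          [0, 0, 0]]  # only one pixel brightened

print(list(events_between(frame1, frame2)))  # [(1, 0, 50)]
```

Because only one pixel changed, only one event is generated, which is why such a camera needs far less energy and time than one that re-reads every pixel of every frame.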


SOURCE  CNRS


By 33rd Square





Thursday, September 3, 2015

Neuromorphic Processor Breakthrough Could Mean Big Leap in Machine Learning Capability


Neuromorphic Computing


Knowm, a start-up company, has developed what it describes as the world's first adaptive neuromemristive processor. The device could transform machine learning applications, autonomous platforms, and data center operations.
 


A start-up company called Knowm says it will soon begin commercializing a state-of-the-art technique for building neuromorphic computing chips. Knowm claims that, having just achieved a major technological breakthrough, it should be able to push the technology into production within a few years.

The basis for Knowm's work is a piece of hardware called a memristor, which functions much like synapses do in the brain. Rather than committing information to a software program and traditional computing memory, memristors are able to "learn" by strengthening the electrical charge between two resistors (the "ristor" part of memristor), much as synapses strengthen connections between commonly used neurons.

Memristors can, in theory, make computer chips much smarter and also far more energy efficient. The human brain outperforms a supercomputer on about 20 watts, barely enough to run a dim light bulb; memristors could bring similarly dramatic changes to computer technology.

“This memristor enables efficient on-chip learning. We can do away with the supercomputers. We can create chips that are intrinsically adaptive,” Alex Nugent, Knowm’s founder and CEO, says.



The implications for robotics, driverless cars, and other devices are incredible. Memristors, especially the ones his company is working on, offer "a massive leap in efficiency" over traditional CPUs, GPUs, and other hardware now used to power artificial intelligence workloads.


Knowm's recently announced breakthrough is the development of a technique it calls bi-directional incremental learning, accomplished through a collaboration with a memristor expert from Boise State University. In short, this means the resistors, or synapses, can pass charge in both directions and therefore tweak the weight of their connections more accurately than existing memristor technologies, which send an electrical charge in only one direction.

The Knowm BS-AF-W Memristor, pictured above, is what Nugent calls “the ideal learning memristor.” It is capable of bi-directional incremental operation, which means that the resistance of the memristor can be nudged in both directions. The resistance is an analog to the weight of a synapse, and the process of learning comes about through nudging a device so that it hones itself onto a certain value.
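Bi-directional incremental operation can be illustrated with a toy update rule that nudges a stored weight up or down, in small steps, toward a target value; with a one-directional device, the downward nudges would be impossible and an overshoot could never be corrected. The step size and target below are illustrative, not Knowm device parameters.

```python
# Sketch of "bi-directional incremental" learning: the weight (an analog
# of the memristor's resistance) can be nudged in either direction, so a
# training loop can home in on a target value. Values are made up.

def nudge_toward(weight, target, step=0.05):
    """Move the weight one increment toward the target, either direction."""
    if abs(target - weight) <= step:
        return target  # close enough: snap onto the target
    return weight + step if target > weight else weight - step

w = 0.9
for _ in range(100):
    w = nudge_toward(w, 0.3)  # downward nudges, impossible one-directionally
print(round(w, 2))  # 0.3 -- converged from above
```

The same loop starting at `w = 0.1` would converge upward, which is the point: learning becomes a matter of honing each device onto a value rather than drifting it one way.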

Nugent said this is the final piece of the puzzle that he has been working on for a decade. Knowm already sells memristors and emulation kits for researchers to experiment with. If it’s able to attract the right funding and partnerships with chip fabricators, Nugent thinks his approach to super-smart, super-efficient computing could be available for demos in about two years and production after that, possibly as co-processors on traditional CPU-based motherboards.

“I want to do what brains can do,” he said. “They’re phenomenal enough.”

“Everything eventually reaches its physical limits. We are currently witnessing the end of Moore’s Law scaling in CMOS electronics just as we are finally gaining the computational capacity needed to demonstrate the potential of machine learning,” Nugent said in a press release. “Pursuing the same digital computing methodologies for machine learning systems keeps us from realizing the full potential of machine learning and artificial intelligence. Through AHaH Computing, Knowm is making hardware soft, uniting memory and processing for synaptic operations, and in the process helping to define the post-Moore’s Law era.”



SOURCE  Fortune


By 33rd Square



Thursday, May 7, 2015

Neural Network Chip Constructed With Memristors

 Neuromorphic Computing
Researchers have created a transistor-free metal-oxide memristor network that can learn to recognize imperfect 3 × 3 pixel black-and-white patterns as one of three letters of the alphabet. The approach could be scaled so that larger neuromorphic networks capable of more challenging tasks may be possible.





Researchers at the University of California, Santa Barbara, and Stony Brook University have, for the first time, created a neural network chip built using only memristors.

The study, published in the journal Nature, describes how they built their chip and what capabilities it has.

Memristors are electronic analog memory devices that are modeled on human neurons and synapses. Human consciousness, some believe, is in reality, nothing more than an advanced form of memory retention and processing, and it is analog, as opposed to computers, which of course are digital.

The concept for memristors was first worked out by University of California professor Leon Chua back in 1971, but it was not until 2008 that a team at Hewlett-Packard first built one. Since then, a lot of research has gone into studying the technology, but until now, no one had ever built a neural-network chip based exclusively on them.

Until recently, most neural networks have been theoretical or software-based. Google, Facebook, and IBM, for example, are all working on computer systems running such learning networks, mostly meant to pick faces out of a crowd or to answer questions phrased by a human.

The gains from such technology have been obvious, but the limiting factor is the hardware: as neural networks grow in size and complexity, they begin to tax the abilities of even the fastest computers.

Unlike other brain-inspired neuromorphic chips, which use the same silicon transistors and digital circuits that make up ordinary computer processors, the memristor-based chip is better suited to mimicking synapses, says Dmitri Strukov, an assistant professor at the University of California, Santa Barbara, who led work on the new memristor chip.

Many transistors and digital circuits are needed to represent a single synapse. By contrast, each of the 100 or so synapses on the UCSB chip is represented using only a single memristor.

The next step, according to experts, is to replace transistors with memristors. Each memristor is able to learn in ways similar to how neurons in the brain learn when presented with something new. Constructing the memristors on a chip would, of course, reduce the overhead needed to run the neural network.



The new chip, the team reports, was created using transistor-free metal-oxide memristor crossbars and represents a basic neural network able to perform just one task—to learn and recognize patterns in very simple 3 × 3 pixel black and white images. In the diagram at the top, the memristors are the yellow components.
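A crossbar like this behaves as a weight matrix: each crosspoint's conductance is one weight, and each output current is the dot product of the input voltages with a column of weights. The sketch below trains such a single-layer network on made-up 3 × 3 letter patterns with a delta rule; the patterns, learning rate, and training loop are illustrative, not the experiment's exact procedure.

```python
# Toy single-layer classifier, the kind of network a memristor crossbar
# implements: 9 pixel inputs plus a bias feed 3 outputs through a 10x3
# weight matrix (one "memristor" per crosspoint). Patterns are invented.

PATTERNS = {  # crude 3x3 stand-ins for three letters
    "z": [1, 1, 1, 0, 1, 0, 1, 1, 1],
    "v": [1, 0, 1, 1, 0, 1, 0, 1, 0],
    "n": [1, 1, 0, 1, 0, 1, 1, 0, 1],
}
CLASSES = list(PATTERNS)

def outputs(weights, pixels):
    # Each output "current" is a dot product of inputs with one column.
    x = pixels + [1]  # bias input
    return [sum(w * xi for w, xi in zip(col, x)) for col in weights]

def train(epochs=50, lr=0.1):
    weights = [[0.0] * 10 for _ in CLASSES]
    for _ in range(epochs):
        for ci, name in enumerate(CLASSES):
            outs = outputs(weights, PATTERNS[name])
            x = PATTERNS[name] + [1]
            for oi, out in enumerate(outs):
                err = (1.0 if oi == ci else 0.0) - out
                for wi in range(10):  # delta-rule nudge per crosspoint
                    weights[oi][wi] += lr * err * x[wi]
    return weights

w = train()
for ci, name in enumerate(CLASSES):
    outs = outputs(w, PATTERNS[name])
    assert outs.index(max(outs)) == ci  # each pattern maps to its class
print("classified all three patterns")
```

On the physical chip the dot products come for free from Ohm's and Kirchhoff's laws, and the weight nudges are voltage pulses applied to individual memristors.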

The experimental chip is an important step towards the creation of larger neural networks that tap the real power of memristors. It also opens the possibility of building computers in lockstep with advances in research into how exactly our neurons work at their most basic level.

"We believe that this demonstration is an important step towards the implementation of much larger and more complex memristive neuromorphic networks," state the researchers.

Commenting on the work, Robert Legenstein, an associate professor at Graz University of Technology in Austria, wrote: “If this design can be scaled up to large network sizes, it will affect the future of computing … Laptops, mobile phones and robots could include ultra-low-power neuromorphic chips that process visual, auditory and other types of sensory information.”


SOURCES  PhysOrg and MIT Technology Review

By 33rd Square

Wednesday, October 1, 2014

Researchers Create Nanoscale Memory Devices That Mimic The Human Brain

 Memory
Researchers have built a novel brain-inspired nanostructure that offers a new platform for the development of highly stable and reliable nanoscale memory devices.




Researchers from RMIT University have brought ultra-fast, nano-scale data storage within striking reach, using technology that mimics the human brain.

Project leader Dr Sharath Sriram, co-leader of the RMIT Functional Materials and Microsystems Research Group, said the nanometer-thin stacked structure was created using a thin film of functional oxide material more than 10,000 times thinner than a human hair.

"The thin film is specifically designed to have defects in its chemistry to demonstrate a 'memristive' effect - where the memory element's behavior is dependent on its past experiences," Dr Sriram said.

"With flash memory rapidly approaching fundamental scaling limits, we need novel materials and architectures for creating the next generation of non-volatile memory.

"The structure we developed could be used for a range of electronic applications - from ultrafast memory devices that can be shrunk down to a few nanometers, to computer logic architectures that replicate the versatility and response time of a biological neural network.

"While more investigation needs to be done, our work advances the search for next-generation memory technology that can replicate the complex functions of the human neural system - bringing us one step closer to the bionic brain."

The research relies on memristors, touted as a transformational replacement for current hard drive technologies such as Flash, SSD and DRAM.



Memristors have potential to be fashioned into non-volatile solid-state memory and offer building blocks for computing that could be trained to mimic synaptic interfaces in the human brain.

Hussein Nili, a PhD researcher and lead author of the paper published in Advanced Functional Materials, said: "The finding and the material used are significant as the stable memory effect arises from pathways in the oxide that are extremely small - about 60 nanometres.

"These can also be tuned and controlled by the application of pressure, which creates new opportunities for using these memory elements as sensors and actuators."

The research was part of a collaboration with Professor Dmitri Strukov from the University of California, Santa Barbara.

Supported by an Australian Research Council Discovery grant and RMIT's Platform Technologies Research Institute, the research is one of a range of innovative projects conducted by the Functional Materials and Microsystems Research Group.


SOURCE  RMIT University

By 33rd Square


Tuesday, September 23, 2014

What Will Computers Look Like in Ten Years?

 Computers
Frank Z. Wang, Professor in Future Computing and Head of the School of Computing at the University of Kent, recently discussed how computers will evolve over the next ten years.




Computer science has impacted many parts of our lives. Computer scientists craft the technologies that enable the digital devices we use every day and computing will be at the heart of future revolutions in business, science, and society.

In the talk below, given at this year's Science and Information Conference, Frank Z. Wang, Professor in Future Computing and Head of the School of Computing at the University of Kent, discusses how computers will evolve over the next decade. His research targets next-generation computing paradigms and their applications.

Some of these advances include cloud computing, grid computing, and the next version of the web, Internet 2.0. According to Wang, a developed cloud/grid computing platform could universally accelerate office, database, web, and media applications by a factor of up to ten.

Wang discusses how computing modeled after the brain is making inroads. He shows how memristor technology is proving to work after being theoretically postulated for over 40 years. Wang's work shows that in amoebae, memory may be captured in a type of biological memristor. "That is why we are in a better position to design the next generation of computers," states Wang.



He and his team have applied this work to neural network computers.  In the past, modelling computer systems directly on neurons did not make much sense, because each neuron could be connected to over 20,000 synapses.  Now, "thanks to the invention of the memristor, the invention opens a new way to revive traditional neural network computers," Wang says.

Apart from computers themselves, Wang comments that concepts and technologies developed within computer science are starting to have wide-ranging applications outside the subject. For instance, computer scientists recently proposed a theory of evolution, based on computer science, that reduces the perceived need for competition in evolution.

His work won an ACM/IEEE Super Computing finalist award. Wang also discusses research on Green Computing, Brain Computing and Future Computing.

Wang is the Professor in Future Computing and Head of School of Computing, University of Kent. Wang's research interests include cloud/grid computing, green computing, brain computing and future computing. He has been invited to deliver keynote speeches and invited talks to report his research worldwide, for example at Princeton University, Carnegie Mellon University, CERN, Hong Kong University of Sci. & Tech., Tsinghua University (Taiwan), Jawaharlal Nehru University, Aristotle University, and University of Johannesburg.

In 2004, he was appointed as Chair & Professor, Director of Centre for Grid Computing at CCHPCF (Cambridge-Cranfield High Performance Computing Facility). CCHPCF is a collaborative research facility in the Universities of Cambridge and Cranfield (with an investment size of £40 million). Prof Wang and his team have won an ACM/IEEE Super Computing finalist award. Prof Wang was elected as the Chairman (UK & Republic of Ireland Chapter) of the IEEE Computer Society in 2005. He is Fellow of British Computer Society. He has served the Irish Government High End Computing Panel for Science Foundation Ireland (SFI) and the UK Government EPSRC e-Science Panel.




SOURCE  SAI Conference

By 33rd Square