Showing posts with label Rajesh Rao.

Tuesday, April 11, 2017



Brain-Brain Interfaces

Neuroscientists have only just begun deciphering and decoding the mysteries of the human brain. Already, though, initial work has been done that may one day allow us to share information, thoughts and experiences through direct brain-brain interfaces.


Brain-to-brain interfaces are gradually moving from the realm of science fiction to the space of the laboratory.

Researcher Miguel Nicolelis, for instance, has demonstrated long-distance communication between the brains of animals. In these experiments, Nicolelis' team used an "encoder" rat in Brazil that was trained in a specific task, namely pressing a lever in its cage to earn a reward. A brain implant recorded activity from the rat's brain and converted it into an electrical signal that was delivered via neural link to the brain implant of a second "decoder" rat.


Rajesh Rao at the University of Washington and his team of researchers also performed what they believe was the first noninvasive human-to-human brain interface a few years ago, with one researcher able to send a brain signal via the Internet to control the hand motions of another person.

In humans, brain-brain interface technology remains in early development. The most advanced brain-to-brain interfaces will most likely require direct access to the brain, though the need for major invasive surgery could be alleviated as the technology evolves. One promising avenue is Elon Musk's neural lace: the prolific investor, inventor and entrepreneur recently announced the creation of a company, Neuralink, whose goal is to create minimally invasive brain implant technology.

Musk hopes that the technology may help us communicate with machines and artificial intelligence, but by extension, neural lace may also permit direct brain-brain communication as well.

The implications of the technology and its potential future uses are far broader, according to Anders Sandberg of the Future of Humanity Institute at Oxford University. "The main reason we are running the planet is that we are amazingly good at communicating and coordinating. Without that, although we are very smart animals, we would not dominate the planet."

This video from Galactic Public Archives explores brain-brain interfaces:


"Where is this going? We have no idea. We're just scientists," Nicolelis said at a TED talk. "We are paid to be children, to basically go to the edge and discover what is out there."







Tuesday, December 6, 2016

People Play Video Game Using Only Direct Brain Stimulation


Direct Brain Stimulation


Scientists have published the first demonstration of humans playing a simple, two-dimensional computer game using only input from direct brain stimulation—without relying on any usual sensory cues from sight, hearing or touch.

University of Washington researchers have taken a first step in showing how humans can interact with virtual realities via direct brain stimulation.

In a paper published online in Frontiers in Robotics and AI, they describe their demonstration of humans playing a simple, two-dimensional computer game using only input from direct brain stimulation — without relying on any usual sensory cues from sight, hearing or touch.

In the game, the subjects had to navigate 21 different mazes, choosing at each step whether to move forward or down based on whether they sensed a visual stimulation artifact called a phosphene, perceived as a blob or bar of light. To signal which direction to move, the researchers generated a phosphene through transcranial magnetic stimulation, a well-known technique that uses a magnetic coil placed near the skull to directly and noninvasively stimulate a specific area of the brain.
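
In software terms, each maze step is a one-bit decision: a phosphene signals an obstacle ahead, so the player moves down; no phosphene means the path is clear. Below is a minimal sketch of that decision rule; the function names and the boolean "phosphene perceived" input are hypothetical stand-ins for the subject's percept, not the team's actual task code.

```python
# Minimal sketch of the binary, phosphene-guided maze step described
# above. All names here are hypothetical illustrations; the study's
# actual task software is not published in this article.

def next_move(phosphene_perceived: bool) -> str:
    """One bit per step: a phosphene means an obstacle ahead, so the
    subject moves down; no phosphene means the path is clear."""
    return "down" if phosphene_perceived else "forward"

def run_maze(phosphene_reports):
    """Walk a maze from a sequence of subject reports (True/False)."""
    return [next_move(report) for report in phosphene_reports]

# Example: the subject perceives phosphenes on the second and fourth steps.
print(run_maze([False, True, False, True]))
# ['forward', 'down', 'forward', 'down']
```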

“The way virtual reality is done these days is through displays, headsets and goggles, but ultimately your brain is what creates your reality,” said senior author Rajesh Rao, UW professor of Computer Science & Engineering and director of the Center for Sensorimotor Neural Engineering.

“The fundamental question we wanted to answer was: Can the brain make use of artificial information that it’s never seen before that is delivered directly to the brain to navigate a virtual world or do useful tasks without other sensory input? And the answer is yes.”

The five test subjects made the right moves in the mazes 92 percent of the time when they received the input via direct brain stimulation, compared to 15 percent of the time when they lacked that guidance.

The absence or presence of phosphenes – visual artifacts that can be created through direct brain stimulation – told the test subjects whether to move forward or down. (Image: University of Washington)

"The way virtual reality is done these days is through displays, headsets and goggles, but ultimately your brain is what creates your reality."
The simple game demonstrates one way that novel information from artificial sensors or computer-generated virtual worlds can be successfully encoded and delivered noninvasively to the human brain to solve useful tasks. It employs a technology commonly used in neuroscience to study how the brain works — transcranial magnetic stimulation — to instead convey actionable information to the brain.

The test subjects also got better at the navigation task over time, suggesting that they were able to learn to better detect the artificial stimuli.

“We’re essentially trying to give humans a sixth sense,” said lead author Darby Losey, a graduate in computer science and neurobiology who now works as a staff researcher for the Institute for Learning & Brain Sciences (I-LABS).  “So much effort in this field of neural engineering has focused on decoding information from the brain. We’re interested in how you can encode information into the brain.”

"These results suggest that humans can learn to utilize information delivered non-invasively to their brains to solve tasks that cannot be solved using their natural senses. Exploring this emerging field of human sensory augmentation, with its technological as well as ethical and social implications, remains an active area of research," conclude the researchers.


The initial experiment used binary information — whether a phosphene was present or not — to let the game players know whether there was an obstacle in front of them in the maze. In the real world, even that type of simple input could help blind or visually impaired individuals navigate.

“The technology is not there yet — the tool we use to stimulate the brain is a bulky piece of equipment that you wouldn’t carry around with you,” said co-author Andrea Stocco, a UW assistant professor of psychology and I-LABS research scientist. “But eventually we might be able to replace the hardware with something that’s amenable to real world applications.”

The testers successfully navigated an average of 92 percent of the moves when they received input via direct brain stimulation to guide them through the experimental mazes (blue), versus only 15 percent of the steps in the control mazes (red), when they received no such input. (Image: University of Washington)

Together with other partners from outside UW, members of the research team have co-founded Neubay, a startup company aimed at commercializing their ideas and introducing neuroscience and artificial intelligence (AI) techniques that could make virtual-reality, gaming and other applications better and more engaging.

The team is currently investigating how altering the intensity and location of direct brain stimulation can create more complex visual and other sensory perceptions, which are currently difficult to replicate in augmented or virtual reality.

“We look at this as a very small step toward the grander vision of providing rich sensory input to the brain directly and noninvasively,” said Rao. “Over the long term, this could have profound implications for assisting people with sensory deficits while also paving the way for more realistic virtual reality experiences.”





SOURCE  University of Washington





Saturday, January 30, 2016

Scientists Now Able to Decode Neural Signals Almost as They Happen


Mind Reading

Researchers using electrodes in patients’ temporal lobes have found that the signals carry information that lets scientists predict which objects patients are seeing, almost in real time.



Using electrodes implanted in the temporal lobes of awake patients, scientists have decoded brain signals at nearly the speed of perception. Further, analysis of patients’ neural responses to images of faces and houses enabled the scientists to subsequently predict which images the patients were viewing, and when, with better than 95 percent accuracy.

The research has been published in PLOS Computational Biology.

University of Washington computational neuroscientist Rajesh Rao and UW Medicine neurosurgeon Jeff Ojemann, working with their student Kai Miller and with colleagues in Southern California and New York, conducted the study.

Rao has also attracted attention recently for his experiments in the field of brain-to-brain communication.

“We were trying to understand, first, how the human brain perceives objects in the temporal lobe, and second, how one could use a computer to extract and predict what someone is seeing in real time,” explained Rao, a UW professor of computer science and engineering. Rao also directs the National Science Foundation’s Center for Sensorimotor Neural Engineering, headquartered at UW.
In the image above, the numbers 1-4 denote electrode placements in the temporal lobe and the neural responses of the two signal types being measured.

“Clinically, you could think of our result as a proof of concept toward building a communication mechanism for patients who are paralyzed or have had a stroke and are completely locked-in,” he said.

The study centered on seven epilepsy patients receiving care at Harborview Medical Center in Seattle. Each was experiencing epileptic seizures not relieved by medication, Ojemann said, so each had undergone surgery in which their brains’ temporal lobes were implanted – for about a week – with electrodes to try to locate the seizures’ focal points.

“They were going to get the electrodes no matter what; we were just giving them additional tasks to do during their hospital stay while they are otherwise just waiting around,” Ojemann said.

In the experiment, the electrodes from multiple temporal-lobe locations were connected to powerful computational software that extracted two characteristic properties of the brain signal: “event-related potentials” and “broadband spectral changes.”

Rao characterized the former as likely arising from “hundreds of thousands of neurons being co-activated when an image is first presented,” and the latter as “continued processing after the initial wave of information.”
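
As a rough illustration of what the two features look like computationally, the hedged sketch below averages time-locked voltage across trials for the event-related potential and takes log band power in a high-frequency range for the broadband change, matching the 1,000-samples-per-second rate described below. The filter band and all function names are illustrative assumptions, not the study's published pipeline.

```python
# Hedged sketch: extracting the two signal features described above from
# one electrode's trials. The band edges, window and names are
# assumptions, not the study's actual analysis code.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # signals were sampled and digitized 1,000 times per second

def event_related_potential(trials: np.ndarray) -> np.ndarray:
    """Average the time-locked voltage across trials
    (trials: n_trials x n_samples array for one electrode)."""
    return trials.mean(axis=0)

def broadband_power(trial: np.ndarray, low=70, high=150) -> float:
    """Log power in a high-frequency band, a common proxy for a
    'broadband spectral change' (band edges assumed here)."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, trial)
    return float(np.log(np.mean(filtered ** 2)))
```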

The subjects, watching a computer monitor, were shown a random sequence of pictures of human faces and houses, interspersed with blank gray screens. Their task was to watch for an image of an upside-down house. 

Neuroscientist Rajesh Rao and neurosurgeon Jeff Ojemann

“We got different responses from different (electrode) locations; some were sensitive to faces and some were sensitive to houses,” Rao said.

"Our result as a proof of concept toward building a communication mechanism for patients who are paralyzed or have had a stroke and are completely locked-in."
The software sampled and digitized the brain signals 1,000 times per second to extract their characteristics. It also analyzed the data to determine which combination of electrode locations and signal types correlated best with what each subject actually saw.

In that way it yielded highly predictive information.

By training an algorithm on the subjects' responses to the known set of images, the researchers could examine the brain signals representing the final third of the images, whose labels were unknown to them, and predict with 96 percent accuracy whether and when (within 20 milliseconds) the subjects were seeing a house, a face or a gray screen.
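
A standard classifier makes the train-then-predict procedure concrete: fit on the labeled trials, then label the held-out final third. The sketch below uses scikit-learn's linear discriminant analysis purely as an illustrative stand-in, since the article does not name the algorithm, and the label encoding is assumed.

```python
# Hedged sketch of the train/predict split described above. The choice
# of classifier (LDA) and the label encoding are assumptions; the
# article names no specific algorithm.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_and_predict(features: np.ndarray, labels: np.ndarray):
    """Train on the first two-thirds of trials, predict the final third.
    features: (n_trials, n_features) combining ERP and broadband values
    labels:   0 = gray screen, 1 = face, 2 = house (assumed encoding)"""
    split = (2 * len(labels)) // 3
    clf = LinearDiscriminantAnalysis()
    clf.fit(features[:split], labels[:split])
    predictions = clf.predict(features[split:])
    accuracy = float(np.mean(predictions == labels[split:]))
    return predictions, accuracy
```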

This accuracy was attained only when event-related potentials and broadband changes were combined for prediction, which suggests they carry complementary information.

“Traditionally scientists have looked at single neurons,” Rao said. “Our study gives a more global picture, at the level of very large networks of neurons, of how a person who is awake and paying attention perceives a complex visual object.”

The scientists' technique, he said, is a steppingstone for brain mapping, in that it could be used to identify in real time which locations of the brain are sensitive to particular types of information.

“The computational tools that we developed can be applied to studies of motor function, studies of epilepsy, studies of memory. The math behind it, as applied to the biological, is fundamental to learning,” Ojemann said.


SOURCE  The University of Washington




Thursday, November 6, 2014

Brain-Brain Communication Experiment Repeated and Improved

 Brain-Brain Communication
Researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team's initial demonstration a year ago. The researchers were able to transmit the signals from one person's brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.






University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published in the journal PLOS ONE.

In the photo above, UW students Darby Losey, left, and Jose Ceballos are positioned in two different buildings on campus, as they would be during a brain-to-brain interface demonstration. The sender, left, thinks about firing a cannon at various points throughout a computer game. That signal is sent over the Web directly to the brain of the receiver, right, whose hand hits a touchpad to fire the cannon.

“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”


Collaborator Rajesh Rao, a UW professor of computer science and engineering, is the lead author on this work.

The research team combined two kinds of noninvasive instruments and fine-tuned software to connect two human brains in real time. The process is fairly straightforward. One participant is hooked to an electroencephalography machine that reads brain activity and sends electrical pulses via the Web to the second participant, who is wearing a swim cap with a transcranial magnetic stimulation coil placed near the part of the brain that controls hand movements.

Using this setup, one person can send a command to move the hand of the other by simply thinking about that hand movement.
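
Conceptually, the sender side reduces to a detect-and-transmit loop: classify the EEG for imagined hand movement and, on a detection, send a one-bit "fire" command over the network to the receiver's stimulator. The sketch below is a hypothetical illustration of that flow; the socket protocol, threshold and helper names are assumptions, not the UW team's code.

```python
# Hedged sketch of the sender-side loop: EEG in, one-bit "fire" command
# out over the network. The socket protocol, threshold and helper names
# are hypothetical illustrations of the flow described above.
import socket

THRESHOLD = 0.8  # assumed confidence cutoff for imagined movement

def sender_loop(eeg_stream, receiver_host: str, receiver_port: int,
                detect_motor_imagery):
    """eeg_stream yields raw EEG windows; detect_motor_imagery returns
    a 0-1 confidence that the sender imagined moving their hand."""
    with socket.create_connection((receiver_host, receiver_port)) as conn:
        for window in eeg_stream:
            if detect_motor_imagery(window) > THRESHOLD:
                conn.sendall(b"FIRE\n")  # receiver side triggers the TMS pulse
```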

The UW study involved three pairs of participants. Each pair included a sender and a receiver with different roles and constraints. They sat in separate buildings on campus about a half mile apart and were unable to interact with each other in any way – except for the link between their brains.

"The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology."


Each sender was in front of a computer game in which he or she had to defend a city by firing a cannon and intercepting rockets launched by a pirate ship. But because the senders could not physically interact with the game, the only way they could defend the city was by thinking about moving their hand to fire the cannon.

Across campus, each receiver sat wearing headphones in a dark room – with no ability to see the computer game – with the right hand positioned over the only touchpad that could actually fire the cannon. If the brain-to-brain interface was successful, the receiver’s hand would twitch, pressing the touchpad and firing the cannon that was displayed on the sender’s computer screen across campus.

Researchers found that accuracy varied among the pairs, ranging from 25 to 83 percent. Misses mostly were due to a sender failing to accurately execute the thought to send the “fire” command. The researchers also were able to quantify the exact amount of information that was transferred between the two brains.
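
One standard way to quantify the information carried by such a binary link is to model it as a noisy one-bit channel and compute capacity from the observed accuracy. The sketch below applies the textbook binary symmetric channel formula to the reported accuracy range; it illustrates the idea and is not necessarily the exact metric the paper used.

```python
# Hedged sketch: bits per trial for a one-bit link with accuracy p,
# via the binary symmetric channel capacity C = 1 - H(p). A textbook
# measure, not necessarily the paper's exact metric.
from math import log2

def bits_per_trial(accuracy: float) -> float:
    p = accuracy
    if p in (0.0, 1.0):
        return 1.0
    entropy = -p * log2(p) - (1 - p) * log2(1 - p)
    return 1.0 - entropy

for acc in (0.25, 0.83):  # the reported range across pairs
    print(f"accuracy {acc:.0%}: {bits_per_trial(acc):.2f} bits/trial")
# accuracy 25%: 0.19 bits/trial; accuracy 83%: 0.34 bits/trial
```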

Another research team from the company Starlab in Barcelona, Spain, recently published results in the same journal showing direct communication between two human brains, but that study only tested one sender brain instead of different pairs of study participants and was conducted offline instead of in real time over the Web.


Now, with a new grant from the W.M. Keck Foundation, the UW research team is taking the work a step further in an attempt to decode and transmit more complex brain processes.

The project could also eventually lead to “brain tutoring,” in which knowledge is transferred directly from the brain of a teacher to a student.

“Imagine someone who’s a brilliant scientist but not a brilliant teacher. Complex knowledge is hard to explain – we’re limited by language,” said co-author Chantel Prat, a faculty member at the Institute for Learning & Brain Sciences and a UW assistant professor of psychology.  The technology may even be the precursor of a neural bridge, like the one depicted in Pacific Rim.



SOURCE  University of Washington


Wednesday, August 28, 2013

First Human Brain-To-Brain Interface Demonstrated

 Brain-To-Brain Interfaces
Researchers have performed what they believe is the first noninvasive human-to-human brain interface, with one researcher able to send a brain signal via the Internet to control the hand motions of a colleague.




University of Washington researchers have performed what they believe is the first noninvasive human-to-human brain interface, with one researcher able to send a brain signal via the Internet to control the hand motions of a fellow researcher.

Using electrical brain recordings and a form of magnetic stimulation, Rajesh Rao sent a brain signal to Andrea Stocco on the other side of the UW campus, causing Stocco's finger to move on a keyboard.

While researchers at Duke University have demonstrated brain-to-brain communication between two rats, and Harvard researchers have demonstrated it between a human and a rat, Rao and Stocco believe this is the first demonstration of human-to-human brain interfacing.

"The Internet was a way to connect computers, and now it can be a way to connect brains," Stocco said. "We want to take the knowledge of a brain and transmit it directly from brain to brain."

The researchers captured the full demonstration on video recorded in both labs. The video is available at the end of this story.

Rao, a UW professor of computer science and engineering, has been working on brain-computer interfacing (BCI) in his lab for more than 10 years and has just published a textbook on the subject, Brain-Computer Interfacing.

In 2011, spurred by the rapid advances in BCI technology, he believed he could demonstrate the concept of human brain-to-brain interfacing. So he partnered with Stocco, a UW research assistant professor in psychology at the UW's Institute for Learning & Brain Sciences.

The cycle of the experiment. Brain signals from the “Sender” are recorded. When the computer detects imagined hand movements, a “fire” command is transmitted over the Internet to the TMS machine, which causes an upward movement of the right hand of the “Receiver.” This usually results in the “fire” key being hit.
Image Source: University of Washington

On Aug. 12, Rao sat in his lab wearing a cap with electrodes hooked up to an electroencephalography machine, which reads electrical activity in the brain. Stocco was in his lab across campus wearing a purple swim cap marked with the stimulation site for the transcranial magnetic stimulation coil that was placed directly over his left motor cortex, which controls hand movement.

The team had a Skype connection set up so the two labs could coordinate, though neither Rao nor Stocco could see the Skype screens.

Rao looked at a computer screen and played a simple video game with his mind. When he was supposed to fire a cannon at a target, he imagined moving his right hand (being careful not to actually move his hand), causing a cursor to hit the "fire" button. Almost instantaneously, Stocco, who wore noise-canceling earbuds and wasn't looking at a computer screen, involuntarily moved his right index finger to push the space bar on the keyboard in front of him, as if firing the cannon. Stocco compared the feeling of his hand moving involuntarily to that of a nervous tic.

"It was both exciting and eerie to watch an imagined action from my brain get translated into actual action by another brain," Rao said. "This was basically a one-way flow of information from my brain to his. The next step is having a more equitable two-way conversation directly between the two brains."

The technologies used by the researchers for recording and stimulating the brain are both well-known. Electroencephalography, or EEG, is routinely used by clinicians and researchers to record brain activity noninvasively from the scalp. Transcranial magnetic stimulation, or TMS, is a noninvasive way of delivering stimulation to the brain to elicit a response. Its effect depends on where the coil is placed; in this case, it was placed directly over the brain region that controls a person's right hand. By activating these neurons, the stimulation convinced the brain that it needed to move the right hand.

Computer science and engineering undergraduates Matthew Bryan, Bryan Djunaedi, Joseph Wu and Alex Dadgar, along with bioengineering graduate student Dev Sarma, wrote the computer code for the project, translating Rao's brain signals into a command for Stocco's brain.

"Brain-computer interface is something people have been talking about for a long, long time," said Chantel Prat, assistant professor in psychology at the UW's Institute for Learning & Brain Sciences, and Stocco's wife and research partner who helped conduct the experiment. "We plugged a brain into the most complex computer anyone has ever studied, and that is another brain."

At first blush, this breakthrough brings to mind all kinds of science fiction scenarios. Stocco jokingly referred to it as a "Vulcan mind meld." But Rao cautioned this technology only reads certain kinds of simple brain signals, not a person's thoughts. And it doesn't give anyone the ability to control your actions against your will.

Both researchers were in the lab wearing highly specialized equipment and under ideal conditions. They also had to obtain and follow a stringent set of international human-subject testing rules to conduct the demonstration.

"I think some people will be unnerved by this because they will overestimate the technology," Prat said. "There's no possible way the technology that we have could be used on a person unknowingly or without their willing participation."

Rao and Stocco next plan to conduct an experiment that would transmit more complex information from one brain to the other. If that works, they then will conduct the experiment on a larger pool of subjects.




SOURCE  University of Washington



Thursday, June 13, 2013

BCI

 Brain-Computer Interfaces
University of Washington researchers have demonstrated that when humans use a brain-computer interface, the brain behaves much like it does when completing simple motor skills such as kicking a ball, typing or waving a hand. Learning to control a robotic arm or a prosthetic limb could become second nature for people who are paralyzed.






Small electrodes placed on or inside the brain allow patients to interact with computers or control robotic limbs simply by thinking about how to execute those actions. This technology could improve communication and daily life for a person who is paralyzed or has lost the ability to speak from a stroke or neurodegenerative disease.

Now, University of Washington researchers have demonstrated that when humans use this technology – called a brain-computer interface – the brain behaves much like it does when completing simple motor skills such as kicking a ball, typing or waving a hand. Learning to control a robotic arm or a prosthetic limb could become second nature for people who are paralyzed.

“What we’re seeing is that practice makes perfect with these tasks,” said Rajesh Rao, a UW professor of computer science and engineering and a senior researcher involved in the study. “There’s a lot of engagement of the brain’s cognitive resources at the very beginning, but as you get better at the task, those resources aren’t needed anymore and the brain is freed up.”

Related articles
Rao and UW collaborators Jeffrey Ojemann, a professor of neurological surgery, and Jeremiah Wander, a doctoral student in bioengineering, published their results online June 10 in the Proceedings of the National Academy of Sciences.

In this study, seven people with severe epilepsy were hospitalized for a monitoring procedure that tries to identify where in the brain seizures originate. Physicians cut through the scalp, drilled into the skull and placed a thin sheet of electrodes directly on top of the brain. While they were watching for seizure signals, the researchers also conducted this study.

The patients were asked to move a mouse cursor on a computer screen by using only their thoughts to control the cursor’s movement. Electrodes on their brains picked up the signals directing the cursor to move, sending them to an amplifier and then a laptop to be analyzed. Within 40 milliseconds, the computer calculated the intentions transmitted through the signal and updated the movement of the cursor on the screen.
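
The loop described here (record, amplify, decode, redraw within 40 milliseconds) can be sketched as a simple update cycle. Everything in the snippet below, including the linear velocity decoder and the function names, is an assumed illustration of the described pipeline, not the lab's software.

```python
# Hedged sketch of the ~40 ms record -> decode -> update cursor cycle.
# All names and the linear decoder are illustrative assumptions.
import time
import numpy as np

TICK = 0.040  # the article reports a cursor update within 40 milliseconds

def decode_velocity(signals: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Map one window of electrode signals (n_channels x n_samples) to a
    2-D cursor velocity with an assumed linear decoder (n_channels x 2)."""
    features = signals.mean(axis=1)  # crude per-channel feature
    return features @ weights        # (2,) velocity in screen units

def control_loop(read_window, weights, draw_cursor):
    """read_window supplies the next EEG window; draw_cursor redraws."""
    position = np.zeros(2)
    while True:
        start = time.monotonic()
        position += decode_velocity(read_window(), weights)
        draw_cursor(position)
        # sleep out the remainder of the 40 ms tick
        time.sleep(max(0.0, TICK - (time.monotonic() - start)))
```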

Researchers found that when patients started the task, a lot of brain activity was centered in the prefrontal cortex, an area associated with learning a new skill. But often after as little as 10 minutes, frontal brain activity lessened, and the brain signals transitioned to patterns similar to those seen during more automatic actions.

“Now we have a brain marker that shows a patient has actually learned a task,” Ojemann said. “Once the signal has turned off, you can assume the person has learned it.”

While researchers have demonstrated success in using brain-computer interfaces in monkeys and humans, this is the first study that clearly maps the neurological signals throughout the brain. The researchers were surprised at how many parts of the brain were involved.

“We now have a larger-scale view of what’s happening in the brain of a subject as he or she is learning a task,” Rao said. “The surprising result is that even though only a very localized population of cells is used in the brain-computer interface, the brain recruits many other areas that aren’t directly involved to get the job done.”

Several types of brain-computer interfaces are being developed and tested. The least invasive is a device placed on a person’s head that can detect weak electrical signatures of brain activity. Basic commercial gaming products are on the market, but this technology isn’t very reliable yet because signals from eye blinking and other muscle movements interfere too much.

A more invasive alternative is to surgically place electrodes inside the brain tissue itself to record the activity of individual neurons. Researchers at Brown University and the University of Pittsburgh have demonstrated this in humans as patients, unable to move their arms or legs, have learned to control robotic arms using the signal directly from their brain.

The UW team tested electrodes on the surface of the brain, underneath the skull. This allows researchers to record brain signals at higher frequencies and with less interference than measurements from the scalp. A future wireless device could be built to remain inside a person’s head for a longer time to be able to control computer cursors or robotic limbs at home.

“This is one push as to how we can improve the devices and make them more useful to people,” Wander said. “If we have an understanding of how someone learns to use these devices, we can build them to respond accordingly.”


SOURCE  University of Washington
