Showing posts with label neuroscience.

Wednesday, July 26, 2017

The Future of Medicine is filled with Laser Technology


Once considered nothing more than science fiction, laser technology now dominates certain arenas of medicine.


Although it is somewhat ironic, the foundation for most surgical procedures is to create additional trauma before healing the trauma for which the patient first sought help. For example, to replace a joint damaged in an accident, the surgeon must cut through skin, muscle and cartilage to remove the damaged joint and replace it with a prosthetic. It was perhaps this paradox that led to the quest for a solution to conventional procedures and the eventual development of laser treatment. The latter is not as traumatic, invasive or painful as traditional surgical processes.

Understanding Laser Treatment

Laser therapy is the use of laser energy in a non-invasive manner to create a photochemical response in dysfunctional or damaged tissues. This type of treatment can accelerate recovery from a broad range of chronic and acute conditions, as well as reduce inflammation and alleviate pain. According to rehabilitation therapists, the primary objective of treatment for most patients with debilitating, painful conditions is to improve mobility and function. Laser treatment is a surgery-free, drug-free avenue through which to reach this goal. In addition, laser surgery has replaced conventional surgery in many areas of medicine.

In 2013, neuroscientists from Harvard University, working in conjunction with researchers from the University of Michigan, revealed in a published study the benefits and effectiveness of a cutting-edge, laser-based “scalpel” that relies on something called Raman scattering. This phenomenon uses light to identify brain tumors, which contrast with the cellular structure of the surrounding tissue. This is just one example of technological advancements that may revolutionize medical procedures in the future.

The Use of Lasers for Non-Catastrophic Illnesses

Advancements in laser technology are not limited to severe illnesses or major operations. In fact, the majority of society’s ailments are everyday struggles with manageable conditions, such as the regulation of insulin in diabetics, poor vision and congenital baldness. The use of laser treatment has also led to one of the greatest advancements in eye surgery, Laser-Assisted in situ Keratomileusis, more commonly known as LASIK, which restores normal vision to nearsighted or farsighted individuals.


The Role of Laser Treatment in the Future

The future of Class IV Laser Treatment is seemingly very bright, even though certain advances are still years away. One such example is transparent skull implants, which researchers are currently studying. Scientists are attempting to determine whether a laser light can be shined into a person's brain to assist the surgeon in completing various procedures without literally opening the patient's skull.

Scientists from the University of California who are currently overseeing the research project have stated that it is a vital first step toward eliminating an invasive brain surgery procedure called a craniectomy, during which part of the skull is removed. The alternative procedure would involve allowing a laser to travel through a tiny “window” created in the skull to complete certain surgeries.

Additionally, laser treatment may entirely eliminate the use of scalpels in the future, similar to how other procedures, such as lobotomies and bloodletting, have become obsolete and elbowed aside by newer, healthier and more efficient procedures.

Cell Evaluation and Regeneration of New Cells

Laser technology is also useful in the regeneration of new tissues and cells. Professor Woo, from the Wake Forest University School of Medicine, uses lasers to create supporting structures that hold tissue implants in place. The laser is used somewhat like a drill, cutting holes in the supporting structure that ultimately guide new cells in the appropriate direction as they mature.

laser bioprinting

Researchers also use lasers to evaluate cell components and have found them instrumental in gathering information from single cells. Tuan Vo-Dinh, director of Duke University's Fitzpatrick Institute for Photonics, has stated that the use of laser therapy is currently growing at a double-digit rate. Vo-Dinh uses advanced laser techniques to observe the rate at which molecules scatter laser light, which could eventually lead to customized therapies for people based on their individual DNA sequences.

Lasers may also help medical engineers design small devices to reduce pain and injury at injection sites when compared with conventional techniques.

Lasers are also highly advantageous as diagnostic tools, since they allow non-invasive probing of live tissue.

Safety

Despite the exciting possibilities, the safety of laser treatment must be evaluated further. For instance, light and heat exposure can result in tissue damage and burns, and therefore laser use must be carefully monitored.

One of the biggest obstacles in getting lasers past medical regulatory boards is general concerns about radiation exposure. Fortunately, technological advancements concerning lasers and their therapeutic use have reduced the amount of light to which tissues are exposed during laser surgery. A device approved by the FDA in 2010, called a femtosecond laser, was one of the first instruments designed specifically to lessen tissue exposure.

Ultimately, studies involving this non-invasive treatment are likely to continue as the 21st century progresses. Hopefully, such research will lead to better and more efficient treatments and procedures in the future.


Top Image by Andrea Pacelli

By Isaac Christiansen

After graduating from medical school at the University of Michigan, Isaac started his own private orthopedic practice in Riverton, Utah. Having dealt with and overcome many of the obstacles that come with entrepreneurship and small business ownership, Isaac has found a passion for helping the up-and-coming generation thrive in their careers and ultimately their lives.



Wednesday, May 10, 2017

Mind Reading from fMRI


Mind Reading

A newly developed neural network method now makes it easier and more accurate to decode fMRI scans. Using deep learning, researchers were able to reconstruct the images a viewer was looking at from brain scan data alone. The 'deep generative multiview model' also learned to correlate the data so that the accompanying standard fMRI noise could be accounted for in the generation of the reconstructed images.


One of the far-reaching goals of neuroscience is to be able to read a person's thoughts. Such a technology is the inspiration, in part, for Elon Musk's new company, Neuralink, along with other brain-machine interface ventures. So far, for data coming from functional magnetic resonance imaging (fMRI) scans, the task has proven to be very challenging.

fMRI scans are inherently noisy, and the activity in one voxel is well known to be influenced by activity in other voxels. This kind of correlation is computationally difficult and expensive to manage, and most work in this area has simply not dealt with it. That has significantly reduced the quality of the image reconstructions produced.

Now, Changde Du at the Research Center for Brain-Inspired Intelligence in Beijing, China, and his research team have developed a better way to process data from fMRI scans to produce more accurate brain-image reconstructions. The team's research has been published online.

Their method uses deep learning techniques that handle nonlinear correlations between voxels more capably. The result is a much better way to reconstruct the way a brain perceives images.

Changde used several data sets of fMRI scans of the visual cortex of a human subject looking at a simple image—a single digit or a single letter. Each data set consists of the scans and the original image. The researchers mapped the data to find a way to use the fMRI scans to reproduce the viewer's perceived image. In total, the team had access to over 1,800 fMRI scans and original images.

According to the researchers, it was a straightforward deep learning task. They used 90 percent of the data to train the network to understand the correlation between the brain scan and the original image. Next, they tested the neural network on the remaining data by feeding it the scans and asking it to reconstruct what the viewed images were.
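To make that training setup concrete, here is a minimal Python sketch of the 90/10 split. It is not the authors' deep generative multiview model; a plain ridge regression stands in for the decoder, and the array sizes and variable names are assumptions for illustration only.

```python
# Minimal sketch of the decoding pipeline described above, NOT the DGMM:
# a plain ridge regression stands in for the deep generative model.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_scans, n_voxels, img_size = 1800, 3000, 28 * 28    # assumed dimensions
rng = np.random.default_rng(0)
fmri = rng.normal(size=(n_scans, n_voxels))           # voxel activations per scan
images = rng.random(size=(n_scans, img_size))         # flattened viewed images

# 90 percent of the scan/image pairs train the decoder, 10 percent test it.
X_train, X_test, y_train, y_test = train_test_split(
    fmri, images, test_size=0.1, random_state=0)

decoder = Ridge(alpha=10.0).fit(X_train, y_train)     # learn voxels -> pixels
reconstructions = decoder.predict(X_test)             # reconstruct held-out images
print(reconstructions.shape)                          # (180, 784)
```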

This approach had the advantage of having the network learn which voxels were used to reconstruct the image, avoiding the need to process the data from them all.

The neural network also learned how the fMRI data was correlated. This was an important part of the research, because if the correlations are ignored, they end up being treated like noise and discarded. So the new approach—the so-called deep generative multiview model or DGMM—exploits these correlations and distinguishes them from real noise.

The team compared their results with those of a number of other brain image reconstruction techniques (see image at top). Generally, the reconstructed images are clear representations of the originals, and were for the most part superior to those derived by other methods.

“Extensive experimental comparisons demonstrate that our approach can reconstruct visual images from fMRI measurements more accurately,” write the study authors.

The research may have implications beyond reconstructing what a viewer sees from a brain scan. "Although we focused on visual image reconstruction problem in this paper, our framework can also deal with brain encoding tasks," write the study authors.

The next steps for the research will include ways to analyze scenes more complex than single digits and letters, and possibly moving images.

SOURCE  MIT Technology Review


By 33rd Square





Monday, April 10, 2017

Researcher Points to How We Will Work with AI in the Near Future


Artificial Intelligence

Dr. Michael Harré has been thinking a lot about how our workplaces will incorporate artificial intelligence. His work exploring the emergence of economic bubbles already relies heavily on AI to provide detailed analysis.


Dr. Michael Harré, an artificial intelligence enthusiast and lecturer in Complex Systems at the University of Sydney, believes living and working with AI will force the world to reassess basic assumptions about our sense of self.

"What will it be like to regularly confront an AI, or a robot with an AI in it, that behaves like a human?"
"What will it be like to regularly confront an AI, or a robot with an AI in it, that behaves like a human?" Harré asks. "The fact that we will be interacting with the appearance of consciousness in things that are clearly not biological will be enough for us to at least unconsciously revise what we think consciousness is."

Today, AI systems and people have very different decision-making processes. Humans rely strongly on intuition, while AIs calculate all possible options and deduce the most likely answer. All this data-crunching comes at a cost: the vast computational power that's needed limits the number of tasks AI can do.

According to Harré, AI is a very different discipline from robotics. Artificial intelligence is a field of computer science that mimics the natural learning process of the human brain by creating what are called artificial neural networks. A popular technique used today is reinforcement learning. In such systems, each correct answer reinforces the AI's neural pathways, so it actually learns from experience. The software isn't specifically coded – rather the program evolves its own algorithms and uses feedback to refine the results.
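As a rough illustration of the reinforcement-learning idea described above, the toy Python sketch below trains a tabular Q-learning agent on a made-up five-state corridor. The environment, rewards and parameters are invented for illustration and are unrelated to Harré's research or any particular commercial system.

```python
# Toy Q-learning: feedback (reward) refines a value table, so the agent
# learns from experience rather than being explicitly programmed.
import numpy as np

n_states, n_actions = 5, 2           # states 0..4, actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # value estimates, refined by feedback
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:                 # reaching state 4 ends the episode
        if rng.random() < epsilon:               # occasionally explore
            action = int(rng.integers(n_actions))
        else:                                    # otherwise exploit current estimates
            action = int(np.argmax(Q[state]))
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # feedback updates the value table, reinforcing useful choices
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.argmax(Q[:-1], axis=1))  # learned policy for non-terminal states: move right
```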

This form of machine learning is very good at dealing with big data, making it invaluable for services such as fraud detection and security surveillance. Working with these huge inputs of information makes AI a power-hungry beast that devours vast computational resources.

AI has been moving into various industries since the 1990s – from finance to communications, heavy industry and even toys – constantly evolving and becoming more sophisticated. Over the last two years alone, there has been dramatic evolution of artificial intelligence. Energy-efficient computers and microchips based on the neural structure of the brain are driving the surge in AI advancement. Digital assistants such as Apple's Siri and Amazon's Alexa, movie recommendation services and online customer support are all examples of artificial intelligence in services that we increasingly take for granted.

Harré is part of a new wave of researchers exploring the relationship between human thinking, artificial intelligence and economics. He believes that understanding human cognition will drive AI advancement – and vice versa. "The stronger the connection we can draw between economics, psychology and neuroscience – three very different fields studying humans at very different scales – the better our understanding will be in all three areas."

"We are not well equipped, cognitively speaking, to deal with the complexity of the systems that have come to dominate our world: financial and economic systems, climatic systems and even our social interactions and how information is spread," states Harré. "Everything depends on everything else, often leading to the impression that chaos and disorder dominate, and that trying to understand such systems is a lost cause. But if we scratch the surface, there are often some basic underlying principles. Understanding these is the biggest challenge we face today, and this is what my work aims to do."

"I look at the mathematics reflected in complex systems. This can give us a better understanding of how relatively small variations in human behaviour can lead to quite significant and sometimes sudden system-level consequences - such as the behaviours of individual buyers and sellers leading to a financial market crash."

To that end, Harré and his colleagues are developing simple AIs called agent-based models that simulate the Australian housing market and identify whether it is at risk of collapse. Millions of households are modelled by these AIs, which interact with each other to buy and sell houses. The project will allow them to look at different suburbs, cities and regions across Australia to identify the factors that might lead to a system-wide collapse.
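To give a flavour of what an agent-based model looks like in code, here is a deliberately simplified Python sketch. The rules, numbers and feedback loop are invented for illustration; this is not the model Harré's group is building.

```python
# Highly simplified agent-based housing-market sketch: household agents bid,
# and prices respond to their collective behaviour (a crude feedback loop).
import random

random.seed(1)

class Household:
    def __init__(self, budget):
        self.budget = budget

    def bid(self, price):
        # A household bids slightly above the listed price if it can afford to.
        return min(self.budget, price * random.uniform(1.0, 1.1))

price = 500_000.0
households = [Household(random.uniform(300_000, 900_000)) for _ in range(1000)]

for month in range(24):
    bids = [h.bid(price) for h in households if h.budget >= price]
    if bids:
        # Prices drift toward the average successful bid.
        price = 0.9 * price + 0.1 * (sum(bids) / len(bids))
    else:
        price *= 0.95  # no buyers left: the market deflates
    print(f"month {month + 1}: price {price:,.0f}, active buyers {len(bids)}")
```

Even in this crude form, the simulation shows how small individual rules can produce system-level swings, which is the kind of emergent behaviour the researchers are studying at scale.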

"We want to know what drives bubbles, whether those drivers are in the current Australian market, and how we deflate the problem of a potential crash by helping inform policy," Harré says.
For Harré, it's not just the computational power of AI that is useful; it's the potential of a future in which we will work with and interact daily with AI personalities – and he is excited about the diversity of viewpoints this implies. AI, by its nature, will have opinions.

"I think we're going to end up with a very dynamic workforce," he says. "The key thing for the future is going to be people who are more willing to be agile within the jobs they take."


SOURCE  PhysOrg


By 33rd Square





Monday, April 3, 2017

David Cox Explains How Neuroscience and Computer Science are Merging

Artificial Intelligence

David Cox gave a talk about current work in neuroscience and computer science at the World Economic Forum that points to a variety of implications, including brain uploading and the development of advanced artificial intelligence based on biology.


At the recent World Economic Forum, David Cox gave a talk about current work in neuroscience and computer science that points to a variety of implications including brain uploading and the development of advanced artificial intelligence based on biology.

David Cox Explains How Neuroscience and Computer Science are Merging
Image Source: World Economic Forum / Walter Duerst

"The only reason we can have this conversation today, is that there are two fields that are exploding right now, and are on a collision course with one another."
The fast-advancing fields of neuroscience and computer science are converging explains Cox. "The only reason we can have this conversation today, is that there are two fields that are exploding right now, and are on a collision course with one another."

"It might seem weird to connect technology and neuroscience," states Cox, "but is actually something we have been doing for a very long time." Cox points that through history, we have always used metaphors for our mind, like pneumatic power, steam power, and today's technology of the computer. "Now computers are the lens through which we see our brains."

Computer science does give us a new model for looking at our brains. There is an equivalence. "If we understand the algorithms of the brain, we can think about running that on other hardware that we have [like] silicon," Cox says.

Within the last five years alone, there has been a tectonic shift in the world of artificial intelligence, Cox claims. However, "we are not quite there yet."

Cox is studying the brain to figure out what is still missing in artificial intelligence.

Despite significant progress in developing AI algorithms and machine learning over the past few years, today’s AI systems do not generalize well. In contrast, the brain is able to robustly separate and categorize signals in the presence of significant noise and non-linear transformations, and can extrapolate from single examples to entire classes of stimuli. This performance gap between software and wetware persists despite some correspondence between the architecture of the leading machine learning algorithms and their biological counterparts in the brain.

Cox's lab is working with others to reverse engineer how brains learn, starting with rats. By shedding light on what our machine learning algorithms are currently missing, this work promises to improve the capabilities of robots – with implications for jobs, laws and ethics.

To this end, Cox is at work on the IARPA project Machine Intelligence from Cortical Networks (MICrONS). This neuroscience project is slated to take place over the next five years and, according to Cox, it is equivalent in scale to the Human Genome Project.

MICrONS seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain. The program is expressly designed as a dialogue between data science and neuroscience. Over the course of the program, participants will use their improving understanding of the representations, transformations, and learning rules employed by the brain to create ever more capable neurally derived machine learning algorithms. Some of the ultimate goals for MICrONS include the ability to perform complex information processing tasks such as one-shot learning, unsupervised clustering, and scene parsing, aiming towards human-like proficiency.
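As a concrete, if toy, illustration of one of those goals, the sketch below performs one-shot classification with a nearest-neighbour rule over made-up feature vectors. It is only a stand-in for the neurally derived algorithms MICrONS hopes to produce; the labels and vectors are invented.

```python
# Toy one-shot learning: classify new items after seeing a single stored
# example per class, using a nearest-neighbour rule on feature vectors.
import numpy as np

rng = np.random.default_rng(0)
prototypes = {label: rng.normal(size=16) for label in ["cat", "dog", "car"]}  # one example each

def classify(features):
    # pick the class whose single stored example is closest in feature space
    return min(prototypes, key=lambda label: np.linalg.norm(features - prototypes[label]))

query = prototypes["dog"] + rng.normal(scale=0.1, size=16)  # a noisy new "dog"
print(classify(query))  # -> dog
```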

The project involves creating a connectome of a rat's brain, which amounts to about two petabytes of data. Cox admits that the project he is a part of actually does conjure up the ambition of brain uploading.

"What I can tell you is that way before humans upload their brain, it's going to be rats that get their brains up into the cloud first," says Cox.

If you are excited about this idea, Cox shares good news, bad news and neutral news. The good news is that there is nothing in principle that makes mind uploading impossible. "It could happen, I'm just going to put that out there," he says. The bad news is that we have no idea how to do this yet; the work so far represents only the first steps.

As for the neutral news, Cox says that the work his team and others around the world are doing is already leading to even greater strides in machine learning and robotics. "I would submit that the brain power of a rat, properly implemented is enough to drive a car," he says as an example.

Cox is an Assistant Professor of Molecular and Cellular Biology and Computer Science at Harvard University. His research spans neuroscience and computer science, with the goal of understanding how brains process sensory information; his lab employs a variety of experimental techniques to measure brain function and uses this biological information to build advanced machine learning algorithms. He is also actively engaged in innovation in online learning, particularly at the intersection of machine intelligence and education.

Cox also is working on the ARIADNE project—a multi-university effort to study a living animal brain like never before to figure out how it learns. This project will create some of the largest neuroscience datasets ever collected, and could give computers new abilities to learn and perceive the way our brains do.


SOURCE  World Economic Forum


By 33rd Square





Monday, January 16, 2017

Link Found Between Concussion and Alzheimer’s Disease


Alzheimer's Disease

A study has found concussions accelerate Alzheimer's disease-related brain atrophy and cognitive decline in people who are at genetic risk for the condition. The results demonstrate the importance of documenting head injuries even within the mild range as they may interact with genetic risk to produce negative long-term health consequences such as neurodegenerative disease.


New research has found concussions accelerate Alzheimer’s disease-related brain atrophy and cognitive decline in people who are at genetic risk for the condition. The findings, which were published in the journal Brain, show promise for detecting the influence of concussion on neurodegeneration.

Moderate-to-severe traumatic brain injury is one of the strongest environmental risk factors for developing neurodegenerative diseases such as late-onset Alzheimer’s disease, although it is unclear whether mild traumatic brain injury or concussion also increases this risk.

"Having a concussion was associated with lower cortical thickness in brain regions that are the first to be affected in Alzheimer’s disease."
Researchers from Boston University School of Medicine (BUSM) studied 160 Iraq and Afghanistan war veterans, some of whom had suffered one or more concussions and some who had never had a concussion. The researchers used MRI imaging to determine the thickness of their cerebral cortex in seven regions that are the first to show atrophy in Alzheimer’s disease, as well as seven control regions.

“We found that having a concussion was associated with lower cortical thickness in brain regions that are the first to be affected in Alzheimer’s disease,” explained corresponding author Jasmeet Hayes, PhD, assistant professor of psychiatry at BUSM and research psychologist at the National Center for PTSD, VA Boston Healthcare System. “Our results suggest that when combined with genetic factors, concussions may be associated with accelerated cortical thickness and memory decline in Alzheimer’s disease relevant areas.”

The researchers found that these brain abnormalities were found in a relatively young group, with the average age being 32 years old. “These findings show promise for detecting the influence of concussion on neurodegeneration early in one’s lifetime, thus it is important to document the occurrence and subsequent symptoms of a concussion, even if the person reports only having their ‘bell rung’ and is able to shake it off fairly quickly, given that when combined with factors such as genetics, the concussion may produce negative long-term health consequences,” said Hayes.

The researchers hope that others can build upon these findings to find the precise concussion-related mechanisms that accelerate the onset of neurodegenerative diseases such as Alzheimer’s disease, chronic traumatic encephalopathy, Parkinson’s and others. “Treatments may then one day be developed to target those mechanisms and delay the onset of neurodegenerative pathology,” she added.

SOURCE  Science Daily


By 33rd Square



Tuesday, January 10, 2017

Arthur's Fist


Neuroscience

One of the biggest memes last year was Arthur's Fist. We all know the feeling: stressful situations have you fighting the urge to deck someone. Now, researchers have found neuronal connections between the prefrontal cortex and an area of the brainstem that is directly responsible for controlling our instinctive responses.


Scientists have uncovered exactly which neuronal projections prevent social animals like human beings from acting out our base impulses like the urge to lash out physically in stressful situations. The study, published in Nature Neuroscience, could have implications for schizophrenia and mood disorders like depression.

"We need to be able to dynamically control our instinctive behaviours, depending on the situation."
“Instincts like fear and sex are important, but you don’t want to be acting on them all the time,” says Cornelius Gross, who led the work at the European Molecular Biology Laboratory (EMBL). “We need to be able to dynamically control our instinctive behaviours, depending on the situation.”

The region at the base of the brain, the brainstem, sits just above the spinal cord and drives our instincts. Scientists have known for some time that another brain region, the prefrontal cortex, plays a role in keeping those instincts in check (see background information below). But exactly how the prefrontal cortex puts a brake on the brainstem has remained unclear.

Now, Gross and colleagues have actually found the connection between prefrontal cortex and brainstem. The EMBL scientists teamed up with Tiago Branco from the Medical Research Council Laboratory of Molecular Biology (MRC LMB) at Cambridge University, and traced connections between neurons in a mouse brain.

The researchers have discovered that the prefrontal cortex makes prominent connections directly to the brainstem. The teams also found that this physical connection was the mechanism that inhibits instinctive behaviour.

They found that in mice that have been repeatedly defeated by another mouse – the mouse equivalent to being bullied – this connection weakens, and the mice act more scared. The scientists found that they could elicit those same fearful behaviours in mice that had never been bullied, simply by using drugs to block the connection between prefrontal cortex and brainstem.

How Your Intelligence Overcomes Instinct

These findings provide an explanation, based on the anatomy, for why it’s much easier to stop yourself from hitting someone than it is to stop yourself from feeling the urge to do so. The scientists found that the connection from the prefrontal cortex is to a very specific region of the brainstem, the PAG, which is responsible for the acting out of our instincts. However, it doesn’t affect the hypothalamus, the region that controls feelings and emotions. So the prefrontal cortex keeps behaviour in check, but doesn’t affect the underlying instinctive feeling: it stops you from running off-stage, but doesn’t stop the butterflies in your stomach.

The work has implications for schizophrenia and mood disorders such as depression, which have been linked to problems with prefrontal cortex function and maturation.

“One fascinating implication we’re looking at now is that we know the prefrontal cortex matures during adolescence. Kids are really bad at inhibiting their instincts; they don’t have this control,” says Gross, “so we’re trying to figure out how this inhibition comes about, especially as many mental illnesses like mood disorders are typically adult-onset.”


SOURCE  EMBL


By 33rd Square



Sunday, January 1, 2017

Researchers Engineer Gene Pathway to Grow Brain Organoids with Surface Folding


Stem Cells

Scientists have demonstrated that 3D human cerebral organoids can be effective in modeling the molecular, cellular, and anatomical processes of human brain development. They also suggest their work could be a new path for identifying the cells affected by Zika virus.


In newly published research in the journal Cell Stem Cell, researchers at Whitehead Institute have found a specific gene pathway that appears to regulate the growth, structure, and organization of the human cortex. They demonstrated that 3D human cerebral organoids—miniature, lab-grown versions of specific brain structures—can be effective in modeling the molecular, cellular, and anatomical processes of human brain development. The researchers suggest their work could also provide a new path for identifying the cells affected by Zika virus.

“We found that increased proliferation of neural progenitor cells (NPs) induces expansion of cortical tissue and cortical folding in human cerebral organoids,” says Yun Li, a lead author of study and post-doctoral researcher at Whitehead Institute. “Further, we determined that deleting the PTEN gene allows increased growth factor signaling in the cell, unleashing its growth potential, and stimulating proliferation.”

Researchers Engineer Gene Pathway to Grow Brain Organoids with Surface Folding

The findings lend support to the notion that an increase in the proliferative potential of NPs contributes to the expansion of the human cerebral neocortex, and the emergence of surface folding.

With normal NPs, the human organoid developed into relatively small cell clusters with smooth surface appearance, displaying some features of very early development of a human cortex. However, deleting PTEN allowed the progenitor population to continue expanding and delayed their differentiation into specific kinds of neurons—both key features of the developing human cortex.

“Because the PTEN mutant NPs experienced more rounds of division and retained their progenitor state for an extended period, the organoids grew significantly larger and had substantially folded cortical tissue,” explains Julien Muffat, also a lead author and post-doctoral researcher at MIT's Whitehead Institute.

"We have demonstrated that 3D human cortical organoids can be very effective for Zika modeling."
The researchers found that while PTEN deletion in mouse cells does create a somewhat larger than normal organoid, it does not lead to significant NP expansion or to folding. “Previous studies have suggested that abnormal variation in PTEN expression may play an important role in driving brain development conditions leading to syndromes such as Autism Spectrum Disorders,” says Rudolf Jaenisch, Founding Member of Whitehead Institute and senior author of the study. “Our findings suggest that the PTEN pathway is also an important mechanism for controlling brain-structure differences observed between species.”

Brain Organoids
Image Source: Yun Li and Julien Muffat

In the study, deletion of the PTEN gene increased activation of the PI3K-AKT pathway and thereby enhanced AKT activity in the human NPs comprising the 3D human cerebral organoids; it promoted cell cycle re-entry and transiently delayed neuronal differentiation, resulting in a marked expansion of the radial glia and intermediate progenitor population. Validating the molecular mechanism at work with PTEN, the investigators used pharmacological AKT inhibitors to reverse the effect of the PTEN deletion. They also found that they could regulate the degree of expansion and folding by tuning the strength of AKT signaling—with reduced signaling resulting in smaller and smooth organoids, and increased signaling producing larger and more folded organoids.

Finally, the researchers utilized the 3D human cerebral organoid system to show that infection with Zika virus impairs cortical growth and folding. In the organoids, Zika infection at the onset of surface folding (day 19 of development) led to widespread apoptosis; and, ten days later, it had severely hampered organoid growth and surface folding. Zika infection of 4-week-old organoids showed that PTEN mutant organoids were much more susceptible to infection than normal control organoids; notably, they showed increased apoptosis and decreased proliferation of progenitor cells.

“Although not an original goal of our study, we have demonstrated that 3D human cortical organoids can be very effective for Zika modeling—better enabling researchers to observe how human brain tissue reacts to the infection and to test potential treatments,” Li says.


SOURCE  Whitehead Institute via Newswise


By 33rd Square



Monday, December 19, 2016

Image Processing Artificial Intelligence Learns Mostly On Its Own, Just Like a Human


Artificial Intelligence

Artificial intelligence and neuroscience researchers have taken inspiration from the human brain in creating a new deep learning system that enables computers to learn about the visual world largely on their own, just like human babies do.


Artificial intelligence and neuroscience experts from Rice University and Baylor College of Medicine using inspiration from the human brain have developed a new deep learning method that lets computers learn about the visual world largely on their own, much the same way human babies do.

In tests, the group’s “deep rendering mixture model” (DRMM) largely taught itself how to distinguish handwritten digits using a standard dataset of 10,000 digits written by federal employees and high school students. In results presented this month at the Neural Information Processing Systems (NIPS) conference in Barcelona, the researchers described how they trained their algorithm by giving it just 10 correct examples of each handwritten digit between zero and nine and then presenting it with several thousand more examples that it used to further teach itself.

The algorithm was more accurate at correctly distinguishing handwritten digits than almost all previous algorithms that were trained with thousands of correct examples of each digit.

"The DRMM is applicable to semi-supervised and unsupervised learning tasks, achieving results that are state-of-the-art in several categories on the MNIST benchmark and comparable to state of the art," conclude the authors.

“In deep learning parlance, our system uses a method known as semisupervised learning,” said lead researcher Ankit Patel, an assistant professor with joint appointments in neuroscience at Baylor and electrical and computer engineering at Rice. “The most successful efforts in this area have used a different technique called supervised learning, where the machine is trained with thousands of examples: This is a one. This is a two.

“Humans don’t learn that way,” Patel said. “When babies learn to see during their first year, they get very little input about what things are. Parents may label a few things: ‘Bottle. Chair. Momma.’ But the baby can’t even understand spoken words at that point. It’s learning mostly unsupervised via some interaction with the world.”

Patel said he and graduate student Tan Nguyen, a co-author on the new study, set out to design a semisupervised learning system for visual data that didn’t require much “hand-holding” in the form of training examples. For instance, neural networks that use supervised learning would typically be given hundreds or even thousands of training examples of handwritten digits before they would be tested on the database of 10,000 handwritten digits in the Mixed National Institute of Standards and Technology (MNIST) database.
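The following Python sketch shows the general shape of such a semisupervised setup, with ten labeled examples per digit and simple self-training on the rest. It uses scikit-learn's small 8x8 digits set as a stand-in for MNIST, and pseudo-labeling with a logistic regression in place of the paper's deep rendering mixture model.

```python
# Semi-supervised sketch: 10 labeled examples per digit, the rest unlabeled,
# refined by a few rounds of confident pseudo-labeling (NOT the DRMM).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

# Keep 10 labeled examples of each digit; treat the rest as unlabeled.
labeled_idx = np.concatenate([rng.choice(np.where(y == d)[0], 10, replace=False)
                              for d in range(10)])
unlabeled_idx = np.setdiff1d(np.arange(len(y)), labeled_idx)

clf = LogisticRegression(max_iter=2000).fit(X[labeled_idx], y[labeled_idx])

for _ in range(5):  # a few rounds of self-training
    probs = clf.predict_proba(X[unlabeled_idx])
    confident = probs.max(axis=1) > 0.95                  # trust only confident guesses
    pseudo_X = np.vstack([X[labeled_idx], X[unlabeled_idx][confident]])
    pseudo_y = np.concatenate([y[labeled_idx], probs[confident].argmax(axis=1)])
    clf = LogisticRegression(max_iter=2000).fit(pseudo_X, pseudo_y)

print("accuracy on unlabeled pool:", clf.score(X[unlabeled_idx], y[unlabeled_idx]))
```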

DRMM

The semisupervised Rice-Baylor algorithm is a “convolutional neural network,” a piece of software made up of layers of artificial neurons whose design was inspired by biological neurons. These artificial neurons, or processing units, are organized in layers, and the first layer scans an image and does simple tasks like searching for edges and color changes. The second layer examines the output from the first layer and searches for more complex patterns. Mathematically, this nested method of looking for patterns within patterns within patterns is referred to as a nonlinear process.
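A minimal PyTorch sketch of that layered structure might look like the following. The filter counts and layer sizes are illustrative assumptions, not the Rice-Baylor architecture.

```python
# Minimal convolutional stack: an early layer for edge-like features, a second
# layer for patterns of patterns, then a classifier over the abstract summary.
import torch
from torch import nn

convnet = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # layer 1: simple edge-like filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # layer 2: patterns built from layer-1 output
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # final layer: abstract summary -> 10 digits
)

digits = torch.randn(4, 1, 28, 28)               # a batch of 28x28 grayscale images
print(convnet(digits).shape)                      # torch.Size([4, 10])
```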

“It’s essentially a very simple visual cortex,” Patel said of the convolutional neural net. “You give it an image, and each layer processes the image a little bit more and understands it in a deeper way, and by the last layer, you’ve got a really deep and abstract understanding of the image. Every self-driving car right now has convolutional neural nets in it because they are currently the best for vision.”

"The way the brain is doing it is far superior to any neural network that we’ve designed."
Like human brains, neural networks start out as blank slates and become fully formed as they interact with the world. For example, each processing unit in a convolutional net starts the same and becomes specialized over time as it is exposed to visual stimuli.

“Edges are very important,” Nguyen said. “Many of the lower layer neurons tend to become edge detectors. They’re looking for patterns that are both very common and very important for visual interpretation, and each one trains itself to look for a specific pattern, like a 45-degree edge or a 30-degree red-to-blue transition.

“When they detect their particular pattern, they become excited and pass that on to the next layer up, which looks for patterns in their patterns, and so on,” he said. “The number of times you do a nonlinear transformation is essentially the depth of the network, and depth governs power. The deeper a network is, the more stuff it’s able to disentangle. At the deeper layers, units are looking for very abstract things like eyeballs or vertical grating patterns or a school bus.”

Patel said the theory of artificial neural networks, which was refined in the NIPS paper, could ultimately help neuroscientists better understand the workings of the human brain.

“There seem to be some similarities about how the visual cortex represents the world and how convolutional nets represent the world, but they also differ greatly,” Patel said. “What the brain is doing may be related, but it’s still very different. And the key thing we know about the brain is that it mostly learns unsupervised.

“What I and my neuroscientist colleagues are trying to figure out is, What is the semisupervised learning algorithm that’s being implemented by the neural circuits in the visual cortex? and How is that related to our theory of deep learning?” he said. “Can we use our theory to help elucidate what the brain is doing? Because the way the brain is doing it is far superior to any neural network that we’ve designed.”

SOURCE  Rice University


By 33rd Square



Thursday, December 1, 2016

AI System Spontaneously Reproduces Aspects of Human Neurology


Artificial Intelligence

Researchers have developed a new computational model of the human brain’s face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed.


Researchers at MIT and their colleagues have developed a new computational model of the human brain’s face-recognition system that seems to capture aspects of human neurology that previous models have missed.

The researchers designed a machine learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face’s degree of rotation — say, 45 degrees from center — but not the direction — left or right.

This rotation property was not built into the system; it emerged spontaneously from the training process. But it duplicates an experimentally observed feature of the primate face-processing mechanism. The researchers consider this an indication that their system and the brain are doing something similar.

“This is not a proof that we understand what’s going on,” says Tomaso Poggio, CSAIL principal investigator and director of the Center for Brains, Minds, and Machines (CBMM), a multi-institution research consortium funded by the National Science Foundation and headquartered at MIT. “Models are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. But I think it’s strong evidence that we are on the right track.”

The researchers’ new paper, published in Current Biology, includes a mathematical proof that the particular type of machine-learning system they use, which was intended to offer what Poggio calls a “biologically plausible” model of the nervous system, will inevitably yield intermediary representations that are indifferent to angle of rotation.

The new paper is “a nice illustration of what we want to do in [CBMM], which is this integration of machine learning and computer science on one hand, neurophysiology on the other, and aspects of human behavior,” Poggio says. “That means not only what algorithms does the brain use, but what are the circuits in the brain that implement these algorithms.”

Because neurophysiologists had observed that different groups of neurons fire when faces are presented at different angles, the researchers recognized what their machine-learning system had reproduced. “It was not a model that was trying to explain mirror symmetry,” Poggio says. “This model was trying to explain invariance, and in the process, there is this other property that pops out.”

The researchers’ machine-learning system is a neural network, consisting of very simple processing units, arranged into layers, that are densely connected to the processing units — or nodes — in the layers above and below. Data are fed into the bottom layer of the network, which processes them in some way and feeds them to the next layer, and so on. During training, the output of the top layer is correlated with some classification criterion — say, correctly determining whether a given image depicts a particular person.

using angles in facial recognition

The experimental approach produced invariant representations: A face’s signature turned out to be roughly the same no matter its orientation. But the mechanism — memorizing templates — was not, Poggio says, biologically plausible.

Instead, the new network uses a variation on Hebb’s rule, which is often described in the neurological literature as “neurons that fire together wire together.” That means that during training, as the weights of the connections between nodes are being adjusted to produce more accurate outputs, nodes that react in concert to particular stimuli end up contributing more to the final output than nodes that react independently.
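The numpy fragment below sketches the Hebbian idea in its simplest form: connection weights grow in proportion to correlated input and output activity. The learning rate and layer sizes are arbitrary illustrations, and the network described in the paper uses a variation on this rule rather than the naive version shown here.

```python
# Naive Hebbian update: "neurons that fire together wire together" --
# weights strengthen in proportion to co-active input/output pairs.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.random((100, 8))               # 100 stimuli hitting 8 input units
weights = 0.1 * rng.random((4, 8))          # 4 output units, weak initial connections
learning_rate = 0.01

for x in inputs:
    y = weights @ x                         # output activity for this stimulus
    weights += learning_rate * np.outer(y, x)   # co-active pairs strengthen

print(weights.round(3))
```

In this bare form the weights grow without bound; practical Hebbian-style rules add normalization or competition to keep them in check.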

This approach, too, ended up yielding invariant representations. But the middle layers of the network also duplicated the mirror-symmetric responses of the intermediate visual-processing regions of the primate brain.

The researchers conclude:
Our feedforward model, which succeeds in explaining the main tuning and invariance properties of the macaque face-processing system, may serve as a building block for future object-recognition models addressing brain areas such as prefrontal cortex, hippocampus and superior colliculus, integrating feed-forward processing with subsequent computational steps that involve eye-movements and their planning, together with task dependency and interactions with memory.




SOURCE  CSAIL


By 33rd Square



Thursday, October 13, 2016

Researchers Use Brain Maps of Poker Players to Identify Differences in Brain Activity


Neuroscience

A new study has collected data about the brain activity of poker players. The study set out to record the levels of brain activity in poker players operating at different levels of competence: the beginner, the amateur and the professional.


A study conducted by a London-based behavioural research consultancy has collected some interesting data about the brain activity of poker players. The study set out to record the levels of brain activity in poker players operating at different levels of competence: the beginner, the amateur and the professional. Six players, two from each category, were observed playing forty minutes of Texas Hold’em poker. Half played for money, half for free. The players wore EEG headsets which recorded the location and intensity of brain activity across four frequency bands: delta, theta, alpha and beta. This data was then converted into interactive brain maps, allowing us to observe brain activity in the players at key moments of the game.
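For readers curious how such band measurements are derived, the short Python sketch below estimates power in the delta, theta, alpha and beta bands from a single synthetic EEG channel using a standard spectral method. The 256 Hz sampling rate and the signal itself are assumptions; the headset's actual processing pipeline may differ.

```python
# Estimate delta/theta/alpha/beta band power from one EEG channel using
# Welch's power spectral density estimate.
import numpy as np
from scipy.signal import welch

fs = 256                                        # samples per second (assumed)
t = np.arange(0, 40 * 60, 1 / fs)               # forty minutes of play
eeg = (np.sin(2 * np.pi * 10 * t)               # a strong 10 Hz alpha rhythm
       + 0.5 * np.sin(2 * np.pi * 20 * t)       # some 20 Hz beta activity
       + np.random.default_rng(0).normal(0, 0.5, t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)  # power spectral density

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = np.trapz(psd[mask], freqs[mask])    # integrate PSD over the band
    print(f"{name}: {power:.3f}")
```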

Deal


Brain Maps of Poker Players

At the start of the game the beginner shows high theta activity in the right frontal lobe, indicating high levels of emotion. In contrast, the amateur displays high beta activity in the left frontal lobe, indicating decision-making driven by logic. This is the stage of the game where the amateur is most engaged and where they spend the most time processing the information. The brain map of the professional is similar but indicates a much lower level of activity. The experienced player makes a quicker decision with less mental effort.


Flop




This is where the first three community cards are placed down together, face up. The beginner’s brain map shows little activity. Their lack of experience renders them incapable of responding to the new data, either with emotion or logic. The amateur shows alpha activity, indicating logic at work, but both brain maps are in stark contrast to that of the professional, where a high level of activity in both frontal lobes indicates both logical thinking and emotional instinct at work.

River




This is a key phase of the game when the fifth card is placed, face up. The beginner’s brain exhibits exclusive right frontal lobe activity, indicating an entirely emotional response to the situation. The amateur brain map shows high levels of activity in both frontal lobes, with slightly more activity on the right lobe, suggesting that emotion is dominant. The effect of the final card on the professional is to stimulate a flurry of activity on the left lobe. The professional is in control of emotion and is relying on logic to make the decision.

Call



This is when a player adds to the pot money equal to the most recent bet. The fact that this is a relatively safe play is reflected in similar brain maps for all three subjects. As we might expect, the response of the beginner is predominantly emotional, whereas the brain maps for the amateur and professional show more of a spread across both frontal lobes.

Raise

Excitement peaks for all players when the stakes are raised and this is evidenced by the brain maps. What is also revealed is that the professional, although led by emotion, has far more brain activity devoted to processing information.


By 33rd Square



Friday, October 7, 2016

Apes Know That We All Think of Things Differently


Animal Intelligence

New research has demonstrated that multiple species of apes appear to understand that individuals have different perceptions about the world. This work overturns the human-only paradigm of the theory of mind, and once again shows that perhaps we are not the only intelligent animals on this planet.



New research on chimpanzees, bonobos and orangutans suggests our primate relatives may also be able to tell when someone’s beliefs differ from reality. They have also been found to use this knowledge in their choice of actions.

The findings suggest the ability is not unique to humans, but has existed in the primate family tree for at least 13 to 18 million years, since the last common ancestors of chimpanzees, bonobos, orangutans and humans.

The study, led by researchers at Duke University, Kyoto University, the University of St. Andrews and the Max Planck Institute for Evolutionary Anthropology, has been published in the journal Science.

As humans, we tend to believe that our cognitive skills are unique, not only in degree, but also in kind. Research like this shows that the more closely we look at other species, the clearer it becomes that the difference is one of degree. The researchers examined three different species of apes, finding they were able to anticipate that others may have mistaken beliefs about a situation.

The capacity to tell when others hold mistaken beliefs is seen as a key milestone in human cognitive development. We develop this awareness in early childhood, usually by the age of five. This step marks the beginning of a young child’s ability to fully comprehend the thoughts and emotions of others—what psychologists call theory of mind.

These skills are essential for getting along with other people and predicting what they might do. They are also the foundation of our ability to trick people into believing something that isn’t true. Moreover, the inability to infer what others are thinking or feeling is considered an early sign of autism.

"This cognitive ability is at the heart of so many human social skills."
“This cognitive ability is at the heart of so many human social skills,” said Christopher Krupenye of Duke, who led the study along with comparative psychologist Fumihiro Kano of Kyoto University.

To some extent apes can read minds too. Over the years, studies have shown that apes are remarkably skilled at understanding what others want, what others might know based on what they can see, and other mental states. But when it comes to understanding what someone else is thinking even when those thoughts are false, apes have consistently failed the test.

Understanding that beliefs may be false requires grasping, on some level, that not all things inside our heads correspond to reality, explained study co-author Michael Tomasello, professor of psychology and neuroscience at Duke and director at the Max Planck Institute for Evolutionary Anthropology. “It means understanding that there exists a mental world distinct from the physical world,” Tomasello said.

In the study, the apes watched two short videos. In the first, a person in a King Kong suit hides himself in one of two large haystacks while a man watches. Then the man disappears through a door, and while no one is looking, King Kong runs away. In the final scene the man reappears and tries to find King Kong.

The second video is similar, except that the man returns to the scene to retrieve a stone he saw King Kong hide in one of two boxes. But King Kong has since stolen it behind the man’s back and made a getaway.

The researchers teased out what the apes were thinking while they watched the movies by following their gaze with an infrared eye-tracker installed outside their enclosures.

“We offer them a little day at the movies,” said Krupenye, now a postdoctoral researcher at the Max Planck Institute for Evolutionary Anthropology in Germany. “They really seem to enjoy it.”

To pass the test, the apes must predict that when the man returns, he will mistakenly look for the object where he last saw it, even though they themselves know it is no longer there. In both cases, the apes stared first and longest at the location where the man last saw the object, suggesting they expected him to believe it was still hidden in that spot.
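The sketch below shows, in simplified Python, the kind of gaze summary such an analysis relies on: which area of interest was looked at first and for how long. The sample points and box coordinates are invented for illustration; the team's actual eye-tracking software is not shown here.

```python
# From a stream of eye-tracker samples, find which of two areas of interest
# (AOIs) received the first look and the longest total dwell time.
def summarize_gaze(samples, aois):
    """samples: list of (time_s, x, y); aois: {name: (xmin, ymin, xmax, ymax)}."""
    first_look, dwell = {}, {name: 0.0 for name in aois}
    for (t, x, y), (t_next, _, _) in zip(samples, samples[1:]):
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                first_look.setdefault(name, t)   # record the first time this AOI is hit
                dwell[name] += t_next - t        # accumulate time spent looking at it
    return first_look, dwell

aois = {"left_box": (0, 0, 200, 200), "right_box": (400, 0, 600, 200)}
samples = [(0.00, 120, 80), (0.05, 130, 90), (0.10, 450, 100), (0.15, 460, 110)]
print(summarize_gaze(samples, aois))
```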

Their results mirror those from similar experiments with human infants under the age of two, and suggest apes have taken a key first step toward fully understanding the thoughts of others.

“This is the first time that any nonhuman animals have passed a version of the false belief test,” Krupenye said. “If future experiments confirm these findings, they could lead scientists to rethink how deeply apes understand each other.”



SOURCE  Duke University


By 33rd Square