33rd Square Business Tools: Computer Science

Showing posts with label computer science.

Saturday, November 12, 2016

Artificial Intelligence Can Now Read Lips Much Better Than You


Artificial Intelligence

Researchers have finally taught computers how to read lips. LipNet, created at the University of Oxford, is the first deep learning system to successfully lip read full sentences, including difficult pronunciations and non-intuitive sentences.


When HAL 9000 read Dave Bowman and Frank Poole's lips in 2001: A Space Odyssey, it was a key moment in the film showing the superhuman power of artificial intelligence (and its malevolence). Now a new AI system has made this a reality.

The research has been published online.

Lip reading is the task of decoding text from the movement of a speaker's mouth. Traditional approaches to programming machines for this task split the problem into two stages: designing or learning visual features, and predicting the words. Until now, all previous research had achieved only word classification, not sentence-level sequence prediction.


The new system, called LipNet, was developed by researchers at the Department of Computer Science at the University of Oxford, including Nando de Freitas, who is also a lead research scientist at Google DeepMind. It is incredibly accurate, achieving an impressive 93.4 percent accuracy, compared with a paltry 52 percent for an experienced human lip reader.

Other studies have shown that human lip reading performance increases for longer words, indicating the importance of features that capture temporal context in an ambiguous communication channel. Motivated by this observation, the researchers created LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, a Long Short-Term Memory (LSTM) recurrent neural network, and the connectionist temporal classification (CTC) loss, trained entirely end-to-end.
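To make that pipeline concrete, here is a minimal, hypothetical PyTorch sketch of the kind of architecture described above: 3D (spatiotemporal) convolutions over video frames, a recurrent layer for temporal context, and per-frame character scores suitable for a CTC loss. The layer sizes, vocabulary, and class name are illustrative assumptions, not the published LipNet configuration.

```python
import torch
import torch.nn as nn

class LipReaderSketch(nn.Module):
    def __init__(self, vocab_size=28):  # assumed: 26 letters + space + CTC blank
        super().__init__()
        # Spatiotemporal (3D) convolution over (channels, frames, height, width)
        self.conv = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        # Recurrent layer captures temporal context across the frame sequence
        self.rnn = nn.LSTM(input_size=32, hidden_size=128,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 128, vocab_size)

    def forward(self, video):                    # video: (batch, 3, frames, H, W)
        x = self.conv(video)                     # (batch, 32, frames, H/2, W/2)
        x = x.mean(dim=(3, 4)).transpose(1, 2)   # pool space away -> (batch, frames, 32)
        x, _ = self.rnn(x)                       # (batch, frames, 256)
        return self.fc(x).log_softmax(dim=-1)    # per-frame log-probs for CTC decoding

# Training end-to-end would align these per-frame outputs with target sentences, e.g.:
# loss = nn.CTCLoss()(log_probs.transpose(0, 1), targets, input_lengths, target_lengths)
```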

LipNet architecture

"To the best of our knowledge, LipNet is the first lipreading model to operate at sentence-level, using a single end-to-end speaker-independent deep model to simultaneously learn spatiotemporal visual features and a sequence model," they report. Comparatively, LipNet achieves 93.4% accuracy, outperforming experienced human lipreaders and the previous 79.6% state-of-the-art accuracy.

Machine lipreaders have enormous practical potential, with applications in improved hearing aids, silent dictation in public spaces, covert conversations, speech recognition in noisy environments, biometric identification, and silent-movie processing.

LipNet could potentially work as a tool for the hearing impaired, or could even be a way for people to communicate with their devices when they aren't comfortable speaking aloud. Imagine you are in a crowded office or an elevator and don't want to draw attention to yourself by speaking aloud to seemingly no one; just mouth the words to the camera.

Given the researchers' association with Google's DeepMind, we wouldn't be surprised if LipNet sees commercial applications sooner rather than later.



SOURCE  Yannis Assael


By 33rd Square



Monday, August 1, 2016

4 Ways Computer Science Is Changing Our Society


Computer Science

Computer science has not only changed the way we conduct our lives, it has also created new jobs, and even helped preserve antiquity. Here are four concrete ways that computer science has changed the world.


Computer science has changed the world in ways that people couldn't have imagined just 20 or 30 years ago. Computers have not only changed the way we conduct our personal lives, they have also created new jobs and even helped preserve antiquity. Here are four concrete ways that computer science has changed the world.

Translation

The global economy dictates that documents be translated quickly. From the newest Harry Potter novel to the most pressing legal documents, translation bridges the gap between cultures. In the past, this painstaking process existed in the realm of language professionals. In the computer age, it still does, but the process is sped up considerably thanks to computer science, according to an article on the Wired website.

Specifically, a computer is able to scan patterns, words, and phrases between two languages faster than any human can. While translation will probably always require the aid of human translators, using computers helps language professionals more quickly spot and correct problems than they could have on their own.


Sorting Information

Millions and millions of bits of information explode onto the Internet every day. Because of this, new jobs in networking, database management, and software creation exist today that didn't 20 years ago. For the person who wants to learn the skills to manage all this new information, a degree in computer science is practical.

Some of the fastest-growing jobs, including translation, forensic science, and statistics, rely on being able to process information quickly. The person who can create the software to help this process along, and to sort information in a meaningful way, will find a position in tomorrow's new economy. Programs like an online master's in computer science can help people break into a career in the tech industry.

Computing Scientific Probabilities

Much of the science world relies on mathematics at a large scale. Being able to model the statistical behavior of real-world problems, like the spread of disease, means scientists get answers to pressing questions more quickly. Computer science leads the charge in this endeavor. Aside from projecting outcomes, computer science has another use here: scientists around the globe can more readily share information with their colleagues, allowing them to collaborate on big projects without ever leaving home.

Creative Commons

Documents, pictures, videos, and illustrations in the realm of the Creative Commons got a major boost from computer science. For example, one of the largest collections of public domain books is on the Project Gutenberg website. Novels like "Frankenstein," "Dracula," and "The Metamorphosis," along with some more obscure books are available free of charge on this online library. At the time of this writing, Project Gutenberg has transcribed and uploaded 50,000 classic and antique books for the public to enjoy. Other uses for this type of project include museum databases and online genealogy sites.

Computer science has given us more than just access to our daily emails. As the above examples show, it has changed the way we communicate, the way we catalogue information, and the way we participate in the scientific process, and it has given us access to some of the most important ancient texts. Truly, not since the invention of the printing press has a technology changed the world so much.




By Anica Oaks


Author Bio - A recent college graduate from University of San Francisco, Anica loves dogs, the ocean, and anything outdoor-related. She was raised in a big family, so she's used to putting things to a vote. Also, cartwheels are her specialty. You can connect with Anica here. 


Friday, March 25, 2016

Scalable and Programmable Quantum Computer May Unlock Powerful Computation

Quantum Computers

A major hurdle in the development of quantum computers has been overcome through the development of the world’s first programmable and scalable system. Researchers have learned how to control quantum particles with the precision necessary to run quantum algorithms on a small scale with just a few qubits.


Researchers at the University of Maryland in College Park have unveiled a five-qubit quantum computer module that can be programmed to run any quantum algorithm. They say their module can be linked to others to perform powerful quantum computations involving large numbers of qubits.

The study, 'Demonstration of a programmable quantum computer module', has been published online.

“This small quantum computer can be scaled to larger numbers of qubits within a single module, and can be further expanded by connecting many modules,” says Shantanu Debnath.

"This small quantum computer can be scaled to larger numbers of qubits within a single module, and can be further expanded by connecting many modules."
The new device builds on work over the last two decades on trapped ion quantum computers. The device uses five ytterbium ions lined up and trapped in an electromagnetic field. The electronic state of each ion can be controlled by engaging it with a laser. This allows each ion to store a bit of quantum information.

Because they are charged, the ions exert a force on each other, and this causes them to vibrate at frequencies that can be precisely controlled and manipulated. These vibrations are quantum in nature and allow the ions to become entangled.

With this method, the quantum bits, or qubits, that they hold can interact.

By controlling these interactions, the physicists can carry out quantum logic operations. Quantum algorithms are simply a series of these logic operations performed one after another.

Quantum Algorithms

Few of the quantum computers developed so far are capable of performing multiple operations; most have been designed to perform only a single, specific quantum algorithm.

The Maryland researchers have built a self-contained module capable of addressing each of the ions with a laser and reading out the results of the interactions between qubits. So far the team has put the device through many tests, implementing several different quantum algorithms:

“As examples, we implement the Deutsch-Jozsa, Bernstein-Vazirani, and quantum Fourier transform algorithms,” they say. “The algorithms presented here illustrate the computational flexibility provided by the ion trap quantum architecture.”
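For a sense of what one of these algorithms does, here is a small, hypothetical numpy simulation of the Deutsch-Jozsa algorithm, which decides with a single quantum query whether a black-box function is constant or balanced. This is a classical simulation written for illustration only; the function names and the three-bit examples are assumptions, not the trapped-ion implementation used by the Maryland team.

```python
import numpy as np

def deutsch_jozsa(f, n):
    """f maps integers 0..2**n - 1 to 0 or 1 and is promised constant or balanced."""
    dim = 2 ** n
    # Hadamards on |0...0> produce a uniform superposition over all inputs
    state = np.full(dim, 1 / np.sqrt(dim))
    # Phase oracle: flip the sign of every amplitude where f(x) = 1
    state = state * np.array([(-1) ** f(x) for x in range(dim)])
    # A second round of Hadamards makes the |0...0> amplitude the (scaled) sum
    amp_zero = state.sum() / np.sqrt(dim)
    # Measuring all zeros with probability 1 means constant; probability 0 means balanced
    return "constant" if abs(amp_zero) ** 2 > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 0, 3))      # a constant function  -> "constant"
print(deutsch_jozsa(lambda x: x & 1, 3))  # a balanced function  -> "balanced"
```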

This impressive work is only the tip of the iceberg, say the researchers. They also claim that their module is scalable: several five-qubit modules can be connected together to form a much more powerful quantum computer.

"This small quantum computer can be scaled to larger numbers of qubits within a single module, and can be further expanded by connecting many modules through ion shuttling or photonic quantum channels," write the researchers.

The team has not yet demonstrated this scalability, but it is their next logical step. What Debnath and his team need to do next is show how to connect these modules and how this increases the utility of the computations that are possible.

If successful, such a development would be a watershed moment for quantum computer progress.


SOURCE  MIT Technology Review


By 33rd Square


Tuesday, September 1, 2015

Mastering the Language of the Smartest Machines: Career in Computer Science


Computers


From designing hardware to developing software, computers have opened up a whole world of professions and careers. Computer science has emerged as one of the most popular and profitable areas of study.
 



Computers are machines on which the whole world is running, literally. From business and education to governance and society, there is no sector of life which does not harness the power of these smart machines.

From designing their hardware to developing the required software, computers have opened up a whole world of professions and careers. Computer science has emerged as one of the most preferred courses by students who are interested in computers and their functionality.

A career in computer science can involve many things, such as programming, developing computing solutions, and devising new ways to use computers. In general, however, careers in computer science fall into four major categories: designing and implementing software, devising new ways to use computers, planning and managing technology infrastructure, and developing new ways to solve computing problems.

Computer science is considered one of the most challenging and interesting of the STEM careers.


Like the different aspects of computers and their functionality, the courses related to computer science are varied in nature. Although the courses may differ in their approaches and content, they usually deal with the theoretical foundations of information and computation. The study of computer science also involves the systematic study of methodical processes: algorithms.

With the extensive use of computers and our dependence on them increasing every day, the following data won't come as a surprise. According to a report published by Georgetown University, computer occupations will dominate the STEM field in the US by 2018, accounting for 51% of jobs in the field. The report predicts 183,760 jobs in computer occupations by the end of 2018.


With a degree in computer science, one can take up jobs such as database manager, games developer, information systems manager, or systems analyst and developer. Computer professionals enjoy a high salary even at the start of their careers; according to Payscale.com, they earn more than $43,000 a year even early on.

The reason for this is explained by Igor Markov, an EECS professor at Michigan. He says that the impact of computer science-style work is easier to measure than in other fields, so the best performers can be encouraged. Another reason is the massive change that computers are bringing to business and industry. Professor Markov says,

“Just look at how the Kindle streamlined book distribution and sales, how Walmart opened Walmart Labs recently and is hiring data scientists, and how all TVs became digital (and software based). Both aerospace and car manufacturers hire a large number of software developers.”

A career in computer science is the best choice for those who are interested in computers and want to work on the cutting edge of technology. Computer science is considered one of the most challenging and interesting of the STEM careers (science, technology, engineering, and mathematics).



By Melody Cleo



Monday, January 12, 2015

Disney Researchers Create System To Organize Your Vacation Photos

 Machine Learning
Computer science researchers have created an automated method to assemble story-driven photo albums from an unsorted group of images.




Taking photos has never been easier, thanks to the ubiquity of mobile phones, tablets and digital cameras. However, editing a mass of vacation photos into an album remains a chore. A new automated method developed by Disney Research could ease that task while also telling a compelling story.

The method developed by a team led by Leonid Sigal, senior research scientist at Disney Research, attempts to not only select photos based on quality and relevance, but also to order them in a way that makes narrative sense.

"Professional photographers, whether they are assembling a wedding album or a photo slideshow, know that the strict chronological order of the photos is often less important than the story that is being told," Sigal said. "But this process can be laborious, particularly when large photo collections are involved. So we looked for ways to automate it."

Sigal and his collaborators presented their findings at WACV 2015, the IEEE Winter Conference on Applications of Computer Vision, in Waikoloa Beach, Hawaii. Others involved include Disney Research's Rafael Tena; Fereshteh Sadeghi, a computer science PhD student at the University of Washington; and Ali Farhadi, assistant professor of computer science and engineering at the University of Washington.

The team looked at ways of arranging vacation photos into a coherent album. Previous efforts on automated album creation have relied on arranging photos based largely on chronology and geo-tagging, Sigal noted.

Darth Vader in Disneyland

But when four people were asked to choose and assemble five-photo albums that told a story, the researchers noted that these individuals took photos out of chronological order about 40 percent of the time. Subsequent preference testing using Mechanical Turk showed people preferred these annotated albums over those chosen randomly or those based on chronology.

To build a computerized system capable of telling a compelling visual story, the researchers created a model that could assemble albums based on a variety of photo features, including the presence or absence of faces and their spatial layout; overall scene textures and colors; and the aesthetic quality of each image.

Their model also incorporated learned rules for how albums are assembled, such as preferences for certain types of photos to be placed at the beginning, in the middle and at the end of albums. An album about a Disney World visit, for instance, might begin with a family photo in front of Cinderella's castle or with Mickey Mouse. Photos in the middle might pair a wide shot with a close-up, or vice versa. Exclusionary rules, such as avoiding the use of the same type of photo more than once, were also learned and incorporated.
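As a rough illustration of this kind of model (a hypothetical toy, not Disney Research's actual system), the sketch below scores candidate three-photo albums with made-up per-position feature weights and a simple exclusion rule, then picks the highest-scoring ordering. All photo names, features, and weights here are invented for the example.

```python
from itertools import permutations

# Toy photo features; a real system would extract these from the images themselves.
photos = {
    "castle_family":  {"has_faces": 1.0, "wide_shot": 1.0, "kind": "establishing"},
    "mickey_closeup": {"has_faces": 1.0, "wide_shot": 0.0, "kind": "character"},
    "parade_wide":    {"has_faces": 0.0, "wide_shot": 1.0, "kind": "scene"},
    "fireworks":      {"has_faces": 0.0, "wide_shot": 1.0, "kind": "closing"},
}

# Made-up "learned" preferences: faces and wide shots early, wide shots at the end.
position_weights = [
    {"has_faces": 2.0, "wide_shot": 1.5},   # opening slot
    {"has_faces": 1.0, "wide_shot": 0.0},   # middle slot
    {"has_faces": 0.0, "wide_shot": 1.5},   # closing slot
]

def album_score(order):
    # Exclusion rule: never use the same kind of photo twice in one album.
    if len({photos[p]["kind"] for p in order}) < len(order):
        return float("-inf")
    return sum(weight * photos[photo][feature]
               for photo, slot in zip(order, position_weights)
               for feature, weight in slot.items())

best_album = max(permutations(photos, 3), key=album_score)
print(best_album)   # the highest-scoring 3-photo ordering
```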

The researchers used a machine learning algorithm to enable the system to learn how humans use those features and what rules they use to assemble photo albums. The training sets used for this purpose were created for the study from thousands of photos from Flickr. These included 63 image collections in five topic areas: trips to Disney theme parks, beach vacations and trips to London, Paris and Washington, D.C. Each collection was annotated by four people, who were asked to assemble five-photo albums that told stories and to group images into sets of near duplicates.

The system relies purely on visual information for features and exemplar album annotations to drive the machine learning procedure.

Once the system learned the principles of selecting and ordering photos, it was able to compose photo albums from unordered and untagged collections of photos. Sigal noted that such a system can also learn the preferences of individuals in assembling these collections, customizing the album creation process.


SOURCE  Disney Research via EurekAlert

By 33rd Square

Tuesday, November 25, 2014

Columnar Databases

 Computer Science
Efficient storage and retrieval of information is vitally important when it comes to working with big data. Database systems using columnar structures offer some tangible benefits for certain conditions.




In the era of big data, efficient storage and retrieval of information is important when it comes to processing the massive amount of information involved. Databases are key to this, and the very structure of the database can have dramatic impacts on performance.

A column-oriented DBMS is a database management system (DBMS) that stores its content by column rather than by row. This translates into substantial advantages for data warehouses and library catalogs where aggregates are computed over large numbers of similar data items.

Rows versus columns may seem like a trivial distinction, but it is the most important underlying characteristic of columnar databases.

The main difference between columnar and the more traditional row-based database structure lies in how values are laid out in storage: rather than writing each row's fields one after another, a columnar store keeps all of the values for each column together. This eliminates redundant metadata, which minimizes the data management requirements of the system, and it means the database can be navigated and searched more rapidly.
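A minimal, hypothetical illustration of the difference in plain Python (not any particular DBMS): the same three records stored row-wise and column-wise, where an aggregate over one field only has to scan a single contiguous column in the columnar layout. The field names and values are invented for the example.

```python
rows = [
    {"user": "alice", "country": "US", "spend": 120.0},
    {"user": "bob",   "country": "DE", "spend": 75.5},
    {"user": "carol", "country": "US", "spend": 210.25},
]

# Row-oriented: each record's fields sit together; an aggregate over one
# field still walks every whole record.
total_row_store = sum(r["spend"] for r in rows)

# Column-oriented: each column is a contiguous array; the same aggregate
# scans only the "spend" column.
columns = {
    "user":    [r["user"] for r in rows],
    "country": [r["country"] for r in rows],
    "spend":   [r["spend"] for r in rows],
}
total_column_store = sum(columns["spend"])

assert total_row_store == total_column_store  # same answer, different layout
```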

These features make columnar databases ideal for high-volume, incremental data gathering and processing, real-time information exchange like messaging, and frequently changing content management. These workloads also map onto the three 'V's' of big data: volume, velocity, and variety.

Shutterstock has deployed a columnar database to be the foundation on which its platform is monitored, using it for immediate anomaly detection and real-time analysis of more than 20 thousand data points per second.

Compared with relational row-based databases, columnar database systems offer better analytic performance when simultaneous queries are not used. The method also allows for more rapid joins and aggregation with data streaming along in an incremental manner.

The columnar database approach is also highly suited to compression, by eliminating multiple indexes and views. With tools like this, the process of turning big data into information, and information into knowledge, is another step closer.


By 33rd Square

Monday, November 17, 2014

Algorithm Designed To Show the Benefit of Quantum Computers Finally Tested

 Quantum Computers
A 20-year-old algorithm designed to demonstrate the benefit of using quantum computers to solve certain problems has finally been run for the first time, and the results matched the predictions.




Quantum algorithms are expected to solve problems faster than their classical equivalents, but few have been tested experimentally. Now Mark Tame, from the University of KwaZulu-Natal in South Africa, and his team have used a prototype quantum computer to run a version of Simon's algorithm, and achieved the predicted results.

First conceived by computer scientist Daniel Simon in 1994, the algorithm provides instructions for a computer to determine whether a black box returns a distinct output for every possible input. Simon's was the first algorithm predicted to run exponentially faster on a quantum computer than on a classical one, with the advantage growing as the problem gets harder. Although Simon's algorithm doesn't have practical applications, it could provide a useful way to test the capabilities of future quantum computers.
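To make the black-box problem concrete, here is a small, hypothetical classical sketch (not the quantum algorithm or the optical experiment): the box hides a secret XOR mask s with f(x) = f(x XOR s), and a classical solver has to query it repeatedly until a collision reveals s, whereas Simon's quantum algorithm needs far fewer queries as the input grows. The function names and the 4-bit example are assumptions made for illustration.

```python
def make_black_box(n_bits, s):
    # Give each pair {x, x ^ s} its own label, so f(x) == f(x ^ s) and nothing else collides.
    labels, next_label = {}, 0
    for x in range(2 ** n_bits):
        key = min(x, x ^ s)
        if key not in labels:
            labels[key] = next_label
            next_label += 1
    return lambda x: labels[min(x, x ^ s)]

def classical_simon(f, n_bits):
    seen, queries = {}, 0
    for x in range(2 ** n_bits):          # classically this can take exponentially many queries
        queries += 1
        y = f(x)
        if y in seen:
            return seen[y] ^ x, queries   # a collision between x and x' reveals s = x ^ x'
        seen[y] = x
    return 0, queries                     # no collision: the box was one-to-one (s = 0)

secret = 0b1011
box = make_black_box(4, secret)
print(classical_simon(box, 4))            # recovers the hidden mask 11 after several queries
```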

"This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model."


Tame and collaborators ran a quantum version of this algorithm on an optical quantum computer, in which entangled photons served as qubits. Their setup, which utilizes a total of six qubits, solved Simon’s problem in three quarters of the steps that it would take a classical computer to solve the equivalent classical black box function. Assuming theoretical predictions about Simon’s algorithm’s performance are correct, this gain in efficiency will increase exponentially on computers with more qubits.

"This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model," claim Tame and his collaborators.

Their work is published in the journal Physical Review Letters.


SOURCE  Physics

By 33rd Square

Friday, November 14, 2014

Software Created That Self-Repairs to Thwart Cyber Attacks

 Cyber Security
Computer scientists have developed software that not only detects and eradicates never-before-seen viruses and other malware, but also automatically repairs damage caused by them. The software then prevents the invader from ever infecting the computer again.




Computer scientists have developed software that not only detects and eradicates never-before-seen viruses and other malware, but also automatically repairs damage caused by them. The software then prevents the invader from ever infecting the computer again.

A3 is a software suite that works with a virtual machine – a virtual computer that emulates the operations of a computer without dedicated hardware. The A3 software is designed to watch over the virtual machine's operating system and applications, says Eric Eide, a University of Utah research assistant professor of computer science who leads the university's A3 team with computer science associate professor John Regehr. A3 is designed to protect servers or similar business-grade computers that run on the Linux operating system. It also has been demonstrated to protect military applications.

The new software, called A3, or Advanced Adaptive Applications, was co-developed with Massachusetts-based defense contractor Raytheon BBN and was funded by Clean-Slate Design of Resilient, Adaptive, Secure Hosts, a program of the Defense Advanced Research Projects Agency (DARPA). The four-year project was completed in late September.

There are no plans to adapt A3 for home computers or laptops, but Eide says this could be possible in the future.

Eric Eide
The University of Utah's Eric Eide
Image Source - Dan Hixson/University of Utah College of Engineering
“A3 technologies could find their way into consumer products someday, which would help consumer devices protect themselves against fast-spreading malware or internal corruption of software components.  But we haven’t tried those experiments yet,” he says.

Utah computer scientists have created "stackable debuggers," multiple debugging applications that run on top of each other and look inside the virtual machine while it is running, constantly monitoring for any out-of-the-ordinary behavior in the computer.

"A3 technologies could find their way into consumer products someday, which would help consumer devices protect themselves against fast-spreading malware or internal corruption of software components."


Unlike a normal virus scanner on consumer PCs that compares a catalog of known viruses to something that has infected the computer, A3 can detect new, unknown viruses or malware automatically by sensing that something is occurring in the computer’s operation that is not correct. It then can stop the virus, approximate a repair for the damaged software code, and then learn to never let that bug enter the machine again.
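As a rough illustration of that distinction (a hypothetical toy, not how A3 is actually implemented), the sketch below contrasts signature matching, which only catches behavior already in a catalog, with a baseline-based anomaly check that flags anything it has not seen before. The signature names, behavior strings, and baseline set are invented for the example.

```python
# Catalog-based scanning: only behaviors already in the signature list are caught.
KNOWN_SIGNATURES = {"evil_payload_v1", "evil_payload_v2"}

def signature_scan(observed_behavior):
    return observed_behavior in KNOWN_SIGNATURES

# Anomaly-based monitoring: anything outside a learned baseline of normal
# behavior is flagged, even if it has never been seen before.
def anomaly_scan(observed_calls, baseline):
    return any(call not in baseline for call in observed_calls)

normal_baseline = {"open", "read", "write", "close"}
print(signature_scan("brand_new_malware"))                    # False: unknown code slips past the catalog
print(anomaly_scan(["open", "exec_shell"], normal_baseline))  # True: unusual behavior is caught
```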

While the military has an interest in A3 to enhance cyber security for its mission-critical systems, A3 also potentially could be used in the consumer space, such as in web services like Amazon. If a virus or attack stops the service, A3 could repair it in minutes without having to take the servers down.

To test A3's effectiveness, the team from Utah and Raytheon BBN used the infamous software bug called Shellshock for a demonstration to DARPA officials in Jacksonville, Florida, in September. A3 discovered the Shellshock attack on a Web server and repaired the damage in four minutes, Eide says. The team also tested A3 successfully on another half-dozen pieces of malware.

“It is a pretty big deal that a computer system could automatically, and in a short amount of time, find an acceptable fix to a widespread and important security vulnerability,” Eide says. “It’s pretty cool when you can pick the Bug of the Week and it works.”

Now that the A3 project is complete and has proven the concept, Eide says the Utah team would like to build on the research and figure out a way to use A3 in cloud computing, a way of harnessing far-flung computer networks to deliver storage, software applications and servers to a local user via the Internet.

The A3 software is open source, meaning it is free for anyone to use, but Eide believes many of the A3 technologies could be incorporated into commercial products.

Could A3 also be the birth of self-improving artificial intelligence?  We have to wonder.


SOURCE  University of Utah

By 33rd Square

Tuesday, September 23, 2014

What Will Computers Look Like in Ten Years?

 Computers
Frank Z. Wang, Professor in Future Computing and Head of the School of Computing at the University of Kent recently discussed how computers will evolve over the next ten years.




Computer science has impacted many parts of our lives. Computer scientists craft the technologies that enable the digital devices we use every day and computing will be at the heart of future revolutions in business, science, and society.

In the talk below, recorded at the Science and Information Conference this year, Frank Z. Wang, Professor in Future Computing and Head of the School of Computing at the University of Kent, discusses how computers will evolve over the next decade. His research targets next-generation computing paradigms and their applications.

Some of these advances include Cloud Computing, Grid Computing and the next version of the web, Internet 2.0. According to Wang, a developed Cloud/Grid Computing platform could universally accelerate Office/Database/Web/Media applications by a factor of up to ten.

Wang discusses how computing modeled after the brain is making inroads. He shows how memristor technology is proving to work after being theoretically postulated more than 40 years ago. Wang's work shows that in amoebas, memory may be captured in a type of biological memristor. "That is why we are in a better position to design the next generation of computers," states Wang.

Memristor

"Thanks to the invention of the memristor, the invention opens a new way to revive traditional neural network computers."


He and his team have applied this work to neural network computers.  In the past, modelling computer systems directly on neurons did not make much sense, because each neuron could be connected to over 20,000 synapses.  Now, "thanks to the invention of the memristor, the invention opens a new way to revive traditional neural network computers," Wang says.

Apart from computers themselves, Wang comments that concepts and technologies developed within computer science are starting to have wide-ranging applications outside the subject. For instance, computer scientists recently proposed a theory of evolution, based on computer science, that reduces the perceived need for competition in evolution.

His work won an ACM/IEEE Super Computing finalist award. Wang also discusses research on Green Computing, Brain Computing and Future Computing.

Wang is the Professor in Future Computing and Head of School of Computing, University of Kent. Wang's research interests include cloud/grid computing, green computing, brain computing and future computing. He has been invited to deliver keynote speeches and invited talks to report his research worldwide, for example at Princeton University, Carnegie Mellon University, CERN, Hong Kong University of Sci. & Tech., Tsinghua University (Taiwan), Jawaharlal Nehru University, Aristotle University, and University of Johannesburg.

In 2004, he was appointed as Chair & Professor, Director of Centre for Grid Computing at CCHPCF (Cambridge-Cranfield High Performance Computing Facility). CCHPCF is a collaborative research facility in the Universities of Cambridge and Cranfield (with an investment size of £40 million). Prof Wang and his team have won an ACM/IEEE Super Computing finalist award. Prof Wang was elected as the Chairman (UK & Republic of Ireland Chapter) of the IEEE Computer Society in 2005. He is Fellow of British Computer Society. He has served the Irish Government High End Computing Panel for Science Foundation Ireland (SFI) and the UK Government EPSRC e-Science Panel.




SOURCE  SAI Conference

By 33rd Square

Thursday, June 12, 2014


 Artificial Intelligence
Computer scientists from the University of Washington and the Allen Institute for Artificial Intelligence in Seattle have created the first fully automated computer program that teaches everything there is to know about any visual concept.




Computer scientists from the University of Washington and the Allen Institute for Artificial Intelligence in Seattle have created the first fully automated computer program that teaches everything there is to know about any visual concept. Called Learning Everything about Anything, or LEVAN, the program searches millions of books and images on the Web to learn all possible variations of a concept, then displays the results to users as a comprehensive, browsable list of images, helping them explore and understand topics quickly in great detail.

"The program learns to tightly couple rich sets of phrases with pixels in images. This means that it can recognize instances of specific concepts when it sees them."


“It is all about discovering associations between textual and visual data,” said Ali Farhadi, a UW assistant professor of computer science and engineering. “The program learns to tightly couple rich sets of phrases with pixels in images. This means that it can recognize instances of specific concepts when it sees them.”

The research team will present the project and a related paper this month at the Computer Vision and Pattern Recognition annual conference in Columbus, Ohio.


The program learns which terms are relevant by looking at the content of the images found on the Web and identifying characteristic patterns across them using object recognition algorithms. It’s different from online image libraries because it draws upon a rich set of phrases to understand and tag photos by their content and pixel arrangements, not simply by words displayed in captions.

Users can browse the existing library of roughly 175 concepts. Existing concepts range from “airline” to “window,” and include “beautiful,” “breakfast,” “shiny,” “cancer,” “innovation,” “skateboarding,” “robot,” and the researchers’ first-ever input, “horse.”

If the concept you’re looking for doesn’t exist, you can submit any search term and the program will automatically begin generating an exhaustive list of subcategory images that relate to that concept. For example, a search for “dog” brings up the obvious collection of subcategories: Photos of “Chihuahua dog,” “black dog,” “swimming dog,” “scruffy dog,” “greyhound dog.” But also “dog nose,” “dog bowl,” “sad dog,” “ugliest dog,” “hot dog” and even “down dog,” as in the yoga pose.

The technique works by searching the text from millions of books written in English and available on Google Books, scouring for every occurrence of the concept in the entire digital library. Then, an algorithm filters out words that aren’t visual. For example, with the concept “horse,” the algorithm would keep phrases such as “jumping horse,” “eating horse” and “barrel horse,” but would exclude non-visual phrases such as “my horse” and “last horse.”
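A highly simplified, hypothetical sketch of that phrase-mining step (not the LEVAN code): collect "modifier + concept" bigrams from a text corpus, then keep only modifiers that pass a visualness test. Here the corpus is a single sentence and the visualness filter is a hand-made word set standing in for the learned, image-based filter; the function name and threshold are assumptions.

```python
from collections import Counter
import re

def mine_phrases(corpus, concept, is_visual):
    # Find "<modifier> <concept>" bigrams, e.g. "jumping horse".
    pattern = re.compile(r"\b(\w+)\s+" + re.escape(concept) + r"\b", re.IGNORECASE)
    counts = Counter(m.group(1).lower() for m in pattern.finditer(corpus))
    # Keep frequent modifiers that describe something you could actually see
    # (a real system would use a much higher frequency threshold).
    return [f"{mod} {concept}" for mod, n in counts.most_common()
            if n >= 1 and is_visual(mod)]

corpus = "A jumping horse cleared the fence while my horse watched the eating horse."
visual_words = {"jumping", "eating", "barrel"}        # stand-in visualness filter
print(mine_phrases(corpus, "horse", visual_words.__contains__))
# ['jumping horse', 'eating horse']  -- "my horse" is filtered out as non-visual
```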

Once it has learned which phrases are relevant, the program does an image search on the Web, looking for uniformity in appearance among the photos retrieved. When the program is trained to find relevant images of, say, “jumping horse,” it then recognizes all images associated with this phrase.

“Major information resources such as dictionaries and encyclopedias are moving toward the direction of showing users visual information because it is easier to comprehend and much faster to browse through concepts. However, they have limited coverage as they are often manually curated. The new program needs no human supervision, and thus can automatically learn the visual knowledge for any concept,” said Santosh Divvala, a research scientist at the Allen Institute for Artificial Intelligence and an affiliate scientist at UW in computer science and engineering.

The research team also includes Carlos Guestrin, a UW professor of computer science and engineering. The researchers launched the program in March with only a handful of concepts and have watched it grow since then to tag more than 13 million images with 65,000 different phrases.

LEVAN

Right now, the program is limited in how fast it can learn about a concept because of the computational power it takes to process each query, up to 12 hours for some broad concepts. The researchers are working on increasing the processing speed and capabilities.

The team wants the open-source program to be both an educational tool and an information bank for researchers in the computer vision community. The team also hopes to offer a smartphone app that can run the program to automatically parse and categorize photos.


SOURCE  University of Washington

By 33rd Square

Thursday, May 15, 2014


 Computers
At this year's SXSW Conference, Stephen Wolfram introduced Wolfram Language.  Now, video of his presentation shows some of  the profound implications of this new technology.




Imagine a future where there's no distinction between code and data. Where computers are operated by programming languages that work like human language, where knowledge and data are built in, where everything can be computed symbolically like the X and Y of school algebra problems. Where everything obvious is automated; the not-so-obvious revealed and made ready to explore. A future where billions of interconnected devices and ubiquitous networks can be readily harnessed by injecting computation.

"Of the various things I've been trying to explain, this is one of the more difficult ones."


That's the future Stephen Wolfram has pursued for over 25 years: Mathematica, the computable knowledge of Wolfram|Alpha, the dynamic interactivity of Computable Document Format, and soon, the universally accessible and computable model of the world made possible by the Wolfram Language and Wolfram Engine.

Stephen Wolfram

"Of the various things I've been trying to explain, this is one of the more difficult ones," Wolfram told Wired recently. What Wolfram Language essentially does, is work like a plug-in-play system for programmers, with many subsystems already in place.  Wolfram calls this knowledge-based programming.

Wolfram Language has a vast depth of built-in algorithms and knowledge, all automatically accessible through its elegant unified symbolic language. Scalable for programs from tiny to huge, with immediate deployment locally and in the cloud, the Wolfram Language builds on clear principles to create what Wolfram claims will be the world's most productive programming language.

In the video above, recorded at SXSW this year as he introduced the Wolfram Language, Wolfram discusses the profound implications of this new future for product development, industry, and research, and demonstrates new technology that will soon be part of our present.


SOURCE  SXSW

By 33rd Square

Monday, April 28, 2014

Neurogrid

 Neuromorphic Computing
Bioengineer Kwabena Boahen's Neurogrid can now simulate one million neurons and billions of synaptic connections. Boahen is working with a team developing prosthetic limbs that would be controlled by a neuromorphic chip.




Scientists at Stanford University have developed a neuromorphic circuit board modeled on the human brain, possibly opening up new frontiers in robotics and computing.

Not only is the PC slower than the human brain, it takes 40,000 times more power to run, writes Kwabena Boahen, associate professor of bioengineering at Stanford, in an article for the Proceedings of the IEEE.

"The human brain, with 80,000 times more neurons than Neurogrid, consumes only three times as much power. Achieving this level of energy efficiency while offering greater configurability and scale is the ultimate challenge neuromorphic engineers face."


"From a pure energy perspective, the brain is hard to match," says Boahen, whose article surveys how "neuromorphic" researchers in the United States and Europe are using silicon and software to build electronic systems that mimic neurons and synapses.

Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed "Neurocore" chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections.

The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. The result was Neurogrid – a device about the size of an iPad that can simulate orders of magnitude more neurons and synapses than other brain mimics on the power it takes to run a tablet computer.

The National Institutes of Health funded development of this million-neuron prototype with a five-year Pioneer Award. Now Boahen stands ready for the next steps – lowering costs and creating compiler software that would enable engineers and computer scientists with no knowledge of neuroscience to solve problems – such as controlling a humanoid robot – using Neurogrid.

Its speed and low power characteristics make Neurogrid ideal for more than just modeling the human brain. Boahen is working with other Stanford scientists to develop prosthetic limbs for paralyzed people that would be controlled by a Neurocore-like chip.

"Right now, you have to know how the brain works to program one of these," said Boahen, gesturing at the $40,000 prototype board on the desk of his Stanford office. "We want to create a neurocompiler so that you would not need to know anything about synapses and neurons to able to use one of these."

"The human brain, with 80,000 times more neurons than Neurogrid, consumes only three times as much power," Boahen writes. "Achieving this level of energy efficiency while offering greater configurability and scale is the ultimate challenge neuromorphic engineers face."

In his article, Boahen notes the larger context of neuromorphic research, including the European Union's Human Brain Project, which aims to simulate a human brain on a supercomputer. By contrast, the U.S. BRAIN Initiative – short for Brain Research through Advancing Innovative Neurotechnologies – has taken a tool-building approach by challenging scientists, including many at Stanford, to develop new kinds of tools that can read out the activity of thousands or even millions of neurons in the brain as well as write in complex patterns of activity.

Zooming from the big picture, Boahen's article focuses on two projects comparable to Neurogrid that attempt to model brain functions in silicon and/or software.

One of these efforts is IBM's SyNAPSE Project – short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. As the name implies, SyNAPSE involves a bid to redesign chips, code-named Golden Gate, to emulate the ability of neurons to make a great many synaptic connections – a feature that helps the brain solve problems on the fly. At present a Golden Gate chip consists of 256 digital neurons each equipped with 1,024 digital synaptic circuits, with IBM on track to greatly increase the numbers of neurons in the system.

Heidelberg University's BrainScales project has the ambitious goal of developing analog chips to mimic the behaviors of neurons and synapses. Their HICANN chip – short for High Input Count Analog Neural Network – would be the core of a system designed to accelerate brain simulations, to enable researchers to model drug interactions that might take months to play out in a compressed time frame. At present, the HICANN system can emulate 512 neurons each equipped with 224 synaptic circuits, with a roadmap to greatly expand that hardware base.



SOURCE  Stanford University

By 33rd Square