Baidu Relying on Deep Learning to Make it Easier to Talk to Your Devices

Friday, September 5, 2014

Baidu says it is building a deep learning system with 100 billion neural connections, that it will be complete within six months, and that it will power a fast transition away from typed text for search requests. With smartphones and the company's new Baidu Eye technology, the company expects voice and image search to be used more than text.

Chinese search engine company Baidu is working on a massive computing cluster for deep learning that will be 100 times larger than the cat-recognizing system Google built in 2012, and that should be complete in six months, Baidu Chief Scientist and machine learning expert Andrew Ng told Bloomberg News recently.

Ng, who was chiefly responsible for that cat-recognizing system, known as the Google Brain, joined Baidu in May this year.

"About 10 percent of Baidu search queries are done by voice. Within five years, voice and image searches will surpass text queries"


Ng and his team will use graphics processing units, or GPUs, to build their neural network.
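
GPUs suit this work because most of the computation in training a neural network is large matrix arithmetic, which they handle far faster than ordinary processors. As a rough illustration only (using the open-source PyTorch library, which is not Baidu's tooling, with toy data), the sketch below trains a tiny network and moves both the model and its data onto a GPU when one is available:

```python
# Minimal sketch of GPU-backed neural network training.
# Uses the open-source PyTorch library purely as an illustration; Baidu's own
# cluster software is not public, and the model and data here are toy stand-ins.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny two-layer network standing in for a much larger deep model.
model = nn.Sequential(
    nn.Linear(256, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Random stand-in data; a real system would stream actual training examples.
inputs = torch.randn(64, 256, device=device)
targets = torch.randint(0, 10, (64,), device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()   # gradients are computed on the GPU
    optimizer.step()
```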

"I hope that this will allow us to make incremental improvements to some of the current deep learning applications within Baidu, to support those teams–search, advertising, language translations, optical character recognition, speech recognition," Ng told Forbes recently.

Where the Google Brain was part of a research project, Baidu's 100-billion-neural-connection system will handle live search traffic for Baidu's hundreds of millions of users. Baidu is taking on heavy AI work with Ng and his team, as well as with the company's Beijing-based artificial intelligence lab.

Baidu CEO Robin Li told Bloomberg that 10 percent of the company’s search queries are currently done by voice, and that voice and image search will surpass text queries within five years.

Baidu Eye


At the Baidu World conference this week, the company also announced its lens-less answer to Google Glass, called Baidu Eye, which is expected to rely heavily on deep learning, audio, and voice control. A camera on one side of the headset analyzes what it sees and sends audio information to an earpiece on the other side, as well as to a smartphone. Users can control Eye via voice commands or gestures.
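
Baidu has not published how Eye's software is organized, but that description implies a simple capture, analyze, and speak loop. The sketch below is a hypothetical outline of that flow; every function name in it (capture_frame, describe_scene, speak, send_to_phone, handle_command) is invented for illustration and is not a real Baidu API.

```python
# Hypothetical outline of the capture-analyze-speak loop described for Baidu Eye.
# All function names are invented stand-ins; the device's real software is not public.

def capture_frame() -> bytes:
    """Stand-in for grabbing one image from the headset camera."""
    return b"raw-image-bytes"

def describe_scene(frame: bytes) -> str:
    """Stand-in for a deep learning model recognizing what the camera sees."""
    return "a coffee shop storefront"

def speak(text: str) -> None:
    """Stand-in for text-to-speech routed to the earpiece."""
    print(f"[earpiece] {text}")

def send_to_phone(text: str) -> None:
    """Stand-in for forwarding the result to the paired smartphone."""
    print(f"[phone] {text}")

def handle_command(command: str) -> None:
    # Voice commands (or gestures) trigger the analysis, per Baidu's description.
    if command == "describe":
        description = describe_scene(capture_frame())
        speak(description)
        send_to_phone(description)

if __name__ == "__main__":
    handle_command("describe")
```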

Deep learning, in which computers are configured to simulate aspects of human brain function, has already reduced speech recognition error rates by 25 percent, Ng said.
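
Read as a relative improvement, and using purely illustrative numbers rather than Baidu's, that would mean, for example, a recognizer whose word error rate drops from 20 percent to 15 percent has cut its errors by 25 percent:

```python
# Illustrative arithmetic only; the 20% and 15% figures are made up, not Baidu's.
old_word_error_rate = 0.20
new_word_error_rate = 0.15

relative_reduction = (old_word_error_rate - new_word_error_rate) / old_word_error_rate
print(f"Relative error reduction: {relative_reduction:.0%}")  # -> 25%
```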


SOURCE: Gigaom
By 33rd Square
