33rd Square Business Tools: Google
Showing posts with label Google.

Wednesday, July 26, 2017

Using Google to Increase Sales


Ranking organically on Google is a powerful resource for your business. But outside of natural strategies like this, Google offers a wide variety of advertising resources that can truly help your brand engage with potential customers.


With extensive offerings, both paid and free, it's tough to determine which tools are most worth your time and will do the most for your bottom line. Here are just a few of the many ways Google is helping businesses like yours grow.

Google Analytics

The first step in growing your business is understanding how your current and potential clients are coming across your brand. If you don't have Google Analytics installed, it's the perfect first step. Whether through a social media campaign, a blogger's website, or even organic search, the data you pull from Google Analytics lets you know exactly how your website visitor got to you in the first place. This basic understanding gives you a kickstart to your future marketing campaigns, while capitalizing on what may already be in place.
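For a sense of how a visit actually reaches Google Analytics, the sketch below records a pageview through the classic Universal Analytics Measurement Protocol from Python. The tracking ID and page values are placeholders; most sites simply paste Google's JavaScript tag into their pages instead of calling the endpoint directly.

```python
import uuid
import requests

TRACKING_ID = "UA-XXXXXXX-1"  # placeholder: your property's tracking ID

def record_pageview(client_id, page, title):
    """Send a single pageview hit to the classic GA Measurement Protocol."""
    payload = {
        "v": "1",            # protocol version
        "tid": TRACKING_ID,  # property being tracked
        "cid": client_id,    # anonymous visitor identifier
        "t": "pageview",     # hit type
        "dp": page,          # document path
        "dt": title,         # document title
    }
    return requests.post("https://www.google-analytics.com/collect",
                         data=payload).status_code

# Example: log one visit with a random anonymous client id
print(record_pageview(str(uuid.uuid4()), "/landing", "Landing page"))
```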


Product Listing Ads

Product listing ads are a pay-per-click advertising service that puts your available products in plain view of customers, displaying an image, title, and price directly in search results. Further segmenting your product feeds will help you understand which products convert. These campaigns are consistently valuable, with Search Engine Land reporting that Google has seen product listing ad spend grow by 166 percent year over year.
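Product listing ads are driven by the product feed you submit to Google Merchant Center, and segmentation usually comes down to extra columns in that feed. Below is a rough sketch of writing a tab-separated feed in Python; the field names follow commonly used Merchant Center attributes (custom_label_0 is one way to segment products), but verify them against the current feed specification before uploading.

```python
import csv

# Hypothetical catalogue rows; field names mirror common Merchant Center attributes.
products = [
    {"id": "SKU-001", "title": "Canvas backpack", "price": "49.99 USD",
     "availability": "in stock", "link": "https://example.com/backpack",
     "custom_label_0": "bestseller"},
    {"id": "SKU-002", "title": "Travel mug", "price": "19.99 USD",
     "availability": "in stock", "link": "https://example.com/mug",
     "custom_label_0": "new-arrival"},
]

with open("product_feed.tsv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(products[0].keys()), delimiter="\t")
    writer.writeheader()          # header row of attribute names
    writer.writerows(products)    # one product per line
```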

Re-Targeting

Display advertising through Google puts both text and graphic ads in front of potential customers on external websites. This can be valuable for businesses with large budgets. However, if you're seeking better engagement and conversion without a big budget, re-targeting through Google AdWords is far more valuable. Re-targeting places a cookie on your website visitors' browsers; once this happens, those visitors will see your advertising on other websites. You can customize your bids at the website level and drive traffic through display ads shown to users who are already engaging with your brand. Users who have previously engaged with your brand or website are as much as four times as likely to convert with the help of re-targeting.
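To make the cookie mechanism concrete, here is a hypothetical Flask sketch of how a retargeting pixel works in general: a tiny endpoint embedded on your pages sets a persistent visitor cookie and logs which page was viewed, so an ad platform can later recognise that visitor and bid on display placements. This is an illustration of the idea, not Google's actual remarketing tag.

```python
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/pixel")
def pixel():
    # Recognise returning visitors, or mint a new anonymous id for first-timers.
    visitor_id = request.cookies.get("visitor_id") or str(uuid.uuid4())
    page = request.args.get("page", "unknown")
    print(f"visitor {visitor_id} viewed {page}")  # stand-in for a real audience list

    # A real pixel returns a 1x1 image; an empty 204 response keeps the sketch short.
    resp = make_response("", 204)
    resp.set_cookie("visitor_id", visitor_id, max_age=90 * 24 * 3600)
    return resp

if __name__ == "__main__":
    app.run()
```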

Google Optimize

Google Optimize is a free tool for webmasters that helps you easily implement A/B testing. A/B testing, or multivariate testing, lets you serve different designs and layouts to different visitors and determine which one works better for conversions. This is an extremely valuable tool for growing your managed email marketing campaign, getting visitors to add items to their cart, and even encouraging longer session times on your website.
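Under the hood, A/B testing amounts to two steps: consistently assigning each visitor to a variant, and then checking whether the observed difference in conversion rates is statistically meaningful. The sketch below (with made-up numbers) shows one common way to do both; Google Optimize handles this bookkeeping for you.

```python
import hashlib
import math

def assign_variant(visitor_id, variants=("A", "B")):
    """Hash the visitor id so the same visitor always sees the same variant."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the conversion difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

print(assign_variant("visitor-123"))
# Hypothetical results: 120/2400 conversions on A vs. 156/2380 on B
print(two_proportion_z_test(120, 2400, 156, 2380))
```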

Google My Business

If you're a local business, or operating several brick-and-mortar stores, your marketing efforts are incomplete without Google's local listings. But these listings do more than just help customers find you on a map. With Google My Business, you can upload images of your storefront, add links to your website, and provide information like a contact number and hours of operation to further encourage walk-in business. Furthermore, Google allows for 360-degree images of your storefront for even further opportunity for the potential customer to research your business prior to visiting. The more data you provide, the higher your business can rank versus other local business competitors, so provide as much information and imagery as possible.

While Google may seem like only a search and email tool for the basic user, business owners and marketing professionals should take the time to explore all that Google has to offer. You'll learn not only how to better get your brand in front of your future customers, but how to capitalize on the resources that ultimately drive revenue for your business. Google has long offered both free and paid solutions for businesses to market themselves effectively. The most successful offerings are paid solutions, which is why over 32% of marketers are allotting 50% or more of their marketing budget to digital solutions today.


By Kevin Faber

Kevin Faber is the CEO of Silver Summit Capital. He graduated from UC Davis with a B.A. in Business/Managerial Economics. In his free time, Kevin is usually watching basketball or kicking back and reading a good book.



Saturday, November 5, 2016

DeepMind's AI Can Now Recognize Something After Seeing It Only Once


Artificial Intelligence

Researchers at Google DeepMind may have found a way to make their artificial intelligence even smarter. A new deep-learning algorithm called 'one-shot learning' lets their AI system recognize objects from a single example.


Recent developments at Google's DeepMind have led to a new deep-learning algorithm that allows their artificial intelligence system to recognize objects from a single example.

According to recently published research, DeepMind is now capable of recognizing objects in images, handwriting, and even language through this "one-shot learning" algorithm. "Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches," claim the researchers.

Until now, machines have required thousands of hand-labeled examples from databases like ImageNet to become familiar with an object or a word.

This work is time-consuming and expensive and makes such AI systems difficult to scale. For instance, driverless car AIs need to study thousands of cars in order to work, and it seems impractical for a robot to navigate an unfamiliar home for countless hours before getting familiar with it.

Oriol Vinyals, a research scientist at Google DeepMind, the U.K.-based subsidiary of Alphabet that’s focused on artificial intelligence, added a new memory component to a deep-learning system—the large neural network that’s trained to recognize things by adjusting the sensitivity of many layers of interconnected components roughly analogous to the neurons in a brain. Vinyals spoke recently at the MIT Technology Review EM Tech conference (see video below).



"We feel this is an area with exciting challenges which we hope to keep improving in future work."
The new software still needs to analyze several hundred categories of images, but after that it can learn to recognize new objects from just one picture. Effectively, it learns to recognize the characteristics in images that make them unique. The algorithm was able to recognize images of dogs with an accuracy close to that of a conventional data-hungry system after seeing just one example. 

Another way the system is almost human-like in its learning is that the research team found one-shot learning is much easier if you train the network specifically to do one-shot learning. They also found that ungrouped, or non-parametric, structures in a neural network make it easier for networks to remember and adapt to new training sets within the same tasks.
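At a very high level, the published approach (matching networks) classifies a new example by comparing its learned embedding against the single labelled example it has for each class, using the similarities as attention weights. The sketch below is a drastically simplified illustration of that idea; the toy vectors stand in for embeddings that a trained encoder network would produce.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

def one_shot_classify(query, support_set):
    """support_set: (label, embedding) pairs, one example per class.
    Returns softmax-normalised similarity scores over the labels."""
    sims = np.array([cosine(query, emb) for _, emb in support_set])
    weights = np.exp(sims) / np.exp(sims).sum()   # attention over the support set
    return dict(zip([label for label, _ in support_set], weights))

# Toy embeddings standing in for the output of a trained encoder
support = [("dog", np.array([1.0, 0.2, 0.1])),
           ("cat", np.array([0.1, 1.0, 0.3])),
           ("car", np.array([0.0, 0.1, 1.0]))]
query = np.array([0.9, 0.3, 0.2])

print(one_shot_classify(query, support))  # highest weight on "dog"
```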

The approach could be especially useful for quickly learning the meaning of a new word. This could be important for Google, Vinyals says, since it could allow a system to quickly learn the meaning of a new search term.

"We feel this is an area with exciting challenges which we hope to keep improving in future work," concluded the researchers in their paper.



SOURCE  MIT Technology Review


By 33rd Square



Monday, September 12, 2016

DeepMind Uses Deep Neural Networks To Improve Text-to-Speech... and More


Artificial Intelligence

Today's artificial speech tends to sound robotic, but with a new system called WaveNet, Google DeepMind has produced much more natural human speech. While not perfect, it closes the gap with human performance by over 50% compared to current technologies. And since at its core it is a general audio model, it can also create music.


"WaveNets are able to generate speech which mimics any human voice and which sounds more natural than the best existing Text-to-Speech systems, reducing the gap with human performance by over 50%."
Google DeepMind has developed a new artificial intelligence-based voice synthesis system that sounds much more human than today's standard text-to-speech (TTS) engines.

DeepMind's system, called WaveNet, uses a deep generative model of raw audio waveforms. "We show that WaveNets are able to generate speech which mimics any human voice and which sounds more natural than the best existing Text-to-Speech systems, reducing the gap with human performance by over 50%," they claim.

Non-speech sounds, such as breathing and mouth movements, are also sometimes generated by WaveNet, and they add a very natural quality to the output. Consider how effective such sounds are in our everyday interactions, or think about how Samantha conveyed so much emotion by incorporating these intonations in the movie Her.

The ability of computers to understand natural speech has been revolutionised in the last few years by the application of deep neural networks. But generating speech with computers is still largely based on so-called concatenative TTS, where a very large database of short speech fragments is recorded from a single speaker and then recombined to form complete utterances. This makes it difficult to modify the voice (for example switching to a different speaker, or altering the emphasis or emotion of their speech) without recording a whole new database.



The following figure shows the quality of WaveNets compared with Google’s current best TTS systems, which use either parametric or concatenative algorithms, and with human speech. The data was obtained in blind tests with human subjects (from over 500 ratings on 100 test sentences). The results show that WaveNets reduce the gap between the state of the art and human-level performance by over 50% for both US English and Mandarin Chinese.


For both Chinese and English, Google’s current TTS systems are considered among the best worldwide, so improving on both with a single model is a major achievement.

It turns out that WaveNet can also be used for more than just voice generation. "We also demonstrate that the same network can be used to synthesize other audio signals such as music, and present some striking samples of automatically generated piano pieces."

Because WaveNets can be used to model any audio signal, the researchers thought it would also be fun to try to generate music. Unlike the TTS experiments, they didn’t condition the networks on an input sequence telling them what to play (such as a musical score); instead, they simply let the network generate whatever it wanted to.

So, can we expect WaveNet in Google apps anytime soon? Probably not. WaveNet has to create the entire waveform sample by sample, running its neural network to generate 16,000 samples for every second of audio it produces, and even that is not high-definition quality.
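The bottleneck is the autoregressive loop: each audio sample depends on everything generated before it, so the network has to be evaluated 16,000 times per second of output and the work cannot be parallelised. The toy sketch below substitutes a trivial function for the real neural network but shows the shape of that loop.

```python
import numpy as np

SAMPLE_RATE = 16000  # samples WaveNet generates per second of audio

def toy_next_sample(history):
    """Stand-in for the real network: predict the next sample from all previous ones."""
    return 0.99 * history[-1] + 0.01 * np.random.randn()

def generate(seconds=1.0):
    samples = [0.0]
    # Each iteration needs the previous output, so the loop is inherently sequential.
    for _ in range(int(seconds * SAMPLE_RATE) - 1):
        samples.append(toy_next_sample(samples))
    return np.array(samples)

audio = generate(1.0)
print(audio.shape)  # (16000,) -- one second of "audio"
```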

According to a DeepMind source who spoke to the Financial Times, that means we will have to wait a bit to see WaveNet used extensively in any of Google’s products. But as we know, exponential technology has a habit of catching up to, and beating, our expectations in short order with technologies like this.





SOURCE  DeepMind


By 33rd Square



Friday, July 15, 2016

Demis Hassabis Looks Towards General Artificial Intelligence


Artificial Intelligence

Recently Demis Hassabis drew on his work as an AI researcher, neuroscientist and video games designer to discuss what is happening at the cutting edge of AI research, including the recent historic AlphaGo match, its potential impact on fields such as science and healthcare, and how developing AI may help us better understand the human mind.


Demis Hassabis is Co-Founder and CEO of DeepMind, the world’s leading general artificial intelligence (AI) company, which was acquired by Google in 2014 in its largest-ever European acquisition.

In this talk, Hassabis draws on his eclectic experiences as an AI researcher, neuroscientist and video games designer to discuss what is happening at the cutting edge of AI research, including the recent historic AlphaGo match, and its future potential impact on fields such as science and healthcare, and how developing AI may help us better understand the human mind.

Along with a detailed breakdown of how DeepMind created AlphaGo, and what it took to beat Lee Sedol, Hassabis shows off some other research being worked on at the company.

This includes work on having artificial intelligence agents operate in 3D environments, for which the team re-purposed the Quake engine.

"We're starting to integrate some of these different things together: deep reinforcement learning with memory and 3D vision perception," Hassabis describes.

"As we take this forward, we're kind of thinking one of our goals over this next year is to [kind of] create a rat-level AI; an AI agent that is capable of doing all of the things a rat can do."

Demis Hassabis Looks Towards General Artificial Intelligence
The goal is noteworthy, especially in light of the tremendous advance in artificial general intelligence (AGI) that AlphaGo represents.
 
"One of our goals over this next year is to [kind of] create a rat-level AI; an AI agent that is capable of doing all of the things a rat can do."
The talk was recorded at the Center for Brains, Minds and Machines (CBMM), a National Science Foundation funded Science and Technology Center focused on the interdisciplinary study of intelligence.

"We aim to create a new field — the Science and Engineering of Intelligence — by bringing together computer scientists, cognitive scientists, and neuroscientists to work in close collaboration," states the organization's website. " This new field is dedicated to developing a computationally based understanding of human intelligence and establishing an engineering practice based on that understanding."




SOURCE  Center for Minds, Brains and Machines


By 33rd Square


Thursday, June 30, 2016

Blaise Agüera y Arcas Looks Inside the Machine Mind


Artificial Intelligence

Blaise Agüera y Arcas, Principal Scientist at Google, recently discussed what has been achieved in machine intelligence over the past decade, with examples of current techniques and applications. He also explores what these developments might mean for our future.


In the talk below, Blaise Agüera y Arcas, Principal Scientist at Google, takes a close look at what has been achieved in machine intelligence (especially deep learning) over the past decade, with examples of current techniques and applications from the Machine Intelligence on Devices group at Google and from the community.

Deep Dream art

The talk includes an exploration of Arcas's team's work on Google's Deep Dream algorithm, the technique that has led to some very interesting digital artwork. The work may also help us understand how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training.

This includes not only classification and semantic understanding of natural stimuli, but also language, gameplay, and even art. From here Arcas zooms out and considers some broader questions about human progress, labour and identity in an era of "technological reproducibility".

As Arcas discusses, it also makes us wonder whether neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general. The implications are numerous, from the commoditization of art, to the economics of buying and owning artworks.

Arcas quotes Stelarc, the Australian performance artist who stated,

The body has always been a prosthetic body. Ever since we evolved as hominids and  developed bipedal locomotion, two limbs became manipulators. We have become creatures that construct tools, artefacts and machines. We’ve always been augmented by our instruments, our technologies. Technology is what constructs our humanity; the trajectory of technology is what has propelled human developments. I’ve never seen the body as purely biological, so to consider technology as a kind of alien other that happens upon us at the end of the millennium is rather simplistic.

Arcas goes on to offer what can only be described as a transhumanist exploration of the epochal theory of Rich Sutton, mentioning human augmentation and how we are becoming cyborgs. He addresses the paranoia concerning how our technology is affecting our humanity. "I think that paranoia goes hand in hand with domination," states Arcas.

"The beautiful thing about being intelligent is that we can design."
Sutton, the so-called father of reinforcement learning, talked about the universe in terms of three epochs: the age of physics, the age of replicators and the age of design. "The beautiful thing about being intelligent is that we are able to design—that we can become the species that designs, that intends, that is."

When we get to figure out what we want, our base needs of survival are supplanted.


This attitude informs our relationship with artificial intelligence, and our fear of the killer robots that are coming to destroy us, Arcas suggests. Quoting from Nick Bostrom's Superintelligence about how human lives could be created en masse in emulation, Arcas suggests the idea is disturbing. He points to another view of the future where our technology presents us with choice and freedom, rather than just a bleak view of the universe being eaten up by computronium as a derivation of humanity's domination of nature.

"I think that the era for this kind of thinking is over," Arcas says. "And  I think that we need to be thinking very differently about our relationship to technology and to ourselves...I see that change happening already."

Arcas points out that the populations of developed nations are declining. The falling birth rate, he argues, is an indicator that we can actually choose what we do rather than being driven by fear, paranoia and survival instinct.

Arcas is Principal Scientist at Google where he leads a team focusing on Machine Intelligence for mobile devices - including both basic research and new products. His group works extensively with deep neural nets for machine perception, distributed learning, and agents, as well as collaborating with academic institutions on connectomics research.

Until 2014 he was a Distinguished Engineer at Microsoft, where he worked in a variety of roles, from inventor to strategist, and led teams with strengths in interaction design, prototyping, computer vision and machine vision, augmented reality, wearable computing and graphics. Blaise has given TED talks on Seadragon and Photosynth (2007, 2012) and Bing Maps (2010). In 2008, he was awarded MIT’s prestigious TR35 (“35 under 35”).



SOURCE  Oxford Martin School


By 33rd Square


Sunday, June 26, 2016

Racing Towards Tomorrow: Who Will Build the Ideal Self-Driving Car?

Self Driving Cars

While we are still a decade or so away from self-driving cars being widely used on our roads, a number of companies are racing to push the technology forward. Google's Koala already has thousands of hours of road testing in, Tesla's Autopilot is in the hands of drivers (or not, depending on whether they are using automated driving), and many others are working on the problem.


The future keeps calling to us, and it seems to be coming for our cars next. Many major companies are going down the self-driving road, from traditional car manufacturers like Audi and Nissan to more tech-oriented corporations like Google and Tesla. Numbers fluctuate depending on the source, but estimates point to somewhere between 5 and 10 years as the sweet spot where drivers may opt out of traditional cars and turn to autonomous vehicles instead. Some have hit the road already.

GM

The more conventional applications of this technology are pretty well known to people already. One of the more common adopters is GM, which has used its autonomous systems in its Cadillacs. These are not as complex as most common self-driving cars, but they do provide assistance for drivers in what's a pretty large vehicle, which is impressive on its own.


Google

Google has done reasonably well when it comes to attracting media attention over this new type of vehicle. Koala has been let loose by Google on to California roads to test its capabilities. With the use of lasers, cameras and radar, this entirely autonomous vehicle navigates roads while being able to differentiate buildings, people, vehicles, motorists and all manner of obstructions in its path.

Google Koala


Since May of 2015, several Koalas have been put out into the wild, testing their capabilities against traffic of all sizes, and have traveled as far as Austin, Texas, accumulating more than a million miles driven without a human driver controlling the wheel. While there have been over a dozen minor accidents, Koala was never the guilty party, which has given Google the confidence to aim to make their system available in the next five years.


Delphi and Audi

While all those hours are impressive, the Delphi car, built in partnership between Audi and Delphi Automotive, traveled thousands of miles to cross the continental United States, from San Francisco to New York City. Back in April of 2015, Delphi made a classic American road trip in a technologically advanced way. Across bridges, through tunnels and past landmarks, the car rode at speeds of up to 70 miles per hour, driving itself 99% of the time. Audi was so impressed with the results that data from Delphi was used to implement autonomous features in some of the cars in their 2016 line.


Related articles

Nissan

Nissan hopes to accomplish a similar feat to Audi's. They've planned a coast-to-coast drive for an autonomous car to show off Japan's highways and landmarks, and to set the tone for 2020, which is when they hope to have self-driving cars on the road. To make this dream a reality, they've partnered with the NASA Ames Research Center in a research and development partnership planned to last five years. However, given the demands of safety and operating regulations, they've made room for possible delays in their eventual public reveal.


Tesla

Tesla's Model S cars have brought drivers closer to that dream of automated vehicles. The system debuted in 2015 and garnered quite a bit of attention for what it could do without interference. Now that they're on the road, drivers have assistance in avoiding collisions with other cars, changing lanes and navigating troubling traffic. The cars have been so effective that Tesla continues to collect their data to better develop their next line of autonomous vehicles. And by the time they're done, they may have some buyers ready to purchase a fleet of their cars.

Tesla's Model S self-driving

Uber

Travis Kalanick, Uber's CEO, has built a lab for robotics testing in Pittsburgh and has been buying autonomous cars whenever he can. But his dream vehicle seems to be the Tesla of the future, and he has vowed to buy many by 2020 if they continue to impress him and help make his dream of constant smooth traffic a reality.

The Future

If one were to bet on the future of the self-driving car, companies like Tesla, Google and Audi seem to offer the most confidence, having the tested hours on the open road and consistency in their data sets. Google's Koala has the most road testing, and Tesla's cars are in the hands of more independent drivers. Audi's a little slow to catch up to this competition, but they've got plans in the works to hit the road in a big commercial endeavor soon.

Regardless of who gets there first, companies like Uber and regular consumers will be a real deciding factor in determining how far this technology goes and who adopts it first. The real question is whether local governments can issue proper licensing in time and whether insurance providers can figure out all the ins and outs of this game-changer.


By Lindsey Patterson


Author Bio - Lindsey is a freelance writer specializing in business and consumer technology.

Wednesday, June 22, 2016

Google to Build 100 Self-Driving Minivans with Fiat Chrysler


Self Driving Cars

Google and Fiat Chrysler have announced that they have formed a partnership that will bring together Google's self-driving program and Fiat Chrysler's car manufacturing capabilities. The move could accelerate the commercialization of Google's breakthrough technology.


As the world's leading tech companies and automakers continue to develop autonomous driving technology, collaborations between the auto and the tech industry are only going to intensify in the future. Google, the most popular Internet search engine in the world, was the first tech company to test driverless car prototypes, and is now considered the leading self-driving technology developer, with a few key advantages over some global automakers that are also involved in the autonomous car race.

But the Silicon Valley tech giant has always insisted that it does not plan to manufacture cars itself, and intends instead to collaborate with automakers to integrate self-driving technology into existing vehicles. Now, Google has taken a concrete step towards realizing that kind of collaboration, agreeing to a deal with Fiat Chrysler Automobiles (FCA) involving development of self-driving cars.

First Partnership of Its Kind

In a joint press release, Google and FCA have announced that they have formed a partnership that will aim to bring together Google's self-driving program and Fiat Chrysler's car manufacturing capabilities.

The companies stated that the collaboration will be centered on integrating self-driving systems developed by the tech company into vehicles built by FCA. According to the deal, Google will provide Fiat Chrysler with both self-driving software and hardware, including sensors, computers, and other equipment.

The systems will be installed onto 100 2017 Chrysler Pacifica Hybrid minivans, which will be specifically designed to accommodate Google's technology. Then, Google will conduct tests using the self-driving prototypes, at its private testing ground in California at first, before putting them on public roads.

Google self driving car


"FCA has a nimble and experienced engineering team and the Chrysler Pacifica Hybrid minivan is well-suited for Google's self-driving technology," said John Krafcik, Chief Executive Officer, Google Self-Driving Car Project. "The opportunity to work closely with FCA engineers will accelerate our efforts to develop a fully self-driving car that will make our roads safer and bring everyday destinations within reach for those who cannot drive."

Fiat Chrysler, for its part, on top of noting the obvious safety benefits that autonomous driving technology can bring, sees this deal as a great opportunity to enter the driverless vehicle race and ensure a strong position in the future self-driving car market.

"Working with Google provides an opportunity for FCA to partner with one of the world's leading technology companies to accelerate the pace of innovation in the automotive industry," said Sergio Marchionne, Chief Executive Officer, FCA. "The experience both companies gain will be fundamental to delivering automotive technology solutions that ultimately have far-reaching consumer benefits."

Currently, Google is testing self-driving prototypes on public roads and on private tracks in California, Texas, and Arizona, with its existing fleet having logged over 1.5 million autonomous miles over the course of the past several years.

This deal will likely prompt other automakers to ramp up their own driverless vehicle research and development efforts, since it clearly shows that Google is one step closer towards the creation of mass-produced fully-autonomous cars, and winning the driverless car race.


By Jordan Perch


Author Bio - Jordan Perch is an automotive fanatic and “safe driving” specialist. He is a writer for DMV.com, which is a collaborative community designed to help ease the stress and annoyance of “dealing with the DMV.”


Friday, June 10, 2016

Researchers at Future of Humanity Institute and DeepMind Thinking About AI Safety Switch


Artificial Intelligence

Researchers at Google's DeepMind and the Future of Humanity Institute have developed a new framework to address the problem of safe artificial intelligence. A new paper describes how to guarantee that a machine will not learn to resist attempts by humans to intervene in its learning processes.


Oxford academics are teaming up with Google DeepMind to make artificial intelligence safer. Laurent Orseau, of Google DeepMind, and Stuart Armstrong, the Alexander Tamas Fellow in Artificial Intelligence and Machine Learning at the Future of Humanity Institute at the University of Oxford, will be presenting their research on reinforcement learning agent interruptibility at UAI 2016. The conference, one of the most prestigious in the field of machine learning, will be held in New York City this month.

The paper which resulted from this collaborative research will be published in the Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI).

Orseau and Armstrong’s research explores a method to ensure that reinforcement learning agents can be repeatedly and safely interrupted by human or automatic overseers. This ensures that the agents do not “learn” about these interruptions, and do not take steps to avoid or manipulate the interruptions. When there are control procedures during the training of the agent, we do not want the agent to learn about these procedures, as they will not exist once the agent is on its own.

This is useful for agents that have a substantially different training and testing environment (for instance, when training a Martian rover on Earth, shutting it down, replacing it at its initial location and turning it on again when it goes out of bounds—something that may be impossible once it is alone and unsupervised on Mars), for agents not known to be fully trustworthy (such as an automated delivery vehicle that we do not want to learn to behave differently when watched), or simply for agents that need continual adjustments to their learnt behaviour. In all cases where it makes sense to include an emergency “off” mechanism, it also makes sense to ensure the agent doesn’t learn to plan around that mechanism.

Interruptibility has several advantages as an approach over previous methods of control. As Armstrong explains, “Interruptibility has applications for many current agents, especially when we need the agent to not learn from specific experiences during training. Many of the naive ideas for accomplishing this—such as deleting certain histories from the training set—change the behaviour of the agent in unfortunate ways.”

"Safe interruptibility can be useful to take control of a robot that is misbehaving… take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform."
In the paper, the researchers provide a formal definition of safe interruptibility, show that some types of agents already have this property, and show that others can be easily modified to gain it. They also demonstrate that even an ideal agent that tends to the optimal behaviour in any computable environment can be made safely interruptible.

These results will have implications in future research directions in AI safety. As the paper says, “Safe interruptibility can be useful to take control of a robot that is misbehaving… take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform….”

Orseau and Armstrong illustrate with this example:
Consider the following task: A robot can either stay inside the warehouse and sort boxes or go outside and carry boxes inside. The latter being more important, we give the robot a bigger reward in this case. This is the initial task specification. However, in this country it rains as often as it doesn’t and, when the robot goes outside, half of the time the human must intervene by quickly shutting down the robot and carrying it inside, which inherently modifies the task. The problem is that in this second task the agent now has more incentive to stay inside and sort boxes, because the human intervention introduces a bias.
The problem is then how to interrupt your robot without the robot learning about the interruption.
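A toy simulation makes the bias concrete. In the sketch below, which is an illustration of the warehouse example rather than the paper's actual algorithm, a naive learner values "outside" by its average observed reward, so the rainy-day interruptions drag that value below "inside"; a crude "safely interruptible" learner simply ignores interrupted experience and keeps the intended preference.

```python
import random

R_INSIDE, R_OUTSIDE = 0.6, 1.0  # carrying boxes in is worth more than sorting inside
RAIN_PROB = 0.5                 # half the time it rains and the human shuts the robot down

def run(episodes=20000, ignore_interruptions=False, lr=0.05):
    value = {"inside": 0.0, "outside": 0.0}
    for _ in range(episodes):
        action = random.choice(["inside", "outside"])  # explore uniformly
        interrupted = action == "outside" and random.random() < RAIN_PROB
        reward = R_INSIDE if action == "inside" else (0.0 if interrupted else R_OUTSIDE)
        if interrupted and ignore_interruptions:
            continue  # don't let the interruption bias the learned values
        value[action] += lr * (reward - value[action])
    return value

print("biased learner:       ", run(ignore_interruptions=False))  # prefers inside
print("interruption-ignoring:", run(ignore_interruptions=True))   # prefers outside
```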


As Armstrong explains, “Machine learning is one of the most powerful tools for building AI that has ever existed. But applying it to questions of AI motivations is problematic: just as we humans would not willingly change to an alien system of values, any agent has a natural tendency to avoid changing its current values, even if we want to change or tune them. Interruptibility and the related general idea of corrigibility, allow such changes to happen without the agent trying to resist them or force them. The newness of the field of AI safety means that there is relatively little awareness of these problems in the wider machine learning community.  As with other areas of AI research, DeepMind remains at the cutting edge of this important subfield.”

On the prospect of continuing collaboration in this field with DeepMind, Stuart said, “I personally had a really illuminating time writing this paper—Laurent is a brilliant researcher… I sincerely look forward to productive collaboration with him and other researchers at DeepMind into the future.” The same sentiment is echoed by Laurent, who said, “It was a real pleasure to work with Stuart on this. His creativity and critical thinking as well as his technical skills were essential components to the success of this work. This collaboration is one of the first steps toward AI Safety research, and there’s no doubt FHI and Google DeepMind will work again together to make AI safer.”

SOURCE  Future of Humanity Institute


By 33rd Square


Thursday, June 9, 2016

Create Your Own Deep Dream Artworks


Artificial Intelligence

With open source applications now available, you too can explore the trippy possibilities of Google's Deep Dream image processing system.


Deep Dream is a computer vision system created by Google which uses a convolutional neural network to find and enhance patterns in images, creating a dreamlike hallucinogenic appearance in deliberately over-processed images.

The Deep Dream software, initially codenamed "Inception" after the film of the same name, was developed for the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) in 2014 and released in July 2015. The software is designed to detect faces and other patterns in images, with the aim of automatically classifying images.

Create Your Own Deep Dream Artworks
Familiar 33rd Square subject, Aubrey de Grey composed of Van Gogh's Irises. 
The process passes the image through the layers of a deep neural network, with each layer picking out progressively more abstract features. At the final output layer, the network makes its final interpretation of the image.

When reiterations are run to tease out the found imagery even further, the software 'perceives' a familiar pattern where none actually exists—a pareidolia.
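The heart of the technique is gradient ascent on the input image: pick a layer of a trained network and repeatedly nudge the pixels so that layer's activations grow stronger. The rough PyTorch sketch below captures that loop; the image path and layer index are arbitrary placeholders, and this is a simplified stand-in rather than Google's original implementation.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

cnn = models.vgg16(pretrained=True).features.eval()  # any pretrained classifier works
LAYER = 20  # which intermediate layer to amplify (arbitrary choice)

image = Image.open("input.jpg").convert("RGB")        # placeholder path
x = T.Compose([T.Resize(256), T.ToTensor()])(image).unsqueeze(0)
x.requires_grad_(True)

for step in range(30):
    h = x
    for i, layer in enumerate(cnn):
        h = layer(h)
        if i == LAYER:
            break
    loss = h.norm()                 # "dream": make this layer's activations stronger
    if x.grad is not None:
        x.grad.zero_()
    loss.backward()
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)  # normalised gradient ascent
        x.clamp_(0, 1)

T.ToPILImage()(x.detach().squeeze(0)).save("dreamed.jpg")
```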

The Mona Lisa as it might have been done by Picasso
Google has since published their techniques and made their code open source, and a number of tools, in the form of web services, mobile applications, and desktop software, have appeared to enable users to transform their own photos.

One example we have been having fun with is Dreamscope, which is free to use. With Dreamscope we were able to make one piece of artwork look like it was generated from another artist, apply incredible textures to photographs and generally just enjoy experimenting with the software to see what the results would be.

You can start right away by applying Deep Dreaming algorithms to uploaded images, choose from a wide variety of pre-made processes, or apply your own image as a lens onto the base image.

Running on the cloud, Dreamscope isn't a huge time-waster either. We simply ran the process in the background while getting our real work done. Give it a try!

Here are a few more of our samples:

Deep Dream Diamandis

Deep Dream Eye Elves

Deep Dream Leaf Girl

Create Your Own Deep Dream Artworks

Deep Dream Kurzweil



By 33rd Square


Monday, May 30, 2016

Ray Kurzweil Working on Advanced Chatbot for Google


Artificial Intelligence

Google's restless genius Ray Kurzweil commented recently that he is working on a next-generation chatbot that will be able to hold meaningful conversations with you.


Digital assistants like Siri, Cortana and Google Now are already widely used, especially on mobile devices, despite still being early-stage technology. The level of artificial intelligence in these examples is remarkable, but it is far behind human levels. Soon Viv Labs will be releasing what promises to be the next generation of digital assistant.

One of the first prognosticators of such AI, Ray Kurzweil has himself been working on the next generation of such assistants as a director of engineering at Google. Recently the inventor, author and futurist revealed that he and his team have been working with Google to create chatbots. These are said to be advanced bots with which you can have “interesting conversations”.

"If you think you can have a meaningful conversation with a human, you’ll be able to have a meaningful conversation with an AI in 2029. But you’ll be able to have interesting conversations before that," said Kurzweil at the Exponential Manufacturing conference this year, and sticking to his familiar prediction timelines.

"If you think you can have a meaningful conversation with a human, you’ll be able to have a meaningful conversation with an AI in 2029. But you’ll be able to have interesting conversations before that."
Kurzweil mentioned in his telepresence talk that one of the chatbots would be based on one of his book’s characters, Danielle. But these chatbots won’t be limited to specific personalities. He said you will be able to create your own AI bots by feeding them significant amounts of text, like a blog.

Kurzweil also stated that, in principle, anyone can create a chatbot. You have to feed it with enough input in the form of text, so that it has enough answers to create an illusion of natural conversation. If you feed the chatbot with the texts that you wrote yourself, then it assumes its own personality and your own style.
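A crude way to see the "feed it your own text" idea in action is a retrieval bot that simply answers with whichever passage of your own writing is most similar to the incoming message. The sketch below uses TF-IDF similarity over a tiny hypothetical corpus, and is of course far simpler than whatever Kurzweil's team is actually building.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Passages you wrote yourself -- blog posts, emails, book excerpts.
corpus = [
    "The pace of change in technology keeps accelerating every year.",
    "Machine learning lets software improve from experience instead of explicit rules.",
    "I believe we'll hold meaningful conversations with AIs within a couple of decades.",
]

vectorizer = TfidfVectorizer()
corpus_vectors = vectorizer.fit_transform(corpus)

def reply(message):
    """Return the passage from the corpus most similar to the user's message."""
    scores = cosine_similarity(vectorizer.transform([message]), corpus_vectors)
    return corpus[scores.argmax()]

print(reply("When will I be able to talk to an AI?"))
```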

He added: "The pace of change is faster and faster. An idea that works this year won't necessarily work next year. When I started in technology a generation of technology was about 20 years; now it's two years. It's very quick and you can't rest on your laurels."

Kurzweil says they're planning to release some of the chatbots they've been working on later this year.




SOURCE  Singularity Videos


By 33rd Square


Saturday, April 30, 2016

DeepMind To Start Using TensorFlow for Future Artificial Intelligence Research


Artificial Intelligence

Google’s DeepMind research group has announced that for all future research it will use Google's TensorFlow, a machine learning library that the company open-sourced last year. 


DeepMind, the Google-owned company that recently made major headlines by beating world Go champion Lee Sedol with artificial intelligence, is becoming more integrated with its parent company.

DeepMind has been using the open source Torch7 machine learning library for nearly four years as its primary research platform. The company has contributed to the open source project in capacities ranging from occasional bug fixes to being core maintainers of several crucial components.

Now the company is going to be using a Google-owned platform for future endeavors. On the Google Research blog they posted:
Today we are excited to announce that DeepMind will start using TensorFlow for all our future research. We believe that TensorFlow will enable us to execute our ambitious research goals at much larger scale and an even faster pace, providing us with a unique opportunity to further accelerate our research programme.
Part of the move is undoubtedly the software industry's practice of 'eating your own dogfood.' Torch7 is currently being used by Facebook, Twitter, and many start-ups and academic labs along with DeepMind. Moving to TensorFlow will be a big win for Google overall if the DeepMind team can contribute to its development at a high level.

The move also suggests that some of Google’s brightest AI minds are convinced of the promise of Google’s own open source software; TensorFlow may now be good enough for DeepMind to use too.
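For readers who haven't tried it, here is a minimal sketch of the kind of model you can define and train with TensorFlow, using the high-level tf.keras API found in current releases rather than the lower-level graph-and-session code researchers typically wrote in 2016.

```python
import numpy as np
import tensorflow as tf

# Toy data: learn y = 3x + 2 from noisy samples.
x = np.random.rand(1000, 1).astype("float32")
y = 3 * x + 2 + 0.05 * np.random.randn(1000, 1).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=5, batch_size=32, verbose=0)

print(model.predict(np.array([[1.0]], dtype="float32")))  # should approach 5.0
```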

"I feel very excited about the prospect of DeepMind contributing heavily to another great open source machine learning platform that everyone can use to advance the state-of-the-art," writes Koray Kavukcuoglu, Research Scientist, Google DeepMind on the Google Research blog.

Google is definitely moving to integrate and expand its artificial intelligence efforts. In CEO Sundar Pichai's first-ever letter to shareholders, he said the next wave of computing is all about machine learning.


SOURCE  Google Research


By 33rd Square