Stanford University Releases First Report of the 100 Year Study of Artificial Intelligence

Tuesday, September 6, 2016


Stanford University has released an initial report following plans laid two years ago to study the long-term potential and problems of artificial intelligence. The report is addressed to the general public; industry; local, national, and international governments; and AI researchers, along with their institutions and funders, to help set priorities and consider the ethical and legal issues raised by AI research and its applications.


Researchers at Stanford University have released the first report of their One Hundred Year Study on Artificial Intelligence (AI100). The study is an attempt to anticipate the potential impacts of artificial intelligence on our lives over the long term.

"The overarching purpose of the One Hundred Year Study’s periodic expert review is to provide a collected and connected set of reflections about AI and its influences as the field advances.."
Titled "Artificial Intelligence and Life in 2030," the report comes just two years since the researchers began their work.


According to the study, substantial increases in the use of artificial intelligence applications can be expected, including self-driving cars, healthcare diagnostics and targeted treatment, and physical assistance for elder care. The study's authors focused on the history of AI technology and how it is being used in various fields today, such as in the development of robots for medical purposes and self-driving vehicles for transportation.

"Many have already grown accustomed to touching and talking to their smart phones," write the authors. "People’s future relationships with machines will become ever more nuanced, fluid, and personalized."

The field of AI is now shifting toward building intelligent systems that can collaborate effectively with people, including interactive and scalable ways for people to teach robots.

The researchers point out the areas of AI development at the forefront today: large-scale machine learning, deep learning, reinforcement learning, robotics, computer vision, natural language processing, collaborative systems, crowdsourcing and human computation, algorithmic game theory and computational social choice, the Internet of Things (IoT), and neuromorphic computing. All of these areas hold dramatic potential for advancement and change in our society over the coming decades.


What is abundantly clear is that we will be interacting more and more with our machines in the future. "Over the next fifteen years, coincident advances in mechanical and AI technologies promise to increase the safe and reliable use and utility of home robots in a typical North American city," write the authors.

AI100 was initiated by Eric Horvitz, managing director of Microsoft Research's Redmond laboratory. It is meant to create a better understanding of how artificial intelligence is being developed and how it will impact the world over the coming century.

The report notes that no machines have yet been developed with the ability to sustain long-term goals and intent on their own, or what is known as artificial general intelligence (AGI). The authors also state that there are no plans to create such machines in the near future.

"Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind," write the authors.

Image: Stanford University AI100 Study
The real dangers of artificial intelligence lie not in its potential to become an existential risk, but in the unintended consequences of an otherwise helpful technology, such as the displacement of human labor and the erosion of privacy.

"AI could widen existing inequalities of opportunity if access to AI technologies—along with the  high-powered computation and largescale data that fuel many of them—is unfairly distributed across society," warn the authors. To avoid such an event, it is crucial for AI researchers and policymakers to find a balance between developing innovations and adhering to social mechanisms. This is to ensure that the benefits of such a technology will be widely distributed.

The AI100 researchers pointed out that if society views artificial intelligence with "fear and suspicion," it could slow down its development or drive it underground entirely. It could also impede the work that developers are doing to ensure that AI technologies remain safe and reliable.

They write, "society is now at a crucial juncture in determining how to deploy AI-based technologies in ways that promote rather than hinder democratic values such as freedom, equality, and transparency."


SOURCE: Stanford University


By 33rd Square


