Bostrom and Yudkowsky: The Ethics of AI

Friday, January 13, 2012




A new paper by Nick Bostrom and Eliezer Yudkowsky on the ethics of Artificial Intelligence has been released. It will appear in the Cambridge Handbook of Artificial Intelligence:
The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.
This paper serves as a good introduction to the problem of Friendly AI.




Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and Director of the Future of Humanity Institute within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). His research covers a range of big-picture questions for humanity. He is currently working on a book on the future of machine intelligence and its strategic implications.

Eliezer Yudkowsky is a Research Fellow at the Singularity Institute for Artificial Intelligence, where he works full-time on the foreseeable design issues of goal architectures in self-improving AI. His current work centers on modifying classical decision theory to coherently describe self-modification. He is also known for his popular writing on issues of human rationality and cognitive biases.



