| Following up on a recent article in The Economist, Massimo Barbato finds that the development of robust legal and moral codes for machines and artificial intelligence will require greater collaboration between engineers, ethicists, lawyers and policy makers. |
A recent article in The Economist presented two real-world scenarios: (1) Should a drone - an unmanned military vehicle - fire on a house where a target is known to be hiding, even though it may also be sheltering innocent civilians? (2) Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants?
Moral dilemmas like these are emerging in other fields too, such as engineering, medicine, business and finance, as the speed of technological innovation outpaces the legislation needed to manage it. As a result, a whole new discipline of machine ethics is forming.
As the need arises to create a legal code that would give autonomous machines the ability to make such choices appropriately, Barbato asks how such a code could realistically be implemented.
Isaac Asimov formulated his ‘Three Laws of Robotics’ in 1942, and they have greatly influenced other writers, filmmakers and thinkers in their treatment of the subject. The three laws, in strict priority order (a toy sketch of that ordering follows the list), are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
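The article itself proposes no implementation. The following is a minimal, purely illustrative Python sketch of how the laws' strict priority ordering could be expressed, assuming predicates (harms_human, obeys_order, preserves_robot, all names invented here) that no current machine can actually evaluate, which is precisely the limitation discussed next.

```python
# Toy sketch only: the article describes no implementation, and real
# systems cannot currently evaluate predicates like "harms a human".
# All class, field and function names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would this choice injure (or, for "do nothing",
                           # through inaction allow harm to) a human?
    obeys_order: bool      # does it follow a human's order?
    preserves_robot: bool  # does it protect the robot's own existence?

def law_priority(action: Action) -> tuple[bool, bool, bool]:
    """Score an action so the First Law outranks the Second, and the
    Second outranks the Third. Python compares tuples lexicographically,
    which yields exactly that strict ordering."""
    return (not action.harms_human, action.obeys_order, action.preserves_robot)

def choose(candidates: list[Action]) -> Action:
    # Pick the candidate that best satisfies the laws in priority order.
    return max(candidates, key=law_priority)

if __name__ == "__main__":
    options = [
        Action("obey order to strike house", harms_human=True,
               obeys_order=True, preserves_robot=True),
        Action("refuse and hold position", harms_human=False,
               obeys_order=False, preserves_robot=True),
    ]
    print(choose(options).name)  # -> "refuse and hold position"
```

The design point is that lexicographic comparison makes each law absolute with respect to the ones below it: no degree of obedience or self-preservation can outweigh harm to a human.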
One limitation of defining laws for robotics today is that, at the current level of technology, such "laws" cannot actually be implemented: no machine can yet interpret or apply them. Nevertheless, other rules and laws are needed to govern the use of autonomous and telepresence systems. Clear responsibilities need to be defined and enforced, without stifling innovation, development and fun.
As the law stands at the moment, robots are just inanimate property without rights or duties. Computers are not legal persons and have no standing in the judicial system.
In a world with AI, many robots will perform useful roles in society. In Japan, where robots are widely embraced, Narito Hosomi, president of Osaka-based Toyo Riki Co Ltd, has already developed a robot that assists with patients’ rehabilitation, as a shortage of caregivers poses a growing problem in Japan’s aging society.
Barbato writes,
As robots become more autonomous, laws will be needed to protect humans, to prevent the misuse of robots and to spell out their rights, roles and responsibilities in society. A top priority will be laws to determine whether the designer or manufacturer is at fault if, for instance, a drone strike goes wrong or there is a car accident. The moral judgments made by autonomous machines must seem right to most people. For designers such as Mr Hosomi, the technology must be human-centred and able to empathise with human perspectives and emotions. Responsible government and cyber-security agencies must also monitor the evolution of AI, in order to detect potential anomalies that could pose a real threat to the human collective. All of this requires greater collaboration between engineers, ethicists, lawyers and policy makers.

More and more researchers are finding that the exponential growth of AI and machine capability is driving the need to define moral codes for machines. For instance, Hod Lipson of Cornell University made a special point at the end of his talk at the recent swissnex conference of stressing the need for open scientific responsibility in the development of AI.
SOURCE: The Singularity Principle
| By 33rd Square |

