Crime and Punishment...Where do Robots Fit??
Kpeevey | October 14, 2007 at 09:41 am
One of the topics discussed in the article “Trust Me, I’m a Robot” from www.economist.com that I found quite interesting was the section on robot responsibility. By that I mean: who or what is responsible for the actions of a robot? Is it the designers, the manufacturer, or the robot itself? Questions like these are going to become more and more important in the years to come as this technology keeps evolving. “It is important to be clear that legal responsibility is not exactly the same thing as moral responsibility”1. With that said, can robots be held not only legally responsible for their actions but morally responsible as well? With how fast the technology is advancing, robots are becoming an ever-present part of our lives, so if one hurts someone or even kills them, what action should we as a society take?

As robot technology stands right now, the need to hold robots accountable in a criminal setting is not immediate, but there is a very big issue with robots when it comes to civil law. Corporations have been held responsible for the actions or “malfunctions” of household robots for years. So far in these cases there have been no “assumptions about the intentions, consciousness, or moral agency of robots”1. This is because robotics technology has not yet reached a point where robots have become dangerous to most members of the general public, but does that mean we should not think about the possibilities until they happen? In order to protect ourselves we have to discuss these issues now, before they are staring us in the face. With that said, “there are two principle problems with applying criminal law to robots: 1) Criminal actions require a moral agent to perform them, and 2) how is it possible to punish a robot?”1 Can you throw a robot in jail? Could it be reformed and released back into society? Or is it even worth it? In the future, when a robot commits a crime, will we just destroy it?
But if we do, would that be considered capital punishment? When deciding whether or not to punish a robot, we have to consider whether it knew that what it was doing was wrong. Did its programmers give it a sense of morality? Without an understanding of right and wrong, without morals, how can there be guilt? It would be like putting a child on trial, which does not often happen because we understand that a child's sense of morality is not yet fully developed and therefore the child cannot be held responsible. “Lawrence Solum has given careful consideration to the question of whether an artificial intelligence (AI) might be able to achieve legal personhood, using a thought experiment in which an AI acts as the manager of a trust. He concludes that while personhood is not impossible in principle, it is also not clear how we would know that any particular AI has achieved it. The same argument could be applied to robots. Solum imagines a legal Turing test in which it comes down to the determination of a court whether an AI could stand trial as a legal agent in its own right, and not merely a proxy or agent of some other legal entity”1. In the coming years we will have to take a hard look at the direction robotics is heading and set up guidelines to protect both ourselves and the robots. If one day we do develop strong AI, we will hopefully have a legal system that is ready to deal with all the challenges that brings.
1. Peter M. Asaro, Member, IEEE, “Robots and Responsibility from a Legal Perspective.”