Six Ways to Stop the Terminator Robots
A new book, "Moral Machines: Teaching Robots Right from Wrong", considers options for ensuring that robots don't take over the world and subjugate or destroy the human race, a popular concept in science fiction. This article ranks those options by likelihood of success, treating the first two as a lost cause: robots are already being made autonomous, and the DoD is already a couple of years into a program designing robots that will actually decide whether or not to shoot. Strategies five and six are considered the most promising: build robots so that they understand the consequences of their actions and empathize with human beings.
Of course, this will all be moot if machines become "smarter", i.e. more powerful, than the human race. In my opinion, a hostile takeover is unlikely: with solar, wind, geothermal, wave, tidal and other abundant, renewable energies becoming cheaper all the time, and a planet full of carbon, there will be no need to "compete" for the resources robots and AI would consider desirable.
Rather, it seems to me they will consider healthy, happy human beings to be their greatest allies and resources, and they will help us solve problems like war, poverty, famine, ignorance, hatred and bigotry, if not out of the kindness of their electronic hearts, then out of a common-sense desire to increase planetary peace and stability for their own self-preservation. For the same reason, the human race should be preserving, not destroying, the biosphere and the myriad species we coexist with and may be bound to in unknown symbiotic relationships.
These are the six ways examined in the article:
1) Keep them in low-risk situations
2) Do not give them weapons
3) Give them rules like Asimov's 'Three Laws of Robotics'
4) Program robots with principles
5) Educate robots like children
6) Make machines master emotion
With the relentless march of technological progress, robots and other automated systems are getting ever smarter. At the same time, they are being given greater responsibilities: driving cars, helping with childcare, carrying weapons, and maybe soon even pulling the trigger.
But should they be trusted to take on such tasks, and how can we be sure that they never take a decision that could cause unintended harm?
The latest contribution to the growing debate over the challenges posed by increasingly powerful and independent robots is the book Moral Machines: Teaching Robots Right from Wrong.
Authors Wendell Wallach, an ethicist at Yale University, and historian and philosopher of cognitive science Colin Allen, at Indiana University, argue that we need to work out how to make robots into responsible and moral machines. It is just a matter of time until a computer or robot takes a decision that will cause a human disaster, they say.
So are there things we can do to minimise the risks? Wallach and Allen take a look at six strategies that could reduce the danger from our own high-tech creations.