“Can We Teach Robots Ethics?” It Depends


On Oct. 15, 2017, BBC News posted an article titled “Can We Teach Robots Ethics?” It explores the different arenas in which artificial intelligence may be used and the ethical questions it will raise.

The article opens with possible dilemmas faced by autonomous cars. What should an autonomous car do if two children tumble into the street as the car approaches? The car could swerve to the left, but it would then hit a motorbike. Either way someone is injured or killed.

Also consider whether the car should be willing to sacrifice its passenger to avoid killing pedestrians or passengers in other cars. For example, should the car choose to drive off the road and down a steep thirty-foot drop to avoid hitting someone in the road? These are questions that developers of autonomous cars need to consider.
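
To make the dilemma concrete, here is a minimal sketch, in Python, of what a harm-minimizing maneuver selector might look like. Everything in it is invented for illustration: the maneuvers, the harm scores, and the choose_maneuver function are hypothetical, and real autonomous-vehicle planners are vastly more complex.

```python
# A purely illustrative harm-minimizing maneuver selector. The maneuvers
# and harm scores are invented; real planners do not reduce to one number.

maneuvers = {
    "brake_straight": {"children": 0.9, "passenger": 0.1},
    "swerve_left": {"motorcyclist": 0.8, "passenger": 0.2},
    "leave_road": {"passenger": 0.9},
}

def choose_maneuver(options):
    """Pick the maneuver with the lowest total estimated harm."""
    return min(options, key=lambda m: sum(options[m].values()))

print(choose_maneuver(maneuvers))  # 'leave_road': the passenger bears the harm
```

Notice that under these made-up numbers the “best” option is to leave the road and sacrifice the car’s own passenger.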

Consumers will need to consider them as well. Would you buy a car that might be willing to sacrifice you? Will consumers be able to choose a car’s ethical system the way they choose its color? One could easily imagine insurance companies concluding that different ethical systems carry different levels of liability for the owner and adjusting premiums accordingly, much as they already adjust premiums for cars with certain safety features.
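
To illustrate, here is a hypothetical sketch of how an insurer might price that choice. The setting names and multipliers are invented for this post; this is not an actual underwriting model.

```python
# Hypothetical premium adjustment keyed to a car's ethical setting.
# Setting names and multipliers are invented for illustration.

BASE_PREMIUM = 1200.00  # assumed annual premium in dollars

LIABILITY_MULTIPLIER = {
    "protect_passenger_first": 1.25,    # shifts more risk onto third parties
    "minimize_total_harm": 1.00,
    "protect_pedestrians_first": 0.90,  # less third-party liability
}

def adjusted_premium(ethics_setting, base=BASE_PREMIUM):
    """Scale the base premium by the liability multiplier for the setting."""
    return round(base * LIABILITY_MULTIPLIER[ethics_setting], 2)

print(adjusted_premium("protect_passenger_first"))  # 1500.0
```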

Also consider medical assistance robots. How much latitude should they have? If a sick or elderly patient refuses to take his or her medicine, should the robot be able to intervene if it determines that the patient’s life is at stake?
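
One way developers might bound that latitude is with an explicit escalation policy: the robot never forces treatment, it escalates. A sketch, with invented risk thresholds and response levels:

```python
# Sketch of a care-robot escalation policy: the robot never forces
# medication; it escalates according to assessed risk. The thresholds
# and response levels are invented placeholders.

def respond_to_refusal(risk_to_life):
    """Map an assessed risk (0.0 to 1.0) to an intervention level."""
    if risk_to_life > 0.8:
        return "alert emergency services"
    if risk_to_life > 0.4:
        return "notify caregiver and gently re-prompt the patient"
    return "log the refusal and remind again later"

print(respond_to_refusal(0.9))  # alert emergency services
```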

Now let’s bring this up a level – artificially intelligent, autonomous weapons. How should they respond to innocent civilians? Of course they should attempt to avoid harming civilians. That is simple enough, but how should autonomous weapons respond to civilians forced to serve as human shields? Could the weapons tell the difference if the civilians were forced to wear military clothes? Should we use autonomous weapons at all? On the one hand, someone could argue that we need to develop them because our enemies will eventually do the same. On the other hand, what if the weapon is damaged and goes rogue? As with any computerized system, there is also the potential for hackers to hijack the weapon and turn it against its own country. That is a risk with any captured military technology, but a captured artificially intelligent weapon raises the risk significantly.

Developers of artificially intelligent systems, legal experts, and consumers will need to consider all of these questions as artificial intelligence becomes more prevalent. So, let’s return to the original question of this post. Can we teach robots ethics? It depends . . . on what we mean by ethics.

Is ethics synonymous with morals? One could argue for a distinction. Ethics, on the one hand, refers to standards of behavior in an organization or a specific profession, such as the rules required by a company for its employees or HIPAA regulations in the medical profession. Morals, on the other hand, refers to normative standards of right and wrong (not merely approved or disapproved) which apply to everyone regardless of affiliation.

Whether the words “ethics” and “morals” are synonymous will depend on the context in which someone is speaking. In common usage the words often are synonymous. In this case we should make a distinction. If artificial intelligence continues to advance to the point that robots can engage in fully autonomous action, then it is not difficult to conclude that they could be taught ethics defined as standards of behavior. If they can be programmed to identify traffic signs on the road or the difference between military and civilian targets and act accordingly, then they can be programmed with standards of behavior as a component of the decision-making algorithms.
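
In code, ethics in this sense reduces to explicit, checkable rules inside the decision loop. A minimal sketch, where the rule names and situation fields are invented for illustration:

```python
# Sketch: "ethics as standards of behavior" encoded as explicit rules
# checked inside a decision loop. Rule names and situation fields are
# invented for illustration.

RULES = [
    ("stop_at_red_light",
     lambda s: s["light"] != "red" or s["action"] == "stop"),
    ("yield_to_pedestrian",
     lambda s: not s["pedestrian_ahead"] or s["action"] in ("stop", "slow")),
]

def permitted(situation):
    """An action is permitted only if it violates no behavioral rule."""
    return all(rule(situation) for _, rule in RULES)

print(permitted({"light": "red", "pedestrian_ahead": False, "action": "stop"}))     # True
print(permitted({"light": "red", "pedestrian_ahead": False, "action": "proceed"}))  # False
```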

Whether robots can be taught morals, however, is a different question. Morals, as defined above, imply a sense of right and wrong that goes beyond accepted standards of behavior. We disapprove of murder, not only because it harms society, but because we know deep down that it is wrong. As a Christian I would argue that we have an implanted moral sense which is part of human nature. We also identify with other humans on an emotional level and place genuine value on their lives. A robot would have no such innate sense or emotional attachment. It would follow cold, logical rules in determining the correct action.

Ethical systems are grounded in moral judgments. We must therefore ask whose morals would be the standard. First, whether they are based on theological or secular assumptions would have a significant impact. Second, the philosophical basis would be important. Would a robot follow an absolutist moral system, in which some things are always right or always wrong, or a situational system, in which moral judgment depends on the circumstances? As asked above, could we choose which system our robot would follow? Perhaps a robot would be programmed with a greater-good system that values the highest good for the largest number of people. That would present a problem if the robot concluded that it was in the interest of the greater good to eliminate its owner. As noted above, the robot would follow cold logic in making such a decision, with no innate attachment to life (or any form of compassion) to intervene.
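
A short sketch shows why an unconstrained greater-good calculus is unsettling. The welfare numbers are invented, but the aggregation logic is the whole point: nothing in it gives any individual, including the owner, special protection.

```python
# Sketch of a "greater good" scorer: pick the outcome with the highest
# total welfare. The numbers are invented; the danger is the aggregation.

def total_welfare(outcome):
    """Sum welfare across everyone affected, with no one privileged."""
    return sum(outcome.values())

outcomes = {
    "spare_owner": {"owner": 10, "others": 40},         # total: 50
    "sacrifice_owner": {"owner": -100, "others": 160},  # total: 60
}

best = max(outcomes, key=lambda o: total_welfare(outcomes[o]))
print(best)  # 'sacrifice_owner': the aggregate wins, the owner loses
```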

If artificially intelligent machines are placed in homes, hospitals, or corporations, then some people will no doubt be uncomfortable. This reminds me of Isaac Asimov’s short story “Robbie” in his collection I, Robot. A girl named Gloria has a robot named Robbie with which she plays constantly. Her mother, Mrs. Weston, is uncomfortable with the effect the robot will have on her daughter. She is also concerned that he could malfunction and harm Gloria. Mr. Weston tries to assure her that such behavior is contrary to Robbie’s programming because of the First Law of Robotics, which states that a robot cannot harm a human or, by inaction, allow a human to come to harm. In Asimov’s stories, that law is so fundamental to a robot’s operation that any damage severe enough to compromise the First Law would first render the robot inoperable.

Eventually Mrs. Weston convinces Mr. Weston to get rid of Robbie, and he is sent to work at a factory. Gloria misses Robbie, so Mr. Weston takes her to the factory to see him. When she spots him, she runs toward him, straight into the path of an oncoming tractor. The moment Robbie sees her in danger, the First Law kicks in: he dashes in front of the tractor and pulls her to safety. Mr. and Mrs. Weston then welcome him back into the family.
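
Asimov describes the First Law’s behavior, never an implementation, but one could imagine it as a hard filter plus a self-check, along these lines (every name here is hypothetical):

```python
# Hypothetical rendering of the First Law as an inviolable constraint:
# actions predicted to harm a human are filtered out before any goal is
# weighed, and a robot that cannot evaluate the constraint shuts down.

def first_law_filter(actions, predicts_harm, first_law_intact):
    if not first_law_intact:  # damage severe enough to compromise the law
        raise SystemExit("First Law integrity lost: robot inoperable")
    return [a for a in actions if not predicts_harm(a)]

safe = first_law_filter(["wave", "shove"], lambda a: a == "shove", True)
print(safe)  # ['wave']
```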

Some of the discomfort with robots may fade as we become more accustomed to encountering artificial intelligence in everyday life. Each generation may have less trouble trusting them. Some discomfort, however, may always endure, depending on the nature of the ethical programming of the robots and how dependable that programming is.

Artificially intelligent systems and robots could be a great asset, but we must take care in how we use them. What should an artificially intelligent robot’s ethical programming include? Which moral system should serve as its basis? Should artificial intelligence be used in cars, hospitals, or the military? These are complex questions which will continue to be topics of debate, but they are questions we need to consider carefully. Artificial intelligence continues to advance. We do not know how far it will advance, but with each improvement these questions become more important.

 


