
Moral Machines

August 25, 2009

In the current issue of the International Journal of Reasoning-based Intelligent Systems, researchers from Portugal and Indonesia describe an approach to decision making, based on computational logic, that might one day give machines a sense of morality.

Science fiction authors often use the concept of “evil” machines that attempt to take control of their world and to dominate humanity. Skynet in the “Terminator” stories and HAL 9000 in Arthur C. Clarke’s “2001: A Space Odyssey” are two of the most frequently cited examples. However, for malicious intent to emerge in an artificial intelligence system, that system would first need an understanding of how people make moral decisions.

Luís Moniz Pereira of the Universidade Nova de Lisboa in Portugal and Ari Saptawijaya of the Universitas Indonesia in Depok are both interested in artificial intelligence and the application of computational logic.

“Morality no longer belongs only to the realm of philosophers. Recently, there has been a growing interest in understanding morality from the scientific point of view,” the researchers say.

They have turned to a system known as prospective logic to help them begin the process of programming morality into a computer. Put simply, prospective logic can model a moral dilemma and then determine the logical outcomes of the possible decisions. The approach could herald the emergence of machine ethics.

The development of machine ethics would allow us to build fully autonomous machines that can be programmed to make judgements based on a human moral foundation. “Equipping agents with the capability to compute moral decisions is an indispensable requirement,” the researchers say. “This is particularly true when the agents are operating in domains where moral dilemmas occur, e.g., in healthcare or medical fields.”

The researchers also point out that machine ethics could help psychologists and cognitive scientists find a new way to understand moral reasoning in people, and perhaps to extract, from complex situations, the fundamental moral principles that help people decide what is right and what is wrong. Such understanding might then help in the development of intelligent tutoring systems for teaching children morality.

The team has developed their program to help solve the so-called “trolley problem”, an ethical thought experiment first introduced by the British philosopher Philippa Foot in the 1960s. The problem involves a trolley running out of control down a track. Five people are tied to the track in its path. Fortunately, you can flip a switch and divert the trolley onto a different track, saving the five. But a single person is tied to that other track. Should you flip the switch?

The prospective logic program can consider each possible outcome based on different versions of the trolley problem and demonstrate logically what the consequences of the decisions made in each might be. The next step would be to endow each outcome with a moral weight, so that the prototype might be further developed to make the best judgement as to whether to flip the switch.
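The idea can be illustrated with a minimal sketch. This is not the researchers’ actual program (their work uses logic programming, and the names and structure below are assumptions for illustration only); it simply shows the two steps described above: enumerating the consequences of each available decision, then attaching a moral weight to choose among them.

```python
# Hypothetical sketch of a prospective-logic-style evaluation of the
# trolley problem. Names and structure are illustrative assumptions,
# not the authors' implementation.

from dataclasses import dataclass

@dataclass
class Outcome:
    decision: str   # the action the agent can take
    deaths: int     # the logical consequence of that action

def prospect(decisions):
    """Enumerate each decision together with its consequence."""
    return [Outcome(name, deaths) for name, deaths in decisions.items()]

# Consequences of each choice in the basic trolley dilemma:
# doing nothing lets the trolley kill five; flipping the switch
# diverts it, killing one.
trolley = {
    "do_nothing": 5,
    "flip_switch": 1,
}

outcomes = prospect(trolley)

# A possible next step, as the article suggests: weight each outcome
# morally (here, crudely, by the number of deaths) and prefer the
# decision that causes the least harm.
best = min(outcomes, key=lambda o: o.deaths)
print(best.decision)  # flip_switch
```

With a purely harm-minimizing weight the program would always flip the switch; richer moral weights would be needed to capture the distinctions people actually draw between versions of the dilemma.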

Luís Moniz Pereira and Ari Saptawijaya, “Modeling morality with prospective logic”, International Journal of Reasoning-based Intelligent Systems, 2009, Vol. 1, pp. 209-221.


----

On The Net:

Inderscience
