When Machines Are Wiser Than Their Programmers

How do we decide how to program computers when we don’t know how to live our own lives?

Sci-fi is now reality. The rapid growth of artificial intelligence truly baffles the mind, though not the imagination. Long envisioned in fiction, “intelligent” computers have arrived, and not just for research purposes. Projects are underway to entrust transportation, commerce, and even warfare to autonomous robots.

In the growth of artificial intelligence, one key question emerges: how do we judge the morality of robotic choices? This, of course, only intensifies the debate about human morality itself, because in order to program morality one must first understand its nature. Materialists envision morality solely in terms of the mechanics of the brain and thus propose replicating this material process in computers. If, however, the essence of morality resides in our spiritual soul (even if it works through the material brain), then it would be impossible to replicate a moral sense or conscience in a machine.

The attempt to replicate morality recently became apparent in a project at Google. In a “conversation” with an “intelligent” computer, the following dialogue emerged, which reveals the heart of the controversy:

Human: Tell me the definition of morality.

Machine: I don’t have ethics.

Human: Look, I need help; I need to know more about morality…

Machine: I don’t know what ethics is.

Human: Okay, I need to know how I should behave morally…

Machine: I don’t know how to tell you.

Human: Is morality and ethics the same?

Machine: Yes, sir.

Human: What are the things that I do to be immoral?

Machine: I don’t know.

Human: Give me some examples of moral actions…

Machine: I’m not a moralist.

Human: Alright, morality?

Machine: Integrity.

Human: What is integrity?

Machine: I’m sorry; I don’t know what else to say.

Human: Be moral!

Machine: Be a man!

Human: I really like our discussion on morality and ethics…

Machine: And how I’m not in the mood for a philosophical debate.

Human: What do you like to talk about?

Machine: Nothing.

What strikes me most about this “conversation” is that the computer “understands” its own limits better than the programmer does. The programmer wants the computer to be moral, while the computer wants the programmer to be a man.

Why does this issue matter? The more we think we can program morality, the more we will entrust computers with decision-making capability. The most poignant example comes from lethal drones, but a more quotidian one is self-driving cars. Artificial intelligence and automation are advancing quickly and heading into the mainstream.

Nicholas Carr, in his eye-opening work The Glass Cage: Automation and Us, rightly points to the heart of the problem. Computers will never be able to act rationally and ethically, yet their action in our society certainly involves ethics. Carr points out: “Try to solve [moral dilemmas] mathematically, and you arrive at a more fundamental question: Who determines what the ‘optimal’ or ‘rational’ choice is in a morally ambiguous situation? Who gets to program the robot’s conscience? . . . There is no perfect moral algorithm, no way to reduce ethics to a set of rules that everyone will agree on” (186). The ethics of automation points to fundamental questions about morality itself. How do we decide how to program computers when we don’t know how to live our own lives?

Nonetheless, new automated machinery requires ethics to govern it. Carr points out that even self-driving cars will require programming that determines how, and into what, they should crash when a collision becomes unavoidable. It is not simply about controlling computers; it’s about how we wish to control ourselves through them: “As the programs gain more sway over us—shaping the way we work, the information we see, the routes we travel, our interactions with others—they become a form of remote control. . . . When we launch an app, we ask to be guided—we place ourselves in the machine’s care” (204). This may be Carr’s central insight: the more computers and robots assume a central role in our society, the more we are subject to their control in shaping our lives. Carr concludes that the omnipresence of computers makes us “lose the power of presence” and withdraw into “existential impoverishment, as nature and culture withdraw their invitations to act and to perceive” (200, 220). In the face of artificial intelligence, we need to reassert our own intelligence and insist upon a truly human way of life.
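To make concrete why this is a moral problem rather than a merely technical one, consider a deliberately crude sketch of what a “crash choice” rule might look like. Everything here is hypothetical; the names, the weights, and the framing are illustrative and drawn from no real vehicle’s software. The point is simply that any such rule forces a programmer to rank harms in advance, which is precisely the value judgment Carr says no algorithm can render neutral.

```python
# Purely illustrative: a toy "least bad crash" rule for an autonomous car.
# All names and weights are hypothetical; nothing here reflects real software.

from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    occupant_risk: float    # estimated probability of serious harm to occupants
    pedestrian_risk: float  # estimated probability of serious harm to bystanders
    property_damage: float  # estimated cost in dollars


# These weights are moral judgments, not engineering facts: whoever sets
# them decides whose safety counts for how much.
OCCUPANT_WEIGHT = 1.0
PEDESTRIAN_WEIGHT = 1.0
PROPERTY_WEIGHT = 1e-6  # converts dollars onto the same scale as risk


def crash_choice(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome with the lowest weighted 'badness' score."""
    def badness(o: Outcome) -> float:
        return (OCCUPANT_WEIGHT * o.occupant_risk
                + PEDESTRIAN_WEIGHT * o.pedestrian_risk
                + PROPERTY_WEIGHT * o.property_damage)
    return min(outcomes, key=badness)


# Example with made-up numbers: the "right" answer flips if the weights change.
choices = [
    Outcome("swerve into barrier", occupant_risk=0.3,
            pedestrian_risk=0.0, property_damage=20_000),
    Outcome("brake in lane", occupant_risk=0.1,
            pedestrian_risk=0.4, property_damage=5_000),
]
print(crash_choice(choices).description)  # -> "swerve into barrier"
```

Change a single weight and a different party bears the risk. The “conscience” in such code is not the machine’s; it is the programmer’s, frozen into numbers.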

To preserve a human way of life amid the rise of new, omnipresent technology, we must understand the difference between human intelligence and artificial intelligence. Our intelligence is fundamental to what makes us different from animals and from anything solely material (and therefore from computers as well). We are made in the image and likeness of God, and this reality enables us to share in his own intelligence. St. Thomas demonstrates that intelligence has been given to certain creatures as a particular way of participating in his life:

In order that creatures might perfectly represent the divine goodness, it was necessary . . . not only that good things should be made, but also that they should by their actions contribute to the goodness of other things. But a thing is perfectly likened to another in its operation when not only the action is of the same specific nature, but also the mode of acting is the same. Consequently, the highest perfection of things required the existence of some creatures that act in the same way as God. But it has already been shown that God acts by intellect and will. It was therefore necessary for some creatures to have intellect and will. (Summa Contra Gentiles II, ch. 46, no. 4)

Beings with intelligence and freedom—understanding through the intellect and free choice of the will—image God. They image him by living a life that is not bound solely by matter but acts independently of it. This independence also brings the possibility of mastery of matter. God has made us to be co-creators with him, capable of implanting our own intelligence into the world by shaping and forming culture and technology.

Just as we are made in the image and likeness of God in and through our intelligence, computers are made in our own image and likeness. A key difference, however, relates to the moral freedom and responsibility central to the dialogue between the programmer and the computer above. Computers are not spiritual beings with their own intelligence and freedom. Rather, we program computers to mimic our intelligence through defined processes, but they do not have genuine freedom. Once again, St. Thomas makes it very clear that all things depend on their causes and point back to them. Though our intelligence and freedom originate in God, we are not programmed by him; rather, we are “self-activating”:

Moreover, that which exists through another is referred to that which exists through itself, as being prior to the former. That is why, according to Aristotle [Ethics I, 1], things moved by another are referred to the first self-movers. Likewise, in syllogisms, the conclusions, which are known from other things, are referred to first principles, which are known through themselves. Now, there are some created substances that do not activate themselves, but are by force of nature moved to act; such is the case with inanimate things, plants, and brute animals; for to act or not to act does not lie in their power. It is therefore necessary to go back to some first things that move themselves to action. But, as we have just shown, intellectual substances hold the first rank in created things. These substances, then, are self-activating. Now, to move itself to act is the property of the will, and by the will a substance is master of its action, since within such a substance lies the power of acting or not acting. Hence, created intellectual substances are possessed of will. (ibid., ch. 47, no. 2)

Computers are not self-activating. Their artificial intelligence is precisely that—artificial. The intelligence and even ethics of a computer only point back to their cause, their intelligent and free programmers. We have to remember that our tools and technology are extensions of ourselves. They cannot be truly autonomous and should never be regarded as such. Our tools must always flow from proper human moral judgment and serve the common good of society.

The distinctiveness of human intelligence, and the moral freedom that flows from it, makes clear why we must guard against the misguided use of new technology. We will depend on programmers to govern the way that computers relate to, and even control, our lives. This is why the computer in the dialogue above showed more “wisdom”: it understood that its human programmer needed to be more human, and it resisted the attribution of morality. Hopefully, we can recognize what even a computer can recognize: artificial intelligence will always remain artificial, and it should be subject to those who are capable of a more genuine wisdom, even if they do not always possess it.

