I am in a group discussing artificial intelligence (AI) and morality with a special emphasis on the legal system. The group has not arrived at any kind of consensus yet, but my own thoughts have begun to settle on a few principles.
Trying to put ethical limits on AI seems like a pointless exercise because it’s impossible to define AI. Current AI in the form of large language models and machine learning is the equivalent of the Model T and the Wright brothers’ airplane. Governments, universities, and industry are in a race to see who can first bring to reality dreams of self-aware AI systems able to operate with minimal, if any, human supervision. Theologians and ethicists are left behind in a bewildering dust of technical jargon expressed in languages unknown to them.
AI is unlike any other technology that has changed the world in dramatic ways. Consider the printing press, gunpowder, railroads, commercial electricity, radio, and the hundred other inventions you can think of. Each of them was a dumb tool, unable to make any decision on its own and subject entirely to use at the discretion of human beings. AI, on the other hand, is being developed to ask and answer moral or ethical questions for itself. They take the form of “Should I do this?” and “How should I do that?” Questions that begin with should are moral questions implying uncertainty about the right thing to do.
What values can be used to determine good from bad, right from wrong? When conditions require a choice between competing goods or competing bads, some goods must be abandoned in favor of one, or one bad chosen over another. Questions like these drive humans batty and keep philosophers in business, and they are precisely the kinds of questions developers intend to make AI capable of resolving without human direction. It’s an entirely different kind of technological tool.
Because AI is intended to communicate with humans in a human-like way, AI systems are likely to answer moral or ethical questions put to them by humans, or even to suggest to humans what they should do without having been asked. In other words, humans would become a tool used by AI at its discretion, a complete role reversal from every other technology. The 1968 movie 2001: A Space Odyssey anticipated this very problem when the computer HAL made its own life-and-death decisions based on its own self-awareness. I doubt that kind of AI system, capable of asking and answering its own questions, is near at hand, yet I imagine we are close enough to the HAL model to create serious questions about how to proceed.
Rather than focusing on regulating AI, a technology I barely understand, it seems to me we need to focus on human behavior with rules building a fence around the development and use of AI. Moreover, I think the question demands a theological response that begins with recognition that we are in relationship to and with God. We are not only made in the image of God, we are adopted by grace into the family of God. The integrity of that reality cannot be surrendered.
Humans have other relationships, many of them very important to the meaning of life, happiness, and success. As important as they are, they must always be understood in the context of what it means to be who we are as made in God’s image and members of God’s family. That context is surrendered, in whole or in part, whenever we permit some other thing to take precedence over our relationship with God. The danger, as I see it, is that some future AI making moral decisions for humans who obey with little reflection will have become a god-like oracle giving more immediate and understandable answers to life’s questions. If humans think they can create an AI more morally perfect than they are, they are profoundly mistaken. An AI created in humanity’s own image cannot be other than fundamentally flawed.
Paul Bloom, writing last November in The New Yorker, asked, “How Moral Can A.I. Really Be?” His answer was, in effect, not much more moral than it is now. He wondered whether it would take God ‘himself’ to convince people what the rules should be. I think God has already done that in ways we can discern through a deep reading of holy scripture. What follows are my thoughts on what a deeper reading reveals.
First, beware of making AI into an idol.
AI cannot be allowed to undermine what it means to be fully human.
Developers must always prioritize the good of humanity over all other measures of utility.
Probabilities of “unintended consequences” must be made public.
There must be regular pauses in development to allow society to absorb and assess what has been built.
Wisdom of the ages must serve as guide and guard.
AI cannot be used to make decisions about intentional killing.
AI must not endanger the integrity of human relationships.
AI cannot be used to appropriate resources for those who have no moral right to them.
AI cannot be used to privilege some at the expense of others.
Finally, we need clarification of what it means to be fully human if AI is to be developed for the benefit of humanity and in protection of our full humanity. Of course, it means different things to different people depending on one’s religious faith, ethical beliefs, accepted social norms, social status, and cultural heritage. As a Christian theologian, I suggest some commonalities. For me to be able to be fully human, the other must be able to be fully human. The other may be my most beloved or my least liked and trusted. The other may be a stranger, an alien in my presence. To be fully human also means to think, create, struggle, succeed, fail, laugh, cry, wonder, doubt, and more. In like manner, for me to prosper, others must be able to prosper.
No matter how honorable, all humans are prone to selfishness, greed, desire for power and position, vengefulness, and violence. I have no idea what the distribution is between the most honorable and the least, but I suspect most of us are somewhere in the middle. Finally, we are not always fully rational in our moral decisions. The effort would be exhausting, so we rely on habits of the heart learned from childhood and experience, while blithely ignoring pitfalls and remaining oblivious to consequences we later say were unintended. Godly counsel is often relegated to occasional thoughts or mistaken for customary social norms. Therefore, AI cannot be expected to be more moral than we are ourselves.