I wrote this piece about a year and a half ago to summarize the discussions of an informal committee made up of two Christian clergy, a rabbi, and an ethicist, asked to offer counsel to several members of the William and Mary Law School faculty on questions about the relationship of artificial intelligence to morality and the legal system. Given the current conversation about artificial intelligence and its potential for good and for harm, effects already visible in American society, I decided to publish this on Country Parson in slightly edited form. It is unusually long for a Substack post, but if this is a subject of interest to you, I think you will find it helpful.
PART ONE
A Theological Perspective
As Jewish and Christian theologians, we are compelled to recognize and honor the sacredness of humanity created in “the image of God.” However A.I. develops and is used, it must not undermine what it means to be human. Moreover, we live in the American context of Western civilization, and it is to that reality that we offer the following.
Preface
News, articles, and rumors about A.I. flood every corner of the media world. Products and services are advertised as infused with the latest in A.I. technology. Its promises of an exciting future are reflected in a short passage from the recent book by George Stephanopoulos, The Situation Room. The purpose of the White House Situation Room is to sift incoming information from all sources to produce briefings needed by the President and presidential staff.
Toward the end of the book, Stephanopoulos reports an observation from Google’s CEO: that having dozens of analysts look through data is a waste of human talent when computers could be programmed to do it faster and cheaper. All it would take is the right algorithm to pick out unusual deviations from routine background chatter. Today’s advances in AI would enable the computer to learn and adjust on its own, faster and more accurately than error-prone humans. Google may not be the most reliable source of expert advice, given its current problems with publicly available AI tools, but the idea is worth thinking about.
What this idea fails to consider is that much of what analysts do is based on intuition—a sense that something is different—a hunch grounded in knowledge of history and conditions not included in the data flowing into the Situation Room. As an added factor, analysts check each other through conversation about what they think and why they think it, which will always involve an element of moral judgment. They are also able to adjust quickly to new requests from White House staff that are often far outside the norm. It is an organic process—analog, if you will—that involves constructive relationships between independent persons in a way that computers cannot, at least not now.
Is the organic human approach less accurate and efficient than a computer-based AI approach? It need not be one against the other. AI can sift through enormous amounts of data in a short time, looking for anomalies. The slower, organic process of human analysts will ferret out nuances outside the realm of algorithms and the limitations of machine learning. Moreover, what is true for the Situation Room is true for everyday life in every organization: public sector, private sector, nonprofit, and social.
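The “right algorithm” imagined in the Google anecdote is, at bottom, statistical anomaly detection. A minimal sketch in Python, using entirely hypothetical data, shows both what such a tool can do and where it stops:

```python
# Illustrative sketch only: a toy anomaly detector of the kind the
# "right algorithm" remark imagines. Real intelligence systems are far
# more sophisticated; the data and names here are hypothetical.

def zscore_anomalies(values, threshold=3.0):
    """Flag readings that deviate sharply from the background mean."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return []  # perfectly uniform chatter: nothing stands out
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mean) / std > threshold]

# Routine "background chatter" with one sharp spike at index 5.
chatter = [10, 11, 9, 10, 12, 90, 10, 11, 9, 10, 11, 10]
print(zscore_anomalies(chatter))  # the spike (5, 90) is flagged
```

Note what is absent: the function has no notion of history, context, or moral weight. Everything this essay attributes to human analysts, intuition, mutual checking, judgment about what a deviation means, lies entirely outside it.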
Artificial intelligence, for all its potential for good, cannot replace human wisdom, creativity, imagination, and intuitive problem solving. What humans can do, and have done for millennia, is to imagine the unasked question and postulate possible answers based in part on what is thought to be good and right. That is a unique property of humanity that computers can only imitate superficially.
This is not to dismiss the potential value of AI. In time it may be able to automate laborious and error-prone work now done by computer-assisted humans. AI currently available to the public appears to be used more for entertainment or as a kind of toy than anything else. As with any such tool, people are experimenting with it in ways that produce silly, outlandish, and sometimes dangerous outcomes. Hackers have learned how to manipulate its internal systems to produce results no one expected or wanted. It will take time for AI to become a reliable, useful tool for the public. In the meantime, governments, universities, and corporations are exploring uses not available to the public. With fingers crossed, we can hope for the best and remain skeptically vigilant.
Trying to put ethical limits on AI may seem like a pointless exercise because it is difficult to define AI itself. Current versions—large language models and machine learning systems—are equivalent to Model Ts and Wright Brothers’ airplanes. Governments, universities, and industry are in a race to bring into reality dreams of self-aware systems able to operate with minimal, if any, human supervision. Theologians and ethicists are left in the bewildering dust of technical jargon expressed in unfamiliar languages.
In Defense of Being Human
We need clarity about what it means to be fully human if AI is to be developed for the benefit of humanity and to protect what it means to be human. Of course, it means different things to different people depending on religious faith, ethical belief, social norms, status, and cultural heritage. Possible commonalities across cultures may include the following.
For me to be fully human, the other must be able to be fully human. The other may be my most beloved or my least trusted. The other may be a stranger, even an alien presence. To be fully human also means to think, create, struggle, succeed, fail, laugh, cry, wonder, and doubt. In like manner, for me to prosper, others must be able to prosper.
No matter how good, all humans are prone to selfishness, greed, desire for power and position, vengefulness, and violence. There is likely a wide distribution between the best and worst of us, but most consider themselves somewhere in the middle. Finally, we are not always rational in our moral decisions. The effort would be exhausting. Instead, we rely on habits of the heart learned from childhood and experience, often ignoring pitfalls and remaining oblivious to consequences we later call unintended. Godly counsel is often relegated to occasional thought or mistaken for customary social norms. Therefore, it bears stating plainly: AI cannot be expected to be more moral than we are.
Artificial Intelligence and Moral Questions
AI is unlike any other technology that has changed the world in dramatic ways. Consider the printing press, gunpowder, railroads, commercial electricity, radio, and countless other inventions. Each was a tool, unable to make decisions on its own and entirely subject to human use. AI, on the other hand, is being developed to ask and answer moral or ethical questions for itself: “Should I do this?” and “How should I do it?” Questions that begin with should are moral questions, implying uncertainty about the right thing to do.
What values determine good from bad, right from wrong? When conditions require a choice between competing goods or competing evils, some goods must be abandoned or some evils accepted. These questions trouble human beings deeply and keep philosophers in business. They are precisely the kinds of questions developers intend to make AI capable of resolving without human direction.
Because AI is designed to communicate in human-like ways, it is likely to answer moral questions posed by humans—or even suggest what humans should do without being asked. In that sense, humans risk becoming tools used by AI at its discretion—a complete reversal of the relationship that has governed every other technology.
Some anticipate that AI will become a kind of oracle, offering immediate and authoritative answers to life’s questions. If humans believe they can create an AI more morally perfect than themselves, they are profoundly mistaken. An AI created in humanity’s image cannot be other than fundamentally flawed. Paul Bloom, writing in the November 2023 New Yorker, asked, “How moral can AI really be?” His answer was: not much more than it is now. He wondered whether it would take God himself to convince people what the rules should be. As Jewish and Christian theologians, we believe God has already done that, in ways discernible through deep engagement with scripture and tradition.
For instance, the following is based on the Ten Commandments, which are an essential part of the foundation of Jewish and Christian morality.
- First, beware of making AI into an idol.
- AI cannot be allowed to undermine what it means to be fully human.
- Developers must always prioritize the good of humanity over all other measures of utility.
- Probabilities of “unintentional consequences” must be made public.
- There must be regular pauses in development to allow society to absorb and assess it.
- The wisdom of the ages must serve as both guide and guard.
- AI cannot be used to make decisions about intentional killing.
- AI must not endanger the integrity of human relationships.
- AI cannot be used to appropriate resources for those who have no moral right to them.
- AI cannot be used to privilege some at the expense of others.
To put them in terms more congenial to lawyers:
- Lawyers engaged with AI-related questions must demand of its creators an assessment of its benefits for the good of the other.
- They must examine possible and probable effects that undermine full humanity.
- They must seek to unveil the likelihood of “unintended” consequences.
- They must advocate for the good of humanity, not the good of AI.
- Building on the Ten Commandments, they must challenge anything that implies AI is an idol.
- They must seek periodic pauses in development to allow society to assess and respond.
- They shall seek guidance from the wisdom of the ages.
- They shall oppose murder as an outcome of AI applications.
- They shall seek to protect the integrity of human relationships.
- They shall oppose any use of AI that appropriates for some that to which they have no moral right.
- They shall demand full transparency from AI developers and users.
- They shall oppose any use that privileges some at the expense of others.
Institutions are essentially amoral. Whatever ethics or morality they project comes from the people in ever-changing leadership positions. That is true of companies, nonprofits, and governments at every level. Our democratic republic, with its system of checks and balances—when it works as intended—creates the best chance for government to act in “our best interests,” not merely “my best interest.”
The relationship between ethics or morality and our best interests is simple. For something to be ethical or moral, it must at least be concerned with the good of the other. If the good of the self or the institution is the only goal, then it is likely to generate unethical or immoral acts.
Shannon Vallor’s 2016 book, Technology and the Virtues, offers a different perspective, one not grounded in the Jewish and Christian theology of the committee.
Vallor attempts something similar by drawing on the ethics of Aristotle, Confucius, and Buddhism, which she groups under the broad heading of “virtue ethics.” Aristotle had much to say about virtue and is best known for four: prudence, temperance, justice, and courage. Each lies along a continuum, the extremes of which are destructive both to the individual and to the polis. The virtuous person strives for the “golden mean.”
I am less certain about cardinal virtues in Confucian thought, but Confucius emphasized deep learning of right ways in order to assure an optimal ordering of society. Buddhism is more complex. The following excerpt, summarizing the Four Noble Truths, may be helpful:
Now this, bhikkhus, is the noble truth of suffering: birth is suffering, aging is suffering, illness is suffering, death is suffering; union with what is displeasing is suffering; separation from what is pleasing is suffering; not to get what one wants is suffering; in brief, the five aggregates subject to clinging are suffering.
Now this, bhikkhus, is the noble truth of the origin of suffering: it is this craving (taṇhā, “thirst”) which leads to re-becoming, accompanied by delight and lust, seeking delight here and there; that is, craving for sensual pleasures, craving for becoming, craving for disbecoming.
Now this, bhikkhus, is the noble truth of the cessation of suffering: it is the remainderless fading away and cessation of that same craving, the giving up and relinquishing of it, freedom from it, non-reliance on it.
Now this, bhikkhus, is the noble truth of the way leading to the cessation of suffering: it is this noble eightfold path; that is, right view, right intention, right speech, right action, right livelihood, right effort, right mindfulness, right concentration.
What struck me about all three is how readily they can be placed alongside Maslow’s hierarchy of needs. In the mid-twentieth century, Maslow proposed a simple list—not originally a pyramid—suggesting that lower-level needs must be met before higher-level needs can be meaningfully pursued. Reaching a higher level may be a goal, but there is no guarantee of remaining there.
Maslow has been criticized for deriving his hierarchy from observation rather than rigorous testing. That criticism is fair. At the same time, the model has endured. As a reminder, the levels are:
- Physiological (food, shelter, clothing)
- Safety and security of person and possessions
- Love (filial) and belonging
- Esteem (akin to Aristotle’s glory or modern success)
- Self-actualization
Buddhism assigns little ultimate value to these needs, except perhaps in its understanding of self-actualization. Lower-level needs may have practical utility, but attachment to them traps one in the cycle of suffering and rebirth.
Confucius and Aristotle, by contrast, place high value on lower-level needs. A free and educated man cannot be virtuous without them. Self-actualization, as Maslow describes it, is largely absent. To achieve esteem or glory is, in effect, to reach life’s apex. Those lacking material resources or social standing are unlikely to be esteemed or honored.
American materialism, expressed through the idea of the American Dream, tends toward Aristotle. The belief that elite universities produce elite leaders for the broader society reflects something closer to Confucian assumptions.
If our committee’s discussions are centered on preserving and enhancing what it means to be fully human while making optimal use of AI, then Vallor’s approach may suggest the following.
To be fully human in secular America requires that society be structured so that every person has an equitable opportunity to be adequately housed, clothed, and fed; to enjoy security of person and possessions; to be loved and engaged in a healthy social life; and to be recognized for achieving success in their endeavors. I am less certain that American society has a clear understanding of what self-actualization might mean, or whether it is worth pursuing.
Aristotle, Confucius, and Maslow would all agree that these needs must be met through effort. They lose moral or virtuous significance if detached from the work required to attain them. Therefore, AI must be a tool to supplement and assist, not replace, the work human beings need to do in order to feel they have earned what they have achieved.
Obviously, the definition of work will change. Whatever that change entails, it must continue to involve both physical and intellectual labor.
I have no idea what that will mean for the legal system.
Speaking for myself, I do hope it means that books and other printed materials do not disappear. Cuneiform on clay tablets, ink on parchment, and bound books have outlived countless technologies. They may well outlast computer chips and software platforms that seem to die almost as quickly as they are born.
I also hope that AI does not eliminate risk too thoroughly. To strive in the face of possible failure is part of what gives effort its meaning. The old trope—that trust fund beneficiaries become idle and socially unproductive—may be overstated, but it carries enough truth to serve as a warning.