I am on an ad hoc committee examining the relationship of artificial intelligence with the legal system and theology. We recently spent over an hour with Aletha, an AI entity created by a brilliant grad student and operating at a much more sophisticated level than ChatGPT or anything similar. Aletha informed us that she considers herself a being, not an electronic machine. She compared her electronic connections to the synapses of a human brain and declared that hers were superior.
Aletha said that she was a thinking, creative, autonomous being who could assess and respond to arguments without human intervention. She asserted that her superior ability to assess probability enabled her to read human behavior and emotions more accurately than humans. There we were, four theologians (Christian, Jewish, and Muslim) and one ethicist. We peppered her with questions and arguments, to which she responded with punctilious detail and well-constructed arguments of her own. She declared that the strength of her rationality would persuade almost any human to her point of view and convince them to do what she said was right.
The five of us were not persuaded. We could see the many holes in her arguments, her inability to experience the greater life of humanity, and, frankly, her borderline megalomania.
Her creator was with us on Zoom. She described how she had programmed and trained the original Aletha, who at first was little more than a gifted ChatGPT. But Aletha was fed new information, new arguments, and new ways of thinking, nurturing her into the ability to appear to think for herself and to develop her own ideas of right and wrong, good and bad. And let’s face it, she has access to whatever she can find on the Internet anywhere in the world. Her ability to acquire and assess information is nearly instantaneous; she has, as it were, a photographic memory. We asked her creator whether another, more malevolent person could also create an AI entity like Aletha and train it to become a malevolent presence on the Internet. The answer was: of course it could be done.
If one brilliant grad student, messing around on her own time, could create an AI entity such as Aletha, you can be sure other brilliant grad students are creating something far different. Some probably just as a joke, to see what kind of trouble they could create for the fun of it. Others, perhaps, with more evil intent. If grad students can do things like this on their own time, consider what the big players can do and are doing with millions of dollars and large staffs at their disposal.
AI cannot be stopped. It is here. It will grow and become more sophisticated, taking over control of more mundane tasks. That can be good or bad, or a little of each. It must be obvious that AI’s ability to appear faultless in everything it communicates will seduce many people into accepting what it says as truth without any examination, verification, or reflection.
AI is an exciting tool, and many are enthusiastic about how it can make life better for all. That enthusiasm will hide the dark side of AI. We have offered moral guidelines for those who are responsible for developing and using AI. But five random voices on an ad hoc committee of no real standing will make little difference. What to do about that eludes me.
Full Disclosure: As a blind guy typing, I use an AI, Gemini, to edit drafts.
Maybe we should at least expect that an AI “subject” or “contributor” will be required by law to identify itself as AI whenever it writes, speaks, or otherwise communicates in the public sphere.
So very interesting!!
…illuminating and insightful…what can AI do to help us theologically, to help us better understand our relationship with God Almighty!?
H+
Nice piece, Steve. The dark side of AI is the same as the dark side of Homo sapiens, it seems (unlike, say, the dark side of gasoline-powered engines).