Regulating AI: reflections on possibilities

Congress has been muddling toward regulation of Artificial Intelligence (AI) even as it has demonstrated only the vaguest understanding of years-old social media. Rather than make a fruitless attempt to regulate specific AI practices, I suggest legislating ethical standards that remain valid even as the technology rapidly changes. Consider, for example, the Hippocratic Oath for physicians, which, with modest changes over the centuries, has remained valid for 2,500 years. A similar approach might work for AI.

What might ethical standards for AI look like? We need to start with a theological assertion: humans cannot create something more morally perfect than themselves. A person, however moral, is always and everywhere beset by prejudices of all kinds, by selfishness, greed, jealousy, and the like, that deny moral perfection. No matter how self-aware AI may become, it cannot be more morally perfect than its corruptible creators.

This assertion does not deny humanity's ability to make moral progress, as indeed it has, slowly and reluctantly, over the centuries. Progress has been possible only when previously unasked questions were finally asked because some unknown unknown had at last become known to at least a few. Moreover, humanity is hampered by its inability to anticipate the future accurately. Even the best modeling is based on probabilities that cannot accommodate random contingencies, and there are always random contingencies. Generative AI may develop an ability to estimate probabilities faster, but random, and therefore unknown, contingencies will remain to mess things up.

Cultural difference is one contingency that is not random. What is moral in one culture is immoral in another. Even when there is general agreement on basic standards, different cultures may interpret them in widely differing ways. It is hard to imagine how AI can avoid having cultural biases, no matter how well intentioned its creators might be. And speaking of imagination, it seems unlikely that AI, no matter how generative, will be able to intuit or imagine in, say, the way Einstein intuited the theory of relativity, or be moved, as Wordsworth was, to wax poetic about the ruins of Tintern Abbey.

With that in mind, regulatory legislation may have no better foundation than an adaptation of Asimov’s rules for robots:

  • The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added another rule, known as the fourth or zeroth law, that superseded the others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” (Britannica)

The portion of a recent William & Mary Law School AI conference I attended reflected on existing laws protecting intellectual property, criminalizing threatening speech, and governing defamation of character and truth in advertising. These laws can be applied to AI as readily as they are applied to human behavior. It is incumbent on legislators to recognize the tools we already have and use them appropriately.

“The Universal Declaration of Human Rights (UDHR) is a milestone document in the history of human rights. Drafted by representatives with different legal and cultural backgrounds from all regions of the world, the Declaration was proclaimed by the United Nations General Assembly in Paris on 10 December 1948 (General Assembly resolution 217 A) as a common standard of achievements for all peoples and all nations. It sets out, for the first time, fundamental human rights to be universally protected and has been translated into over 500 languages. The UDHR is widely recognized as having inspired and paved the way for the adoption of more than seventy human rights treaties, applied today on a permanent basis at global and regional levels (all containing references to it in their preambles).” (United Nations) 

The thirty articles of rights are declarative in language and aspirational in practice. They are culturally biased in favor of Western political philosophy, and no nation has fully lived up to them. Nevertheless, they form a moral framework that can help guide the development of AI regulation.

Everything taken together can create something of a moral fence surrounding AI without trying to regulate specific processes and applications. Neither would the fence try to anticipate unforeseen advances. Would it stop misuse of AI? Certainly not. Malevolent actors have always used whatever technology is at hand to commit crimes and damage society. Like all laws and regulations, AI regulation can only establish generally accepted standards and terms for enforcement.

A final observation: morality and social norms are not the same thing. Generations grow up well educated in acceptable social norms that they take to be traditional moral principles, never to be surrendered. But social norms are transitory, mutating quickly and differing greatly from tribe to tribe and place to place, often becoming the core of political acts to preserve and defend them at any cost. Theology and moral philosophy attempt to stand apart from social norms and to affiliate with wisdom that has stood the test of time, wisdom that, for theologians, is consistent with revealed truths and godly justice. It is a never-ending tug of war.

AI has other limitations that prevent it from ever assuming human qualities, except in part. It cannot experience the world as a human can; it will always be more Spock than Kirk. It cannot experience the ebb and flow of emotions and reactions that humans have in almost infinite variety, and so it cannot learn in the almost infinite ways humans do. I have no doubt it can be made self-aware of its system status and can even imitate human emotions. It seems unlikely, though, that it can be made to feel as humans do, with literally every nerve and fiber of their bodies.

From a Christian understanding, creatures and creation are sacred; what humans create cannot be in and of itself sacred. Humanity cannot be manufactured.
