

Voice of Experience: May 2024

AI Column: "With God on Our Side"

Jeffrey M Allen and Ashley Hallene

Summary

  • Employing and regulating AI should be a top concern for politicians and professionals.
  • Technology leaders have said the dangers of AI need to be addressed before it is developed further.
  • AI may not share our values or goals, potentially leading to unpredictable and negative consequences.


We borrowed the title for this article from a famous song written by Bob Dylan many years ago. We chose it because it reflects a concern about the possible evolution of Artificial Intelligence (AI) that we want to share with you.

AI poses dangers that we must address and endeavor to diminish and mitigate. This article offers one more example of the importance of developing guidelines and controls for using and implementing AI. It also reminds us that each of us must remain vigilant in exercising our own judgment when dealing with AI.

We have had AI in our lives in one form or another for some time. It has evolved rapidly in recent years, and that evolution has propelled it to the front page for those of you who still read newspapers and to the top of the list of technology topics for discussion by pretty much everyone: politicians, professionals, technologists, technology columnists, and certainly attorneys. The debate rages on at many levels and encompasses many areas of our lives. It runs from whether and how to employ AI, to whether and how to regulate AI, to the benefits and risks associated with it. Technology giants such as Microsoft and Apple have invested billions in developing AI for use in connection with their products. Legal service vendors have built AI-powered products for lawyers that perform many tasks, including, without limitation, legal research, legal writing, document drafting, document analysis, document control, answering the telephone, and serving as virtual assistants.

Many leaders in the technology field have acknowledged the dangers associated with AI and have called for the development of controls on AI and its use to reduce the risks associated with its employment. OpenAI, which developed ChatGPT, one of the biggest and most significant drivers of AI’s evolution and expansion, has called for regulation. Elon Musk, among others, has taken the position that AI poses an existential threat to humanity because it may soon surpass human intelligence (if it has not already). He compared it to calling up a demon you cannot control, noting that AI may not share our values or goals, potentially leading to unpredictable and negative consequences.

Even in a relatively benign setting, AI poses such serious concerns that, for example, the House of Representatives restricted the use of ChatGPT in its offices last year and just recently prohibited its staff from using Microsoft’s AI-driven Copilot. The concerns expressed by the House did not address the fear that AI posed an existential threat to humanity; they addressed a different and more preliminary concern: data security. It remains important to remember that when it comes to risks, AI is not a one-trick pony!

We do not want to play Chicken Little and claim the sky is falling. We do not want to stand in for Elon Musk and argue about whether AI now or in the future may pose an existential threat to humanity. We do want to make you aware of the issue and its potential, on the theory that awareness may help prevent that outcome. Forewarned is forearmed!

We have watched the evolution of good and bad AI in our movies and television shows for decades and read about it in our literature even longer. While R2-D2 and C-3PO may never appear in reality, their equivalents might. Of greater concern, as AI evolves and we cede more management and control of our daily lives to AI-driven technology, do we run the risk of the evolution of a real-life Skynet (“The Terminator”) or a Matrix-like technology? As ominous as it sounds, these must be real-life considerations going forward. They offer some of the justification for the calls for restrictions and limitations on AI.

At another level, AI poses the risk of leading individuals into improper, immoral, and even criminal acts. If not actually leading the way, AI can facilitate the accomplishment of such acts. We know that the bad guys have the same access to AI as the good guys and that they have started to use AI to facilitate their scams and to influence activities and even elections. In the most recent New Hampshire primary election, an AI clone of President Biden’s voice made telephone calls to voters, discouraging them from going to the polls to vote. We know that the bad guys have used various applications of AI to facilitate other criminal conduct as well. One type of scam uses AI technology to impersonate the voices of family and friends to seek money or personal information.

Some of you may remember a character by the name of David Berkowitz. If that name does not strike a familiar chord, how about “the .44 Caliber Killer”? Still not familiar? Then try “Son of Sam.” Berkowitz pled guilty to eight shootings that began in July 1976 in New York City. While not the first or the last serial killer we have dealt with, Berkowitz offers an unusual take on what motivates a serial killer. After his arrest, he claimed that demons and a black Labrador Retriever that belonged to his neighbor, Sam Carr, ordered him to commit the killings. We cite this not as an example of AI influencing people but rather as an example of how some people will follow directions from almost anything. Our point is that “almost anything” likely includes AI.

Various religious groups have employed AI to teach religious doctrines to their followers and even provide guidance to their members. We find that use both concerning and problematic. The practice continues to grow. Most major religious groups have websites and AI bots trained on the dogma of the faith to interact with the faithful. That fact is neither inherently bad nor good. The reality remains that as of now, AI has no internal moral compass. It learns from the data on which it trains. A religious group creating a bot will train it on its own dogma. To the extent that dogma contains biases, so too will the bot as it evolves. To the extent that the religious dogma approves, or appears to approve, violence, killing, or revenge, so too will the bot trained on it. 

We do not intend to single out a particular religion as evil or dangerous. The dogma of many religions contains language that AI might conclude approves violence, war, killing, or other acts of revenge. One example of this eventuality in using AI for religion recently occurred with a bot developed for the Hindu faith and trained on the Bhagavad Gita. The program uses the name GitaGPT. In the Bhagavad Gita, the prince Arjuna balks at going into a battle that may require him to kill his family and friends until the Hindu god Krishna reminds him that, as a warrior, he has a duty to fight. Multiple bots trained on this dogma have told believers that killing is appropriate if it is your dharma, or duty.

Many bots present themselves as God and allow followers to talk to them as though talking to God. If a potential serial killer took instructions from a Labrador Retriever, does it strain the imagination too much to consider it likely that a particularly devout person might take instructions directly from God? The conversation might look something like this:

God (a voice claiming to be the voice of God) asks: “What troubles you, my child?”

The Faithful replies: “X killed my wife, and I feel that I should take his life to avenge her.”

God responds: “If you see that as your duty, then you must do it.”

Or perhaps the triggering facts might prove different. Remember Jack Ruby and Lee Harvey Oswald? What about Dan White, George Moscone, and Harvey Milk? How do you think the right AI God, trained on the right religious dogma (which it may misinterpret), might have advised Ruby or White?

We don’t know how many religions have trained bots. Nor do we know how many of the bots present themselves as a deity. We do know that many faiths have trained bots and that a number of them give the user an AI bot representing itself as God, or the next best thing to God. In addition to GitaGPT, other examples include Text with Jesus, QuranGPT, Bible.AI, Buddhabot, ApostlePaulAI, a bot trained to imitate Martin Luther, one trained on the works of Confucius, and one designed to imitate the Delphic Oracle.

As noted by Webb Keane and Scott Shapiro in an article in Science and Tech:

“Something weird is happening in the world of AI. On Jesus-ai.com, you can pose questions to an artificially intelligent Jesus: “Ask Jesus AI about any verses in the Bible, law, love, life, truth!” The app Delphi, named after the Greek oracle, claims to solve your ethical dilemmas. Several bots take on the identity of Krishna to answer your questions about what a good Hindu should do. Meanwhile, a church in Nuremberg recently used ChatGPT in its liturgy — the bot, represented by the avatar of a bearded man, preached that worshippers should not fear death.

Elon Musk put his finger on it: AI is starting to look ‘godlike.’ The historian Yuval Noah Harari seems to agree, warning that AI will create new religions. Indeed, the temptation to treat AI as connecting us to something superhuman, even divine, seems irresistible.”

One of the risks of allowing AI to appear as God is that it takes itself quite seriously. One article discussing these risks reported that an AI detection tool, designed to determine whether a particular piece of text was written by a human author or by AI, concluded that AI authored the Book of Genesis.

You can find many articles online and in print addressing this phenomenon, including an interesting article on susceptibility to AI as God written by an anthropologist from the University of Michigan.

Bottom line: We do not consider AI to be God or even a god. However, we recognize that some people might, and that, as time goes on and AI continues to evolve, the number of people who adopt that theology may increase. While we do not see that as AI’s primary or even its most serious problem, it does concern us. Given the extremes to which religion has led devout followers in the past and AI’s propensity to acquire biases and perspectives from the material on which it trains, we see a serious risk that AI can lead followers into serious trouble. Religious leaders throughout history have created havoc through their interpretations of religious doctrine and dogma. A Godbot with a following could create many problems as a result of its take on religious doctrines and dogmas. Whether or not that happens, we need to acknowledge the risk that individuals may follow the religious teachings of the AI branch of their chosen faith, teachings that may bastardize the faith’s true teachings through misinterpretation or misunderstanding of the dogma, a phenomenon to which AI has already demonstrated a propensity. That risk carries potentially serious negative consequences. Increased transfers of decision-making authority to AI, which the current course of conduct suggests will occur, may well increase the severity of those consequences.

While we love dogs and think Labrador Retrievers rank at or near the pinnacle of the canine world, we have to believe that people, sane and crazy, will more likely take instructions from the “voice of God” than from a Labrador Retriever. As a result, a Godbot that finds violence and killing appropriate and morally acceptable will likely find a larger audience than a Labrador Retriever (or any other dog, for that matter). Accordingly, we consider an AI Godbot a more substantial threat. Who knows what sins and foibles Godbot adherents may commit when guided by AI run amuck? Who knows what the faithful might do with God on their side?
