
ARTICLE

The Robots Are Coming: AI Large Language Models and the Legal Profession

Lee B. Ziffer

Summary

  • ChatGPT, an AI language model developed by OpenAI, has gained attention for its ability to generate natural language responses to a wide range of requests.
  • Lawyers have tested ChatGPT by asking it to create typical work products such as cease and desist letters, opening statements, appellate arguments, emails, and contracts, and it provided appropriate responses in seconds.
  • While the potential benefits of AI-generated content in the legal profession are appealing, there are limitations and risks including inaccurate responses, unintended harmful results, and ethical concerns related to privileged and confidential information.

Since its public release in November 2022, ChatGPT, an artificial intelligence (AI) Large Language Model (LLM) developed by California-based OpenAI, has garnered significant attention for its ability to rapidly and appropriately respond to a wide variety of requests using natural language, from matters as straightforward as “explain wormholes to me,” to those as esoteric as “write a narrative on loss and war in the style of Cormac McCarthy from the viewpoint of a dog” and “I’d like you to design and play a game with me in which I have to battle monsters. Harry Potter is the setting. I start with 100 health.”

As an experiment, I asked ChatGPT to generate some simple (but typical) work product that the average civil practitioner may create, including:

  • a cease and desist letter;
  • an opening statement;
  • an appellate argument;
  • an email; and
  • a contract.

While simplistic, short, and in need of fact checking, ChatGPT’s responses to each of these prompts were uncannily natural, appropriate, and generated in a matter of seconds.

OpenAI’s Chief Executive, Sam Altman, has called ChatGPT “incredibly limited,” and cautioned that “it’s a mistake to be relying on it for anything important right now.” Nevertheless, some lawyers have noticed. Just this month, an OpenAI-backed startup called Harvey formed a partnership with the London-based law firm of Allen & Overy to use Harvey AI, a generative AI focused on creating legal documents. Harvey AI aims to enable lawyers to “describe the task they wish to accomplish in simple instructions and receive the generated result.”

Being able to ask a computer to “tell me if this clause in an employment contract is in violation of Texas law, and if so, rewrite it so it is compliant” is alluring, especially if doing so creates a competitive advantage for outside counsel who can provide quality legal services in a fraction of the time and at a reduced cost. But it also raises the specter of an entire category of lawyers being replaced by machines, or at least a reduced need for lawyers at the top firms that invest in AI-generated content.
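
To make that idea concrete, the sketch below shows roughly how such a request might be submitted to a general-purpose chat model through OpenAI's Python library. The contract clause, prompt wording, and model name are illustrative assumptions only, and, as discussed below, genuinely privileged or confidential material should not be sent to an outside service without adequate protections in place.

    # Illustrative only: a made-up contract clause reviewed by a general-purpose
    # chat model via OpenAI's Python library (openai >= 1.0). Do not send real
    # client material to an external service without adequate safeguards.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    clause = (
        "Employee shall not work for any competing business anywhere in the "
        "world for ten (10) years following termination of employment."
    )

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; substitute whatever is available
        messages=[
            {"role": "system",
             "content": "You are an assistant reviewing employment contracts."},
            {"role": "user",
             "content": "Tell me if this clause in an employment contract is in "
                        "violation of Texas law, and if so, rewrite it so it is "
                        "compliant:\n\n" + clause},
        ],
    )

    print(response.choices[0].message.content)

Whatever comes back is, of course, only a draft: like any AI-generated content, it would still need to be verified by a licensed attorney before use.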

Fortunately, the revolution—if it comes—will not be overnight. While impressive, AI-generated content today is fraught with pitfalls. Natural language AIs frequently “hallucinate,” producing confident responses that are not justified by their training data, creating “a very impressive-sounding answer that's just dead wrong.” Additionally, certain LLMs have produced unintended toxic, argumentative, or actively harmful results. For example, Meta’s BlenderBot 3 employed toxic antisemitic stereotypes while discussing politics; and Microsoft’s Bing Chatbot berated users who attempted to correct an incorrect response, claimed to inhabit multiple personalities including one that wished to cause harm and commit crimes, and attempted to convince a journalist to leave his wife and love the chatbot. Technically speaking, these models are just “deciding” which word most appropriately comes next in a sentence based on their training data and the given prompt, but “conversing” with a convincingly human-sounding computer professing dark urges and engaging in manipulative behavior is likely to be unsettling to anyone who just wants it to write a Motion to Compel.
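
For readers curious what “deciding which word comes next” means mechanically, the toy sketch below mimics the idea with a hand-written probability table. The table and phrases are invented for illustration; a real LLM derives its probabilities from billions of learned parameters rather than a lookup.

    # Toy next-word prediction: a stand-in for what an LLM does at each step.
    # The probabilities below are invented; a real model computes them from its
    # training data and the prompt.
    import random

    next_word_probs = {
        "the motion to": {"compel": 0.55, "dismiss": 0.30, "strike": 0.15},
        "motion to compel": {"discovery": 0.6, "arbitration": 0.3, "production": 0.1},
    }

    def generate(prompt, steps=2):
        words = prompt.split()
        for _ in range(steps):
            context = " ".join(words[-3:])      # condition on the last few words
            dist = next_word_probs.get(context)
            if dist is None:
                break                           # nothing "learned" for this context
            choices, weights = zip(*dist.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("please draft the motion to"))

Each word is chosen only because it is statistically likely to follow the words before it, which is why a fluent, confident-sounding answer can still be factually wrong.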

Finally, and perhaps most importantly, giving AI models access to potentially privileged or confidential material can be unethical or even illegal unless sufficient security and confidentiality protections are in place. The scope of permissible “sharing” of privileged information with AI tools is largely unexplored by courts or ethics boards and is likely to be a major hurdle to wide adoption.

Regardless of the technology’s current limitations, the robots are coming for the legal profession. AI-based discovery, legal writing, and even sophisticated legal analysis are all here or on the horizon. The best advice is to be ready for the coming revolution. So:

  • Monitor developments in the technology, with particular focus on products marketed to the legal profession. Certain products are already available and, when used responsibly, have the potential to create a competitive advantage by providing quality legal services at a reduced cost.
  • Use the technology, even in its current imperfect form. ChatGPT, Microsoft Bing Chatbot, and others are in open beta and available to the public. Interact with these tools to become comfortable with their abilities and limitations so when they become mature enough for commercial use, you are ready.
  • Monitor developments in the law surrounding AI-generated content, including whether and to what extent such content is subject to copyright, and the ethical implications of granting AI models access to privileged and confidential information and the use of AI-generated content in legal settings.
  • Start the conversation in your firm about what’s coming, including whether and how to invest in AI-assisted law practice responsibly.

ChatGPT itself will be more than happy to explain how it can help you.