From the start, tech journalists warned the public that generative AI (gAI) platforms such as ChatGPT, Bard, and others can provide answers that sound authoritative but are factually inaccurate. That type of response is called a hallucination because, much like a human experiencing a mirage, the platform producing the fabricated answer “believes” the answer it generated is correct.1 Those warnings may not have arrived soon enough, or perhaps were not worded strongly enough, because motions, orders, and lawsuits are now coming through the courts, all centering on the hallucination potential of gAI platforms. Additionally, we’ve been asked for our experience-based opinion on what may come down from the courts with respect to evidence, eDiscovery, and gAI.
The Case.
If we had to point to a case that woke everyone up to the need to heed the hallucination warnings, Roberto Mata v. Avianca, Inc. (1:22-cv-01461-PKC, SDNY, June 22, 2023) is the cautionary tale we’d choose. Briefly, the lawyers in the matter made three major missteps. First, they filed a pleading that contained hallucinated citations and inaccurate conclusions drawn from real cases. Second, they refused to withdraw the pleading when the problems were identified. Third, they used the same gAI platform that had provided the hallucinated and inaccurate information to verify that information. Other false statements made by the attorneys aside, the court sanctioned the lawyers under FRCP Rule 11 with both monetary sanctions and various corrective actions. In his Opinion and Order on Sanctions, Judge Castel acknowledges that there is nothing inherently improper about using a reliable artificial intelligence tool for assistance, but emphasizes that existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.
The Reactions.
In what we suspect will be just a few early promulgations, judges in Texas, Pennsylvania, and Illinois have all issued standing orders requiring attorneys and pro se parties to certify either that the filing party has not used gAI to perform legal research or writing, or that, if they have used gAI to perform those functions, they have verified the information is accurate. In the Northern District of Texas, Judge Starr’s certification includes the words, “will be checked for accuracy, … by a human being before it is submitted to the Court” (https://www.txnd.uscourts.gov/judge/judge-brantley-starr; a downloadable form is available there).
While this type of certification may become a commonplace addendum to court filings, we don’t believe a change to the Federal Rules of Civil Procedure (FRCP) is on the near horizon. The FRCP is designed to transcend individual advances in technology, thereby eliminating the need to address how the products of gAI platforms and applications are used. Whatever is produced, and whatever its level of accuracy, Rule 11 makes clear that attorneys and pro se parties bear the responsibility for ensuring that what they file with the court is true, accurate, and not “frivolous” (https://www.uscourts.gov/sites/default/files/federal_rules_of_civil_procedure_december_1_2022_0.pdf, page 38).
Who’s Responsible?
If attorneys and pro se litigants are responsible for any hallucinated information they include in a court filing, why shouldn’t the gAI platform owners and application developers bear responsibility for creating an authoritative-sounding, but fallible, product? Radio personality Mark Walters intends to prove just that in Mark Walters v. OpenAI, L.L.C. (Sup. Ct. Gwinnett County, GA, 23-A-04860-2, June 5, 2023; Notice of Removal filed July 14, 2023, cited as Walters v. OpenAI, L.L.C., 1:23-cv-03122 (N.D. Ga. Jul. 14, 2023), ECF No. 1). Walters is suing the developer of ChatGPT over allegedly libelous responses it gave a journalist who was seeking confirmation that Walters was being sued for embezzlement in another legal matter. Curiously, while the matter cited, Second Amendment Foundation v. Ferguson (https://saf.org/second-amendment-foundation-v-ferguson/), is real, the information ChatGPT provided that Mark Walters was a defendant in the case is inaccurate; Walters is not a named party in the matter. Even more curiously, the Complaint in Walters states that, despite ChatGPT’s response to the prompt, “The complaint has nothing at all to do with financial accounting claims against anyone” (https://cdn.arstechnica.net/wp-content/uploads/2023/06/Mark-Walters-v-OpenAI-23-A-04860-2-6-5-2023.pdf). Whether OpenAI will be held liable is worth watching, as it could open the proverbial floodgates to claims that hallucinations aren’t just unbelievable, they’re harmful.
Irrespective of the technological advancements that are coming our way, the practice of law requires careful attention to detail, from research to drafting to, of course, eDiscovery. At Digital Mountain, we’ve been busting ghosts from the machines reliably for twenty years, so you won’t be haunted by scary eDiscovery.
1 We understand there are legitimate questions regarding ascribing human qualities to technology, but for the sake of understanding, we’re asking for your forbearance.