
Trust, But Verify: Avoiding the Perils of AI Hallucinations in Court


Introduction

The rise of generative artificial intelligence (AI) presents lawyers with powerful tools and tactical advantages to streamline many aspects of their practice and to deliver more efficient, effective legal services to their clients. But lawyers must exercise caution when using these new AI platforms to ensure they comply with their ethical obligations. A recent case from the Eastern District of Texas highlights a recurring ethical pitfall: litigators citing hallucinated, AI-generated case law in a brief to the Court without verifying the accuracy (or even the existence) of the cited authorities.

Judge Marcia Crone held that the lawyer’s oversight violated his ethical obligations, including those imposed by Rule 11(b)(2) of the Federal Rules of Civil Procedure and the Eastern District of Texas’s Local Rule AT-3(b). The Court issued sanctions in the form of a $2,000 penalty and a directive to attend a continuing legal education (CLE) course on using generative AI in the legal field. This case emphasizes the importance of critically examining AI outputs and cited authorities before submitting these materials to courts.

A useful paradigm for attorneys is to treat AI outputs as coming from a sharp but green first-year lawyer who requires significant oversight. A practical tip for lawyers using AI is to perform the legal work themselves first, then consult AI as a “sparring partner” to refine the work product. Lawyers should trust AI (to an extent) but should always verify that the AI’s analysis is fully accurate and in compliance with all ethical duties.

In the spirit of this article, the authors utilized AI to assist with the drafting process and manually confirmed that every authority cited here exists and is accurately represented.
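For litigators who want a systematic first pass at the “verify” step, the existence check can even be partially automated. The short Python sketch below is illustrative only: it assumes access to the Free Law Project’s CourtListener citation-lookup service, and the endpoint URL, authentication token, and response fields shown are assumptions that should be confirmed against that API’s current documentation before any reliance on it.

```python
# Illustrative sketch only: a programmatic first pass at checking whether the
# cases cited in a draft brief actually exist. The endpoint URL, token usage,
# and response fields are assumptions modeled on the Free Law Project's
# CourtListener citation-lookup API; confirm them against the current API
# documentation before relying on this in practice.
import requests

CITATION_LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"


def check_citations(brief_text: str, api_token: str) -> list[dict]:
    """Send the brief's text to the lookup service; returns one record per citation found."""
    response = requests.post(
        CITATION_LOOKUP_URL,
        data={"text": brief_text},
        headers={"Authorization": f"Token {api_token}"},  # hypothetical token
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    draft = (
        "See Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023) "
        "(sanctioning attorneys for citing nonexistent, AI-generated cases)."
    )
    for record in check_citations(draft, api_token="YOUR_API_TOKEN"):
        # Assumed response schema: a status of 200 means the citation resolved
        # to a real opinion; anything else should be pulled and read manually.
        found = record.get("status") == 200
        print(f'{record.get("citation")}: {"found" if found else "VERIFY MANUALLY"}')
```

Even when such a lookup reports every citation as found, it confirms only existence; the quotations and holdings attributed to each case still must be verified by reading the opinions themselves.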

Case Spotlight

In Gauthier v. Goodyear Tire & Rubber Co., counsel for the plaintiff submitted a response to a summary judgment motion that cited two nonexistent cases and included multiple fabricated quotations. After opposing counsel spent time and resources searching for these phantom authorities, they raised the issue with the Court in a reply brief. Even then, the lawyer failed to address the problem until the Court issued a show-cause order directing plaintiff’s counsel to explain why sanctions should not be imposed against him. The lawyer admitted to using a generative AI tool, “Claude,” without verifying its output, and acknowledged his error. The Court subsequently imposed sanctions, citing the attorney’s failure to exercise diligence and uphold his professional obligations under Rule 11 and the Eastern District of Texas’s Local Rules.

The Court’s Holding & Rationale

The Court sanctioned the attorney, ordering him to pay a $2,000 penalty, complete a CLE course on AI in the legal field, and provide the order to his client. The Court emphasized that Rule 11 requires attorneys to ensure their filings are grounded in existing law or nonfrivolous arguments for change, noting that at “the very least, the duties imposed by Rule 11 require that attorneys read, and thereby confirm the existence and validity of, the legal authorities on which they rely.” The Court underscored the harm caused by submitting fabricated authorities, including wasted time and resources, potential damage to judicial and professional reputations, and diminished trust in the legal system.

Overreliance on AI is a Growing Ethical Issue

This Texas case is the latest in a growing trend of sanctions for similar misconduct. Last year, the Southern District of New York sanctioned two attorneys and their firm in the infamous Mata v. Avianca, Inc. case for submitting a brief citing nonexistent cases generated by ChatGPT. Courts around the country have encountered similar issues as lawyers increasingly rely on generative AI tools. These notorious cases serve as stark reminders that AI outputs must be critically assessed and verified.

Lawyers Have Ethical Duties to Use Technology Competently

The sanctions in these cases are consistent with Model Rule of Professional Conduct 1.1, which requires lawyers to provide competent representation. Comment 8 to Rule 1.1 specifically notes the need to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” The failure to verify AI-generated content falls squarely within this duty, highlighting the importance of technological competence in modern legal practice. Further, lawyers should familiarize themselves with any additional requirements regarding the use of AI that particular jurisdictions or even individual courts (like the Gauthier Court) may impose.

AI Certification Rules and Their Limits

Some courts have implemented rules governing AI-generated content in filings, including certification requirements. For instance, Eastern District of Texas Local Rule AT-3(m) requires lawyers to “review and verify any computer-generated content to ensure that it complies” with the applicable rules. However, these rules do not supplant the broader obligations of competence and diligence under Model Rules of Professional Conduct 1.1 and 1.3. Attorneys remain responsible for ensuring the accuracy and validity of their submissions, regardless of whether they use AI.

Conclusion

Lawyers stand to gain a strategic advantage if they learn to incorporate AI into their legal practice, but these new tools also demand heightened vigilance to ensure that lawyers comply with their ethical obligations. Generative AI can be a powerful ally for litigators, but it cannot replace the exercise of independent legal judgment. This recent Texas case serves as another reminder that our professional obligations—ethical competence, diligence, and adherence to procedural rules—remain paramount. By embracing AI responsibly, we can harness its potential to deliver more effective, efficient representation to our clients while maintaining the integrity of the legal profession.


1 Gauthier v. Goodyear Tire & Rubber Co., No. 1:23-CV-281, 2024 WL 4882651, at *2 (E.D. Tex. Nov. 25, 2024).

2 Eastern District of Texas, Local Rule AT-3(m), Standards of Practice to be Observed by Attorneys, https://txed.uscourts.gov/?q=local-rule-3-standards-practice-be-observed-attorneys. The rule provides:

If the lawyer, in the exercise of his or her professional legal judgment, believes that the client is best served by the use of technology (e.g., ChatGPT, Google Bard, Bing AI Chat, or generative artificial intelligence services), then the lawyer is cautioned that certain technologies may produce factually or legally inaccurate content and should never replace the lawyer’s most important asset – the exercise of independent legal judgment. If a lawyer chooses to employ technology in representing a client, the lawyer continues to be bound by the requirements of Federal Rule of Civil Procedure 11, Local Rule AT-3, and all other applicable standards of practice and must review and verify any computer-generated content to ensure that it complies with all such standards.

3 Gauthier, 2024 WL 4882651, at *2 (quoting Park v. Kim, 91 F.4th 610, 615 (2d Cir. 2024)) (citations omitted).

4 Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).

5 For a chart listing the federal courts with standing orders or guidance related to the use of AI in court filings, see https://www.bloomberglaw.com/external/document/XCN3LDG000000/litigation-comparison-table-federal-court-judicial-standing-orde.

ABOUT BAKER BOTTS L.L.P.
Baker Botts is an international law firm whose lawyers practice throughout a network of offices around the globe. Based on our experience and knowledge of our clients' industries, we are recognized as a leading firm in the energy, technology and life sciences sectors. Since 1840, we have provided creative and effective legal solutions for our clients while demonstrating an unrelenting commitment to excellence. For more information, please visit bakerbotts.com.
