Reality Check: How Courts Are Confronting AI Hallucinations
As part of LMICK’s continuing AI series, this month we explore how courts across the country are handling AI hallucinations in court filings. In prior LMICK Minute issues, we have discussed the importance of checking and double-checking any citations that AI generates, as a citation may be incorrect or may point to a non-existent case or authority. These fabricated citations, misquoted authorities, and non-existent cases generated by AI tools have led to real-world consequences for attorneys and litigants alike.
The law firm of Ropes & Gray has created an interactive database of “Standing Orders, Local Rules, and Decisions on the Use of AI” covering the majority of states. The database tracks how both federal and state courts have been handling parties and litigants who misuse AI. For Kentucky, the database discusses Buckner v. Hilton Glob., 2025 WL 1725426 (W.D. Ky. June 20, 2025), in which the pro se plaintiff sought the judge’s recusal and “chastised the Court for commenting on [his] citation to nonexistent caselaw” in its prior opinion (where the court had warned the plaintiff that his pro se status “will not be tolerated as an excuse for citing nonexistent case law,” 2025 WL 890175, at *15 (W.D. Ky. Mar. 21, 2025)). Judge Michael J. Buckner, Jr. addressed the plaintiff’s critique, holding that although courts are to “construe filings by pro se litigants liberally,” the court nonetheless has an “obligation to prevent fraud on the court from misuse of [AI],” such as through the inclusion of hallucinated cases in court filings. The court further likened the use of AI to draft legal filings to “ghostwriting,” which “‘evades the requirements’ of Rule 11” and “creates serious concerns for maintaining candor to the Court.” The court concluded that the plaintiff’s “citation to nonexistent case law . . . directly violates his duty of candor,” wastes the time and money of both opposing counsel and the court, and is not a proper basis for recusal.
In Ohio, in Smith v. Gamble, 2025 Ohio App. LEXIS 2317 (Ohio Ct. App. July 7, 2025), the appellant moved to strike the appellee’s brief for including “phantom cases and inappropriate, inaccurate citations” in violation of Rule 11 of the court’s local rules, which requires parties to submit accurate legal authority. Although the court declined to strike the brief because its arguments were necessary to address the appellant’s claims on the merits, Judge Piper concluded that the pro se appellee had in fact cited hallucinated authority and granted the appellant’s motion for sanctions, ordering the appellee to pay the attorney’s fees, expenses, and costs associated with the search for the non-existent authority.
In Indiana, in Mid Cent. Operating Eng’rs Health v. Hoosiervac LLC, 2025 U.S. Dist. LEXIS 100748 (S.D. Ind. May 28, 2025), the magistrate judge recommended that the defendant’s attorney be sanctioned $15,000 for submitting three briefs containing false citations. The attorney argued that the penalty was inappropriate given the “‘significant and irreversible harm to [his] professional reputation’” he had faced since the original order to show cause was issued. Judge Hanlon ultimately reduced the sanction to $6,000 based on the attorney’s attendance at continuing education programs on the “responsible use of AI” and his “adhere[nce] to ‘the highest standards of professional conduct moving forward.’” The court noted that the sanction remained substantial given the attorney’s repeated use of AI-generated citations and the fact that attorneys in other cases continue to cite non-existent authorities.
Also in Indiana, in Parrish v. Miller, 2025 U.S. Dist. LEXIS 137247 (S.D. Ind., July 18, 2025), the plaintiffs included various hallucinated citations in support of their motion for reconsideration. Although the use of AI was not part of the court’s reasoning for denying the plaintiffs’ motion, the court warned that it has “recently admonished and sanctioned parties for submitting briefs containing miscited or non-existent cases.”
In Arizona, in Mavy v. Comm’r of Soc. Sec. Admin., 2025 U.S. Dist. LEXIS 157358 (D. Ariz. Aug. 14, 2025), the plaintiff’s attorney retained an “attorney contract writer” to draft the opening brief in the plaintiff’s Social Security appeal. During the drafting process, the plaintiff’s firm warned the contract writer that courts in “every jurisdiction” are “cracking down on the use of artificial intelligence,” and the contract writer acknowledged receipt of that message. Nevertheless, “the majority of authorities cited [in the plaintiff’s opening brief] were either fabricated, misleading, or unsupported,” an outcome the court described as “egregious.” In response to the court’s order to show cause, the plaintiff’s attorney accepted “full responsibility for all filings submitted in this matter” and admitted that “several case citations” were inaccurate, though she maintained they were not knowingly included.
Instead of imposing monetary penalties, the court imposed the following sanctions as part of “deterrence”: (i) revoking plaintiff’s counsel’s pro hac vice status; (ii) striking the plaintiff’s opening brief; (iii) serving a copy of the order on the plaintiff and affording the plaintiff the opportunity to self-represent or retain new counsel; (iv) requiring plaintiff’s counsel to write letters to the three judges “to whom [plaintiff’s counsel] attributed fictitious cases”; (v) requiring plaintiff’s counsel to send a copy of the order to every judge assigned to any case in which she is counsel of record; and (vi) ordering the Clerk of Court to serve a copy of the order on the Washington State Bar Association. Magistrate Judge Bachus admonished plaintiff’s counsel for failing both to supervise the drafting attorney and to carefully review a brief to which she affixed her signature. Finally, the court highlighted how plaintiff’s counsel’s attempt to identify and rectify her hallucinated citations in response to the order to show cause was itself flawed: certain hallucinated cases were omitted from her “citation correction table,” and certain quoted language was misattributed. This, according to Magistrate Judge Bachus, fell short of an attorney’s duty of candor to the court.
The cases above should serve as a warning that AI-generated hallucinations can have detrimental effects on both your case and your professional reputation. Courts are taking different approaches to dealing with the improper use of AI. LMICK recommends that you exercise caution when using AI for research and always check and double-check any citations AI generates.