A Cautionary Tale: When Expert Opinion Meets Artificial Imagination
As part of LMICK’s continuing AI series, this month LMICK would like to discuss the use of AI by “experts” in your cases or those of opposing counsel.
In a recent case out of the U.S. District Court for the District of Minnesota, Kohls, et al. v. Ellison, et al., 24-cv-3754, 2025 U.S. Dist. LEXIS 4928 (D. Minn. Jan. 10, 2025), the Court was faced with a defense expert who unintentionally included AI hallucinations in his declaration filed with the Court. Can you believe it? Ironically enough, the Plaintiffs in the case were challenging a new Minnesota law designed to prohibit “deepfakes” (images, videos, or audio that have been edited or generated using artificial intelligence) and disinformation in political campaigns. In defense of the statute, the Defendants retained an “AI expert” from Stanford University, who filed a declaration with the Court in support of the Defendants’ position.
Upon reviewing the declaration, however, the Court became aware that it contained AI hallucinations, and the Court ultimately excluded the declaration as untrustworthy. The expert admitted to using GPT-4o (a paid AI subscription), which generated citations to two non-existent academic articles and incorrectly identified the authors of a third. The expert also admitted that he did not verify the AI output before signing the declaration that was filed with the Court.
In Kohls, the Court was careful not to criticize the expert for using AI or to admonish the defense for allowing AI to be used. However, the Court stated, “But Attorney General Ellison’s attorneys are reminded that Federal Rule of Civil Procedure 11 imposes a ‘personal, nondelegable responsibility’ to ‘validate the truth and legal reasonableness of the papers filed’ in an action.” Accordingly, the Court made it clear that it is incumbent upon all attorneys to ascertain the veracity of all documents and information put before the Court, whether they are your own filings or those of your expert(s).
We think this situation could also have exposed the defense attorneys to an alleged violation of Rule 3.3 of the Rules of Professional Conduct (Candor Toward the Tribunal). So, be careful to police what you submit to the Court from your experts.
Steps Attorneys Should Take to Manage AI Use in Expert Testimony:
- Explicitly Inquire About AI Use
  - During expert vetting or engagement, directly ask whether and how the expert used AI tools in preparing their report or analysis.
  - Include AI-related disclosure requirements in the engagement letter.
- Assess the Tool and Its Role
  - Determine whether the AI was used for substantive analysis (e.g., modeling, diagnostics) or for administrative tasks (e.g., grammar checks).
  - Ask for documentation of the data sources, methods, and software used.
- Ensure Reproducibility
  - Require the expert to be able to replicate their results without relying solely on AI outputs, especially if the AI model is proprietary or opaque.
  - Preserve copies of the datasets, inputs, prompts, and outputs used.
- Prepare for Challenges
  - Be ready to defend the expert’s methodology under Daubert or Frye, including AI’s role and its scientific validity.
  - Anticipate cross-examination about the AI’s reliability, training data, and limitations.
- Review and Vet Reports Thoroughly
  - Do not assume accuracy: closely review expert reports for red flags such as fabricated citations, logic gaps, or unverifiable claims.
- Educate Yourself and Your Team
  - Stay current on AI tools relevant to your practice area and to the work of your expert witnesses.
  - Train your team to identify and interrogate AI usage in forensic, scientific, and professional analyses.
- Draft Disclosures Strategically
  - Consider whether any AI usage must be disclosed in expert witness reports under applicable discovery rules (e.g., FRCP 26(a)(2)).