Wednesday, May 14, 2025

Judge Condemns Lawyers Over Fake AI-Generated Research


A California Judge’s Verdict on AI in the Courtroom

A recent ruling by a California judge offers a pointed warning about the use of artificial intelligence (AI) in the legal profession. Judge Michael Wilner imposed $31,000 in sanctions on two law firms for their undisclosed use of AI in a court case, after they submitted a brief containing "numerous false, inaccurate, and misleading legal citations and quotations" that were generated by AI.

The Discovery of AI-Generated Content

Judge Wilner came across the AI-generated content in a supplemental brief submitted by the law firms. Persuaded by the authorities it cited, he looked up the decisions to learn more about them, only to find that they did not exist. That discovery prompted a broader inquiry, which revealed that the brief had been produced using AI.

The Role of AI in the Brief

The plaintiff’s legal representative had used AI to generate an outline for the supplemental brief. The outline was then passed to another law firm, K&L Gates, which incorporated the material into the brief without reviewing or cite-checking it. Judge Wilner found that at least two of the cited authorities did not exist at all. When he asked K&L Gates for clarification, the firm resubmitted the brief, and the new version contained even more fabricated citations and quotations.

The Use of AI in the Legal Profession

This is not the first time lawyers have been caught using AI in the courtroom. In the past, lawyers have mistakenly relied on AI-generated content, believing it was real. For example, former Trump lawyer Michael Cohen cited made-up court cases in a legal document after mistaking Google Bard (now Gemini) for a search engine. In another instance, lawyers suing a Colombian airline included phony cases generated by ChatGPT in their brief.

The Consequences of Using AI Without Disclosure

Judge Wilner’s ruling highlights the importance of disclosing the use of AI in legal documents. He deemed the initial use of AI products to generate the brief "flat-out wrong," and added that sending the AI-generated material to other lawyers without disclosing its origins put those professionals in harm’s way.

Conclusion

The use of AI in the legal profession is a growing concern. While AI can be a useful tool for research and writing, it is not a substitute for human judgment and oversight. This case underscores the need for transparency and accountability when AI is used in legal documents: lawyers must disclose their use of AI and verify the accuracy of the content it generates in order to maintain the integrity of the legal process.
