Many lawyers tempted to use generative AI are finding that shortcuts carry real consequences. Over the past year, courts have stepped up penalties against attorneys who filed briefs containing AI‑generated mistakes. The high‑profile MyPillow matter is one example: lawyers for CEO Mike Lindell were each fined $3,000 after submitting briefs that included fabricated citations produced by AI.
Researchers tracking these incidents say the problem is widespread and growing. Damien Charlotin of HEC Paris, who maintains a global register of court sanctions tied to AI “hallucinations,” recently recorded ten separate cases in a single day. His database has surpassed 1,200 incidents overall, roughly 800 in U.S. courts, and both the frequency of filings and the size of penalties continue to climb. In one federal case in Oregon, a judge ordered an attorney to pay about $109,700 in sanctions and costs for submitting documents containing AI‑generated errors.
State high courts are confronting similar issues. The Nebraska Supreme Court questioned Omaha lawyer Greg Lake over a brief that cited cases that turned out to be fictitious; Lake denied using AI and blamed a computer malfunction, but the court referred him for disciplinary review. The Georgia Supreme Court has faced a comparable episode.
Legal educators and librarians are responding with training and guidance. Carla Wale, associate dean and director of the Gallagher Law Library at the University of Washington School of Law, is developing optional AI ethics instruction for students and emphasizes a core point: professional rules still require lawyers to ensure the accuracy of anything they file, regardless of whether it was produced by an AI tool. “If the tool gives you cases or authorities, you have to read and confirm them,” she says.
Some courts have imposed disclosure mandates, requiring lawyers to flag filings that relied on AI and to describe how the technology was used so judges can vet them more carefully. Critics worry such labeling will quickly lose effectiveness: as AI becomes embedded in legal drafting software, routine disclaimers like “AI‑assisted” may become meaningless. Observers also warn against more autonomous “agentic” systems that perform end‑to‑end tasks and can obscure the steps that produced an error.
AI also threatens traditional billing models by accelerating time‑consuming work such as document review and contract drafting. The resulting cost pressure may push firms to accept initial AI drafts rather than conduct thorough review, potentially dulling lawyers' analytical skills. Wale rejects the notion of fully automated lawyering, arguing instead that lawyers who learn to use generative AI responsibly will outcompete those who do not.
AI has even been named as a defendant: Nippon Life Insurance Company of America sued OpenAI in federal court, alleging that ChatGPT gave poor legal advice that prompted frivolous litigation and accusing the company of practicing law without a license. OpenAI has called the suit meritless.
For now the profession’s stance remains steady: AI can be a powerful aid, but lawyers remain accountable for the accuracy and ethics of what they file.