When it comes to using AI, some lawyers just can’t resist cutting corners. Last year saw a sharp rise in court sanctions against attorneys who filed briefs containing errors generated by artificial intelligence. The best-known example involved lawyers for MyPillow CEO Mike Lindell, who were each fined $3,000 for submitting briefs with fabricated, AI‑generated citations.
That case didn’t deter others. Damien Charlotin, a researcher at HEC Paris who maintains a worldwide tally of courts sanctioning people for AI “hallucinations,” says he recently logged 10 such cases from 10 different courts in a single day. His count has topped 1,200 incidents overall, about 800 of them in U.S. courts, and both the number of cases and the penalties continue to rise. A federal court in Oregon recently ordered an attorney to pay roughly $109,700 in sanctions and costs for filing documents with AI‑generated errors.
State supreme courts have also confronted the problem. Nebraska’s high court questioned Omaha attorney Greg Lake about a brief that cited fictitious cases; he blamed a malfunctioning computer and denied using AI, but the court referred him for discipline. A similar episode occurred at the Georgia Supreme Court.
Libraries and law schools are responding. Carla Wale, associate dean of information and technology and director of the Gallagher Law Library at the University of Washington School of Law, is developing optional AI ethics training for students. She stresses that current ethical rules remain clear on one point: lawyers are responsible for the accuracy of their filings no matter how the material was produced. “Whatever the generative AI tool gives you — you have to read those cases,” she says.
Some courts have gone further, adopting rules that require lawyers to label anything produced with AI, including specifics about how the tool was used, so that filings needing extra scrutiny can be identified. Critics worry that such labeling will quickly become ineffective. Joe Patrice, senior editor at Above the Law, argues that AI will be so deeply integrated into legal software that blanket disclaimers like "AI assisted" will be meaningless. He is especially wary of "agentic" AI systems that perform end-to-end legal tasks while obscuring the steps they take, hiding errors even from diligent users.
AI also threatens the traditional billable-hour model by speeding up time-consuming tasks such as document review and contract work. That could push firms to change their billing practices, and it creates pressure to accept the first AI draft rather than take the time for careful review, potentially eroding lawyers' analytical habits. Wale worries about that erosion but rejects visions of fully automated lawyering: "Lawyers who understand how to effectively and ethically use generative AI will replace lawyers who don't," she says.
AI has even become a defendant. In March, Nippon Life Insurance Company of America sued OpenAI in federal court in Illinois, alleging the company’s ChatGPT provided bad legal advice that led to frivolous litigation and accusing OpenAI of practicing law without a license. OpenAI said the complaint lacks merit.
For now, the profession’s message is consistent: AI can be a powerful tool, but lawyers remain accountable for the accuracy and ethics of what they file.