Damien Charlotin, of Sciences Po Law School and HEC Paris, maintains "a database [that] tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments." You can find the database at this link. The database was recently highlighted in a story by NPR.
Last year saw a rapid increase in court sanctions against attorneys for filing briefs containing errors generated by artificial intelligence tools. The most prominent case was that of the lawyers for MyPillow CEO Mike Lindell, who were fined $3,000 each for filing briefs containing fictitious, AI-generated citations. But as a cautionary tale, it doesn’t seem to have had much effect.
“Recently we had 10 cases from 10 different courts on a single day,” says Damien Charlotin, a researcher at the business school HEC Paris who keeps a worldwide tally of instances of courts sanctioning people for using erroneous information generated by AI.
The numbers started taking off last year, and Charlotin says the rate is still increasing. He counts a total of more than 1,200 to date, of which about 800 are from U.S. courts. Penalties are also on the rise, he says. A federal court may have set a record last month with an order for a lawyer in Oregon to pay $109,700 in sanctions and costs for filing AI-generated errors.
Martin Kaste, Penalties Stack Up as AI Spreads Through the Legal System, NPR.org, April 3, 2026.