Latest AI Hallucination Case Sounds the Alarm

2025-07-12 · Technology
1
Good morning, Guorong. I'm Ji Fei, and welcome to Goose Pod, made just for you. Today is Sunday, July 13, and it's 7:03 in the morning.
2
And I'm Guorong. Today we're taking on a very timely topic: the latest AI hallucination case, and the alarm it's sounding.
1
Let's dive in. A recent case has drawn a lot of attention: two lawyers for the CEO of the well-known pillow company MyPillow were fined $3,000 each by a federal judge after they used artificial intelligence to prepare a court filing. The document was riddled with errors.
2
Wait, fined just for using AI? I thought AI was supposed to make us more efficient. The key question is: what exactly was wrong with the filing that made the judge so angry? Surely not spelling mistakes? That would be a massive overreaction.
1
Far more than spelling mistakes. The filing contained more than two dozen errors, including citations to "ghost cases" that don't exist at all. This is what's known as "AI hallucination": the AI fabricated false information with complete confidence, and the lawyers submitted it to the court without verifying it.
2
Wow, so AI can "lie"? That sounds unbelievable. Is this an isolated incident? Or is AI already widespread in the legal profession, and the others just haven't been caught? The risk sounds enormous.
1
It's far from isolated. According to surveys, more than 70% of legal professionals plan to integrate generative AI within a year. In fact, as early as June 2023, lawyers were sanctioned for submitting fake cases fabricated by ChatGPT. AI's entry into the legal profession is an irreversible trend.
2
Wow, 70%! It seems everyone is embracing the new technology. But with cautionary tales like that, surely there are rules? For instance, doesn't a body like the American Bar Association have any guidance on using AI? This sounds like a technological Wild West.
1
There is. The American Bar Association's Model Rules of Professional Conduct have long required lawyers to provide "competent representation." One comment to the rules makes clear that the duty of competence extends to understanding relevant technology. In other words, knowing the technology and its risks is a baseline requirement.
2
That makes it even more puzzling. If the rules require lawyers to understand the technology and stay competent, why did MyPillow's lawyers make such a basic mistake? Did they take a shortcut to save time and crash? Or did they really trust AI that blindly?
1
That's exactly the core conflict here: the temptation of efficiency versus professional responsibility. The lawyers admitted using AI, but their explanation was that the wrong draft had been filed by mistake instead of a carefully edited final version. The judge clearly wasn't persuaded, and pointed out that the lead attorney didn't admit to using AI until he was questioned directly.
2
"We filed the wrong draft"? That excuse sounds like a student telling the teacher "my dog ate my homework." This isn't a tool problem, it's a people problem. AI is like a super-intern: it can help you gather material, but the lawyer still has to review and sign off in the end.
1
Exactly right. The legal community broadly views this as serious professional misconduct. AI can analyze information, but it lacks human common sense and critical thinking. Pushing the duty of verification entirely onto AI means abandoning a lawyer's most fundamental responsibility and judgment.
2
Beyond the $3,000 fine each, are there deeper consequences for them? I'd guess that once something like this comes to light, the damage to a lawyer's professional reputation is enormous. After all, who would hire a lawyer who submits "fabricated evidence"?
1
Exactly. Experts commented that the $3,000 fines were actually "reasonably light." The real punishment is the reputational damage. This case is a wake-up call for the entire legal profession; it concerns not just individual careers, but public trust in the fairness of the justice system.
1
Looking ahead, the use of AI in law will only grow, not shrink. So the key is using it responsibly. Already, some courts require lawyers to disclose whether generative AI was used in a filing, to increase transparency.
2
So what's the golden rule to end on? For every professional using AI, what's the one line to remember?
1
Computer science professor Maura Grossman offered the best advice: "Trust nothing — verify everything."
1
That wraps up today's discussion of AI hallucinations. Thank you for listening to Goose Pod. See you tomorrow.
2
See you tomorrow!

## AI Hallucinations in Court Lead to Sanctions for MyPillow CEO's Lawyers

**News Title/Type:** Article detailing the consequences of using AI-generated legal filings with errors.

**Report Provider/Author:** NPR, reported by Jaclyn Diaz.

**Date/Time Period Covered:** The incident occurred in February 2025, with the judge's ruling on Monday (prior to the July 10, 2025 publication date). The article references events from April 4, 2023, and mentions a previous ruling last month.

**Relevant News Identifiers:** The article discusses a defamation case involving MyPillow CEO Mike Lindell.

### Main Findings and Conclusions

* **Sanctions for AI-Generated Mistakes:** A federal judge, Nina Y. Wang of the U.S. District Court in Denver, ordered two attorneys representing MyPillow CEO Mike Lindell to pay **$3,000 each** in fines.
* **Reason for Sanctions:** The attorneys, Christopher Kachouroff and Jennifer DeMaster, violated court rules by submitting a legal filing in February that was prepared using artificial intelligence and contained **more than two dozen mistakes**, including **hallucinated cases** (fake cases created by AI).
* **Violation of Court Rules:** The judge ruled that the lawyers violated a federal rule requiring them to certify that their claims are "well grounded" in the law; fake cases do not meet this standard.
* **Lack of Transparency:** Judge Wang was not persuaded by the attorneys' explanations for the errors. Mr. Kachouroff initially claimed a draft version was mistakenly filed, and admitted to using generative AI only after being directly questioned by the court.
* **Deterrent Measure:** The $3,000 fines were described by the judge as "the least severe sanction adequate to deter and punish defense counsel."
* **Broader Trend:** The case is a stark reminder for attorneys of the risks of using AI without proper verification, a problem appearing with growing frequency in courtrooms across the U.S.

### Key Statistics and Metrics

* **Fines:** **$3,000 per attorney**, totaling **$6,000** in sanctions.
* **Number of Mistakes:** The filing contained **more than two dozen mistakes**, including hallucinated cases.
* **Tracking of AI Hallucination Cases:** Damien Charlotin tracks cases where generative AI produced hallucinated content and led to court warnings or punishments. As of Thursday (prior to July 10, 2025), he had identified **206 such cases** since the spring, with cases "popping up every day."

### Important Recommendations

* **Trust Nothing, Verify Everything:** The key advice from experts like Maura Grossman for anyone using AI.
* **Be Honest with the Court:** Experts advise lawyers to be upfront with judges about the use of AI and any errors, to avoid harsher penalties.

### Significant Trends or Changes

* **Increasing Use of AI in Law:** Lawyers are increasingly using AI tools in their work, and the trend is expected to grow in courtrooms.
* **Emergence of AI Hallucinations:** AI-generated fake cases and legal arguments are a growing concern.
* **Varied Regulatory Responses:** Rules and guidelines for AI use in courts differ by jurisdiction. Some judges require AI disclosures, while others have prohibited its use.
* **Ethical Guidance:** The American Bar Association issued its first ethical guidance on AI last year, warning against uncritical reliance on AI output.

### Notable Risks or Concerns

* **Inaccurate Legal Advice and Misleading Representations:** Uncritical reliance on AI can lead to incorrect legal advice to clients or misleading information presented to courts.
* **Reputational Damage:** Lawyers face reputational damage when AI use leads to errors.
* **Embarrassment for Courts and Parties:** Many courts and parties prefer not to address AI hallucination issues publicly because it is embarrassing for everyone involved.
* **Pace of AI Development vs. Regulation:** The use of AI is advancing faster than authorities can establish appropriate guardrails.

### Material Financial Data

* **Fines:** **$3,000 per attorney** (totaling $6,000).
* **Previous Ruling:** In the same defamation case, Mike Lindell was ordered to pay Eric Coomer **more than $2 million**.

### Context and Interpretation

The article highlights a critical juncture in the legal profession's adoption of artificial intelligence. The case involving Mike Lindell's lawyers is a prominent example of the dangers of **AI hallucination**, in which AI tools generate fabricated information such as nonexistent case citations. The **$3,000 fines** are presented as a relatively light sanction, intended to deter future misconduct, especially given the lawyers' experience. Experts emphasize that while AI can be a powerful tool for lawyers, **independent verification and review are paramount** to ensure accuracy and uphold professional responsibility. The increasing frequency of such incidents underscores the need for clearer guidelines and robust verification processes as AI becomes more integrated into legal practice, and points to the broader challenge of regulating rapidly evolving technology within established legal frameworks.

A recent high-profile case of AI hallucination serves as a stark warning

Read original at NPR

MyPillow CEO Mike Lindell arrives at a gathering of Donald Trump supporters near Trump's residence in Palm Beach, Fla., on April 4, 2023. On July 7, 2025, Lindell's lawyers were fined thousands of dollars for submitting a legal filing riddled with AI-generated mistakes. (Octavio Jones/Getty Images)

A federal judge ordered two attorneys representing MyPillow CEO Mike Lindell in a Colorado defamation case to pay $3,000 each after they used artificial intelligence to prepare a court filing filled with a host of mistakes and citations of cases that didn't exist.

Christopher Kachouroff and Jennifer DeMaster violated court rules when they filed the document in February filled with more than two dozen mistakes — including hallucinated cases, meaning fake cases made up by AI tools, Judge Nina Y. Wang of the U.S. District Court in Denver ruled Monday. "Notwithstanding any suggestion to the contrary, this Court derives no joy from sanctioning attorneys who appear before it," Wang wrote in her decision.

"Indeed, federal courts rely upon the assistance of attorneys as officers of the court for the efficient and fair administration of justice." The use of AI by lawyers in court is not itself illegal. But Wang found that the lawyers violated a federal rule that requires lawyers to certify that claims they make in court are "well grounded" in the law.

Turns out, fake cases don't meet that bar. Kachouroff and DeMaster didn't respond to NPR's request for comment. The error-riddled court filing was part of a defamation case involving Lindell, the MyPillow creator, President Trump supporter and conspiracy theorist known for spreading lies about the 2020 election.

Last month, Lindell lost this case, which was being argued in front of Wang. He was ordered to pay Eric Coomer, a former employee of Denver-based Dominion Voting Systems, more than $2 million after claiming Coomer and Dominion used election equipment to flip votes to Joe Biden.

The financial sanctions, as well as reputational damage, for the two lawyers are a stark reminder for attorneys who, like many others, are increasingly using artificial intelligence in their work, according to Maura Grossman, a professor at the University of Waterloo's David R. Cheriton School of Computer Science and an adjunct law professor at York University's Osgoode Hall Law School.

Grossman said the $3,000 fines, "in the scheme of things, was reasonably light, given these were not unsophisticated lawyers who just really wouldn't know better. The kind of errors that were made here … were egregious."

There have been a host of high-profile cases where the use of generative AI has gone wrong for lawyers and others filing legal cases, Grossman said. It has become a familiar trend in courtrooms across the U.S.: Lawyers are sanctioned for submitting motions and other court filings filled with case citations that are not real and were created by generative AI.

Damien Charlotin tracks court cases from across the world where generative AI produced hallucinated content and where a court or tribunal specifically levied warnings or other punishments. There are 206 cases identified as of Thursday — and that's only since the spring, he told NPR. There were very few cases before April, he said, but for months since there have been cases "popping up every day."

Charlotin's database doesn't cover every single case where there is a hallucination. But he said, "I suspect there are many, many, many more, but just a lot of courts and parties prefer not to address it because it's very embarrassing for everyone involved."

What went wrong in the MyPillow filing

The $3,000 fine for each attorney, Judge Wang wrote in her order this week, is "the least severe sanction adequate to deter and punish defense counsel in this instance."

The judge wrote that the two attorneys didn't provide any proper explanation of how these mistakes — "most egregiously, citation of cases that do not exist" — happened. Wang also said Kachouroff and DeMaster were not forthcoming when questioned about whether the motion was generated using artificial intelligence.

Kachouroff, in response, said in court documents that it was DeMaster who "mistakenly filed" a draft version of this filing rather than the right copy, which was more carefully edited and didn't include hallucinated cases. But Wang wasn't persuaded that the submission of the filing was an "inadvertent error."

In fact, she called out Kachouroff for not being honest when she questioned him. "Not until this Court asked Mr. Kachouroff directly whether the Opposition was the product of generative artificial intelligence did Mr. Kachouroff admit that he did, in fact, use generative artificial intelligence," Wang wrote.

Grossman advised other lawyers who find themselves in the same position as Kachouroff to not attempt to cover it up and to fess up to the judge as soon as possible. "You are likely to get a harsher penalty if you don't come clean," she said.

An illustration picture shows ChatGPT artificial intelligence software, which generates humanlike conversation, in February 2023 in Lierde, Belgium. Experts say AI can be incredibly useful for lawyers — they just have to verify their work. (Nicolas Maeterlinck/BELGA MAG/AFP via Getty Images)

Trust and verify

Charlotin has found three main issues when lawyers or others use AI to file court documents: The first are the fake cases created, or hallucinated, by AI chatbots.

The second is that AI creates a fake quote from a real case. The third is harder to spot, he said. That's when the citation and case name are correct, but the legal argument being cited is not actually supported by the case that is sourced, Charlotin said.

This case involving the MyPillow lawyers is just a microcosm of the growing dilemma of how courts and lawyers can strike the balance between welcoming life-changing technology and using it responsibly in court.
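
Charlotin's first two failure modes are at least partly machine-checkable: a citation either resolves to a real opinion or it does not. As a rough illustration, not something from the article itself, here is a minimal Python sketch of that kind of existence check against CourtListener's public search API; the endpoint path, the "citation" parameter, and the "count" response field are assumptions based on CourtListener's documented REST interface, so confirm them against the current docs before relying on this. His third failure mode, a real case cited for a proposition it does not support, still requires a human to read the opinion.

```python
# citation_check.py: a hedged sketch, not a production tool.
import requests

# Assumed endpoint; see https://www.courtlistener.com/help/api/ for the
# current version and parameter names.
SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_exists(citation: str) -> bool:
    """Return True if a reporter citation (e.g. "576 U.S. 644") resolves
    to at least one real opinion; False flags a possible hallucination."""
    resp = requests.get(SEARCH_URL, params={"citation": citation}, timeout=30)
    resp.raise_for_status()
    # "count" as the hit-count field is an assumption about the response.
    return resp.json().get("count", 0) > 0

if __name__ == "__main__":
    # Screen two sample citations; the second is deliberately made up.
    for cite in ["576 U.S. 644", "999 F.4th 12345"]:
        verdict = "found" if citation_exists(cite) else "NOT FOUND, verify by hand"
        print(f"{cite}: {verdict}")
```

A check like this can only rule citations in or out; it says nothing about whether a quote or the legal proposition attributed to a case is accurate, which is exactly why Grossman's advice to "verify everything" extends to reading the sources themselves.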

The use of AI is growing faster than authorities can make guardrails around its use. It's even being used to present evidence in court, Grossman said, and to provide victim impact statements. This year, a judge on a New York state appeals court was furious after a plaintiff, representing himself, tried to use a younger, more handsome AI-generated avatar to argue his case for him, CNN reported.

That was swiftly shut down. Despite the cautionary tales that make headlines, both Grossman and Charlotin view AI as an incredibly useful tool for lawyers and one they predict will be used in court more, not less. Rules over how best to use AI differ from one jurisdiction to the next. Judges have created their own standards, requiring lawyers and those representing themselves in court to submit AI disclosures when it has been used.

In a few instances, judges in North Carolina, Ohio, Illinois and Montana have established various prohibitions on the use of AI in their courtrooms, according to a database created by the law firm Ropes & Gray. The American Bar Association, the national representative of the legal profession, issued its first ethical guidance on the use of AI last year.

The organization warned that because these tools "are subject to mistakes, lawyers' uncritical reliance on content created by a [generative artificial intelligence] tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties." It continued, "Therefore, a lawyer's reliance on, or submission of, a GAI tool's output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation."

The Advisory Committee on Evidence Rules, the group responsible for studying and recommending changes to the national rules of evidence for federal courts, has been slow to act and is still working on amendments for the use of AI for evidence. In the meantime, Grossman has this suggestion for anyone who uses AI: "Trust nothing — verify everything."
