OpenAI Sued for Defamation over False ChatGPT-Generated Accusations
In what may be a historic case, OpenAI, the company behind the AI chatbot ChatGPT, is being sued for defamation over false information generated by its system. The chatbot falsely accused a radio broadcaster of embezzling funds from a non-profit organization.
According to The Verge, Mark Walters, a radio broadcaster in Georgia, has filed a complaint against OpenAI after its chatbot falsely accused him of defrauding and embezzling funds from a non-profit organization.
The first-of-its-kind lawsuit, filed on June 5 in Georgia's Superior Court of Gwinnett County, highlights the growing issue of AI systems producing false information. AI chatbots such as ChatGPT have been known to make up dates, facts, and figures, a practice known in the industry as “hallucinating,” prompting numerous complaints.
In recent months, AI-generated false information has caused real harm, ranging from a professor threatening to fail his entire class over ChatGPT's bogus claims of AI-assisted cheating to a lawyer facing possible sanctions after citing non-existent legal cases that ChatGPT had fabricated.
The case also raises the question of whether organizations are legally accountable for misleading or defamatory content generated by their AI systems. In the United States, Section 230 of the Communications Decency Act (CDA) has generally shielded internet companies from legal liability for third-party content posted on their platforms. Whether those protections extend to AI systems that generate content from scratch, rather than merely hosting or linking to existing sources, remains an open question.
In Walters's case, a journalist asked ChatGPT to summarize a real federal court case by linking to an online PDF. In response, the AI generated a bogus case summary that included fabricated allegations against Walters. The journalist chose not to publish the summary and instead double-checked the facts, uncovering the fabrication. How Walters then became aware of the false information remains unclear.
Eugene Volokh, a law professor who has written extensively on the legal liability of AI systems, shared his opinion on the case. While “such libel claims [against AI companies] are in principle legally viable,” Volokh said this particular lawsuit “should be hard to maintain,” noting that Walters suffered no actual damages from ChatGPT's output and never notified OpenAI of the false statements, which would have given the company a chance to remove them.