
OpenAI Sued for Defamation over False ChatGPT-Generated Accusations


In what may be a historic case, OpenAI, the company behind ChatGPT, is being sued for defamation over false information generated by its AI system. The chatbot falsely accused a radio host of embezzlement and fraud against a non-profit organization.

According to The Verge, Mark Walters, a radio host in Georgia, has filed a complaint against OpenAI after ChatGPT falsely accused him of embezzling funds from a non-profit organization.

The first-of-its-kind lawsuit, filed on June 5 in Georgia's Superior Court of Gwinnett County, highlights the growing issue of AI systems producing false information. AI chatbots such as ChatGPT have been known to make up dates, facts, and figures, a practice known in the industry as “hallucinating,” prompting numerous complaints.

In recent months, AI-generated misinformation has caused real harm, ranging from a professor threatening to fail his entire class over false accusations of AI-assisted cheating to a lawyer facing possible sanctions after ChatGPT supplied him with citations to non-existent legal cases.

The case also raises the question of whether companies are legally accountable for false or defamatory content generated by their AI systems. In the United States, Section 230 of the Communications Decency Act (CDA) has generally shielded internet companies from liability for third-party content posted on their platforms. Whether those protections extend to AI systems that generate content from scratch, rather than merely linking to existing sources, remains unresolved.

In Walters' case, a journalist asked ChatGPT to summarize a real federal court case by linking to an online PDF. The AI generated a fabricated case summary, complete with fictitious charges leveled against Walters. The journalist who requested the summary did not publish it; he double-checked the details and uncovered the fabrication. It remains unclear how Walters learned of the false information.

Eugene Volokh, a law professor who has written extensively on the legal liability of AI systems, weighed in on the case. While "such libel claims [against AI companies] are in principle legally viable," Volokh said this particular lawsuit "should be hard to maintain," noting that Walters suffered no actual damages from ChatGPT's output and never notified OpenAI of the false statements, which would have given the company a chance to remove them.


Copyright © 2023 The Capitalist. This copyrighted material may not be republished without express permission.
