Is Perplexity’s AI Hallucinating Fake News? Lawsuit Alleges So.

The Copyright Quagmire: Generative AI, Hallucinations, and the Fight for Intellectual Property

The rapid advancement of generative artificial intelligence (AI) has ushered in an era of unprecedented innovation, but also a wave of legal uncertainty. At the heart of the storm lies the question of copyright and intellectual property (IP) protection in the age of AI-generated content. High-profile lawsuits pitting major news organizations such as News Corp and The New York Times against companies like Perplexity, OpenAI, and Microsoft highlight the complex challenges facing publishers and AI developers alike. These cases revolve around two crucial issues: the use of copyrighted material to train and operate AI models, and the phenomenon of AI hallucinations, in which the AI fabricates information.

News Corp’s Aggressive Stance Against Perplexity:

News Corp’s legal action against Perplexity AI underscores the growing tensions. In a statement to WIRED, News Corp CEO Robert Thomson didn’t mince words: "Perplexity is not the only AI company abusing intellectual property and it is not the only AI company that we will pursue with vigor and rigor. We have made clear that we would rather woo than sue, but, for the sake of our journalists, our writers and our company, we must challenge the content kleptocracy." The statement encapsulates the core concern: the unauthorized use of journalistic content to fuel commercial AI products, a practice the publisher views as outright theft of intellectual property. The term "content kleptocracy" vividly conveys the perceived gravity of the situation. While Perplexity has remained silent, News Corp’s strong words suggest a significant legal battle is on the horizon. The choice of "woo" before "sue" also points to the company’s prior attempts at negotiation, and to its dissatisfaction with how those attempts turned out.

The New York Times vs. OpenAI and Microsoft: Hallucinations and Trademark Dilution:

The New York Times is also embroiled in litigation, suing OpenAI and Microsoft for allegedly causing trademark dilution through AI hallucinations. The lawsuit highlights instances where ChatGPT and Bing Chat attributed fabricated quotes to the Times, allegedly damaging its reputation. One example cited is Bing Chat’s assertion that the Times endorsed red wine as a "heart-healthy" food, a claim directly contradicted by the Times’ actual reporting. This case introduces a novel legal challenge: can hallucinations, the AI’s generation of false information, constitute a violation of trademark law?

Charlie Stadtlander, director of external communications at the NYT, stated, "Copying news articles to operate substitutive, commercial generative AI products is unlawful, as we made clear in our letters to Perplexity and our litigation against Microsoft and OpenAI." This statement emphasizes the NYT’s firm position that the unauthorized use of its content is illegal regardless of whether the AI produces accurate or fabricated information. The lawsuit highlights the inherent risks of deploying large language models (LLMs) without robust fact-checking mechanisms, and the potential for reputational damage caused by AI-generated misinformation. The NYT’s support for News Corp’s lawsuit against Perplexity underscores a growing industry consensus on the need for stronger IP protections in the AI landscape.

The Legal Landscape of AI Hallucinations:

The legal implications of AI hallucinations are far-reaching and largely uncharted territory. Matthew Sag, a professor of law and artificial intelligence at Emory University, notes that it is "absolutely impossible to guarantee that a language model will not hallucinate." He argues that the very nature of language models, which predict words based on statistical probabilities, inherently involves a form of hallucination. The difference, he emphasizes, lies in the plausibility of the output: "We only call it a hallucination if it doesn’t match up with our reality, but the process is exactly the same whether we like the output or not." This perspective suggests that enforcing a strict ban on hallucinations might be practically impossible, leaving AI companies with the difficult task of balancing innovation against legal compliance.
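Sag’s point can be made concrete with a toy sketch. The snippet below stands in for a trained model with a hand-written probability table, so every number in it is an invented illustrative assumption; what it shows is that generation is nothing more than sampling the next token from a learned distribution, and that the truth or falsity of the continuation plays no role in the mechanism.

```python
import random

# Hypothetical conditional probabilities for the token that follows a prompt.
# These numbers are invented for illustration; a real model learns billions
# of such statistical associations from its training text.
next_token_probs = {
    "According to the Times, red wine is": {
        "heart-healthy": 0.55,  # plausible but false (a "hallucination")
        "risky": 0.30,          # closer to the actual reporting
        "overrated": 0.15,
    },
}

def sample_next(prompt: str) -> str:
    """Pick the next token purely by statistical weight."""
    dist = next_token_probs[prompt]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "According to the Times, red wine is"
print(prompt, sample_next(prompt))
# The sampling step is identical whether the chosen continuation happens to
# be accurate or fabricated; "hallucination" is a label applied only after
# comparing the output against reality.
```

Run repeatedly, the loop sometimes emits the accurate continuation and sometimes the fabricated one, with no internal signal distinguishing the two, which is exactly the difficulty Sag describes.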

Professor Sag’s observation raises crucial questions. If language model generation always involves some degree of "hallucination," how can we legally distinguish between acceptable outputs and those that breach copyright or dilute trademarks? This ambiguity creates fertile ground for legal disputes, and the outcomes of these cases will set important precedents. A ruling in favor of publishers could force a significant restructuring of the AI industry, as developers grapple with the immense challenge of mitigating hallucinations to avoid legal repercussions.

The Broader Implications:

The legal battles surrounding AI and intellectual property are far more than individual disputes; they represent a fundamental clash between the potential of AI and the rights of content creators. At stake is not merely financial compensation but also the integrity of information and the future of journalism itself. If AI companies can freely use copyrighted material to train their models and generate content, the economic viability of original content creation could be significantly undermined. This could have a chilling effect on journalism and other creative industries, hindering innovation and limiting access to diverse perspectives.

The current legal challenges are forcing a critical conversation about responsible AI development. The lawsuits are raising important issues, but ultimately a concerted effort is needed to establish clear guidelines on the use of copyrighted material, appropriate attribution of sources, and mechanisms to mitigate AI hallucinations. Effective AI watermarking technologies might offer one solution, enabling AI-generated content to be traced back to its origins; this could deter misuse and provide a basis for assigning responsibility.
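To make the watermarking idea concrete: one published proposal, the "green list" scheme of Kirchenbauer et al. (2023), pseudorandomly partitions the vocabulary at each generation step and biases the model toward the "green" half, so that a detector who knows the seeding scheme can later measure how often a text lands in those green lists. The sketch below is a minimal toy version of that idea; the vocabulary, parameters, and always-pick-green sampling are simplifying assumptions for demonstration, not any vendor’s actual system.

```python
import hashlib
import random

# Toy vocabulary; a real system operates over a model's full token set.
VOCAB = ["the", "times", "wine", "heart", "healthy", "news", "model",
         "report", "claims", "study", "daily", "press"]
GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step

def green_list(prev_token: str) -> set:
    """Derive a pseudorandom 'green' subset of the vocabulary,
    seeded by a hash of the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate(prev: str, length: int, rng: random.Random) -> list:
    """Generate text that always picks green-listed tokens.
    (A real model would softly bias its logits instead.)"""
    out = []
    for _ in range(length):
        nxt = rng.choice(sorted(green_list(prev)))
        out.append(nxt)
        prev = nxt
    return out

def green_rate(prev: str, tokens: list) -> float:
    """Detector: recompute each step's green list and measure the hit rate.
    Ordinary text should score near GREEN_FRACTION; watermarked text
    scores far above it."""
    hits = 0
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    return hits / len(tokens)

rng = random.Random(0)
watermarked = generate("the", 40, rng)
ordinary = [rng.choice(VOCAB) for _ in range(40)]
print("watermarked green rate:", green_rate("the", watermarked))  # 1.0
print("ordinary green rate:", green_rate("the", ordinary))        # about 0.5
```

Because ordinary text lands in the green lists only about half the time while watermarked text lands there nearly always, a simple frequency test separates the two; that statistical signature is what would let AI-generated content be traced and responsibility assigned, though paraphrasing and heavy editing can weaken the signal.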

Moreover, the dialogue needs to extend beyond purely legal considerations to encompass ethical discussions about the societal impact of AI. These technologies hold immense transformative potential, but their deployment needs careful consideration to avoid unintended consequences, particularly regarding misinformation, intellectual property rights, and the preservation of journalistic ethics. The successful navigation of this complicated landscape requires collaboration between AI developers, policymakers, and content creators to forge a path that balances innovation with ethical and legal responsibility. Otherwise, the potential benefits of AI could be overshadowed by a protracted and damaging legal battle with far-reaching consequences. The outcome of these lawsuits will influence the future development and use of generative AI technologies, shaping how we navigate the complex relationship between artificial intelligence and human creativity.

Sarah Mitchell
Sarah Mitchell is a versatile journalist with expertise in various fields including science, business, design, and politics. Her comprehensive approach and ability to connect diverse topics make her articles insightful and thought-provoking.