Perplexity Hit by Lawsuit Over Alleged Verbatim Content Theft
Imagine having a super-powered research assistant that sifts through mountains of information in seconds. That’s AI search. Instead of hopping from website to website, AI distills insights from countless sources, delivering a concise summary right to you. Sounds amazing, right? Publishers might disagree. The Chicago Tribune, for one, recently took legal action against Perplexity, alleging the AI platform is lifting its content word-for-word. The game is changing, and the fight over who profits from that change is heating up.
Perplexity hit with lawsuit by Chicago Tribune
The Chicago Tribune has thrown down the legal gauntlet, accusing AI upstart Perplexity of brazen copyright infringement. TechCrunch obtained the lawsuit, which reveals the Tribune’s lawyers cornered Perplexity in mid-October, demanding answers: Was the startup feasting on the Tribune’s content to fuel its models? Perplexity deflected, denying direct usage but admitting its systems "may receive non-verbatim factual summaries." The Tribune, it seems, isn’t buying it.
Its legal team alleges blatant content theft, claiming Perplexity is lifting articles word-for-word. The finger is pointed squarely at Perplexity’s retrieval augmented generation tech, ironically a feature touted for minimizing AI’s tendency to "hallucinate," a problem Perplexity usually avoids better than its rivals.
Perplexity stands apart by anchoring its answers to verified sources through Retrieval Augmented Generation. Forget outdated information or fabricated details – with Perplexity, you get trustworthy insights, a stark contrast to models prone to quoting dusty data or outright inventing facts.
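The retrieval idea at the heart of this dispute can be sketched in a few lines. This is a hypothetical toy illustration, not Perplexity's actual pipeline: a real RAG system uses a vector index and a large language model, not keyword overlap and string concatenation, but the two-step shape (retrieve relevant source passages, then ground the answer in them) is the same.

```python
# Toy sketch of retrieval-augmented generation (RAG).
# Assumption: a real system would embed documents into a vector index
# and hand retrieved passages to an LLM; here we fake both steps.

def retrieve(query, corpus, k=1):
    """Rank passages by how many query words they share (toy scoring)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query, corpus):
    """Compose a reply grounded in the retrieved passages."""
    sources = retrieve(query, corpus)
    return "Based on retrieved sources: " + " ".join(sources)

corpus = [
    "The Chicago Tribune publishes daily news from Chicago.",
    "Retrieval grounds AI answers in verified source documents.",
]

print(answer("How does retrieval help AI answers?", corpus))
```

The grounding step is exactly what publishers object to: the "sources" being retrieved and summarized are often their paywalled articles.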
This popular feature now faces a legal firestorm. Attorneys for the newspaper allege unauthorized scraping of its content, and Perplexity’s Comet browser stands accused of dodging paywalls, serving up unauthorized article summaries.
Not the first lawsuit
AI companies are finding themselves in legal hot water, and it’s not the first time. The secret sauce of any impressive AI model? Data, mountains of it. And the easiest place to mine that data? The internet, naturally. But here’s where the sparks fly: the internet is a patchwork of content, including carefully crafted articles that companies have poured money into. Now, those companies are questioning whether that freely accessible information should be used to train AI at all.
Imagine building a skyscraper, brick by painstaking brick. Research dollars, staff hours, server farms – all feeding into a grand, informative structure. Now picture a sleek drone swooping in, instantly scanning the blueprint, and handing it out on the street corner, gratis. That’s AI. It’s siphoning traffic away from the very websites that painstakingly curated the data it now freely dispenses, potentially leaving those digital skyscrapers deserted. Why browse when instant answers are just a prompt away?
Perplexity’s legal woes are mounting, echoing OpenAI’s battle with The New York Times. Just months after Reddit filed suit, the AI search engine finds itself in the crosshairs. Meanwhile, a chorus of publishers is rising against Google’s AI Overviews, accusing the feature of siphoning off vital traffic and revenue – a digital David versus Goliath showdown.