The BBC has issued a legal warning to US-based artificial intelligence company Perplexity, accusing it of reproducing BBC content without permission and demanding that the company stop using its material, delete existing data, and propose financial compensation.
BBC vs. Perplexity: A Transatlantic Legal Showdown
Lights, Camera, Legal Drama
The British Broadcasting Corporation has, for the first time, threatened court action against an AI company. In a letter to Perplexity’s chief executive, Aravind Srinivas, the BBC slammed the chatbot for reproducing BBC news content verbatim for its users – a direct hit on UK copyright law and the broadcaster’s own terms of use.
“It’s severely damaging our reputation with audiences and eroding trust,” the BBC wrote. The allegation is that the AI’s supposedly “real‑time” answers are basically recycling original content without giving credit where it’s due.
Perplexity’s Response: A Classic ‘We’re Not the Culprit’ Defense
Hot on the heels of the notice, Perplexity fired back with a terse statement that felt oddly reminiscent of a smug email reply: “The BBC’s claims are just one more part of the overwhelming evidence that the BBC will do anything to preserve Google’s illegal monopoly.” No explanation was offered as to why Google is relevant here, leaving readers guessing.
The Core of the Dispute: Web Scraping Gone Rogue
At the heart of this spat is the practice of web scraping—bots that harvest content from websites and feed it into AI systems. While many sites use robots.txt files to tell bots “nope” to certain content, compliance is voluntary. The BBC claims it specifically blocked two of Perplexity’s crawlers, yet the AI company allegedly keeps crawling its pages.
Perplexity’s CEO had previously claimed the bots honour robots.txt and do not use data to train large language models. Instead, the platform positions itself as a “real‑time answer engine” that pulls living info from the web.
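To make the “compliance is voluntary” point concrete, here is a minimal sketch of how a well-behaved crawler honours robots.txt, using Python’s standard-library `urllib.robotparser`. The bot names and URLs are hypothetical placeholders, not Perplexity’s actual crawler names.

```python
# Sketch: how a compliant crawler checks robots.txt before fetching a page.
# "ExampleBot" and example.com are illustrative assumptions only.
from urllib.robotparser import RobotFileParser

# A sample robots.txt in the style a publisher might serve:
# one named bot is blocked entirely, everyone else is allowed.
sample_robots_txt = """\
User-agent: ExampleBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(sample_robots_txt)

# A compliant crawler runs this check before every request.
# Nothing technical *forces* the check — a crawler that skips it
# can still fetch the page, which is exactly the dispute here.
print(parser.can_fetch("ExampleBot", "https://example.com/news/article"))   # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/news/article")) # True
```

In other words, robots.txt is a request, not a lock: the file only works if the crawler’s operator chooses to consult it.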
Industry Allies Get Loud
The Professional Publishers Association (PPA), which represents more than 300 UK media brands, joined the chorus of concern:
- No authorization or compensation for reusing publishers’ content.
- Threats to the UK’s £4.4 billion publishing industry.
- A threat to the roughly 55,000 jobs the sector supports.
- A call for the government to beef up copyright protection for AI usage.
Why This Matters: A Broader Fight Between Newsrooms and AI
Consider the surge of AI assistants like ChatGPT, Google Gemini, and Perplexity’s own chatbot. While they’re handy for quick answers, critics fault them for:
- Misleading or incomplete summaries.
- No clear attribution of original sources.
- Diverting traffic away from the news organizations that created the content.
In January, Apple even pulled an AI feature that generated bogus BBC headlines on iPhones after the broadcaster complained.
Industry Voices: The Stakes for Journalism
Quentin Willson, a former Top Gear host and FairCharge campaign founder, warned: “If AI is allowed to scrape and regurgitate verified journalism without consent or compensation, the business model for serious news collapses.”
While some outlets have negotiated licensing deals—AP, Axel Springer, News Corp, and the like—others are hitting the legal road. The New York Times is already suing OpenAI and Microsoft, and further lawsuits loom as AI advances.
What’s Next? Will the BBC Follow Through?
For now, the BBC demands that Perplexity stop any unauthorized use, delete all scraped data, and pay damages. If the broadcaster pushes ahead with formal litigation, it could set a major precedent in the global tussle over AI and journalism.

