Apple Intelligence, the tech giant’s generative AI tool, is under scrutiny after creating misleading headlines, prompting a formal complaint from the BBC and a public call for its removal by Reporters Without Borders (RSF). The controversy has sparked a heated debate about the maturity and reliability of AI-driven news summarization tools.
Apple Intelligence, which groups and summarizes notifications on iPhones and other devices, misrepresented a BBC News story about Luigi Mangione, the suspect in the killing of healthcare CEO Brian Thompson. The AI-generated headline falsely suggested that Mangione had shot himself—a claim that was entirely untrue.
The BBC confirmed it had contacted Apple to address the issue. However, Apple has yet to respond publicly or indicate corrective measures.
The BBC incident is not isolated. On November 21, Apple Intelligence wrongly grouped three New York Times articles with a headline reading “Netanyahu arrested”, inaccurately summarizing an ICC arrest warrant report. This misstep was highlighted by journalist Ken Schwencke of ProPublica.
RSF, an influential journalism advocacy group, has called on Apple to halt its generative AI feature.
Vincent Berthier, RSF’s head of technology and journalism, said: “Generative AI services are still too immature to produce reliable information for the public. The automated production of false information attributed to media outlets undermines their credibility and endangers public trust.”
The organization warned that such inaccuracies threaten the public’s right to factual reporting and urged Apple to act responsibly by suspending the feature.
Apple Intelligence uses generative AI to summarize and group notifications, aiming to minimize user interruptions. The feature is available only on select devices running iOS 18.1 or later. While designed to enhance usability, critics argue that the technology’s errors undermine its utility and trustworthiness.
Users can report inaccuracies via the app, but Apple has not disclosed the volume or nature of complaints received since the tool’s launch.
Media organizations are wary of how generative AI misrepresents their content. The New York Times declined to comment, while the BBC has yet to confirm whether Apple has addressed its concerns.
The backlash underscores broader concerns about generative AI in journalism, where errors can have far-reaching consequences.
Apple’s silence on these issues has fueled speculation about whether it will revise or retract the feature. The company faces mounting pressure from journalists and advocacy groups to improve the tool’s accuracy or halt its rollout until it can ensure reliability.
As generative AI tools like Apple Intelligence become more prevalent, the stakes for factual accuracy in news summarization grow higher. For now, Apple’s response—or lack thereof—could shape the future of AI integration in journalism.