Wednesday, March 18, 2026

Study Finds AI News Assistants Often Provide Inaccurate Information


A recent study by the European Broadcasting Union (EBU) and the BBC found that AI assistants provide inaccurate information in almost half of their responses to news-related queries. The international research analyzed 3,000 responses from leading AI assistants, including OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity, assessing accuracy, sourcing, and the ability to distinguish opinion from fact across 14 languages.

According to the findings, 45% of the AI responses examined contained significant issues, and 81% displayed some form of problem. This matters because a notable share of online news consumers, particularly those under 25, rely on AI assistants to get their news. The study also found that AI assistants frequently make sourcing errors, with Gemini showing the highest rate of significant sourcing issues.

Leading AI companies, including Google, OpenAI, and Microsoft, have expressed their commitment to addressing these challenges and improving the accuracy of their platforms. Despite claims of high factual accuracy by some AI assistants, the study found instances of misinformation, including outdated information and incorrect attributions.

The study cited specific examples of misinformation, such as Gemini inaccurately reporting changes to a law and ChatGPT identifying the late Pope Francis as the current Pope after his death. To address these issues and maintain public trust in AI-assisted news consumption, the EBU emphasized the importance of accountability and accuracy in how AI assistants handle news-related inquiries.