The Guardian published an investigation claiming health experts found inaccurate or misleading guidance in some AI Overview responses for medical queries. Google disputes the reporting and says many examples were based on incomplete screenshots.
The Guardian said it tested health-related searches and shared AI Overview responses with charities, medical experts, and patient information groups. Google told The Guardian the “vast majority” of AI Overviews are factual and helpful.
What The Guardian Reported Finding
The Guardian said it tested a range of health queries and asked health organizations to review the AI-generated summaries. Several reviewers said the summaries included misleading or incorrect guidance.
One example involved pancreatic cancer. Anna Jewell, director of support, research and influencing at Pancreatic Cancer UK, said advising patients to avoid high-fat foods was “completely wrong.” She added that following that guidance “could be really dangerous and jeopardise a person’s chances of being well enough to have treatment.”
The reporting also highlighted mental health queries. Stephen Buckley, head of information at Mind, said some AI summaries for conditions such as psychosis and eating disorders offered “very dangerous advice” and were “incorrect, harmful or may lead people to avoid seeking help.”
The Guardian cited a cancer screening example too. Athena Lamnisos, chief executive of The Eve Appeal cancer charity, said a pap test being listed as a test for vaginal cancer was “completely wrong information.”
Sophie Randall, director of the Patient Information Forum, said the examples showed “Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health.”
The Guardian also reported that repeating the same search could produce different AI summaries at different times, pulling from different sources.
Google’s Response
Google disputed both the examples and the conclusions.
A spokesperson told The Guardian that many of the health examples shared were “incomplete screenshots,” but that from what the company could assess they linked “to well-known, reputable sources and recommend seeking out expert advice.”
Google told The Guardian the “vast majority” of AI Overviews are “factual and helpful,” and that it “regularly” makes quality improvements. The company also argued that AI Overviews’ accuracy is “on a par” with other Search features, including featured snippets.
Google added that when AI Overviews misinterpret web content or miss context, it will take action under its policies.
See also: Google AI Overviews Impact On Publishers & How To Adapt Into 2026
The Broader Accuracy Context
This investigation lands in the middle of a debate that’s been running since AI Overviews expanded in 2024.
During the initial rollout, AI Overviews drew attention for bizarre results, including suggestions involving glue on pizza and eating rocks. Google later said it would reduce the scope of queries that trigger AI-written summaries and refine how the feature works.
I covered that launch, and the early accuracy issues quickly became part of the public narrative around AI summaries. The question then was whether the issues were edge cases or something more structural.
More recently, data from Ahrefs suggests medical YMYL queries are more likely than average to trigger AI Overviews. In its analysis of 146 million SERPs, Ahrefs reported that 44.1% of medical YMYL queries triggered an AI Overview. That’s more than double the overall baseline rate in the dataset.
Separate research on medical Q&A in LLMs has pointed to citation-support gaps in AI-generated answers. One evaluation framework, SourceCheckup, found that many responses weren’t fully supported by the sources they cited, even when systems provided links.
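SourceCheckup itself relies on more sophisticated, model-based judgments, but a toy example shows why checking citation support is harder than checking whether a link exists. The Python sketch below is purely illustrative (the `support_score` helper and the example strings are assumptions, not anything from the study): a naive word-overlap check scores high even when the cited source contradicts the claim.

```python
import re

def support_score(claim: str, source_text: str) -> float:
    """Naive heuristic: fraction of the claim's content words
    (4+ letters) that also appear in the cited source."""
    def tokenize(s: str) -> set[str]:
        return set(re.findall(r"[a-z]{4,}", s.lower()))
    claim_words = tokenize(claim)
    if not claim_words:
        return 0.0
    return len(claim_words & tokenize(source_text)) / len(claim_words)

claim = "A pap test is a screening test for vaginal cancer."
source = "A pap test is a screening test for cervical cancer."
# Prints 0.75: most words match, yet the source names a different cancer.
print(f"support: {support_score(claim, source):.2f}")
```

The gap between surface overlap and actual support is exactly why a response can cite reputable links and still not be backed by them.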
Why This Matters
AI Overviews appear above ranked results. When the topic is health, errors carry more weight.
Publishers have spent years investing in documented medical expertise to meet Google’s quality standards for health content. This investigation puts the same spotlight on Google’s own summaries when they appear at the top of results.
The Guardian’s reporting also highlights a practical problem. The same query can produce different summaries at different times, making it harder to verify what you saw by running the search again.
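If you’re auditing these summaries yourself, one workaround is to log every capture with a timestamp and a content hash, so you can show what a query returned at a given moment even if a rerun produces something different. A minimal sketch, assuming you’ve already captured the overview text by whatever means (the `snapshot_overview` helper and the JSONL log format are illustrative assumptions, not a Google API):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def snapshot_overview(query: str, overview_text: str,
                      log_path: Path = Path("ai_overview_log.jsonl")) -> str:
    """Append a timestamped, hashed record of an AI Overview so later
    runs of the same query can be compared against what was seen."""
    digest = hashlib.sha256(overview_text.encode("utf-8")).hexdigest()
    record = {
        "query": query,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "sha256": digest,
        "text": overview_text,
    }
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return digest

# Two captures of the same query can be compared by hash.
h1 = snapshot_overview("pancreatic cancer diet", "Avoid high-fat foods ...")
h2 = snapshot_overview("pancreatic cancer diet", "Eat energy-dense foods ...")
print(h1 == h2)  # False: the summary changed between runs
```

Hashing makes drift between runs cheap to detect, and the stored text shows exactly what changed.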
Looking Ahead
Google has previously adjusted AI Overviews after viral criticism. Its response to The Guardian indicates it expects AI Overviews to be judged like other Search features, not held to a separate standard.