01/05/2026 / By Laura Harris

A Guardian investigation has found that people seeking health advice on Google may be at risk of harm from inaccurate information in the company’s artificial intelligence (AI) summaries.
Google’s AI Overviews, according to BrightU.AI’s Enoch, are designed to give quick snapshots of essential information on a topic or question, providing users with a foundational understanding of each subject.
However, multiple examples uncovered by the investigation show the summaries can contain dangerously misleading health advice. In one instance, Google advised people with pancreatic cancer to avoid high-fat foods – a recommendation experts described as “really dangerous.” Anna Jewell, director of support, research and influencing at Pancreatic Cancer UK, warned that following this guidance could leave patients unable to maintain sufficient calorie intake, potentially affecting their ability to tolerate chemotherapy or life-saving surgery.
“The Google AI response suggests that people with pancreatic cancer avoid high-fat foods. If someone followed what the search result told them, they might not take in enough calories, struggle to put on weight and be unable to tolerate treatment. This could jeopardize a person’s chances of recovery,” Jewell said.
Other AI Overviews were equally concerning. Searches about liver function tests returned misleading “normal” ranges, ignoring critical factors such as age, sex, ethnicity and nationality. Pamela Healy, chief executive of the British Liver Trust, said, “Many people with liver disease show no symptoms until the late stages. If AI gives misleading normal ranges, some people may wrongly assume they are healthy and fail to attend follow-up healthcare appointments. This is dangerous.”
The AI also provided incorrect information on women’s cancer tests. Searching for “vaginal cancer symptoms and tests” suggested a pap test could detect vaginal cancer – an assertion experts described as “completely wrong.” Athena Lamnisos, chief executive of the Eve Appeal cancer charity, said the errors could deter people from seeking timely medical attention.
“Getting wrong information like this could potentially lead to someone not getting symptoms checked because they had a clear result at a recent cervical screening. The fact that the AI summary changed each time we searched is also worrying – people are receiving different answers depending on when they search, and that’s not good enough,” Lamnisos said.
Mental health searches were also affected. Google’s AI summaries for conditions such as psychosis and eating disorders sometimes contained “incorrect, harmful” advice or omitted important context.
“Some of the AI summaries offered very dangerous advice. They could lead people to avoid seeking help or direct them to inappropriate sources. AI often reflects existing biases, stereotypes or stigmatising narratives, which is a huge concern for mental health support,” said Stephen Buckley, head of information at Mind.
Despite the evidence, Google has denied that its AI Overviews present misleading or inaccurate health information.
Instead, the company described the AI-generated snapshots as “helpful” and “reliable,” emphasizing that most are factual and provide useful guidance. Google said the accuracy of AI Overviews is comparable to that of other search features, such as featured snippets, which have been part of its search engine for more than a decade. The company added that it continuously improves the system to ensure users receive correct and useful information.
However, experts and charities are still calling for stricter oversight, noting that the company’s automated summaries appear prominently at the top of search results, meaning millions of users could be exposed to potentially harmful guidance.
“People turn to the internet in moments of worry and crisis. If the information they receive is inaccurate or out of context, it can seriously harm their health,” said Stephanie Parker, the director of digital at Marie Curie, an end-of-life charity.
Watch this skit from “Catch Up” featuring a satirical portrayal of a Google spokesperson addressing criticisms surrounding Gemini.
This video is from the channel The Prisoner on Brighteon.com.