At a Glance
- Google has removed AI-generated health summaries for liver test queries after accuracy concerns
- The summaries failed to account for patients' nationality, sex, ethnicity, or age
- Why it matters: Patients relying on flawed data could delay critical medical care
Google has quietly removed AI-generated health summaries for specific liver-related searches after The Guardian revealed the information was misleading. The move underscores the risks as tech companies race to integrate AI into healthcare.
The Guardian reported Sunday that searches for “what is the normal range for liver blood tests” and “what is the normal range for liver function tests” no longer display AI Overviews at the top of results pages. Instead, users see excerpts pulled from traditional search results.
Accuracy Issues Trigger Removal
The AI Overviews for these liver-related searches contained “masses of numbers, little context and no accounting for nationality, sex, ethnicity or age of patients,” according to The Guardian. The publication noted that experts warned such oversimplified summaries could be dangerous.

Someone with liver disease might delay follow-up care if they rely on an AI-generated definition of what’s normal, The Guardian reported. The lack of personalized context in the summaries created potential for patient harm.
Google Responds To Criticism
A Google spokesperson defended the company’s AI systems in a statement to News Of Fort Worth. “We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information,” the spokesperson said.
The spokesperson added that Google’s internal team of clinicians reviewed the flagged content and “found that in many instances, the information was not inaccurate and was also supported by high quality websites.”
The company stated it works “to make broad improvements” when AI Overviews miss context and “take[s] action under our policies where appropriate.”
AI Healthcare Race Intensifies
The removals come as more people turn to AI for health answers. OpenAI reported last week that roughly 25% of its 800 million regular users submit healthcare-related prompts weekly, with more than 40 million doing so daily.
OpenAI subsequently launched ChatGPT Health, which connects with users’ medical records, wellness apps, and wearable devices. The company also announced its acquisition of healthcare startup Torch, which tracks medical records including lab results, doctor visit recordings, and medications.
Rival AI company Anthropic on Monday announced new AI tools that allow healthcare providers, insurers, and patients to use its Claude chatbot for medical purposes. Anthropic claims these tools can streamline prior authorization requests and patient communications for hospitals and insurance companies.
For patients, Claude can access lab results and health records to generate summaries and explanations in plain language.
Stakes Rise For Patient Safety
As AI advances in healthcare, even minor errors or missing context can carry significant consequences for patients. The Google removal highlights the challenges tech companies face when deploying AI systems for medical information.
The incident raises questions about whether these companies are adequately prepared for the responsibility of providing health information, where accuracy gaps could impact patient decisions and outcomes.
Google’s quiet removal of the problematic summaries suggests the company recognizes these risks, even as it continues developing AI healthcare applications. The move demonstrates the ongoing tension between rapid AI deployment and the need for accuracy in medical contexts.