The Death of Trust: A GP's View on Medical Misinformation

We live in the age of information — which, unfortunately, means we also live in the age of misinformation.


As a GP, I’ve had consultations that felt more like debates. Not because the patient was hostile — but because the internet had already offered them a diagnosis, a treatment plan, and usually, a conspiracy theory for dessert.


It’s tempting to sigh, dismiss, or double down. But the truth is, this challenge isn’t new. It’s just evolved — faster than our frameworks, and louder than our guidelines.


Misinformation Is Old Medicine in a New Bottle


The problem isn’t new. People have always shared bad health advice. Bloodletting, mercury pills, radioactive tonics — once cutting-edge, now cringeworthy. And organised resistance to public health? That too. Anti-vaccine riots broke out in Milwaukee in 1894 over smallpox mandates.


So the idea that public health guidance is some elite overreach? Not exactly a TikTok invention.


What’s changed is speed.


A claim that used to spread by pamphlet now goes global in 30 seconds.



There’s a key distinction to be made at this point:


  • Misinformation = false info shared without intent to harm.

    • e.g., A well-meaning Instagram account shares a fake vaccine stat from a parody site.


  • Disinformation = false info shared with intent to mislead or harm.

    • e.g., Coordinated campaigns undermining vaccine uptake for political or financial gain.


The line between them is fuzzy — but both erode trust and distort patient understanding in the same way.


Why Smart People Still Share Bad Info


The spread of misinformation isn’t about IQ. It’s about human psychology:


  • Illusory truth effect: Repeat a falsehood enough, and it starts to feel true.


  • Confirmation bias: We trust information that matches our existing beliefs.


  • Social proof: If our peers share it, we assume it’s reliable — or at least not dangerous.


Even when we know something’s dodgy, if it confirms our worldview, we’re more likely to click, comment, and share.


Dr Google Isn’t Just Popular — He’s Ubiquitous


72% of people search online for health info — and most of them start with a search engine.


But Google isn’t a librarian. It’s a popularity algorithm. It prioritises clicks, not clinical accuracy.



During COVID-19, some physicians themselves contributed to misinformation online, from fuelling vaccine hesitancy to promoting debunked cures.


If even the professionals are misstepping, how can patients navigate the noise?


Social Media: Friend and Foe


Social platforms have undeniably helped empower patients, connecting them with peer communities, lived experience, and support they can't get in a ten-minute consultation.


But there’s a catch. These platforms amplify both wisdom and nonsense. Emotional, simplified narratives are more likely to go viral than dry scientific explanations.


That’s a dangerous algorithm.


Real-World Impact in Clinic


I see the effects every week:

  • Patients arrive with pre-loaded diagnoses from influencers and forums.

  • Consultations derail because of TikTok claims or miracle cures.

  • Trust fractures when the evidence contradicts what they’ve already internalised.


The worst part? That quiet moment when you see doubt flicker in someone’s eyes — not about the illness, not about the TikTok influencer, but about you.


It Hits the Vulnerable Hardest


Not everyone has the tools to decode digital health info.

  • Low health literacy increases risk of falling for falsehoods.

  • Marginalised groups — especially those with historic mistrust of healthcare — are disproportionately targeted or affected.

  • In maternal care, misinformation contributes to higher risk, lower-quality care, and poorer outcomes.


Cyberchondria — anxiety from online health searches — is now recognised as a real condition, exacerbating distress rather than offering clarity.


What Can We Actually Do?


No, we can’t ban Google. But we can help patients navigate the noise:


  • Debunk - Offer clear, non-judgmental corrections — but avoid jargon or condescension (Phillips, 2023).

  • Prebunk - Flag common myths before they arise — especially around vaccinations and chronic conditions.

  • Model Professionalism Online - Physicians’ online behaviour shapes patient trust, even outside clinical hours (I know, glass houses).

  • Champion Digital Health Literacy - Support tools or plug-ins that teach patients to vet sources in real time.

  • Push Platforms to Step Up - Tech companies must do more than disclaimers. AI-driven moderation and promotion of authoritative content are possible — but they need pressure and partnership.


Final Thoughts — Rebuilding Trust in the Age of Clicks


This isn’t about patients being naïve. It’s about systems failing to keep pace with information velocity.


As clinicians, we’re asked to compete not just with disease, but with content — some of it slicker, louder, and more appealing than evidence-based truth.


We need to:

  • Educate without patronising

  • Empathise without enabling

  • And speak louder — but not shoutier — than the misinformation machine


If we don’t, someone else will fill that space. And they may not care whether their advice helps, harms, or kills.


This blog is based on a paper I wrote for the journal Medicine. If you’re interested, the link to the paper is available here


Stay Bunked

—DW

