
AI and the Modern Consultation: A Doctor’s Eye View

Updated: Jul 15

I’m an early adopter. Have been for a while - hence the “early” bit.


I already use AI all the time — transcribing my dictated car rants from voice notes into blogs like this one, brainstorming MCQs for teaching sessions, writing case studies for students (not for CPD, nor with identifiable shit - I don't need that heat on me).


I still do the final pass, still lace it with whatever brand of dry sarcasm and existential weariness makes it mine — but it’s in the workflow now.


And if you’re a clinician who says you’ve never tried ChatGPT to word that angry reply in a non-GMC-worthy way - I straight up don’t believe you.


But AI isn’t just a toy. It’s here. And it’s already slithering into the consultation room — whether we like it or not.


The Google Generation Is Already Here


We’ve been in the “Google-first, doctor-second” era for a while now. The deference is dead. Patients don’t turn up to ask what’s wrong anymore — they come to verify what they’ve already decided.


Sometimes they’re right. Sometimes they’ve Googled “IBS” and landed on a Lyme-and-mould-toxicity subreddit run by a woman selling fermented celery enemas. And that’s before AI even enters the picture.


Right now, search engines optimise for clicks, not truth. Engagement wins. Outrage wins. Confidence wins. And if you’re already nervous, in pain, or pissed off? You’ll click the scariest, most emotionally validating thing you see. That’s not empowerment. That’s algorithm-assisted anxiety.


AI Doesn't Fix That — Yet


Current AI models hallucinate like they’ve taken a fistful of benzos and watched too many TED Talks. They make up facts with unsettling confidence. They blend plausible-sounding nonsense with occasional pearls of accuracy. And most people — including clinicians — don’t always spot it.


Sure, it’ll get better. Eventually, we’ll probably see models that can summarise NICE guidance, spot red flags, and cite actual references. Maybe even appraise papers. Imagine a chatbot that explains IBS and tells you why your symptoms don’t warrant a colonoscopy without being a dick about it.


But right now? No. It can’t reliably do that. Not without oversight. Not without serious curation. And definitely not in a consultation where someone walks in with 12 tabs open, asking why I haven’t ordered a cortisol test for their “adrenal crash.”


The Danger Isn’t AI — It’s Us


What worries me isn’t the tech. It’s how people are already treating it like gospel. If AI tells them something — especially if it sounds smart and scary and fits their vibe — they believe it. Even when it’s crap. And now I’ve got to refute it. Calmly. Compassionately. Without sounding dismissive, even when it’s nonsense dressed in scrubs.


The risk is especially high in “grey area” medicine — chronic fatigue, pain syndromes, functional symptoms. AI can look like it’s validating you, when actually it’s just parroting the most-searched fluff. That damages trust. In me. In the system. In evidence.


And if you don’t believe me, go read Guo & Chen on framing bias in AI responses. Or Verma’s work on AI’s complete lack of empathy and context. AI doesn’t know your trauma. It doesn’t know how you experience your body. It sure as hell doesn’t know why you're scared to take a pill.


We’re even seeing early signs of patients using AI and FOI requests to audit their care journey in its entirety. All without the context of changing medicine, changing access to services, and the ever-present issue of personality clash.


It’s a looming monster.


Meanwhile, in the Real World...


There are places AI already works.

  • Transcribing letters.

  • Flagging drug interactions.

  • Triaging referrals.

  • Helping with counselling, in a low-risk, low-stakes, repeatable kind of way.

  • Being trialled in chronic disease check-ins and mental health chatbots.


But it’s not replacing us. Not soon. Maybe not ever.


Because medicine isn’t data crunching. It’s storytelling. Pattern recognition. It’s breaking bad news at 3 a.m.


It’s knowing that the man asking about joint pain really just misses his wife. It’s reading body language and dodging landmines in a five-minute chat with someone who’s been suffering for five years.


There is no model that can do that - that takes empathy. And if one ever could, is it really AI any more?


Meta.


What We Need to Do


We don’t need to panic. We do need to engage.


Now. Not after breakfast. Now.


Clinicians have to be part of the AI conversation. Not just the techies and academics. We need to help build it, test it, critique it.


We need to define what “trusted sources” are before the chatbots start citing Gwyneth Paltrow.


We need to teach patients how to appraise information again — the way we should’ve done with social media, before it swallowed the culture whole.


Because like it or not, patients trust this stuff already.


So let’s not get replaced by it. Let’s out-human it. Let’s use it — but better. Let’s keep medicine, well... medicine.


And as always: if in doubt, start with a chat. Not a chatbot.


Stay Human

—DW


