The Cult of QIP (and Why I’m Sick of It)

Updated: Jul 6

Or: How I Learned to Stop Worrying and Despise Audits


Let’s start with something blunt:


Quality Improvement, in its current state, is a parody of itself.


Not because change is bad. Not because safety isn’t important. But because the entire process has become a bureaucratic ouroboros—a snake eating its own tail made entirely of stickers, checklists, and PDSA cycles nobody finishes.


And research? Don’t get me started. Actually, do. You’ve come this far.


We’ve Made Research Impossible


I’ve tried—twice—to get benign, sensible, patient-safety-centric studies off the ground. I wasn’t trying to inject orphans with uranium or ask ICU patients what their favourite Teletubby was. I wanted to look at real things—interruptions on ward rounds, communication failures, consultation bounce rates.


And still?


- Three-hour meetings.

- Seven-hour online modules reminding me not to be a Nazi.


Eventually, it collapses under the weight of its own sanctimony.


And the kicker? If you’re lucky enough to get past that research Olympus, your project then gets blocked because it might offend someone. You know, data. Someone might be seen to be underperforming. And we can’t have that, can we?


The Cult of QIP: All Sticker, No Substance


QIPs are everywhere.


They’re the glitter of medical training: looks shiny, but gets everywhere and is functionally useless within 24 hours.


“Do your QIP.”

“Re-audit.”

“PDSA cycles.”

“Maybe add a sticker.”

“Add another sticker.”

“Wait, are the stickers laminated?”


If I had a pound for every DVT prophylaxis audit done by a reluctant FY2 staring into their own mortality via an Excel spreadsheet, I’d have enough money to pay for a single consultant hour in private care.


Let’s be honest:


QIPs are not learning opportunities. They are bureaucratic compliance exercises wrapped in clinical drag.


And if you want real change? It won’t come from a sticker.


Auditing Things That Piss Me Off Shouldn’t Be This Hard


The best audits I’ve ever seen - ever - started with someone being genuinely pissed off.


- Sick of handover being a mess?

- Annoyed patients keep getting admitted without social histories?

- Tired of the same stupid interruptions mid-ward-round?


That’s where quality starts - from friction.


But when you try to audit real friction, suddenly you’re in hot water because it might make someone uncomfortable.


I once tried to audit how often doctors versus nurses were interrupted during ward rounds. I didn’t name names. I didn’t even highlight trends. But as soon as word got out that someone was watching, behaviour changed. The Hawthorne effect kicked in.


The audit became useless. Not because it wasn’t valuable, but because we can’t handle reflection unless it’s pre-approved, pre-polished, and pre-neutralised.


We Can’t Audit What Actually Matters


You want to audit safety netting?


You’ll be told no.

Might expose variability between doctors.


You want to see how many referrals bounce back because someone didn’t include a haemoglobin from 2017?


Not politically tenable.


Want to anonymously measure how often someone’s overbooking because they’re terrified of complaints?


Forget it.


But here’s the thing:


Data should be uncomfortable sometimes.


It should spark conversation, introspection, maybe even argument. But instead, we create safe, soft, neutered QIPs.


Stickers.

Stamps.

Training reminders.

“Did you put the chart in the chart?”


Tokenism and Tickboxes: A Culture of Managerial Theatre


Somehow, QIPs have become currency.


Not currency that buys change, but currency that buys approval.


- Do one for your ePortfolio.

- Do one for ARCP.

- Do one because your supervisor keeps nagging you.

- Do one because you want a fellowship.

- Do one because… you have to.


But rarely - rarely - do we do one because it will actually lead to something getting better.


And when we do? It gets shut down, diluted, or slapped with so many conditions it forgets what it was supposed to measure.


The result?


We aren’t improving anything. We’re performing improvement.


If We Really Cared About Quality, We’d Build It Into the System


Here’s the dream:


- Centralised, live QIP dashboards.

- System-level analytics run by AI.

- A network of shared audits across sites.

- Trainees choose one and contribute meaningfully, not just scramble to fill a PDSA box.


One national DVT audit. One antimicrobial audit. One mental health liaison audit. Not 400 mediocre ones.


QIPs are ripe for automation.


But no—because God forbid we improve the infrastructure.


It’s easier to make the FY2 audit codeine prescribing again while working a full rota, preparing for exams, and getting shouted at for “not being a team player.”


Research Should Be a Career. Not a Hobby.


And finally:


Medical researchers should be researchers. Not doctors doing 11 jobs.


Let them publish. Let them teach if they can teach.


But keep them the hell away from overworked juniors trying to survive the ward.


Don’t make me take a four-hour “how not to experiment on prisoners” module to do a patient satisfaction survey.


Let researchers research.

Let clinicians flag issues.

And let managers stop pretending stickers change culture.


TL;DR


- Most QIPs are shite. I’ll die on that hill.

- The System doesn’t want the uncomfortable data—the real stuff.

- And we keep pretending this is about learning. It’s not.


If we want real improvement, it won’t come from another cycle on whether we’ve ticked the right box.


It’ll come from letting the right people audit the right things, with the right support—without the fucking stickers.


Stay Cyclical

—DW
