We all know that one atom of experience isn’t enough to spot a pattern, but when you put lots of experiences together and process that data, you get new knowledge. This might sound obvious, but following it through – watching patterns emerge from the noise – still gives me a sense of beauty and awe.
A paper in the British Medical Journal this week is a perfect example. Medicine is an imperfect art, so it’s inevitable that healthcare workers will make some suboptimal decisions: not so much the dramatic stuff – injecting someone with the wrong drug – but the marginal calls, the small tweaks in a patient’s journey, affecting outcomes in ways that are harder to predict.
These kinds of complex decisions will inevitably be affected by context, and one example of that context is the franticness of A&E. Waiting times are a problem in a lot of countries. In the UK we introduced a four-hour ceiling as our target, and most hospitals met it. Abolishing that four-hour target was one of the coalition government’s first NHS reforms. But do waiting times matter?
Some researchers in Canada decided to find out. They gathered data from all the people who visited any A&E department in Ontario over a five-year period: this gave them data on a dizzying 22 million visits. Of these, 14 million resulted in the patient being seen and then sent home. Then they followed these patients up to see what happened, and specifically, to see if they died.
They also had another piece of information: for each patient they knew, from internal hospital data, what the average waiting time in A&E was at the time they arrived. This means that they were able to compare the odds of death for patients discharged when the average wait in A&E was six hours or more against the odds of death for patients discharged when the average wait was less than one hour. Remember, this isn’t the time that individual patient waited, it’s the average wait in the department, as a proxy for how frantic things were.
The results were as you might fear. For patients sent home who attended an A&E department when the average wait there was more than six hours, the odds of death were almost twice those of patients sent home when the wait was less than one hour. This odds ratio was similar for patients measured as high or low urgency at triage, so it’s true for patients with both serious and less serious presentations.
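For anyone who wants to see the arithmetic behind a claim like “almost twice the odds”, here is a minimal sketch of how an odds ratio of this kind is computed. All the counts below are invented for illustration – they are not the study’s actual figures – but they are picked to produce an odds ratio of roughly 1.9, the sort of effect described above.

```python
# Sketch of an odds-ratio calculation, with entirely hypothetical counts.
from math import exp, log, sqrt

deaths_long, alive_long = 95, 99905    # sent home when average wait > 6h (hypothetical)
deaths_short, alive_short = 50, 99950  # sent home when average wait < 1h (hypothetical)

odds_long = deaths_long / alive_long
odds_short = deaths_short / alive_short
odds_ratio = odds_long / odds_short    # here, roughly 1.9: "almost twice the odds"

# 95% confidence interval, computed on the log scale (the standard Woolf method)
se = sqrt(1/deaths_long + 1/alive_long + 1/deaths_short + 1/alive_short)
lo, hi = exp(log(odds_ratio) - 1.96*se), exp(log(odds_ratio) + 1.96*se)
print(round(odds_ratio, 2), round(lo, 2), round(hi, 2))  # prints 1.9 1.35 2.68
```

Notice that the confidence interval excludes 1, which is what “statistically significant” means in this context: even allowing for chance, the odds in the two groups genuinely differ.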
Even more starkly, there’s a very clear trend in the data, where each step up in waiting time results in a higher risk of death. This becomes statistically significant when average waits reach just three hours. For those who care about saving money, the odds of being admitted – and so taking up an expensive hospital bed – also rose dramatically as average wait time increased.
However important you might find those specific results, the methodological issues are much more interesting, and they all arise because of the big numbers involved. We would never have discovered any of this without huge numbers of patients’ records, because the outcomes involved are rare: you only see a handful of deaths out of every 10,000 people sent home from A&E.
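You can see why the big numbers matter with a little back-of-the-envelope sketch. Assuming a made-up baseline of about 5 deaths per 10,000 discharges – consistent with “a handful out of every 10,000”, not a figure from the paper – the confidence interval around an odds ratio estimated from one hospital’s worth of visits is uselessly wide, while millions of visits pin it down tightly:

```python
# Hypothetical sketch: precision of an odds ratio at two sample sizes,
# assuming a rare outcome (~5 deaths per 10,000) and a true odds ratio of 2.
from math import exp, log, sqrt

def or_ci(n_per_group, base_rate, true_or=2.0):
    """Approximate 95% CI for an odds ratio of `true_or`, pretending the
    expected counts are observed exactly (a deliberate simplification;
    for a rare outcome, odds and risk are nearly interchangeable)."""
    d_short = base_rate * n_per_group       # expected deaths, short waits
    d_long = true_or * d_short              # expected deaths, long waits
    se = sqrt(1/d_long + 1/(n_per_group - d_long)
              + 1/d_short + 1/(n_per_group - d_short))
    return exp(log(true_or) - 1.96*se), exp(log(true_or) + 1.96*se)

print(or_ci(10_000, 0.0005))     # CI roughly 0.7 to 5.9: consistent with no effect at all
print(or_ci(1_000_000, 0.0005))  # CI roughly 1.8 to 2.2: a doubling is unmistakable
```

With only 10,000 visits per group you’d expect around five deaths in one arm and ten in the other – far too few to tell a doubled risk from random noise. That is exactly why 14 million discharges were needed.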
What’s more, because they had so many patients’ data, the researchers were able to see an effect even within hospitals, over time: so it wasn’t just that crap hospitals overall had longer waits and higher death rates. Amazingly, they also didn’t lose a single patient during follow-up: the death – or otherwise – of every single patient who was sent home from A&E could be tracked through their notes.
No individual patient or doctor could possibly have shown with any certainty, from their own personal experience of any one adverse outcome, that long waiting times in A&E are dangerous. This study is a remarkable testament to the power of good-quality computerised health records, and the kinds of new knowledge you can generate from interrogating them. It’s also, I’ll agree, a pretty frightening result.