The challenge of predicting readmissions

It would be convenient if, along with their white coats, hospitalists were issued crystal balls. Thanks to performance measures and nonpayment policies, the question of whether a discharged patient will be readmitted within 30 days is becoming increasingly important in hospital practice.

Lacking any magical method for divining readmissions, a group of hospitalist researchers recently tried to determine whether physicians could rely on their instincts to predict which patients would come back, thereby making it easier to target readmission-prevention efforts.

The results of their experiment, conducted at the University of California, San Francisco Medical Center and published in the July Journal of General Internal Medicine, were disheartening. All of the studied providers, including physicians, case managers and nurses, showed poor ability to determine which patients would be readmitted. The study also tested one prediction algorithm, which performed no better than the humans in picking out the 33% of the 159 patients who returned within 30 days.

Lead author Nazima Allaudeen, MD, a hospitalist at UCSF, recently spoke to ACP Hospitalist about physicians' inability to predict readmissions and the necessity of preventing them nonetheless.

Q: What motivated you to investigate this topic?

A: The topic of readmissions itself is incredibly interesting to me and has become a hot topic. You hope that the work you do for a patient is maintained after they leave the hospital. I think all of us, personally and anecdotally, have seen that fall apart. People get readmitted sometimes with exactly the same thing.

As far as the ‘Can we predict?’ part, we talk a lot about trying to prevent readmissions. We talk about, ‘This patient's high risk’ and ‘This patient's low risk’ and it's often a subjective assessment. Some of what we do is dependent on that. If you think there is a good chance they're going to come back, maybe you do some things differently. It made me think, ‘Do we really know who's going to come back with all these subjective assessments?’ Most people would think that they are pretty good at it, so it was really interesting to find out that across the board, we really weren't as smart as we thought we [were].

Q: Your results showed that a prediction algorithm, the Probability of Repeat Admission (Pra), wasn't as smart as one might hope, either. Was that a surprise?

A: Not really. The algorithm contains things that are, for the most part, normally in a chart: age, do they have diabetes, do they have heart disease. I wanted to choose something simple. The one thing that's not usually in the chart that's part of that algorithm is the person's self-assessment of their health status. We needed something practical because some of these assessments take things into account that aren't normally in the chart.

The second reason [I chose the algorithm] is that it's the best that's out there. The fact that the Pra didn't do that well wasn't a huge surprise, because it didn't do great even in the initial studies. It had an AUC [area under the curve] of 0.6. Flipping a coin is 0.5, [so the algorithm is] better but not great. My own hypothesis was that human beings who are seeing the patient and taking into account a lot more than six variables would do better, particularly case managers, who really look at the medical and the social. That did not turn out.
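For readers less familiar with the AUC comparison Dr. Allaudeen describes, the sketch below is a minimal illustration, not the Pra itself. It uses entirely made-up data and an arbitrary amount of added signal to show how a weakly informative risk score lands near an AUC of 0.6 while pure noise hovers around the coin-flip value of 0.5; the variable names and numbers are assumptions for demonstration only.

```python
# Minimal illustration (hypothetical data): how a weak risk score's
# discrimination compares with chance, measured by the area under the
# ROC curve (AUC). This is NOT the Pra algorithm or the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated outcomes: 1 = readmitted within 30 days, 0 = not readmitted
readmitted = rng.integers(0, 2, size=200)

# A weakly informative "risk score": mostly noise plus a small amount of
# signal, chosen so its AUC falls near 0.6, similar to the Pra's reported
# performance in its original validation studies
weak_score = rng.normal(size=200) + 0.35 * readmitted

# A completely uninformative score (pure noise) sits near AUC 0.5,
# the equivalent of flipping a coin
coin_flip_score = rng.normal(size=200)

print("weak predictor AUC:", round(roc_auc_score(readmitted, weak_score), 2))
print("chance-level AUC:  ", round(roc_auc_score(readmitted, coin_flip_score), 2))
```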

Q: What do these results mean for physicians trying to prevent readmissions?

A: The take-home point from all of this is that you really need to treat all of your medicine patients as if they may come back. We need to ask ourselves what could go wrong. I think we need to do that for every patient, since at this time, no tools—whether our own subjective assessment or even algorithms—are that good. A couple of things have come out recently that may be better, but really none of them are great. Until we have a better tool, we really need to treat everybody as a potential readmission.

Q: Is there potential for better tools or prediction methods to be developed?

A: It's possible, but it gets back to the heterogeneity of medicine patients, and this interface between both clinical and social issues. I think that might be at the heart of why it is so difficult to predict. We're talking about a pretty broad range of diagnoses as well as a broad range of comorbidities. Then you add on the psychosocial piece: Do they have social support at home that may keep them out of the ER? Do they have social support that may help them be compliant with the medications or other recommendations like diet? Do they have a way to get to their appointments and follow-up labs? All of these things play a role. There are so many factors involved. I think that's why the many different people who have tried to come up with algorithms have been modestly successful at best.

Q: Have the results caused you to change your practice individually?

A: Yes, I do think about everything that could fall apart when a patient leaves. I pay more attention to the social piece. That has an effect on how long I may keep the patient in the hospital while we figure out what kind of supports they have at home. I'm definitely looking at it from the eye of what could go wrong.