In the face of growing pressure to reduce readmissions, more hospitals are implementing prediction models that ideally will alert them to patients at risk of returning.
But not every prediction model works equally well for every hospital, say readmission experts, who encourage hospitalists and other clinicians to do their homework before choosing or developing the best model for their facility.
“I see a lot of places reach for off-the-shelf models,” said Devan Kansagara, MD, FACP, an assistant professor of medicine at Oregon Health & Science University and director of Portland VA Medical Center's Evidence-based Synthesis Program. “With clinical models, you need to understand who your local population is and what the important drivers of readmission are. These are all steps that can be taken by hospitals and practices that come before the incorporation” of a readmission risk prediction model.
Stepping stones to a system
Readmission prediction models vary vastly by complexity, ease of implementation, and risk criteria. To get started, clinicians should first determine their needs, collect and analyze relevant data, and evaluate how the model will be used, said Craig A. Umscheid, MD, FACP, a hospitalist and director of the Penn Medicine Center for Evidence-based Practice in Philadelphia. He and his team recently developed and integrated an automated readmissions prediction model into an existing electronic health record (EHR) at The University of Pennsylvania Health System.
Most important, strong interventions should be created and ready to use prior to activating models, Dr. Umscheid said.
“Predicting the risk of readmissions is only going to change readmission rates if that prediction is used in routine clinical practice and used by the right players on the right interdisciplinary team using the right interventions for those populations that are flagged as high risk,” he said.
Hospitals may want to conduct a case review of monthly readmissions to identify gaps in care that might be causing readmissions, Dr. Kansagara said. They should also assess health system resources, as well as how staff workload may be affected by predictive models, to determine the feasibility of various models.
One factor to consider is that some models require manual data collection, while others rely solely on computerized systems to access patient data and calculate scores. For example, the Acute Physiology and Chronic Health Evaluation (APACHE) III score, a commercial clinical prediction tool widely used in ICUs to predict mortality, requires data that are manually collected by trained bedside nurses, taking more than half an hour per patient, according to data cited in a 2011 Journal of Critical Care study.
The time and energy associated with data collection and calculation of risk prediction scores are common barriers to model adoption, said Vitaly Herasevich, MD, PhD, assistant professor of medicine and anesthesiology in the department of anesthesiology at Mayo Clinic in Rochester, Minn. He and his colleagues in the Multidisciplinary Epidemiology and Translational Research in Intensive Care (METRIC) research group developed the Stability and Workload Index For Transfer (SWIFT), a score to help facilitate patient discharge decisions from ICUs that is integrated into the EHR or can be calculated manually.
In the era of EHRs, “[a] manual collection approach is not sustainable,” said Dr. Herasevich, who co-authored the 2011 JCC study. “It is [a] waste of resources and time on [a] task computers do better—routine numbers crunching. Collection of data for prediction models is not [an] efficient usage of physicians' time.”
Dr. Herasevich's study evaluated the use of an automatic calculation tool that can be embedded into a comprehensive EHR to quantify risk of unplanned ICU readmissions. Researchers compared the gold-standard manual calculation of SWIFT with the performance of the automatic SWIFT calculation tool. Results showed that using an electronic tool that automatically generates scores for prediction models such as SWIFT decreases clinician workload and improves consistency of data collection.
Decide on your data
Hospitals must decide what type of data they want their prediction model to measure. Some systems operate on a simpler level, while others take into account multiple variables and more complex information, said Jeffrey Schnipper, MD, ACP Member, a hospitalist at Brigham and Women's Hospital in Boston and an associate professor of medicine at Harvard Medical School.
For example, the generic model LACE identifies risk factors that are relatively easy to evaluate, Dr. Schnipper said. LACE considers Length of stay, Acuity of the admission, Comorbidity of the patient, and Emergency department use in the last 6 months. An April 2010 study in the Canadian Medical Association Journal that analyzed the use of LACE by 11 Ontario hospitals found the model had a high degree of success in predicting the risk of early death and unplanned patient readmission.
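Because LACE combines only four inputs, the arithmetic is simple enough to sketch. The point weights below follow those commonly cited from the 2010 CMAJ derivation; they are reproduced here for illustration only, and any hospital adopting the score should verify the weights against the original publication and validate them locally.

```python
def lace_score(length_of_stay_days, emergent_admission, charlson_index, ed_visits_6mo):
    """Illustrative LACE calculation (not a clinical tool).

    Weights as commonly cited from the 2010 CMAJ derivation;
    verify against the original paper before any real use.
    """
    # L: points for length of stay
    if length_of_stay_days < 1:
        l_points = 0
    elif length_of_stay_days <= 3:
        l_points = length_of_stay_days  # 1-3 days -> 1-3 points
    elif length_of_stay_days <= 6:
        l_points = 4
    elif length_of_stay_days <= 13:
        l_points = 5
    else:
        l_points = 7
    # A: acuity of the admission (emergent/urgent admission scores 3)
    a_points = 3 if emergent_admission else 0
    # C: Charlson comorbidity index, capped at 5 points
    c_points = charlson_index if charlson_index < 4 else 5
    # E: emergency department visits in the prior 6 months, capped at 4
    e_points = min(ed_visits_6mo, 4)
    return l_points + a_points + c_points + e_points
```

A 5-day emergent stay for a patient with a Charlson index of 2 and one recent ED visit, for instance, scores 4 + 3 + 2 + 1 = 10 on the 0-to-19 scale.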
Other models, such as the commercially available Striving for Quality Level and Analyzing of Patient Expenditures (SQLape), use a more complex hierarchical diagnosis and procedures-based approach. SQLape combines data on comorbidity, age, and medical services use into 49 risk categories.
“You could probably increase the accuracy of your model by including lots of different variables,” said Dr. Kansagara, who co-authored a 2011 review published in JAMA of 26 unique hospital readmission prediction risk models. “But you have to ask yourself if you can extract the data from your EHR, which wouldn't take clinical time, but it does involve a team [and the development of] software to pull data from the EHR.”
In addition, some prediction models are based on insurance claims data, while others center on clinical data. Claims data can take significantly longer to collect and analyze, limiting the time window of intervention, said Curt Sellke, vice president of analytics for the Indiana Health Information Exchange (IHIE) in Indianapolis.
“By that point, patients may have already been readmitted once, twice, three times,” he said.
Commercial vs. custom
Mr. Sellke and his team at IHIE are developing their own 30-day readmission risk prediction model based on more than 100 distinct variables and powered by clinical data. The goal is to ultimately deploy the model to locations across the state.
Developing a custom model is no simple task, Mr. Sellke said. It takes a multi-skilled group of leaders, including a data guru, a statistical specialist, and an expert with strong clinical knowledge, such as a physician, he said.
“This is a fairly new field, especially in health care, so the first [question] is, do you have the staff necessary if you're going to build it from scratch?” Mr. Sellke said. “If not, is there a model that you can purchase? If this is the first time you're doing this, it helps to have a good partner to work with.”
Identifying patients at higher risk for readmission before discharge was the impetus behind a predictive score developed at Brigham and Women's Hospital. The HOSPITAL score is based on 7 independent factors: Hemoglobin at discharge, discharge from an Oncology service, Sodium level at discharge, Procedure during the index admission, Index type of admission, total Admissions during the last 12 months, and Length of stay. A 2013 study in JAMA Internal Medicine that analyzed use of the score on patient discharges found that 27% of patients were classified as high risk, with an estimated potentially avoidable readmission risk of 18%.
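Like LACE, the HOSPITAL score is a simple additive point system. The sketch below uses the point weights as published in the 2013 JAMA Internal Medicine study; the thresholds and weights are reproduced from memory of that publication for illustration only and should be verified against the original paper before any use.

```python
def hospital_score(hemoglobin_discharge, oncology_discharge, sodium_discharge,
                   procedure_performed, nonelective_admission,
                   admissions_prior_year, length_of_stay_days):
    """Illustrative HOSPITAL score (not a clinical tool).

    Point weights as published in the 2013 JAMA Internal Medicine
    study; verify against the original before any real use.
    """
    points = 0
    points += 1 if hemoglobin_discharge < 12 else 0   # Hemoglobin < 12 g/dL at discharge
    points += 2 if oncology_discharge else 0          # discharged from an Oncology service
    points += 1 if sodium_discharge < 135 else 0      # Sodium < 135 mmol/L at discharge
    points += 1 if procedure_performed else 0         # any Procedure during index admission
    points += 1 if nonelective_admission else 0       # Index admission type: urgent/emergent
    if admissions_prior_year > 5:                     # Admissions in the previous year
        points += 5
    elif admissions_prior_year >= 2:
        points += 2
    points += 2 if length_of_stay_days >= 5 else 0    # Length of stay >= 5 days
    return points
```

For example, an anemic oncology patient (hemoglobin 11 g/dL, sodium 130 mmol/L) with a procedure during a 6-day nonelective stay and 3 admissions in the prior year would score 1 + 2 + 1 + 1 + 1 + 2 + 2 = 10 of a possible 13.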
Despite their advantages, not every hospital and health system can afford the time, resources, and research it requires to design a custom prediction model. In that case, physicians and hospitals should thoroughly research available commercial products and ensure the systems are based on characteristics similar to their own facility, Dr. Umscheid said.
“The advantage is that you could plug and play, integrating a commercial model into your own EHR as long as you have the staff who can do that,” he said. “The disadvantage is that the external model wasn't validated in your own hospital population, so if your hospital population is very different from the population for which the model was derived and validated, then it may inaccurately predict.”
Interventions are key
Risk prediction models may be helpful, but following up with intensive interventions is the heart of reducing readmissions, Dr. Schnipper said.
“A lot of it should be about coaching patients about managing conditions after leaving the hospital,” he said. Discuss potential “faster follow-up appointments and have honest conversations with the family,” he recommended.
Telemonitoring may be another option, as well as partnering with community health workers or visiting nurses who can assist patients in the home. Other interventions include making follow-up phone calls to patients or having unit-based pharmacists who dispense medication and provide medication reconciliations and education at discharge to high-risk patients and their caregivers.
Clinicians concerned about finding the time to identify the correct intervention can consider purchasing software that not only alerts hospitals to patients at higher risk of readmission but also suggests the most appropriate post-hospital intervention and refers patients to the agency best equipped to provide it.
“Hospitalists have a lead role in this,” Dr. Umscheid said. “Their partners should include their primary care counterparts, the pharmacists they work with in the inpatient setting, the social workers and clinical resource coordinators, nursing staff, and community groups.”