EHR-based trigger detects preventable adverse events

An electronic trigger flagged admissions for review, and reviewers detected preventable adverse events, including diagnostic errors and care management-related events.


Background

Most health systems cannot reliably determine which patient safety events may have been preventable. Identifying missed diagnoses and other events in the hospital is an important first step in preventing future errors, but it can be hard to do, given the quantity of clinical data to be reviewed. Hospitals and quality improvement experts need electronic tools to narrow the number of medical records they have to comb through to identify preventable events, said Hardeep Singh, MD, MPH.

“Your other option is you review thousands of records manually,” said Dr. Singh, who is chief of the Health Policy, Quality, and Informatics Program at the Michael E. DeBakey VA Medical Center and professor of medicine at Baylor College of Medicine, both in Houston.

In recent years, he and his research team have built such tools, called triggers, to detect diagnostic errors and delays in follow-up of patients with abnormal clinical findings in the primary care setting. They expanded their efforts to the inpatient setting by modifying an existing tool, the Institute for Healthcare Improvement's (IHI) Global Trigger Tool.

While the IHI's tool is typically used to track event rates, “We wanted to see if the change of the methodology and the new trigger is able to identify diagnostic errors, which is an area that we think is very understudied,” said Dr. Singh.

How it works

The trigger's algorithm selected hospitalizations that were associated with care escalation (e.g., ICU transfers and initiation of rapid response teams) within 15 days of admission and flagged cases that raised suspicion for preventable adverse events, effectively creating a much smaller crop of medical records to review. “The best analogy of electronic triggers that I describe is it is like picking needles in a haystack by making the haystack smaller and using a magnet to pull out the needles,” said Dr. Singh.
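To make that selection logic concrete, here is a minimal sketch of how such a trigger query might be expressed in code. It applies only the criteria described in the article (care escalation within 15 days of admission, restricted to lower-risk patients); the field names, age cutoff, and comorbidity threshold are hypothetical placeholders, not the study's actual specification.

```python
# Illustrative sketch of the trigger's selection logic (not the study's actual code).
# Field names and the low-risk cutoffs are assumptions for demonstration only.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Admission:
    patient_age: int                  # hypothetical field
    comorbidity_count: int            # hypothetical proxy for comorbid burden
    admit_date: date
    escalation_date: Optional[date]   # first ICU transfer or rapid-response call, if any

def trigger_flags(admissions: list[Admission],
                  max_age: int = 65,           # assumed cutoff for "younger" patients
                  max_comorbidities: int = 2,  # assumed cutoff for "minimal" comorbidity
                  window_days: int = 15) -> list[Admission]:
    """Flag lower-risk admissions with care escalation within the 15-day window."""
    flagged = []
    for adm in admissions:
        if adm.escalation_date is None:
            continue  # no ICU transfer or rapid-response activation
        if (adm.escalation_date - adm.admit_date).days > window_days:
            continue  # escalation occurred outside the 15-day window
        if adm.patient_age > max_age or adm.comorbidity_count > max_comorbidities:
            continue  # exclude higher-risk patients, for whom escalation is less unexpected
        flagged.append(adm)
    return flagged
```

The resulting list of flagged admissions is the "smaller haystack" that human reviewers would then examine for preventable adverse events.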

Results

Between March 2010 and August 2015, 887 of 88,428 hospitalizations were associated with care escalation, and the trigger flagged 92 of those admissions, according to results published in February 2018 by BMJ Quality & Safety. The algorithm limited eligible patients to those at lower risk for care escalation, based on younger age and the presence of minimal comorbid conditions. Reviewers detected preventable adverse events in 41 of the 92 flagged cases: seven diagnostic errors (7.6% of flagged cases) and 34 care management-related events (37.0%), such as falls and hospital-associated infections. All the missed diagnoses, which included deep venous thrombosis, hemothorax, sepsis, and alcohol withdrawal, posed potential for serious harm.

That works out to a positive predictive value of 44.6%, which may seem low, but Dr. Singh pointed out that the field of diagnosis involves much uncertainty and disagreement, and these gray areas make it difficult to define and capture errors. “For safety work, the 45% predictive value is pretty high. Very few triggers are higher,” he said.
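For reference, the positive predictive value here is simply the share of trigger-flagged admissions in which reviewers confirmed a preventable adverse event, using the counts reported above:

```python
# Positive predictive value of the trigger, from the published counts.
flagged = 92               # admissions flagged by the trigger
confirmed = 41             # flagged admissions with a confirmed preventable adverse event
ppv = confirmed / flagged  # 41 / 92 ≈ 0.446
print(f"PPV = {ppv:.1%}")  # -> PPV = 44.6%
```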

Challenges

A major challenge of using triggers is that many health systems do not have a robust electronic data warehouse, the home for all electronic health record data, Dr. Singh noted. A second challenge is that patients often go to different institutions for, say, bloodwork, specialty care, and imaging, which fragments their data. “There's so much fragmentation of data, you can't really plot the diagnostic journey of the patient to understand where the errors are,” he said. A third quandary is what to actually do with the information once it's captured. “If you don't have a team of people in the hospital who do safety work and look at this data and use it for making meaningful improvement, it's no good,” said Dr. Singh.

Words of wisdom

Dr. Singh foresees that the inpatient trigger's utility will be in identifying opportunities for learning and improvement. “Validating this algorithm and publishing it is only the first step,” he said. “We think health systems should take information from our paper, build the algorithm, and use it to identify events. This should take just a few days if they have an IT programmer who can do it.”

Next steps

In 2017, Dr. Singh and his team received a three-year, $3.5 million grant to work with Geisinger Health System in Pennsylvania to develop a framework for assessing patient safety events as part of a new “Safer Dx Learning Lab.” The goal is to build a portfolio of multiple triggers that provide a hospital's quality and safety team with regular, automated reports of diagnostic safety events, he said. “Right now, no hospitals have this type of an advanced safety intelligence framework where you can harvest the data from the electronic health records and try to study the identified events and learn from them,” Dr. Singh said. “Creating a portfolio of electronic triggers could be a way forward.”