Recent Research

Early statin use in stroke, acute lung injury, and more.


Early statin use in stroke hospitalization improves survival

Using statins early in a patient's hospitalization for stroke is strongly associated with improved survival, a recent study found.

In an observational study, researchers analyzed records from 12,689 patients from 17 hospitals in the Kaiser Permanente Northern California System (KPNC) who were admitted with ischemic stroke between January 2000 and December 2007. Patients were included for analysis if they were older than 50 years and had no prior stroke during the search period. They were excluded if they had not been a member of the KPNC Health Plan for one year before the date of stroke admission, if the stroke occurred while the patient was hospitalized for another reason, or if prestroke outpatient statin use could not be determined. Outpatient statin use was considered positive if the patient had an active statin prescription at the time of admission, at least one statin prescription filled at a Kaiser pharmacy, and sufficient supply remaining to cover the time between when the prescription was filled and the date of hospital admission. Statin use during admission was determined via inpatient pharmacy databases. Patients were followed for a year from admission. The researchers used multivariable survival analysis and grouped-treatment analysis in order to avoid individual patient-level confounding. Results were published in the January 2012 Stroke.

Using statins before stroke hospitalization was associated with improved survival (hazard ratio [HR], 0.85; 95% CI, 0.79 to 0.93; P<0.001), as was using statins before and during hospitalization (HR, 0.59; 95% CI, 0.53 to 0.65; P<0.001). Patients who took a statin before their stroke and then underwent statin withdrawal in the hospital had a much higher risk of death (HR, 2.5; 95% CI, 2.1 to 2.9; P<0.001). Patients taking higher doses of statins (>60 mg/day) had a greater mortality benefit (HR, 0.43; 95% CI, 0.34 to 0.53; P<0.001) than those taking less than 60 mg/day (HR, 0.60; 95% CI, 0.54 to 0.67; P<0.001; test for trend P<0.001). Patients who started treatment with statins earlier in their hospitalization had better survival than those who started later in the hospitalization. According to grouped-treatment analysis, the association between statin use and survival can't be explained by patient-level confounding.

The hazard ratio for the cumulative one-year risk of death among patients with statin initiation in the hospital (0.55) was similar to that of patients taking a statin before and during hospitalization (0.59), which suggests the association between statin use and poststroke survival is mostly explained by use during hospitalization, the authors noted.

While current guidelines recommend statins for secondary stroke prevention, they don't address statin use during hospitalization of ischemic stroke patients, the authors noted. Given the strong association this study found between early in-hospital statin use and long-term survival, “it seems clinically prudent to treat patients with ischemic stroke with a statin from the beginning of stroke hospitalization,” they wrote. Further, clinicians should take care to avoid interrupting statins in patients who were taking them before hospitalization, they added.

Tool helps differentiate acute lung injury, cardiogenic pulmonary edema

A simple prediction score may help distinguish patients with acute lung injury from those with cardiogenic pulmonary edema in the early stages of illness.

Researchers at a tertiary care hospital in Rochester, Minn., created a model using a population-based, retrospective development cohort of 332 critically ill patients with acute pulmonary edema. Predictor variables available within six hours of onset of acute pulmonary edema were collected from electronic records by blinded investigators and tested. The variables included in the final score were: age younger than 45 years, history of heart failure, history of coronary artery disease, new ST segment changes/left bundle branch block, pneumonia, sepsis or pancreatitis, aspiration, alcohol abuse, chemotherapy, and SpO2/FiO2 ratio at six hours less than 235. Expert reviewers (blinded to the model) classified patients as having acute lung injury (ALI) or cardiogenic pulmonary edema (CPE) using all information available at the time of hospital discharge. Score performance was validated in an independent cohort of 161 patients. Results were published in the January 2012 CHEST.
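The study names the predictors that enter the score, but this summary does not give their point values. The sketch below is therefore purely illustrative: it shows how a bedside calculator could be structured from the variables listed above, using hypothetical +1/−1 placeholder weights that are not the published model's coefficients.

```python
# Illustrative only: the predictor list mirrors the variables named above, but the
# +1/-1 weights are hypothetical placeholders, not the published score's point values,
# and the grouping of items is for demonstration.
PLACEHOLDER_WEIGHTS = {
    "age_under_45": +1,
    "history_of_heart_failure": -1,
    "history_of_coronary_artery_disease": -1,
    "new_st_changes_or_left_bundle_branch_block": -1,
    "pneumonia_sepsis_or_pancreatitis": +1,
    "aspiration": +1,
    "alcohol_abuse": +1,
    "chemotherapy": +1,
    "spo2_fio2_ratio_under_235_at_6_hours": +1,
}

def ali_vs_cpe_score(findings: dict) -> int:
    """Sum placeholder weights for the predictors documented within six hours of onset."""
    return sum(w for name, w in PLACEHOLDER_WEIGHTS.items() if findings.get(name, False))

# Example: sepsis, aspiration and a low SpO2/FiO2 ratio give +3 with these placeholder
# weights, a picture leaning toward ALI rather than CPE.
example = {"pneumonia_sepsis_or_pancreatitis": True, "aspiration": True,
           "spo2_fio2_ratio_under_235_at_6_hours": True}
print(ali_vs_cpe_score(example))  # -> 3
```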

In the development cohort of 332 patients, expert reviewers classified 156 as ALI and 176 as CPE. In the validation cohort, 113 were ALI and 48 were CPE. The score demonstrated good discrimination (area under the curve [AUC]=0.81; 95% CI, 0.77 to 0.86) and calibration (Hosmer-Lemeshow [HL], P=0.16). In the validation cohort, the AUC was 0.80 (95% CI, 0.72 to 0.88) and the HL P value was 0.13. Model performance was similar when patients with both ALI and CPE were excluded from the logistic regression analysis (AUC=0.83; 95% CI, 0.78 to 0.88). A score calculator is available for download online, but the authors cautioned that it is preliminary, hasn't been validated externally, and shouldn't be used in patient care at this time.

If successfully validated, the prediction tool tested in this study could improve diagnostic assessment of patients with acute pulmonary edema early in the course of illness by using readily available clinical information, the authors noted. The tool is simple, quick, inexpensive and noninvasive; can be easily programmed within electronic medical records; and may “help acute care providers at the bedside, especially those who are at an earlier stage in their career,” the authors wrote.

SMRT-CO predicts early ICU transfer in CAP patients

The SMRT-CO score outperformed the CURB-65 as a method for predicting which pneumonia patients will require early transfer to the intensive care unit (ICU), according to a recent study.

The retrospective, case-control study involved 460 patients admitted to the general ward of a hospital between 2003 and 2009 with community-acquired pneumonia (CAP). Cases were the 115 patients who required transfer to the ICU within 48 hours of admission; controls, who had similar baseline characteristics, were those who did not. The patients' scores on the CURB-65 (confusion, urea level, respiratory rate, blood pressure and age ≥65) and SMRT-CO (systolic blood pressure, multilobar involvement on chest radiography, respiratory rate, tachycardia, confusion and oxygenation) were compared. The study results appeared in the November/December 2011 Journal of Hospital Medicine.
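For readers unfamiliar with how these tallies work, the sketch below shows the arithmetic for CURB-65 using the conventional cutoffs (one point each for confusion, urea >7 mmol/L, respiratory rate ≥30 breaths/min, systolic blood pressure <90 mm Hg or diastolic ≤60 mm Hg, and age ≥65 years). The thresholds come from the standard CURB-65 definition rather than from this particular study; SMRT-CO is tallied analogously from its own components and weights.

```python
def curb65(confusion: bool, urea_mmol_per_l: float, resp_rate: int,
           systolic_bp: int, diastolic_bp: int, age: int) -> int:
    """Tally the CURB-65 score: one point per criterion, maximum of 5."""
    score = 0
    score += confusion                                # new-onset confusion
    score += urea_mmol_per_l > 7                      # urea >7 mmol/L (BUN >19 mg/dL)
    score += resp_rate >= 30                          # respiratory rate >=30 breaths/min
    score += systolic_bp < 90 or diastolic_bp <= 60   # hypotension
    score += age >= 65                                # age >=65 years
    return score

# Example: an 80-year-old with a respiratory rate of 32 and normal blood pressure
# scores 2, below the >=3 cutoff the study used for CURB-65.
print(curb65(confusion=False, urea_mmol_per_l=6.0, resp_rate=32,
             systolic_bp=118, diastolic_bp=74, age=80))  # -> 2
```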

Overall, the researchers found that composite scores combining data from the emergency department and the general ward had better sensitivity and area under the curve (AUC) for predicting ICU transfer than scores using data from only one setting (P<0.001). The composite SMRT-CO score performed the best; a score of at least 2 had 76.5% sensitivity, 67.5% specificity and 0.81 AUC. In contrast, a CURB-65 score of at least 3 had 36.5% sensitivity, 86.3% specificity and 0.66 AUC (difference between the scores on sensitivity and AUC, P<0.001).

Based on the results, study authors concluded that a composite SMRT-CO score is a useful clinical tool for predicting early ICU transfers and intensive respiratory or vasopressor support in patients with CAP. The SMRT-CO is also easier to use than other severity scores, since it requires only chest radiography and pulse oximetry in addition to basic clinical information. If the score had been used to determine placement of the studied patients, three-quarters of those eventually transferred to the ICU would have been moved 24 to 28 hours earlier, the authors calculated.

However, 112 patients would have been transferred unnecessarily, and 27 of the patients who did end up requiring transfer would not have been transferred based on the SMRT-CO score. The high frequency of false-positive and false-negative results might limit the clinical utility of the score, the study authors noted, but no tools with very high positive or negative likelihood ratios for predicting ICU transfer have yet been developed. The authors concluded that while validation of their results in a prospective cohort is needed, a high composite SMRT-CO score might be used to alert clinicians to monitor patients closely for the next 48 hours or to support a clinician's decision to transfer a patient to the ICU.
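The reported false-positive and false-negative counts are consistent with the sensitivity and specificity quoted above. The short calculation below is a sketch based only on the figures reported in this summary (460 patients, 115 transferred within 48 hours) and reconstructs the implied two-by-two table.

```python
# Reconstruct the two-by-two table implied by the reported figures:
# 460 ward patients with CAP, 115 transferred to the ICU within 48 hours.
transferred = 115
not_transferred = 460 - transferred                   # 345

false_negatives = 27    # patients needing transfer whom a composite SMRT-CO >=2 would have missed
false_positives = 112   # patients flagged for transfer who never required it

true_positives = transferred - false_negatives        # 88
true_negatives = not_transferred - false_positives    # 233

sensitivity = true_positives / transferred            # 88/115
specificity = true_negatives / not_transferred        # 233/345

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
# -> sensitivity 76.5%, specificity 67.5%, matching the values reported above
```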

CABG guideline updates perioperative medication recommendations

A recently issued guideline for the management of patients undergoing coronary artery bypass graft surgery (CABG) addresses patient selection, the role of CABG versus percutaneous coronary intervention (PCI), and the use of aspirin and other antiplatelet therapies before and after surgery.

The guideline writers, representing the American College of Cardiology Foundation and the American Heart Association, noted that use of PCI has expanded and physicians have become more skilled at it, driving changes in the recommendations. The 2011 guideline notes that CABG to improve survival is recommended for patients with significant (≥50% diameter stenosis) left main coronary artery stenosis. However, PCI is a “reasonable alternative” to CABG in selected stable patients with left main coronary artery disease who have a low risk of PCI complications and an increased risk of adverse surgical outcomes. The guideline also notes that CABG to improve survival is beneficial in patients with significant (≥70% diameter) stenoses in three major coronary arteries (with or without involvement of the proximal left anterior descending [LAD] artery) or in the proximal LAD plus one other major coronary artery. Regarding management of angina, the guideline states that CABG or PCI to improve symptoms is beneficial in patients with one or more significant (≥70% diameter) coronary artery stenoses amenable to revascularization and unacceptable angina despite guideline-directed medical therapy.

Preoperative and postoperative antiplatelet therapy were reexamined because inhibition of platelet aggregation has improved and more drugs have become available since the last guideline was developed. Specifically, the 2011 guideline notes that aspirin should be administered to CABG patients preoperatively, and that clopidogrel and ticagrelor should be discontinued at least five days before elective CABG (or at least 24 hours before urgent CABG, if possible). Postoperatively, aspirin should be given within six hours of surgery (if it was not initiated preoperatively) and then continued indefinitely. Clopidogrel is a “reasonable alternative” in patients who are allergic to aspirin.

The guideline addresses numerous other issues, such as the appropriate choice of bypass graft conduit; the use of off-pump CABG versus traditional on-pump CABG; and CABG in specific patient subsets, such as those with diabetes.

The guideline recommends using a “heart team” approach in which the interventional cardiologist and the cardiac surgeon review the patient's condition, determine the pros and cons of each treatment option, and then present this information to the patient, allowing him or her to make a more informed decision.

The revised guideline was based on a formal literature review of studies published in the past 10 years. It appeared in the Dec. 6, 2011 issues of the Journal of the American College of Cardiology and Circulation: Journal of the American Heart Association.

Higher CHADS2 scores can indicate higher risk in patients on anticoagulants

Higher CHADS2 scores pointed to higher risk for adverse outcomes, including stroke, bleeding and death, in patients with atrial fibrillation receiving anticoagulation, according to a recent study.

Researchers performed a subgroup analysis of data from the RE-LY study, a randomized, controlled trial comparing dabigatran and warfarin in patients with atrial fibrillation, to determine the prognostic importance of the CHADS2 score for predicting thrombotic and bleeding complications in this population. The 18,112 study patients were receiving dabigatran, 150 mg or 110 mg twice daily, or open-label warfarin. The CHADS2 score, which assigns 1 point each for congestive heart failure, hypertension, age of at least 75 years and diabetes, and 2 points for prior stroke or transient ischemic attack, was assessed at baseline. The primary outcome was stroke or systemic embolism, and the primary safety outcome was major bleeding. Secondary outcomes included intracranial hemorrhage, vascular death and total death. The results of the study, which was funded by Boehringer Ingelheim, appeared in the Nov. 15, 2011 Annals of Internal Medicine.
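Because the point assignments are stated explicitly above, the CHADS2 tally is easy to show directly; the minimal sketch below implements exactly that arithmetic (1 point each for heart failure, hypertension, age ≥75 years and diabetes, and 2 points for prior stroke or TIA).

```python
def chads2(heart_failure: bool, hypertension: bool, age: int,
           diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """CHADS2: 1 point each for CHF, hypertension, age >=75 and diabetes; 2 points for prior stroke/TIA."""
    score = heart_failure + hypertension + (age >= 75) + diabetes
    score += 2 * prior_stroke_or_tia
    return score

# Example: a 78-year-old with hypertension and a prior stroke scores 4,
# placing the patient in the study's highest stratum (CHADS2 of 3 to 6).
print(chads2(heart_failure=False, hypertension=True, age=78,
             diabetes=False, prior_stroke_or_tia=True))  # -> 4
```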

Overall, 5,775 patients had a CHADS2 score of 0 to 1, 6,455 patients had a score of 2, and 5,882 patients had a score of 3 to 6. The annual rate of stroke or systemic embolism was 0.93% in patients with a CHADS2 score of 0 to 1, 1.22% in patients with a score of 2, and 2.24% in patients with a score of 3 to 6. Patients with higher CHADS2 scores also had higher annual rates of major bleeding (2.26%, 3.11%, and 4.42%, respectively) and vascular mortality, including death from bleeding (1.35%, 2.39%, and 3.68%, respectively) (P<0.001 for all comparisons). Rates of all adverse outcomes increased in both the warfarin and dabigatran groups as CHADS2 scores increased. Rates of stroke or systemic embolism were lower in patients taking 150 mg of dabigatran twice daily than in those taking warfarin, while rates of intracranial bleeding were lower in patients taking either dose of dabigatran than in those taking warfarin.

The authors cautioned that their study involved post hoc, exploratory analyses and that the levels of statistical significance must be interpreted carefully, among other limitations. However, they concluded that increasing CHADS2 scores were associated with increased risk for adverse outcomes, including death, in patients with atrial fibrillation receiving anticoagulation.

ACP guideline calls for assessment before VTE prophylaxis

Prophylaxis against venous thromboembolism (VTE) should be given to medical inpatients only after an assessment of the benefits and risks of treatment, according to a recent guideline from ACP.

The guideline is based on a systematic evidence review commissioned by the College. The review included only randomized trials of VTE prophylaxis in hospitalized adult medical and stroke patients. The primary outcome was total mortality up to 120 days after randomization. Reviewers concluded that heparin prophylaxis had no significant effect on mortality, and although it reduced pulmonary embolism in medical patients and all patients combined, it led to more bleeding and major bleeding events, resulting in little or no net benefit. They found no differences in benefits or harms according to type of heparin used, and mechanical prophylaxis provided no benefit and resulted in clinically important harm to patients with stroke.

Based on the evidence, the guideline included three components, all strong recommendations based on moderate-quality evidence. First, the College recommends assessment of the risk for thromboembolism and bleeding in medical (including stroke) patients prior to initiation of prophylaxis. Then, unless the assessed risk for bleeding outweighs the likely benefits, pharmacologic prophylaxis should be provided with heparin or a related drug. Finally, the guideline recommends against the use of mechanical prophylaxis with graduated compression stockings for prevention of VTE.

As a policy implication of the recommendations, the guideline notes that ACP does not support the application of performance measures that promote universal VTE prophylaxis regardless of risk in medical (including stroke) patients. An editorial, published with the guideline in the Nov. 1, 2011 Annals of Internal Medicine, notes that The Joint Commission's current performance measure for VTE prophylaxis could implicitly encourage prophylaxis for all patients, in contrast with not only the ACP guideline, but also recommendations from the American College of Chest Physicians and the National Quality Forum, which specify that risk assessment should guide decisions regarding prophylaxis.

The editorialists call for all national performance measures to be based on high-quality evidence and to be applied only to patient groups in which benefit is clear. Measures should also allow for exceptions and multiple pathways to fulfillment, and should have the capability to be programmed into flexible, electronic, clinical decision-support rules, the editorial said.

Comprehensive geriatric assessment improves survival in older adults, study finds

Comprehensive geriatric assessment of older adults appears to improve hospital survival, a recent study indicates.

Researchers performed a review and meta-analysis of randomized, controlled trials comparing comprehensive geriatric assessment with usual care of adults at least 65 years of age who were admitted to the hospital because of an emergency. Comprehensive geriatric assessment was defined as “a multidimensional interdisciplinary diagnostic process used to determine the medical, psychological, and functional capabilities of a frail elderly person to develop a coordinated and integrated plan for treatment and long term follow-up.” It could be performed by mobile teams in general medicine units or in units designated for the purpose. The primary outcome measure was living at home, while secondary outcome measures included death, dependence, deterioration, residential care, activities of daily living, cognitive status, length of stay, readmissions and resource use. The study results were published online Oct. 27 by BMJ.

The analysis included 22 trials involving 10,315 participants from six countries. Patients who received comprehensive geriatric assessment were more likely than those who received usual care to be living at home after scheduled follow-up. Their odds ratios were 1.16 (P=0.003) at a median of 12 months and 1.25 (P<0.001) at a median of six months. Patients who received comprehensive geriatric assessment were also less likely to be living in residential care (odds ratio, 0.78; P<0.001) and to die or have a deteriorating condition (odds ratio, 0.76; P=0.001), and were more likely to have an improvement in cognition (standardized mean difference, 0.08; P=0.02). Benefit seemed to be derived from geriatric units rather than from geriatric teams, the authors said.

The authors noted that they were unable to compare different geriatric assessment models or care provided in acute and post-acute units, among other limitations. However, they concluded that more older patients are likely to survive hospital admission and return home if they receive comprehensive geriatric assessment during their hospital stay. They noted that the benefits found in their analysis seemed to arise solely from trials of geriatric wards and were not seen for geriatric consult teams on general wards. The authors called for further evaluation of which patients would benefit most and which types of geriatric assessment are most effective.

The authors of an accompanying editorial also called for more research but said the current study clearly indicates that comprehensive geriatric assessment should be standard practice for hospitalized older adults. Systems of care will need to be redesigned to accommodate such assessments, they said, and costs might initially increase. “However, in the longer term comprehensive geriatric assessment not only improves patient outcomes but may save costs by reducing hospital readmissions and lowering the need for long term nursing home care,” the editorialists wrote.

Patient satisfaction with hospitalists matches results for PCPs

Patients were about equally satisfied when they were cared for by hospitalists or primary care physicians, a recent study found.

The study used data from random telephone interviews of medicine service patients discharged from three Massachusetts hospitals between 2003 and 2009. More than 8,000 patients were surveyed about their satisfaction—more than half cared for by primary care physicians (PCPs) and the rest by hospitalists. The results were released early online by the Journal of Hospital Medicine on Oct. 31.

Overall, satisfaction scores were slightly higher for PCPs than hospitalists (4.24 out of 5 vs. 4.20; P=0.04), but the difference cannot be considered clinically significant, the study authors concluded. When results were separated by individual hospital and hospitalist group, no statistical differences were found. On the subcategories of satisfaction with physician behavior, pain management and communication, hospitalists and PCPs scored about the same (all P values >0.23). In addition, similar numbers of hospitalists and PCPs scored in the highest satisfaction categories (79.2% vs. 80.5%; P=0.17) and lowest (5.1% vs. 4.5%; P=0.19). Satisfaction with both groups improved significantly and equivalently over the course of the study.

The findings are reassuring, the authors concluded, since previous studies have shown that patients value continuity of care highly, yet few have examined whether the discontinuity of hospitalist care negatively affects patient satisfaction. The results of this first investigation (involving 59 hospitalists from five programs) may indicate that other aspects of care—such as good communication—matter more to patient satisfaction than continuity. The study was limited by its single geographic area, low rate of patient response (similar to that of other post-discharge surveys) and the possibility of confounding by an unidentified factor, the authors acknowledged.

Studies like this one may no longer be possible given the growth of the hospitalist field, the authors noted. They suggested that future research in this area could survey patients cared for by hospitalists to identify specific ways to maximize their satisfaction.