Adding antiplatelets to dabigatran, warfarin elevates major bleeding risk
The risk of major bleeding is increased when antiplatelets are added to either dabigatran or warfarin, a subgroup analysis confirmed.
Researchers performed a post-hoc analysis of an earlier trial showing that a 150-mg dose of dabigatran was superior and a 110-mg dose was non-inferior to warfarin for preventing stroke and systemic embolism in atrial fibrillation patients. In the subgroup analysis, researchers compared the safety and efficacy of the 110-mg and 150-mg doses of dabigatran to warfarin in subgroups of patients with and without concomitant aspirin or clopidogrel treatment. Results were published in the Feb. 5 Circulation.
Nearly 7,000 of the original study's 18,113 patients received an antiplatelet at some point during the study: 812 took both aspirin and clopidogrel, 5,789 took aspirin alone, and 351 took clopidogrel alone. A history of prior myocardial infarction or coronary artery disease, hypertension, paroxysmal atrial fibrillation, male sex and diabetes were more common among patients who took antiplatelets.
Use of concomitant antiplatelets was associated with a higher rate of major bleeding (4.4% vs. 2.6%), regardless of whether patients took warfarin or either dose of dabigatran (overall hazard ratio [HR], 2.01; 95% CI, 1.79 to 2.25). The absolute risk of bleeding was lowest with the 110-mg dose of dabigatran (3.9% per year), followed by the 150-mg dose of dabigatran (4.4% per year) and warfarin (4.8% per year) in those who also took antiplatelets (P=0.05 for 110-mg dabigatran vs. warfarin; P=0.38 for 150-mg dabigatran vs. warfarin).
Risk of major bleeding was higher among patients taking dual antiplatelets vs. those taking single antiplatelets (HR, 2.31 vs. 1.60; P<0.001 for trend in all treatment groups), with absolute risk lowest with 110-mg dabigatran. Similar trends were seen with major, minor and extracranial bleeding, but not intracerebral hemorrhage, which had a low event rate.
Tandem use of antiplatelets and anticoagulants is common, the authors noted, and this analysis suggests the relative risk of bleeding is similar for dabigatran and warfarin. The risk rose about 60% with one antiplatelet and more than doubled with two, they noted. Because absolute (not relative) rates of bleeding seemed lower with 110 mg of dabigatran, however, this dose may be preferable “in patients in whom bleeding risk is of concern, such as those requiring dual antiplatelet therapy,” the authors wrote.
Editorialists wrote that the 110-mg dose of dabigatran might be a safer alternative in patients who require low-dose aspirin in particular. Regardless, treatment with both agents “needs to be highly personalized, taking into account the thrombotic and bleeding risk of each individual patient,” the editorialists wrote.
Separately, the U.S. Food and Drug Administration warned in late December 2012 that patients with mechanical heart valves should not use dabigatran to prevent stroke or blood clots. Researchers recently stopped a trial of mechanical heart valve patients in Europe because the patients taking dabigatran were more likely than those taking warfarin to experience strokes, heart attacks and blood clots that formed on the valves. They also had more bleeding after valve surgery. Physicians should transition all patients with mechanical heart valves who take dabigatran to another medication, the alert said.
Selective use of D-dimer identified DVTs with less testing
More selective use of D-dimer testing allowed physicians to safely and efficiently diagnose first episodes of deep venous thrombosis (DVT), a study found.
The randomized, controlled trial included more than 1,500 patients who presented to Canadian hospitals with symptoms of DVT. Physicians used the nine-point Wells clinical prediction rule to assess whether patients' clinical pretest probability of DVT was low, moderate or high. Patients were then randomized either to a uniform-testing group, in which all patients received D-dimer testing (negative D-dimer defined as <0.5 µg/mL) plus ultrasonography, or to a selective-testing group, in which pretest probability determined which tests were performed.
In the latter group, outpatients who had a low pretest probability and a D-dimer level below 1.0 µg/mL had DVT excluded as their diagnosis. For outpatients with a moderate pretest probability, the D-dimer cutoff was 0.5 µg/mL. Patients who scored below either of these levels did not receive ultrasonography. Outpatients with high pretest probability and all inpatients were not given D-dimer tests and instead all received ultrasonography. Patients were followed for three months, and results were published in the Jan. 15 Annals of Internal Medicine.
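The selective-testing arm's decision rules can be sketched as a short function; the function name, argument names, and string labels below are illustrative, not from the study:

```python
def selective_dvt_workup(setting, pretest_probability, d_dimer_ug_per_ml):
    """Sketch of the trial's selective-testing arm for a FIRST suspected DVT.

    setting: 'outpatient' or 'inpatient'.
    pretest_probability: 'low', 'moderate', or 'high' (from the Wells rule).
    Returns the next diagnostic step.
    """
    if setting == "inpatient" or pretest_probability == "high":
        # Inpatients and high-probability outpatients skipped D-dimer
        # testing entirely and went straight to ultrasonography.
        return "ultrasonography"
    # Outpatients: the D-dimer cutoff depended on pretest probability
    # (1.0 ug/mL for low probability, 0.5 ug/mL for moderate).
    cutoff = 1.0 if pretest_probability == "low" else 0.5
    if d_dimer_ug_per_ml < cutoff:
        return "DVT excluded"
    return "ultrasonography"
```

For example, a low-probability outpatient with a D-dimer of 0.8 µg/mL would have DVT excluded without imaging, while the same result in a moderate-probability outpatient would prompt ultrasonography.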
At three months, the selective and uniform testing groups had equal incidence of symptomatic VTE: 0.5% (difference between groups, 0 percentage points; 95% CI, −0.8 to 0.8 percentage point). Selective testing reduced the proportion of patients getting D-dimer tests by 21.8 percentage points (95% CI, 19.1 to 24.8 percentage points) and ultrasonography by 7.6 percentage points (95% CI, 2.9 to 12.2 percentage points). Outpatients with low pretest probability had a particularly steep drop in ultrasonography: 21 percentage points (95% CI, 14.2 to 27.6 percentage points).
Study authors concluded that the selective testing was as safe as and more efficient than uniform testing and resulted in a similar number of patients being diagnosed with VTE during testing. They noted that none of the patients with D-dimer levels between 0.5 and 1.0 µg/mL were diagnosed with VTE during the study and that a very small percentage of the high-risk patients in the control group had DVT excluded by D-dimer testing (15% of outpatients with high pretest probability and 2% of inpatients).
They cautioned that the results may not be generalizable to patients with a history of DVT or to other D-dimer tests but called for research on using selective testing in patients who present with suspected recurrences. For first suspected episodes of DVT, they concluded, the results support basing testing choices on pretest probability: “D-dimer testing should be avoided in outpatients with high pretest probability and in all inpatients.”
Score identifies patients who might benefit from acid-suppressive medication to prevent nosocomial GI bleeding
Researchers have created a scoring system, based on risk factors for nosocomial gastrointestinal (GI) bleeding, to identify non-critically ill hospitalized patients who may benefit from acid-suppressive medication.
Researchers conducted a cohort study using adult patients admitted to an academic medical center from 2004 through 2007 to determine the incidence of nosocomial GI bleeding occurring outside of the intensive care unit. Of the 75,723 patients in the cohort, 80% were randomly assigned to a derivation set (n=60,578) and 20% were randomly assigned to a validation set (n=15,145). Results appeared in the Jan. 13 Journal of General Internal Medicine.
Nosocomial GI bleeding occurred in 203 (0.2%) admissions. Independent risk factors for bleeding included age older than 60 years (odds ratio [OR], 2.2; 95% CI, 1.5 to 3.2), male sex (OR, 1.6; 95% CI, 1.2 to 2.2), liver disease (OR, 2.1; 95% CI, 1.3 to 3.3), acute renal failure (OR, 1.9; 95% CI, 1.3 to 2.7), sepsis (OR, 1.6; 95% CI, 1.03 to 2.4), being on a medicine service (OR, 2.7; 95% CI, 1.8 to 4.1), prophylactic anticoagulants (OR, 1.7; 95% CI, 1.2 to 2.4), and coagulopathy (ORs varied).
A risk score for each patient was derived by totaling the points assigned to each risk factor. Risk factors included:
- age more than 60, 2 points;
- male, 2 points;
- acute renal failure, 2 points;
- liver disease, 2 points;
- sepsis, 2 points;
- prophylactic anticoagulation, 2 points;
- coagulopathy, 3 points; and
- medicine service, 3 points.
The risk scoring system identified a high-risk group of patients (score ≥12) in which the number needed to treat with acid-suppressive medication to prevent one bleeding event was 48. The researchers noted that the risk model derived from these factors may help clinicians direct acid-suppressive medication to those most likely to benefit.
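The point values above can be totaled with a minimal calculator like the following; the dictionary keys are invented for this sketch, while the point values and the ≥12 high-risk cutoff are from the article:

```python
# Illustrative calculator for the study's nosocomial GI bleeding risk score.
POINTS = {
    "age_over_60": 2,
    "male": 2,
    "acute_renal_failure": 2,
    "liver_disease": 2,
    "sepsis": 2,
    "prophylactic_anticoagulation": 2,
    "coagulopathy": 3,
    "medicine_service": 3,
}

def gi_bleed_risk_score(risk_factors):
    """Sum the points for each risk factor present (an iterable of POINTS keys)."""
    return sum(POINTS[f] for f in risk_factors)

def high_risk(risk_factors):
    """A score of 12 or more defined the high-risk group (NNT of 48)."""
    return gi_bleed_risk_score(risk_factors) >= 12
```

For instance, a man older than 60 on a medicine service with coagulopathy scores 2 + 2 + 3 + 3 = 10 and falls below the high-risk cutoff; adding sepsis brings the score to 12.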
“Rather than employing a one-size-fits-all approach, our study provides guidance for clinicians in targeting acid-suppressive therapy to those non-ICU-based patients who stand to benefit most, while avoiding the unnecessary cost and risk associated with this therapy in those with extremely low risk of bleeding,” the researchers wrote.
More information on medication changes may improve post-discharge adherence in patients with stroke
Providing more details to primary care physicians (PCPs) about medication changes during hospitalization can help improve adherence after discharge, according to a recent study.
Researchers performed an open, prospective, interventional two-phase study at a clinic in Germany to examine adherence to discharge medication in patients with ischemic stroke. Adherence was evaluated before and after implementation of a systematic intervention administered by a clinical pharmacist. Patients were included in the study if they had a transient ischemic attack or ischemic stroke and were taking at least two drugs during their hospital stay and at discharge. The first phase of the study, involving the control group, took place from January 2011 to June 2011. The second phase, involving the intervention group, took place from October 2011 to March 2012.
Patients in the control group received a letter at discharge meant to inform their PCP about their main diagnosis, any diagnostic findings, laboratory test results, complications and medications. Patients in the intervention group received a letter in which a clinical pharmacist listed medications at both admission and discharge and detailed the reasons behind all changes that occurred during the hospital stay, including reasons for any new drugs, discontinued drugs, and modifications, particularly antithrombotic drugs and simvastatin. After three months, patients' PCPs were interviewed about patients' current medication lists to evaluate adherence to the medications included in the discharge letter, defined as continued therapy from discharge to three months. The study results were published in the February Stroke.
A total of 312 patients, 156 in each group, were included in the study. The mean age was 70.7 years in the control group and 72.3 years in the intervention group, and slightly over half of the patients in each group were men. Overall adherence to the medications in the discharge letter increased from 83.3% in the control group to 90.9% in the intervention group (P=0.01). Adherence to antithrombotic drugs and statin therapy both differed significantly between the control and intervention groups (83.8% vs. 91.9% and 69.8% vs. 87.7%; P=0.033 and P<0.001, respectively).
The authors stated that medication adherence after discharge appears to be better when more information about medication changes is provided. They speculated that PCPs' adherence to discharge medications was better because they were given the rationale behind the changes made during hospitalization. They specifically noted the difference in statin therapy between groups, with fewer discontinuations or dosage reductions, writing that the higher adherence rate in the intervention group reflected physicians' improved awareness of the benefits of statins after a cerebrovascular event. “Providing detailed information on medication changes can lead to substantially improved adherence to discharge medication, probably resulting in better secondary stroke prevention,” the authors concluded.
More restrictive blood transfusion threshold may be better for upper GI bleeds
A blood transfusion threshold of 7 g/dL of hemoglobin significantly improved outcomes in patients with acute upper gastrointestinal bleeding compared to 9 g/dL, a study found.
Researchers randomly assigned 444 patients with severe acute upper gastrointestinal bleeding to a restrictive transfusion strategy (transfusion when hemoglobin fell below 7 g/dL with a target range post-transfusion of 7 to 9 g/dL) and 445 patients to a liberal strategy (transfusion when hemoglobin fell below 9 g/dL with a target range post-transfusion of 9 to 11 g/dL). Safety and efficacy of both strategies were compared.
Results appeared in the Jan. 3 New England Journal of Medicine.
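The two transfusion triggers compared in the trial can be summarized in a small sketch; the strategy labels and function name are illustrative, while the thresholds and post-transfusion targets (in g/dL) are the trial's:

```python
# The trial's hemoglobin thresholds and post-transfusion target ranges (g/dL).
STRATEGIES = {
    "restrictive": {"transfuse_below": 7.0, "target": (7.0, 9.0)},
    "liberal": {"transfuse_below": 9.0, "target": (9.0, 11.0)},
}

def should_transfuse(strategy, hemoglobin_g_dl):
    """Return True if hemoglobin has fallen below the strategy's trigger."""
    return hemoglobin_g_dl < STRATEGIES[strategy]["transfuse_below"]
```

A patient with a hemoglobin of 7.5 g/dL, for example, would be transfused under the liberal strategy but not under the restrictive one.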
A total of 225 patients assigned to the restrictive strategy did not receive transfusions compared with 65 assigned to the liberal strategy (51% vs. 15%; P<0.001). The restrictive-strategy group had a higher survival rate at six weeks compared to the liberal-strategy group (95% vs. 91%; hazard ratio [HR] for death with restrictive strategy, 0.55; 95% CI, 0.33 to 0.92; P=0.02).
Deaths attributed to unsuccessfully controlled bleeding occurred in three patients in the restrictive-strategy group and in 14 patients in the liberal-strategy group (0.7% vs. 3.1%; P=0.01). Complications of treatment were the cause of death in one patient in the restrictive-strategy group and two in the liberal-strategy group. Hemorrhage was controlled and death was due to associated diseases in 19 patients in the restrictive-strategy group and 25 in the liberal-strategy group.
Less bleeding occurred in the restrictive-strategy group compared with the liberal-strategy group (10% vs. 16%; P=0.01), and there were fewer adverse events (40% vs. 48%; P=0.02). The rate of further bleeding remained significantly lower with the restrictive strategy after adjustment for baseline risk factors (HR, 0.68; 95% CI, 0.47 to 0.98). Length of hospital stay was shorter in the restrictive-strategy group than in the liberal-strategy group.
Patients who had bleeding associated with a peptic ulcer had a slightly higher, although not statistically significant, probability of survival with the restrictive strategy than with the liberal strategy (HR, 0.70; 95% CI, 0.26 to 1.25). Patients with cirrhosis and Child-Pugh class A or B disease in the restrictive-strategy group had a significantly higher probability of survival (HR, 0.30; 95% CI, 0.11 to 0.85), but those with cirrhosis and Child-Pugh class C disease did not (HR, 1.04; 95% CI, 0.45 to 2.37).
Researchers noted that improvement in survival rates observed with the restrictive transfusion strategy “was probably related to a better control of factors contributing to death, such as further bleeding, the need for rescue therapy, and serious adverse events. All these factors were significantly reduced with the restrictive strategy.”
Combining diuretics, antihypertensives and NSAIDs may pose risk of kidney injury
Combining diuretics with angiotensin-converting enzyme (ACE) inhibitors or angiotensin-receptor blockers (ARBs) and nonsteroidal anti-inflammatory drugs (NSAIDs) increased risk of acute kidney injury, a study found.
Researchers conducted a nested, case-control study of data from primary care records in the U.K. that identified 487,372 people who received antihypertensive drugs from 1997 through 2008. Patients were tracked for a mean of 5.9 ± 3.4 years, generating more than 3 million person-years of follow-up. During this time, 2,215 were diagnosed with acute kidney injury that prompted hospital admission or dialysis (7 in 10,000 person-years).
Study results were published Jan. 8 by BMJ.
Double therapy, combining a diuretic, ACE inhibitor or ARB with an NSAID, was not associated with an increased rate of acute kidney injury. However, triple therapy, a diuretic plus an ACE inhibitor or ARB plus an NSAID, was associated with a higher rate of kidney injury (rate ratio [RR], 1.31; 95% CI, 1.12 to 1.53). The risk was particularly elevated in the first 30 days of treatment (RR, 1.82; 95% CI, 1.35 to 2.46) and progressively decreased, becoming nonsignificant after more than 90 days of use (RR, 1.01; 95% CI, 0.84 to 1.23; P<0.001 for interaction).
The authors wrote, “Given that NSAIDs are widely used (40-60% as lifetime prevalence in the general population) and that a greater incidence rate of acute kidney injury was estimated among antihypertensive drugs users than in the general population, increased vigilance may be warranted when diuretics and angiotensin converting enzyme inhibitors or angiotensin receptor blockers are used concurrently with NSAIDs. In particular, major attention should be paid early in the course of treatment, and a more appropriate use and choice among the available anti-inflammatory or analgesic drugs could therefore be applied in clinical practice.”
An accompanying editorial noted that the study's confidence intervals were wide, that over-the-counter NSAID use could be unreported, that doctors who monitored for this effect may have stopped treatment before kidney injury occurred, and that drug-associated acute kidney injury is often a complication of other illnesses. Clinicians should talk to patients about risks and be vigilant for drug-associated acute kidney injury, the editorial stated, because “The jury is still out on whether double drug combinations are indeed safe.”
For TBI, intracranial-pressure monitoring shows no benefit over imaging, clinical exam
Intracranial-pressure monitoring in patients with severe traumatic brain injury (TBI) isn't superior to treatment based on clinical examination and imaging, a study found.
Researchers randomized 324 patients aged 13 years or older with severe TBI who were in intensive care units in Ecuador or Bolivia to one of two protocols. The first was guidelines-based management using a protocol for monitoring intraparenchymal intracranial pressure (pressure-monitoring group); the other was care based on imaging and clinical examination (imaging-exam group). The main composite outcome comprised survival time, impaired consciousness, and functional status at three and six months plus neuropsychological status at six months. Researchers measured performance across 21 measures of functional and cognitive status and calculated a percentile (with 0 as worst and 100 as best performance) to determine the outcome.
There was no significant difference between groups in the main outcome (score of 56 in the pressure-monitoring group and 53 in the imaging-exam group; P=0.49). There was also no difference in mortality at six months (39% in the pressure-monitoring group and 41% in the imaging-exam group) or in median length of stay in the ICU (12 days in the pressure-monitoring group vs. nine in the other; P=0.25). The number of days of brain-specific treatments, such as use of hyperventilation, in the ICU was higher in the imaging-exam group (4.8 vs. 3.4, P=0.002). The distribution of serious events was similar between groups. Results were published in the Dec. 27, 2012, New England Journal of Medicine.
Because the study was done in Bolivia and Ecuador, where prehospital resuscitation is less developed, more severely injured patients may not have survived long enough to reach the hospital; thus the study's ICU patients may have had less severe injuries than comparable ICU patients in wealthier countries, the authors noted. Yet early outcome curves in the study appeared consistent with what would be expected in ICUs in wealthier countries, they noted.
The authors stressed they were not questioning the value of knowing precise intracranial pressure or of treating severe TBI aggressively; rather, the methods of monitoring and treatment need to be reassessed, they wrote. On the whole, the results don't support the theory that management of severe TBI patients by intracranial-pressure monitoring is superior to management by neurologic examination and serial CT imaging, they concluded.
Acute kidney injury requiring dialysis has increased in the U.S.
Acute kidney injury (AKI) requiring dialysis has rapidly increased in the U.S. over the past decade, a study recently reported.
Researchers used ICD-9 codes from the Nationwide Inpatient Sample to find cases of dialysis-requiring AKI occurring from 2000 to 2009. The goal of the study was to estimate the incidence of this disorder in the U.S. population and determine whether specific subgroups were at higher risk. The researchers also wanted to see whether a change in rates of severe AKI could be related to changes in demographics or to predisposing conditions and interventions. The study results were published Dec. 6, 2012, by the Journal of the American Society of Nephrology.
Overall, the number of hospitalizations involving AKI with dialysis increased from 63,000 in 2000 to almost 164,000 in 2009. Incidence of AKI requiring dialysis increased from 222 to 533 cases per million person-years in the same time period, translating to an average increase of 10% per year (incidence rate ratio, 1.10; 95% CI, 1.10 to 1.11 per year). Incidence of dialysis-requiring AKI appeared to be higher in older patients, men, and African-Americans. In hospitalized patients, approximately one-third of the increase over time was related to temporal changes in the population distribution of race, age and sex and to trends in sepsis, acute heart failure, and use of cardiac catheterization and mechanical ventilation. In 2000, 18,000 deaths were associated with dialysis-requiring AKI; in 2009, this number increased to almost 39,000.
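The reported incidence rate ratio can be sanity-checked as compound growth; a back-of-the-envelope sketch projecting the 2000 incidence forward nine years at 10% per year lands close to the observed 2009 figure:

```python
# Back-of-the-envelope check of the reported ~10%/year growth in incidence.
incidence_2000 = 222   # cases per million person-years in 2000
irr = 1.10             # incidence rate ratio per year
years = 9              # 2000 -> 2009

projected_2009 = incidence_2000 * irr ** years
print(round(projected_2009))  # ~523, close to the observed 533
```

The small gap between the projection and the observed 533 reflects rounding of the point estimate, whose confidence interval (1.10 to 1.11 per year) comfortably covers the observed growth.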
The authors noted that they were not able to completely explain why the incidence of AKI requiring dialysis increased over the study period, and they acknowledged that factors not accounted for in their analysis may have been partially responsible. More liberal use of acute dialysis over time may have also played a role, they wrote. They concluded that according to their findings, AKI requiring dialysis and associated deaths increased substantially in the U.S. from 2000 to 2009. “More research is needed to address reasons for underlying disparities among sex, age, and racial groups and causes behind the rapid increase in the incidence of dialysis-requiring AKI,” they wrote.
High-frequency oscillatory ventilation may increase death in ARDS patients
Acute respiratory distress syndrome (ARDS) patients were more likely to die if they underwent high-frequency oscillatory ventilation (HFOV) than a control ventilation strategy, a new study found.
Previous trials of HFOV, which delivers very small tidal volumes at very high rates, suggested it improved survival and oxygenation, but the trials were small and the control group strategies were outdated. For the current study, researchers randomly assigned moderate-to-severe ARDS patients at 39 ICUs in five countries to receive either HFOV or a control ventilation strategy. Both strategies targeted lung recruitment; the control strategy used low tidal volumes and high positive end-expiratory pressure (PEEP) toward this aim. The primary study outcome was in-hospital death. Results were published in the Jan. 22 New England Journal of Medicine.
The trial was stopped, because of a strong signal of increased mortality with HFOV, after 548 of a planned 1,200 patients had undergone randomization. In-hospital mortality was 47% in the HFOV group and 35% in the control group (relative risk, 1.33; P=0.005), a finding independent of baseline abnormalities in oxygenation or respiratory compliance. The intervention group had undergone HFOV for a median of three days, and 12% of control-group patients received HFOV for refractory hypoxemia.
HFOV patients got higher doses of midazolam (199 mg/d) than those in the control group (141 mg/d, P<0.001). As well, more patients in the HFOV group received neuromuscular blockers (83% vs. 68%, P<0.001) and vasoactive drugs (91% vs. 84%, P=0.01). The HFOV group also received vasoactive drugs for a longer time than control patients (five days vs. three days, P=0.01).
It's possible the results differ from previous studies because the older studies used control strategies that are now known to be harmful, the authors wrote. On the whole, the study results “raise serious concerns” about using HFOV to manage ARDS and “increase the uncertainty about possible benefits of HFOV even when applied in patients with life-threatening refractory hypoxemia,” they wrote.
A second, multicenter study in the Jan. 22 NEJM found no significant difference in 30-day mortality between patients who received HFOV (41.7% death rate) and those who received usual ventilatory care per local practice (41.1%). All patients had a Pao2/Fio2 of ≤200 mm Hg and an expected ventilation duration of at least two days.
The difference in study outcomes may be due to the fact that “the hemodynamic compromise associated with HFOV that was induced by high mean airway pressures was minimal in the [second] trial as compared to the [first], perhaps owing to lower applied ventilator pressures in the [second],” editorialists wrote. While the results may apply more to the specific HFOV protocols studied in these trials than to HFOV as a whole, clinicians should still be cautious about using HFOV routinely in ARDS patients until more research can be done, they concluded.
Interrupting rivaroxaban or warfarin confers similar stroke risk in afib patients
Stroke risk in atrial fibrillation patients after temporary interruption of anticoagulation was similar whether rivaroxaban or warfarin was used, a study found.
Researchers performed a post hoc analysis of data from the ROCKET AF (Rivaroxaban Once-Daily, Oral, Direct Factor Xa Inhibition Compared With Vitamin K Antagonism for Prevention of Stroke and Embolism Trial in Atrial Fibrillation) trial. They examined the study's 14,143 patients for stroke or non-central nervous system (CNS) embolism within 30 days of temporary interruptions of rivaroxaban or warfarin lasting three days or more. Researchers also looked at outcomes after early permanent drug discontinuation (for inability to tolerate anticoagulation drugs) and after end-of-study transition to open-label therapy. All patients had nonvalvular atrial fibrillation (AF) and an elevated risk of stroke.
For temporary interruptions, the investigators were told to stop the warfarin or placebo tablets four days before elective procedures and the rivaroxaban or placebo tablets two days before elective procedures. Temporary interruption patients were evaluated for events that occurred from three days after drug interruption to three days after drug resumption. Patients with early permanent drug discontinuation, as well as end-of-study patients, were evaluated for events from three to 30 days after discontinuation. Results were published in the Feb. 12 Journal of the American College of Cardiology.
Rates of stroke and non-CNS embolism were similar after temporary interruptions and early discontinuation of rivaroxaban and warfarin. Patients who took rivaroxaban during the study and then transitioned to open-label therapy at study's end had more strokes than end-of-study patients who took warfarin during the study (6.42 vs. 1.73 per 100 patient-years; hazard ratio, 3.72; P=0.004) and also took longer to reach a therapeutic international normalized ratio (INR). Rates of all thrombotic events (stroke, non-CNS embolism, myocardial infarction and vascular death) within 30 days of stopping any drug were similar.
Clinicians and patients should be aware of the risks when rivaroxaban or warfarin is stopped, either temporarily or permanently, the authors wrote. For temporary interruptions, the study results “support careful assessment of continued anticoagulation coverage in these moderate-to-high risk AF patients,” they wrote. It's not clear whether bridging anticoagulation is beneficial, they continued, but “it seems wise to minimize the period of discontinuation.” Also, the results suggest paying careful attention to timely anticoagulation coverage in patients transitioning from one anticoagulant to another, they wrote.
Though the authors imply the post-study excess risk in rivaroxaban patients is the result of inadequate vitamin K antagonist therapy, evidence of this “remains circumstantial,” an editorialist wrote. INRs weren't collected as carefully post-trial as pre-trial, and the authors didn't provide information on bridging therapies, he noted. Also, these patients had higher bleeding risk, which is inconsistent with undercoagulation, he wrote. What is clear, he concluded, is “bad things will happen to high-risk [atrial fibrillation] patients if they are left untreated with effective anticoagulant therapy for sustained periods…and it does not take much time for those events to begin to accumulate.”
Quadruple dose of influenza vaccine may offer HIV patients better protection
HIV-infected patients who received a quadruple dose of seasonal influenza vaccine had a higher antibody response and greater seroconversion rate than did those who received a standard dose, with similar rates of adverse events, a study found.
Researchers conducted a randomized, double-blind, controlled trial at the Hospital of the University of Pennsylvania in Philadelphia from October 2010 to March 2011. In the study, 190 adults with HIV were randomly assigned to receive either a standard dose (15 µg of antigen per strain, n=93) or a high dose (60 µg/strain, n=97) of the influenza trivalent vaccine. Seroprotection was defined as antibody titers of 1:40 or greater on the hemagglutination inhibition assay 21 to 28 days after vaccination.
Results appeared in the Jan. 1 Annals of Internal Medicine.
Seroprotection rates after vaccination were higher in the high-dose group for all three flu strains, although the difference did not reach statistical significance for H3N2: H1N1 (96% vs. 87%; treatment difference, 9 percentage points; 95% CI, 1 to 17 percentage points; P=0.029), H3N2 (96% vs. 92%; treatment difference, 3 percentage points; 95% CI, −3 to 10 percentage points; P=0.32), and influenza B (91% vs. 80%; treatment difference, 11 percentage points; 95% CI, 1 to 21 percentage points; P=0.030).
There was no significant difference in the local or systemic reactions between the two groups and no serious adverse events related to vaccine administration. The most frequent local adverse events were pain and tenderness at the injection site. The most frequent systemic adverse effect was myalgia, followed by malaise and headache. However, researchers noted that their study did not assess the effectiveness of the vaccine in preventing clinical influenza, which would require more study participants.
“A strategy with a single HD [high-dose] immunization is much easier to implement than a multiple-dose schedule,” the authors wrote. “Although a higher dose is 1 route to the protection of this vulnerable population, other strategies may also be explored in the future, such as alternative vaccines, the use of adjuvants, or new schedule strategies.”
Pneumonia inpatients may have higher risk of cardiac arrhythmia
About 12% of hospitalized pneumonia patients had a new diagnosis of cardiac arrhythmia within 90 days of admission, a new study found.
Researchers conducted a cohort study using national Veterans Affairs (VA) data from 32,689 patients who were at least 65 years old and had been hospitalized with pneumonia in fiscal years 2002-2007. All patients had received antibiotics within 48 hours of admission, had no prior diagnosis of a cardiac arrhythmia, and had at least a year of VA care. Researchers followed up for 90 days after admission and identified arrhythmias using ICD-9 discharge codes for atrial fibrillation, cardiac arrest, ventricular fibrillation or tachycardia, and symptomatic bradycardia. They performed multilevel regression analysis, with the admitting hospital as a random effect. Results were published in the January American Journal of Medicine.
Twelve percent (n=3,919) of patients had a new diagnosis of cardiac arrhythmia within 90 days of admission: 8% (n=2,625) had new-onset atrial fibrillation; 3.4% (n=1,105) had bradycardia or other arrhythmias, including multifocal atrial tachycardia; 1% (n=323) had cardiac arrest; and 0.3% (n=105) had ventricular fibrillation or tachycardia. Increased risk of arrhythmia was seen in patients who were older, had a history of congestive heart failure, or needed mechanical ventilation or vasopressors during hospitalization. Lower risk was associated with use of beta-blockers before admission. Patients who had an arrhythmic event had significantly higher 30-day mortality (18.4% vs. 13.1%; P<0.01) and 90-day mortality (31.0% vs. 20.8%; P<0.01).
Risk for cardiac arrhythmia may be elevated around the time of pneumonia due to an increase in serum inflammatory cytokines, or due to disturbed hemodynamic homeostasis, prothrombotic conditions, and increased catecholamine release, the authors wrote. Acute infections also may have a direct inflammatory effect on arteries, myocardium and pericardium, which then leads to developing arrhythmias, they said.
Study limitations include that the VA population is predominantly male, which makes the results poorly generalizable to women, and that it is difficult to determine whether the arrhythmias contributed directly to higher mortality, the authors wrote. More research needs to be done to determine precisely who is at risk and how long they remain at risk. For now, clinicians should bear in mind that older patients, patients with congestive heart failure and patients who had septic shock appear to be at higher risk of developing arrhythmias if they have pneumonia, the authors wrote.