Evidence that's not what it seems

Determining whether a treatment would be futile for a patient is one of the toughest decisions a hospitalist, as well as patients and families, can face.

Any evidence on which to base such a difficult choice seems like a help. But what if the research that appears to provide evidence is actually based on insufficient data and the researchers' own opinions?

This problematic possibility was raised by a recent review of studies related to medical futility. Nephrologist Ezra Gabbay, MD, and colleagues analyzed almost 100 studies published on the topic between 1980 and 2008, and found substantial limitations. For example, 88% of the studies didn't set an explicit threshold for futility (i.e., how often a treatment must fail to be considered futile).

The analysis included studies in settings from cancer care to cardiac arrest to the intensive care unit, and it was published by the Journal of General Internal Medicine in July. Dr. Gabbay recently discussed his findings, and their meaning for practicing physicians, with ACP Hospitalist.

Q: How are most physicians making these decisions about futility? Are their conclusions based on the available evidence?

A: My impression is that physicians making these decisions, to varying degrees, mix personal experience and subjective impressions with some objective outcome data that they come across. The source of such data is usually the medical literature. What we wanted to examine was the quality of evidence available to physicians who did consult that literature.

Q: How did your findings compare with your expectations?

A: We were surprised to find that the degree of statistical confidence in this kind of data is quite low, because most of these studies are actually done on very small groups of patients. Most of them involve fewer than 100 patients. That's the number that was cited by Dr. Lawrence Schneiderman and his colleagues in the early 1990s as a threshold for determining that something is futile. [In an article in Annals of Internal Medicine in 1990, Dr. Schneiderman and colleagues proposed that a treatment could be considered futile after it failed in 100 consecutive cases.] So one of the surprising findings was that very, very few studies actually meet this criterion.

Another significant finding was that [when] studies do meet this criterion, it's almost exclusively in the context of CPR. If somebody's already in cardiac arrest, then yes, certain types of cardiac arrest can be identified where resuscitation is futile. But if somebody is critically ill but not in cardiac arrest, it's very difficult to identify large groups of patients where treatment would be futile.

The third surprising finding is that people often draw disparate conclusions from similar results. One researcher could see a 98% failure rate in 100 patients and call that futile, while another researcher could look at a 2% success rate in 100 patients and say, ‘That's evidence that it's not futile.’ It is therefore important to go beyond the authors' conclusions, look directly at the raw data and draw one's own conclusion.

Q: Do you propose any solutions to this problem?

A: There really should be a concerted effort at various levels of the health care system to collect data into larger pools, which would allow us to identify larger groups of patients for whom treatment was not successful.

The other thing I would encourage clinicians to do is stay current, because it often happens that a treatment is found to be futile at one point in time, and then a couple of years down the road somebody comes out with very different results.

Finally, I think the most important thing physicians need to understand is that this is not an exact science. They can be wrong when determining futility. They need to present this uncertainty honestly to patients and families and make this more of a qualitative decision, taking into account the patient's wishes, values, quality of life and functional status. People should not think that there is enough evidence to make these decisions on purely empirical grounds. In most circumstances, as we found, the data simply aren't very strong.

Q: Do you see any progress toward these goals?

A: On a societal level, there's greater consciousness of the need to better define outcomes, to do things in an evidence-based way. There is also increasing understanding that for some patients, aggressive treatments are painful and expensive and offer no substantial benefit. If government and medical institutions allocated resources into studying these issues in greater depth and gathering more data, then doctors could stand on firmer empirical ground as they approach these tough decisions.

This is one of the areas where it's up to public institutions, medical organizations and foundations that support research to do this. This is not something that the pharmaceutical industry would necessarily support or fund. Public awareness of the importance of data and evidence can make a real difference in the way we, as a society, deal with this problem.