Today’s blog post is courtesy of Dr. Maryna Zilberberg from Healthcare, etc. We recently discovered her blog and found the following article to be especially relevant. The article explains how a coherent cost-effectiveness proposition can head off several fallacies that interfere with decision-making in drug development.
Today I was going to tell you the tale of my son’s broken wrist (he is fine now, this happened in January, but the insurance issues are fascinating), but I got distracted thinking about another fascinating subject that many do not understand well: confounding by indication. I especially started thinking about it in the context of how decisions and policies are made, and how not having the right data at the right time leads to this “Titanic effect” for a technology. What do I mean by this? Well, let me explain.
Some say the Titanic sank simply because of poor preparation — not enough lifeboats, not enough training on the evacuation procedure, in other words “not enough imagination” to plan for a catastrophe. It was derailed in its course by an entirely predictable natural calamity that had not been planned for adequately, even though the risk was obvious in retrospect. Was this just one of those “unintended consequences” that could have been avoided with clearer vision? Perhaps, but the Titanic is, ahem, water under the bridge. We can, however, focus on some more mundane and current potential missteps and make some guesses.
Let’s talk about medical technologies, and drugs in particular. Let us say that there is a new sepsis drug that has been tested among patients with sepsis but without organ failure. This drug appears to prevent organ failure in a fraction of the treated patients, and also reduces mortality by 6%. The only obstacle to widespread use of this drug is its acquisition cost, which is much higher than what the hospital’s critical care pharmacist is used to paying for other drugs. Because of this high cost, the drug, despite being on the formulary, gets administered only to those patients who have developed not one, but two organ failures. The savvy pharmacist looks at the outcomes of these patients and, after comparing them to those of the patients who did not receive the drug, concludes that the new sepsis drug, instead of saving lives, actually kills. The P&T committee discusses this, dumps the drug from the formulary and other hospitals follow suit. What’s wrong with this picture?
Several fallacies are at work here, including an overly broad inference of causality and bias. But the most important lesson has to do with confounding: because of its apparent expense, the drug has been niched into a population of patients who (a) were not the ones who exhibited the evidence of benefit in the trials, and (b) have a very high risk of mortality at baseline. So, not only is it not valid to conclude that the drug killed these patients, but it is not even valid to say that the drug does not work — it may well work in the populations it was shown to work in, just not in this much sicker population. You see the difference? It is like saying that your umbrella failed to keep you dry when you opened it only after you had already gotten soaked.
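The confounding described above can be made concrete with a toy simulation. A word of caution: every number below (the baseline mortality rates, the share of patients with two organ failures, the size of the drug's benefit) is a made-up assumption for illustration, not data from any trial. The sketch simply shows how restricting a genuinely beneficial drug to the sickest patients makes the naive treated-vs-untreated comparison point the wrong way.

```python
import random

random.seed(0)

# Hypothetical numbers, chosen only to illustrate the fallacy.
N = 100_000
BASELINE_LOW = 0.20    # mortality: sepsis, no organ failure
BASELINE_HIGH = 0.60   # mortality: sepsis with two organ failures
DRUG_EFFECT = 0.06     # absolute mortality reduction in any treated patient

deaths_treated = deaths_untreated = 0
n_treated = n_untreated = 0

for _ in range(N):
    high_risk = random.random() < 0.25   # assume 25% develop two organ failures
    treated = high_risk                  # the drug is restricted to the sickest
    risk = BASELINE_HIGH if high_risk else BASELINE_LOW
    if treated:
        risk -= DRUG_EFFECT              # the drug genuinely helps everyone treated
    died = random.random() < risk
    if treated:
        n_treated += 1
        deaths_treated += died
    else:
        n_untreated += 1
        deaths_untreated += died

mort_treated = deaths_treated / n_treated
mort_untreated = deaths_untreated / n_untreated
print(f"mortality among treated:   {mort_treated:.1%}")    # roughly 54%
print(f"mortality among untreated: {mort_untreated:.1%}")  # roughly 20%
```

The treated group dies at more than twice the rate of the untreated group, even though every treated patient's risk was lowered by the drug. The naive comparison blames the drug for the high baseline risk of the patients it was niched into; only a comparison within the same risk stratum, or a randomized trial, would recover the true effect.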
So confounding by indication is one reason that drugs “fail”: they are given to people who are by definition not going to do well, and confirmation bias pushes us to say, “See, it’s expensive and it doesn’t work.” So how do we overcome this phenomenon and make sure that appropriate patients get access to useful technologies? I believe I have a very simple answer: don’t squeeze the toothpaste out of the tube if you don’t want to have to cram it back in. Huh?
In other words, do what I always advocate: be ready with the relevant data before the train leaves the station, before the cat gets out of the bag, before the horse gets out of the barn. It is very well known that cognitive biases, once established, are difficult to overcome. The pharmacist’s first concern is to use his very limited resources efficiently and to guard against spending his monthly budget on a potentially useless intervention in a single patient, only to be left with no resources to care for all of the other patients. Yet many manufacturers at launch send their reps to the pharmacist with two virtually unrelated stories: one about efficacy and the other about the acquisition price and its impact on his budget. When the drug is expensive, the efficacy pales in comparison to the price tag, and the pharmacist has no choice but to restrict the use of the drug, thereby consigning it to failure by confounding by indication. Sound familiar?
Is there a way to avoid this scenario? I think so. It is self-evident that you have to have good data. The surprising thing is that good data are necessary, but not sufficient: the timing of these data is critical as well. It is easier to help people form an opinion where none exists than to change one that is already there. So, to be successful, the manufacturer with a good technology must have a coherent effectiveness and cost-effectiveness proposition right out of the gate. Not only that, but it is imperative to help the clinician understand which patients might benefit from the technology (no, not all patients should be on your drug). This is the kind of collaboration that will ultimately benefit all stakeholders: (1) appropriate patients will get the opportunity for better outcomes, (2) the pharmacist will understand up front the value proposition and the potential scope of use, and (3) the manufacturer will profit from providing a beneficial service. Isn’t this the intent of all this drug development?
If all this seems too obvious, it is because this is not rocket science. But why, then, do I see so many companies get into trouble with this very scenario? Is it just a case of “best-laid plans,” or is it a real blind spot that needs to be illuminated? You tell me. Given the investment that goes into drug development, I think it makes sense to approach this gap earnestly, instead of just rearranging the deck chairs on the Titanic.