Merrill Goozner’s piece on the FDA’s decision to pull a stent from the market after it was shown “2 1/2 times (14.7%) more people either died or had a repeat stroke after receiving the stent than those who received drugs and counseling (5.8%).” shows how science can – and should – be brought to what is all-too-often the “art” of medicine.
The stent was approved based on a rather limited study by manufacturer Stryker, but fortunately its approval was conditioned on use as part of an evaluative study.
That study was stopped early due to the higher rate of death and repeat stroke; unfortunately, it appears the stent itself may have played a role in those outcomes.
The good news is the stent is, or soon will be, off the market. The bad news – outside of that delivered to the families of those who died possibly as a result of the stent – is that this is actually “news”.
The reason this device was pulled from the market is that it was only approved on a limited basis by the FDA, which could withdraw that approval relatively easily. For devices, drugs, and treatments already approved by, or not subject to approval by, the FDA (or any other regulatory authority), it is much more difficult to get them off the market. And it's impossible for Medicare to factor effectiveness into payment.
If we are to gain any measure of control over health care costs, we have to start by paying for performance – not just for docs, but for drugs and devices as well. One wouldn't think that would require the proverbial "Act of Congress," but it does.
Perhaps the Super-Committee can decide that one way to attack the deficit is to stop paying for unproven treatments, or at least stop paying so much if the treatments aren’t proven to be effective. Can you imagine what that would do to health care? Actually paying for good stuff rather than paying for anything that gets prescribed for/inserted into/done to a patient?
Joe, great blog post!
This is why it is so critical that efficiency and quality be viewed as independent variables. Additionally, clinical outcomes research is the underpinning of comparative effectiveness. Efficiency is typically based on relative episode costs, or episode of care, depending on the payor. Measuring quality depends on how you define quality; in my opinion, this is where comparative effectiveness comes into play. True clinical outcomes research takes significant time and money to undertake, which is why "quality" is rarely defined in these terms.