The Study Was Stopped Early: This Drug Must Be Great!

(Sarcasm!) Ever hear a news story about some new medical breakthrough where the treatment or drug being studied was so great that the study was stopped early? Next, the TV doctor comes on and says treatment A or drug B is better than something else, and pretty soon patients are calling their doctors demanding treatment A or drug B.

In my last blog post I mentioned that several of the studies of statins for primary prevention were stopped early. Let me explain how stopping a study early can increase the risk of concluding a treatment is effective when it is not, or of finding an exaggerated effect. First, a disclaimer: this doesn't mean that interim analyses or stopping a study early are necessarily wrong. Studies are expensive, and the data must be monitored to catch potential side effects. However, repeatedly looking at the data increases the risk of a Type I error, which is finding an effect when none is present. I am sure that many physicians are unaware of this fact, and there is no way patients could possibly know it. It is also clear the "physician celebrity reporters" either don't know or aren't talking.

The easiest way to explain this is to start with an example. Say I flip a coin 5 times. Assuming the coin is true, meaning there is a 50% chance of getting a head or a tail on each flip, what are the chances of getting 3 heads or more? The answer is 50%. What are the chances of getting 2 heads or less? Again the answer is 50%.
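If you want to check those numbers yourself, here is a minimal Python sketch using nothing but the standard library. The coin and flip counts are just the ones from the example above:

```python
from math import comb

# Probability of exactly k heads in n flips of a fair coin: C(n, k) / 2^n
n = 5
p_three_or_more = sum(comb(n, k) for k in range(3, n + 1)) / 2**n
p_two_or_fewer = sum(comb(n, k) for k in range(0, 3)) / 2**n

print(f"P(3 or more heads in 5 flips):  {p_three_or_more}")  # 0.5
print(f"P(2 or fewer heads in 5 flips): {p_two_or_fewer}")   # 0.5
```

The two probabilities are each exactly 50% because the binomial distribution for a fair coin is symmetric around the middle.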

Now let's say I'm studying my hypothetical population of elderly, fat, diabetic smokers who have a 50% chance of having a heart attack in the next 5 years, and that I have an awesome new drug A that I think will substantially decrease that risk.

We study 10 patients: 5 get drug A and 5 get a placebo. We do our interim analysis and, lo and behold, 60% of our placebo patients have had a heart attack while only 40% of our patients taking drug A have had one. That's a difference of 60% − 40% = 20%. This is the risk difference, but wait, there's more. Our relative risk is 40/60 = 0.67, which yields a relative risk reduction of 1 − 0.67 = 0.33, or 33%. Quick! Call the network news: we have a drug that reduces the risk of a heart attack by 33%. What's not to like?
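Those three numbers are simple arithmetic. Here is the same calculation as a short Python sketch, using the made-up interim counts from above:

```python
# Hypothetical interim results: 5 patients per arm
n_per_arm = 5
placebo_events = 3  # 3 of 5 placebo patients had a heart attack -> 60%
drug_events = 2     # 2 of 5 drug A patients had a heart attack  -> 40%

placebo_risk = placebo_events / n_per_arm    # 0.60
drug_risk = drug_events / n_per_arm          # 0.40

risk_difference = placebo_risk - drug_risk   # 0.20, i.e. 20%
relative_risk = drug_risk / placebo_risk     # 0.67
relative_risk_reduction = 1 - relative_risk  # 0.33, i.e. 33%

print(f"Risk difference:         {risk_difference:.0%}")
print(f"Relative risk:           {relative_risk:.2f}")
print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
```

Notice how much more impressive "33% relative risk reduction" sounds than "2 heart attacks instead of 3 among 5 patients."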

Now we'll go back to our coin-flipping example. If we flip a coin 5 times, half the time we get 3 or more heads out of 5. Let's say it's 3 out of 5, which is 0.6, or 60%. The other half of the time we get 2 or fewer heads. Let's make it 2 out of 5, which is 0.4, or 40%. Sound familiar? The 60% and 40% are the same numbers we got for our fantastic new drug A that reduces heart attacks by 33%. Yet we got them by flipping a true coin that half the time comes up heads and half the time comes up tails.

Now if we compare 2 coins, flipping each one 5 times, the chance of one coin coming up heads 3 or more times while the other comes up heads 2 or fewer times is 25%. It's easy to see that when comparing 2 coins, one can appear to come up heads more often than the other just by chance. The same thing can happen when we run our study and periodically check the data. It is entirely possible that at some point, purely by chance, the drug we're studying shows a slightly larger effect and the placebo group a slightly smaller one. This could lead us to find an effect where there is none, or an exaggerated effect. If you look at the data often enough, you are bound to examine it at one of these points. Researchers try to minimize this risk by limiting the number of interim analyses, performing them only at predetermined times, and requiring lower P values before stopping. Nevertheless, this is something that needs to be taken into account when analyzing the medical literature.
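You can watch this happen in a simulation. The sketch below is my own toy illustration, not a real trial design: the "20 percentage points better" stopping rule is made up purely for demonstration. It first confirms the 25% two-coin figure, then shows how often a do-nothing drug "wins" when we peek at the data after every 10 patients versus looking only once at the end:

```python
import random

random.seed(42)  # fixed seed so the results are reproducible
N_SIM = 100_000

# Part 1: two fair coins, 5 flips each. How often does one coin look
# "better" (3 or more heads) while the other looks "worse" (2 or fewer)?
hits = 0
for _ in range(N_SIM):
    heads_a = sum(random.random() < 0.5 for _ in range(5))
    heads_b = sum(random.random() < 0.5 for _ in range(5))
    if heads_a >= 3 and heads_b <= 2:
        hits += 1
print(f"P(coin A >= 3 heads and coin B <= 2 heads): {hits / N_SIM:.3f}")  # ~0.25

# Part 2: peeking at a trial of a drug that truly does nothing. Both arms
# have the same 50% event rate. We look after every 10 patients per arm
# (10 looks, 100 patients per arm total) and declare the drug a winner if
# it ever looks at least 20 percentage points better than placebo.
N_TRIALS = 10_000
wins_with_peeking = 0
wins_final_look_only = 0
for _ in range(N_TRIALS):
    placebo_events = drug_events = 0
    won_at_some_look = False
    for patients_per_arm in range(10, 101, 10):
        placebo_events += sum(random.random() < 0.5 for _ in range(10))
        drug_events += sum(random.random() < 0.5 for _ in range(10))
        if (placebo_events - drug_events) / patients_per_arm >= 0.20:
            won_at_some_look = True
    if won_at_some_look:
        wins_with_peeking += 1
    if (placebo_events - drug_events) / 100 >= 0.20:
        wins_final_look_only += 1

print(f"'Drug works' rate when peeking at every look: {wins_with_peeking / N_TRIALS:.3f}")
print(f"'Drug works' rate looking only at the end:    {wins_final_look_only / N_TRIALS:.3f}")
```

With this made-up rule, the peeking success rate comes out many times higher than the single-look rate, even though the drug does nothing. The safeguards mentioned above, such as pre-specified looks and stricter P values at each interim analysis, exist to tame exactly this effect.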

In my blog about statins and primary prevention, I mentioned I had found around 27 meta-analyses, probably about as many meta-analyses as there had been original studies. What could possibly go wrong here?

In the case of prevention of mortality by statins, some of the meta-analyses said there was a decrease in mortality and some said there was not. To explain this, let's go back to our coin-flipping example.
If I flip a coin 100 times, I would expect most of the time to get around 50 heads. But sometimes I would get 40 heads, or 45 heads, or maybe 55 heads. If I flip a coin 1000 times, I would expect around 500 heads, but sometimes I would get 475 heads or 537 heads. Again, the point is that the number of heads varies just by chance.
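A few simulated runs make the point. In the quick sketch below, the counts wander around 50 and 500 purely by chance:

```python
import random

random.seed(0)  # fixed seed so the runs are reproducible

def heads(n_flips: int) -> int:
    """Count heads in n_flips of a fair coin."""
    return sum(random.random() < 0.5 for _ in range(n_flips))

print("Heads in ten 100-flip runs: ", [heads(100) for _ in range(10)])
print("Heads in ten 1000-flip runs:", [heads(1000) for _ in range(10)])
```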

Now let's say we do a few studies, then a meta-analysis. Then we do a few more studies and another meta-analysis or two. As in the coin-flip example, sometimes we will see a bigger effect and sometimes a smaller one. At times we might even see an effect when in fact there is none. Throw in a few experts giving their opinions in periodic review articles, and it's obvious how patients can become very confused. Dr. A can be telling his patients drug A works great while Dr. B, reading the same literature, says it doesn't work at all. To top it off, both may be saying something different next year when the next meta-analysis comes out.
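Here is a toy version of that scenario. To be clear, this is not a real meta-analysis method (no study weighting, no confidence intervals, just crude pooling of raw counts): ten small trials of a drug that truly does nothing, re-pooled after each new trial. The pooled relative risk drifts above and below 1.0 by chance alone, so which "meta-analysis" you happen to read can determine what you conclude:

```python
import random

random.seed(7)  # fixed seed so the results are reproducible

TRUE_RISK = 0.5        # same true event rate in both arms: the drug does nothing
PATIENTS_PER_ARM = 50  # per trial

def run_trial() -> tuple[int, int]:
    """One small trial: event counts in the placebo and drug arms."""
    placebo = sum(random.random() < TRUE_RISK for _ in range(PATIENTS_PER_ARM))
    drug = sum(random.random() < TRUE_RISK for _ in range(PATIENTS_PER_ARM))
    return placebo, drug

# After each new trial, pool all the raw counts so far and recompute the
# relative risk (drug risk / placebo risk). Below 1.0 the drug looks
# protective; above 1.0 it looks harmful. The truth is exactly 1.0.
total_placebo = total_drug = total_n = 0
for i in range(1, 11):
    placebo, drug = run_trial()
    total_placebo += placebo
    total_drug += drug
    total_n += PATIENTS_PER_ARM
    rr = (total_drug / total_n) / (total_placebo / total_n)
    print(f"After trial {i:2d}: pooled relative risk = {rr:.2f}")
```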

Treatment Scores can make this so much better. A Treatment Score can be determined for each study, and factors like stopping a study early can be taken into account. New studies can be incorporated almost instantaneously instead of waiting a few years for the next wave of meta-analyses. One Treatment Score will be determined for each treatment. The original studies will be transparent, and the Treatment Scores will be transparent. One treatment, one number everyone can understand.

Hasn't the time come for something better for our patients? Let's get Treatment Scores going and eliminate the mass confusion in interpreting the medical literature.

Follow this Blog:
Follow this blog by entering your email address in the box at the top right. You MUST CONFIRM your subscription VIA EMAIL. Then, you will automatically receive all new posts. If you have any problems, search for "feedburner" to make sure the confirmation email did not go into your spam folder.

Follow Treatment Scores on Social Media:
Twitter:
https://Twitter.com/TreatmentScores
Facebook:
https://Facebook.com/TreatmentScores
AngelList:
https://angel.co/treatment-scores
Blog:
http://TreatmentScoresBlog.com
Website:
http://TreatmentScores.com

DISCLAIMERS:
You must consult your own licensed physician, or other licensed medical professional, for diagnosis, treatment, and for the interpretation of all medical statistics including Treatment Scores. Treatment Scores are for educational purposes only. Treatment Scores may be incomplete, inaccurate, harmful, or even cause death if used for treatment instead of consulting a licensed medical professional. No medical advice is being given. We DO NOT CLAIM to cure, treat, or prevent any illness or condition. Nor do our services provide medical advice or constitute a physician patient relationship. Contact a physician or other medical professional if you suspect that you are ill. Call emergency services (call 911 if available) or go to the nearest emergency room if an emergency is suspected. We are not responsible for any delays in care from using our website, our services, or for any other reason. We are not responsible for any consequential damages of any nature whatsoever. We make no warranties of any kind in connection with our writings or the use of TreatmentScoresBlog.com or TreatmentScores.com. Treatment Scores are about what happened to patients studied in the past; they do not predict the future.

COPYRIGHT
Copyright © 2016 Treatment Scores, Inc.
