Statistics: Beware of Exaggerated Certainty

This is one chapter from an online “Book” from the NIH: Know Your Chances: Understanding Health Statistics.

Of course, the numbers you see in health messages are not the whole story. We’d now like to add another bit of advice: once you have the numbers, ask yourself whether or not you should believe them.

Unfortunately, many statistics should not be accepted at face value, because they convey a sense of exaggerated certainty.

There are at least two reasons why reported research findings might not be right:

  1. much research is based on weak science, and
  2. many results are disseminated too early.

What Kind of Science Is Behind the Numbers?

The first question to ask is, “Is there any science behind the numbers?”

Ideally, there would be. But sometimes there isn’t.

The second question is, “How good is the science?”

Some research makes only a weak case for the message; other studies make a strong case. In this section, we’ll help you think about how compelling the case is.

It’s important to be skeptical about treatments that have been proven only in animal or lab studies, since they may not turn out to be relevant for people. Even when we focus on the most promising animal studies, only about one-third of treatments proven helpful in animals have turned out to be helpful in people.

And not all human research studies are equally compelling, either.

In an observational uncontrolled study, researchers simply watch what happens to a series of people in one group.

Whenever you hear the results of a study about how well an intervention works, ask whether the study included a control group (a group of people who did not undergo the intervention). Without a control group, it’s impossible to know whether the intervention really accounts for the study findings.

Stronger scientific evidence comes from controlled studies, in which researchers watch what happens to different groups of people. The most basic kinds of controlled studies involve observational research, in which the researchers merely record what happens to people in different situations, without intervening.

Cohort and case-control studies are perhaps the best-known types of observational controlled research.

Such research first linked cigarette smoking to lung cancer, and high cholesterol to heart disease. This is the only way to study dangerous exposures.

But these kinds of studies have important problems. Although they can show that an intervention is associated with a particular outcome, they cannot by themselves prove that the intervention causes the outcome.

It’s always possible that other factors not accounted for in the research are causing the outcome.

Whenever you hear about the results of observational controlled studies, we suggest being cautious about concluding that the lifestyle factor, environmental exposure, or drug being studied (like eating string beans or taking estrogen) actually causes the outcome (like heart disease).

In these types of studies, you simply cannot rule out the possibility that another characteristic of the participants in fact caused the difference—and that the original conclusion may therefore be wrong.
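To make that worry concrete, here is a minimal simulation sketch. It is our illustration, not from the book: the “health-conscious” trait, the exposure, and all of the probabilities are made-up assumptions. A hidden trait drives both who gets the exposure and who gets the disease, so an observational comparison shows a striking association even though the exposure itself does nothing.

```python
import random

random.seed(0)

def simulate_person():
    # Hypothetical hidden trait (made up for illustration): being "health-conscious".
    health_conscious = random.random() < 0.5
    # Health-conscious people are more likely to adopt the exposure
    # (e.g., eating string beans)...
    exposed = random.random() < (0.8 if health_conscious else 0.2)
    # ...and less likely to develop the outcome (e.g., heart disease),
    # even though the exposure has NO effect at all in this simulation.
    disease = random.random() < (0.05 if health_conscious else 0.20)
    return exposed, disease

people = [simulate_person() for _ in range(100_000)]

def disease_rate(group):
    return sum(d for _, d in group) / len(group)

exposed_group = [p for p in people if p[0]]
unexposed_group = [p for p in people if not p[0]]

print(f"Disease rate among the exposed:   {disease_rate(exposed_group):.1%}")
print(f"Disease rate among the unexposed: {disease_rate(unexposed_group):.1%}")
# The exposed group looks strongly "protected" (roughly 8% vs. 17%),
# yet the exposure did nothing: the hidden trait produced the entire difference.
```

The exposure in this toy world is useless, but an observational comparison of exposed versus unexposed people would suggest it cuts the disease rate roughly in half.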

The only way to reliably tell if the intervention causes the outcome is to conduct a true experiment—a randomized trial. In a randomized controlled trial, researchers construct two groups that are similar in every way except one: whether or not they get the intervention being studied.

Patients are assigned randomly (by chance) to one of the groups. It is then reasonable to assume that any differences observed in the trial must have been caused by the intervention (since it was the only difference between the groups).
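Continuing the same hypothetical sketch, here is what random assignment changes: a coin flip, rather than the hidden trait, now decides who gets the intervention, so both groups end up with a similar mix of that trait and the comparison gives the correct “no effect” answer.

```python
import random

random.seed(1)

def simulate_trial_participant():
    # Same hypothetical hidden trait as before (made up for illustration).
    health_conscious = random.random() < 0.5
    # Random assignment: a coin flip decides who gets the intervention,
    # so the hidden trait cannot influence who ends up in which group.
    assigned_to_intervention = random.random() < 0.5
    # The intervention still has no true effect; only the hidden trait matters.
    disease = random.random() < (0.05 if health_conscious else 0.20)
    return assigned_to_intervention, disease

participants = [simulate_trial_participant() for _ in range(100_000)]

rate_intervention = (sum(d for a, d in participants if a)
                     / sum(1 for a, _ in participants if a))
rate_control = (sum(d for a, d in participants if not a)
                / sum(1 for a, _ in participants if not a))

print(f"Disease rate, intervention group: {rate_intervention:.1%}")
print(f"Disease rate, control group:      {rate_control:.1%}")
# With random assignment, both groups contain a similar share of the hidden
# trait, so the rates come out roughly equal -- correctly showing no effect.
```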

In general, you can have the most faith in statistics resulting from large, randomized, controlled trials. Having a large number of study participants is important to make sure that the findings are not the result of chance.

But even when results come from a randomized trial, they aren’t necessarily right. The results of randomized trials can also be misleading—particularly if the trial was small (for example, with fewer than thirty participants) or lasted only a short time (like a few months).
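To illustrate how much chance alone can do in a small trial, here is a rough simulation sketch; the 20% event rate, the trial sizes, and the 10-percentage-point threshold are arbitrary assumptions for illustration, not figures from the book. It repeats a two-arm trial of a treatment that truly does nothing and counts how often a seemingly large difference appears anyway.

```python
import random

random.seed(2)

TRUE_EVENT_RATE = 0.20  # same in both arms: the treatment has no real effect

def run_trial(n_per_arm):
    """Simulate one two-arm trial; return the observed difference in event rates."""
    treated_events = sum(random.random() < TRUE_EVENT_RATE for _ in range(n_per_arm))
    control_events = sum(random.random() < TRUE_EVENT_RATE for _ in range(n_per_arm))
    return (treated_events - control_events) / n_per_arm

for n_per_arm in (15, 500):  # a tiny trial (~30 people total) vs. a large one
    diffs = [run_trial(n_per_arm) for _ in range(10_000)]
    big_swings = sum(abs(d) >= 0.10 for d in diffs) / len(diffs)
    print(f"{n_per_arm:>4} per arm: a 10-point difference appears by chance "
          f"in about {big_swings:.0%} of trials")
```

With only about thirty participants, a difference of ten percentage points or more shows up by chance alone in roughly half of the simulated trials; with a thousand participants, it essentially never does.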

And, as we noted earlier, the benefit of any treatment should be weighed against its side effects or other downsides, such as inconvenience or cost—and these may not have been measured in the randomized trial.

Unfortunately, it’s not always possible to conduct a randomized trial. This is certainly the case for studying harmful exposures such as smoking. Because it’s unethical to deliberately expose people to harm, the best we can do in such cases is an observational study.

This is part of the problem with trying to study the effectiveness of opioids for pain: a control group would have to be left without pain relief for a long period, which would not be ethical.

But when new interventions are proposed, it is critical to conduct randomized trials before these new strategies are introduced into widespread use.

…like studying the results of opioid restrictions before applying them to tens of millions of people.


Other thoughts?
