Men and women doctors vs correlation and causation

Men and women doctors versus correlation and causation – KevinMD – by Ashish Jha – Jan 27, 2017

Our recent paper on differences in outcomes for Medicare patients cared for by male and female physicians created a stir.

Comparison of Hospital Mortality and Readmission Rates for Medicare Patients Treated by Male vs Female Physicians – JAMA Internal Medicine, Dec 19, 2016
Conclusion: Elderly hospitalized patients treated by female internists have lower mortality and readmissions compared with those cared for by male internists.

It’s worth highlighting a few of the more common critiques that have been lobbed at the study to see whether they make sense and how we might move forward.

Correlation is not causation

We all know that correlation is not causation. It’s epidemiology 101.

People who carry matches are more likely to get lung cancer. Going to bed with your shoes on is associated with a higher likelihood of waking up with a headache. No, matches don’t cause lung cancer any more than sleeping with your shoes on causes headaches. Correlation, not causation.

The argument is that because we had an observational study — that is, not an experiment where we proactively, randomly assigned millions of Americans to male-versus-female doctors — all we have is an association study.

We often make causal inferences based on observational data — and here’s the kicker:

Sometimes, we should.

Think smoking and lung cancer. Remember the RCT that assigned people to smoke (versus not) to see if it really caused lung cancer? Me neither, because it never happened.

So, if you are a strict “correlation is not causation” person who thinks observational data only create hypotheses that need to be tested using RCTs, you should only feel comfortable stating that smoking is associated with lung cancer, but it’s only a hypothesis for which we await an RCT. That’s silly — smoking causes lung cancer.

How can we be so certain that smoking causes lung cancer based on observational data alone?

Because there are several good frameworks that help us evaluate whether a correlation is likely to be causal.

They include:

  1. the presence of a dose-response relationship,
  2. a plausible mechanism,
  3. corroborating evidence, and
  4. the absence of alternative explanations, among others.

The final issue — alternative explanations — has been brought up by nearly every critic. There must be an alternative explanation!

Remember, a variable, in order to be a confounder, must be correlated with both the predictor (physician gender) and the outcome (mortality).

We spent over a year working on this paper, trying to think of confounders that might explain our finding.

But that confounder would have to be big enough to explain about half a percentage point of mortality difference, and that’s not trivial. So I ask the critics to help us identify this missing confounder that explains better outcomes for women physicians.
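To make that requirement concrete, here is a toy simulation (illustrative numbers only, not the paper’s data or analysis): a hypothetical severity variable that is correlated with both physician gender and mortality produces a crude mortality gap that disappears once severity is accounted for, whereas a variable tied to only one of the two cannot do that.

```python
import numpy as np

# Toy simulation: a confounder must be correlated with BOTH the predictor
# (physician gender) and the outcome (mortality). Illustrative numbers only;
# this is not the paper's data or analysis.
rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical confounder: patient severity (1 = high, 0 = low).
severity = rng.binomial(1, 0.5, n)

# Severity is tied to the predictor (high-severity patients more often see a
# male physician here) and to the outcome (high-severity patients die more).
# Physician gender itself has no effect on mortality in this toy world.
male_doc = rng.binomial(1, np.where(severity == 1, 0.65, 0.55))
died = rng.binomial(1, np.where(severity == 1, 0.15, 0.08))

# Crude comparison: an apparent mortality gap by physician gender...
crude = died[male_doc == 1].mean() - died[male_doc == 0].mean()

# ...which vanishes once we compare within severity strata.
within = [
    died[(male_doc == 1) & (severity == s)].mean()
    - died[(male_doc == 0) & (severity == s)].mean()
    for s in (0, 1)
]

print(f"crude gap:          {crude:+.4f}")            # roughly +0.007
print(f"severity-adjusted:  {np.mean(within):+.4f}")  # roughly  0.000
```

The point of the sketch is the structure, not the numbers: any proposed confounder for our result would need that dual correlation, and at a magnitude large enough to produce the observed gap.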

Statistical versus clinical significance

One more issue warrants a comment. Several critics have brought up the point that statistical significance and clinical significance are not the same things. This too is epidemiology 101.

Something can be statistically significant but clinically irrelevant. Is a 0.43 percentage point difference in mortality rate clinically important? This is not a scientific or a statistical question. This is a clinical question.
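As a quick illustration of why statistical significance alone settles nothing here (a sketch with made-up sample sizes and rates, not the study’s actual figures): with samples in the hundreds of thousands, even a fraction-of-a-point mortality difference produces an enormous z statistic.

```python
import math

# Sketch: with very large samples, even a small absolute difference in
# mortality is statistically significant. Sample sizes and rates here are
# made up for illustration; they are not the study's actual figures.
n_f, n_m = 700_000, 800_000      # hypothetical admissions per physician group
p_f, p_m = 0.1107, 0.1150        # hypothetical mortality rates (0.43-point gap)

p_pool = (p_f * n_f + p_m * n_m) / (n_f + n_m)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_f + 1 / n_m))
z = (p_m - p_f) / se

print(f"z = {z:.1f}")            # about 8, far beyond 1.96, so p << 0.05
```

A z of 8 settles the statistical question, but it says nothing about whether 0.43 percentage points is a difference worth caring about; that is the clinical judgment.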

And people can reasonably disagree. From a public health point of view, a 0.43 percentage point difference in mortality for Medicare beneficiaries admitted for medical conditions translates into potentially 32,000 additional deaths. You might decide that this is not clinically important. I think it is. It’s a judgment call and we can disagree.
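For the arithmetic behind that figure (a back-of-envelope sketch; the admissions volume is inferred from the quoted numbers, not taken from the paper): scaling a 0.43 percentage point difference up to 32,000 deaths implies roughly 7 to 8 million Medicare medical admissions.

```python
# Back-of-envelope only: infer the admission volume implied by the quoted figures.
mortality_gap = 0.0043        # 0.43 percentage point difference
excess_deaths = 32_000        # figure quoted above

implied_admissions = excess_deaths / mortality_gap
print(f"{implied_admissions:,.0f} admissions")   # ~7.4 million
```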

This is one study — and the arc of science is such that no study gets it 100% right.

New data will emerge that will refine our estimates and of course, it’s possible that better data may even prove our study wrong. Smarter people than me — or even my very smart co-authors — will find flaws in our study and use empirical data to help us elucidate these issues further, and that will be good.

That’s how science progresses: through facts, data and specific critiques.

“Correlation is not causation” might be epidemiology 101, but if we get stuck on epidemiology 101, we’d still be unsure whether smoking causes lung cancer. We can do better.

We should look at the totality of the evidence.

We should think about plausibility.

And if we choose to reject clear results, such as the finding that women internists have better outcomes, we should have concrete and testable alternative hypotheses. That’s what we learn in epidemiology 102.

Ashish Jha is an associate professor of health policy and management, Harvard School of Public Health, Boston, MA.  He blogs at An Ounce of Evidence and can be found on Twitter @ashishkjha.

