Often Wrong, Never in Doubt – Six Ways Assumptions Mislead Us – By Chuck Dinerstein — December 19, 2018
Facts are far harder to obtain than assumptions; they may require long periods of observation or expensive, sensitive measurement devices.
Assumptions can be made more easily, in the comfort of the office, frequently papering over or shaping missing data.
One of the unintended results of this approach is that, given a limited set of facts, the strength of our conclusions rests on our certainty in the strength of our assumptions.
Assumptions are just not as sexy as conclusions and are frequently overlooked in our haste to know or do – it is a variation of often wrong, never in doubt.
Here are six ways assumptions can lead us astray:
There are some models we trust without knowing or understanding their assumptions. Congress, in an infrequent bipartisan moment, considers the predictions of the Congressional Budget Office to be valid and quite specific, putting numbers on costs or benefits.
In reality, the CBO carefully explains the underlying assumptions and how variations on that assumption result in ranges of costs or benefits, not specific numbers.
In certain instances, changing from one equally valid assumption to another “flips the script,” drastically changing the conclusion. The studies of sin taxes are good examples.
Taxes increase price, which in turn results in the substitution of a cheaper product for the now more expensive one. We assume that, faced with substitution, people will make the rational, healthy choice – water in place of soda.
But that assumption is frequently wrong; people instead choose juice in place of soda. So our model’s conclusion that raising the price of soda will mean we take in fewer calories turns out to be wrong, at least some of the time.
Most scientific articles look at how altering our assumptions may change our conclusions through “sensitivity” analysis. But again it is frequently more convenient to remember the conclusion and not the assumptions.
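The sin-tax example above can be sketched as a tiny sensitivity analysis. All the consumption and calorie figures below are invented for illustration; the point is only that swapping one equally plausible substitution assumption for another flips the conclusion.

```python
# Toy sensitivity analysis: does a soda tax cut calories?
# Every number here is hypothetical, chosen only to illustrate the flip.

SODA_KCAL = 150    # kcal per can of soda (assumed)
WATER_KCAL = 0     # kcal per glass of water
JUICE_KCAL = 170   # kcal per glass of juice (assumed)

def daily_kcal(cans_before, cans_substituted, substitute_kcal):
    """Daily calories after the tax pushes some soda consumption to a substitute."""
    remaining_soda = cans_before - cans_substituted
    return remaining_soda * SODA_KCAL + cans_substituted * substitute_kcal

before = 3 * SODA_KCAL  # three cans a day before the tax -> 450 kcal

# Assumption A: the substitute is water -> calories fall.
with_water = daily_kcal(3, 1, WATER_KCAL)   # 300 kcal
# Assumption B: the substitute is juice -> calories rise.
with_juice = daily_kcal(3, 1, JUICE_KCAL)   # 470 kcal

print(before, with_water, with_juice)  # 450 300 470
```

Same facts, same tax, same model – only the substitution assumption changed, and with it the conclusion.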
Illogical certainties – unfounded conclusions based on errors of thought.
This is a prevalent source of difficulties, with two intertwined approaches.
You can make the mistake of believing that statistical significance is actual, practical significance, e.g., any of the current GWAS genetic studies where a set of statistically significant genes is associated with this or that condition. In the small print, you find that the association holds 20 or 30% of the time – so how significant is it?
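A toy calculation makes the gap between statistical and practical significance concrete. The effect size, standard deviation, and sample sizes below are all hypothetical: with a large enough sample, even a negligible difference produces an impressively “significant” test statistic.

```python
import math

# Toy example: statistical significance is largely a function of sample size.
# All numbers are invented for illustration.

def z_for_mean_difference(diff, sd, n):
    """z statistic for a difference in means between two groups of size n,
    each with standard deviation sd."""
    standard_error = sd * math.sqrt(2.0 / n)
    return diff / standard_error

# A 0.1-unit difference against an SD of 10 is practically negligible...
small_n = z_for_mean_difference(0.1, 10.0, 100)         # ~0.07, nowhere near significant
huge_n = z_for_mean_difference(0.1, 10.0, 10_000_000)   # ~22, wildly "significant"

print(round(small_n, 2), round(huge_n, 1))  # 0.07 22.4
```

The effect never changed; only the sample did. Statistical significance tells you an effect is probably not zero, not that it matters.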
And of course, I was careful not to use the word cause, because the other illogical source of certainty is believing that correlation reflects causation.
And this happens ALL the time when humans notice that two occurrences are associated. If you look through enough data, you can find all kinds of absurd happenings that associate, yet are obviously NOT causative.
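A minimal sketch of how this happens: two series with no causal connection at all, each invented here purely for illustration, correlate almost perfectly simply because both drift upward over time.

```python
import math

# Two causally unrelated series that merely share a time trend will
# correlate near-perfectly. Both data series are made up.

years = list(range(2000, 2020))
# e.g., per-capita cheese consumption: a slow, steady rise (invented)
cheese = [30 + 0.5 * (y - 2000) for y in years]
# e.g., statistics PhDs awarded: also rising, for unrelated reasons (invented)
phds = [500 + 40 * (y - 2000) for y in years]

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

print(round(pearson_r(cheese, phds), 3))  # 1.0 — perfect correlation, zero causation
```

Both series are just functions of the calendar; the correlation measures their shared trend, not any influence of one on the other.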
This kind of thinking is especially pernicious around the association of depression with chronic pain. Pain patients know that their constant pain, and the life limits imposed by it, eventually made them depressed, but anti-opioid zealots still insist that their depression is what is causing their pain.
Conflating science and advocacy
In describing models there is an arrow: facts and assumptions lead us to conclusions. It seems to be a one-way street, like the arrow of time going in only one direction.
But physics tells us that time can go forwards or back.
Similarly, advocates can begin with their conclusions and reverse engineer their assumptions, making everything fit together in a neat package. I am not suggesting that this is necessarily intentional, but we can pick our assumptions, not our facts, making for an insidious problem.
When we talk about a paper that is data mining, we are referring to this reverse engineering of assumptions. The partner in this error is:
Some data cannot be found, and we often extrapolate or estimate the missing components by drawing a trend line into this unknown area.
The entire discussion around radiation’s harmful effects at very, very low doses is a question of extrapolation.
Some believe the trend to be linear; others do not. But no one really knows, and each side can choose the method of extrapolation that serves their conclusion.
Debates over environmental pollution are a great source of such dueling extrapolations.
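The dueling extrapolations can be sketched with invented dose–response numbers: a linear-no-threshold rule and a threshold rule both track the observed high-dose data reasonably well, yet tell opposite stories at the very low doses where no one has measurements.

```python
# Toy illustration of extrapolation below the data. All dose and risk
# numbers are invented; only the shape of the disagreement matters.

doses = [50.0, 100.0, 150.0]   # observed (high) doses
risks = [80.0, 180.0, 280.0]   # observed excess risk at those doses

# Model 1: linear-no-threshold — a straight line through the origin,
# fitted to the observed points by least squares.
lnt_slope = sum(d * r for d, r in zip(doses, risks)) / sum(d * d for d in doses)

def lnt(dose):
    return lnt_slope * dose

# Model 2: threshold — no excess risk below a threshold, linear above it.
THRESHOLD = 10.0
THR_SLOPE = 2.0  # chosen so the line passes through the observed points

def threshold_model(dose):
    return THR_SLOPE * max(0.0, dose - THRESHOLD)

# Both models track the data where we actually have it...
print([round(lnt(d)) for d in doses])              # [91, 183, 274]
print([round(threshold_model(d)) for d in doses])  # [80, 180, 280]
# ...but at a very low dose they tell opposite stories:
print(round(lnt(5.0), 1), threshold_model(5.0))    # 9.1 vs 0.0
```

Nothing in the observed data can arbitrate between the two; each side simply chooses the extrapolation that serves its conclusion.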
“If it bleeds, it leads” expresses our interest in things that make us fearful; it is, after all, an underlying Darwinian force. But the media, in its search for attention, is only too willing to repackage science and policy into sound bites and optics that create a response at little cognitive cost. Our transient emotional response is just as real whether we fully understand or are even told the underlying assumptions.
For your entertainment, here’s a fun site that has nothing but examples of bizarre statistical correlations that are obviously NOT causation: Spurious Correlations