In 1999 a pair of researchers published a paper called “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.”
David Dunning and Justin Kruger (both at Cornell University’s Department of Psychology at the time) conducted a series of four studies showing that, in certain cases, people who are very bad at something think they are actually pretty good.
They showed that to assess your own expertise at something, you need to have a certain amount of expertise already.
It has been more than 10 years since Dunning and Kruger published their work. I suspect it has become required reading in psychology courses. It’s also a paper that has important implications for learning and communication, so what has happened since? Have the results held up? Are they universal? And what can we do to avoid falling victim to our own inabilities?
This paper has become a cult classic. It is well written: humor interspersed with robust data, and conclusions discussed in a thorough and accessible way.
Clearly, the paper struck a chord with many people outside of the field of psychology.
“I presume the paper gave voice to an observation that people make about their peers but that they don’t know how to express,” Dunning responded. If you have not read the paper already, I recommend doing so.
Unfortunately, in those places ruled by the smug and complacent, a classic paper has become a weapon.
The findings of Dunning and Kruger are being reduced to “Stupid people are so stupid that they don’t know they are stupid.”
Rather bluntly, Dunning himself said, “The presence of the Dunning-Kruger effect, as it has come to be called, means that one should pause to worry about one’s own certainty, not the certainty of others.”
In fact, Dunning-Kruger and follow-up papers give us cause for hope. They show that people are not usually irredeemably stupid.
You can teach people to accurately self-evaluate—though, in their specific examples, this also involved teaching them the very skill they were trying to evaluate.
Context is everything
It is important to realize that the Dunning-Kruger result was not such a shocking finding. It was, for instance, already known that seemingly everyone rates themselves as above average at everything.
A large pile of research on various groups of people, covering various skill sets, indicates that in the face of all evidence, humans are irredeemably optimistic about their own abilities.
Being clueless about your own abilities is one thing. Misjudging others’ abilities is rather more serious.
It’s not about stupidity, stupid
Here, the author gives an example from his own life:
I grew up speaking English. As an adult, I moved from New Zealand to Holland and have spent the last five years struggling to learn a new language. I am, by the very definition of Dunning and Kruger’s paper, incompetent.
When you consider that I am unlikely to be able to pick grammatical errors out of a Dutch sentence, it is impossible for me to evaluate my own, or anyone else’s, performance. I simply do not have the skills to do so.
And, since I can’t read or hear the errors of others, I cannot accurately place myself in the hierarchy of competence. Couple that with the fact that I think I am not stupid, and I am likely to severely overestimate my abilities.
At this level, the results seem to imply that if you can’t do, you can’t recognize the difference between doing well and doing poorly.
The point is that it is not just self-evaluation that is difficult. Evaluating a skill set, quite simply, is very difficult to get right.
What this study outlines most starkly is what happens when someone is not just bad at something, but also lacks the tools to assess their own performance.
These are two different skills: action and self-assessment.
Sometimes the two skill sets overlap so well that you have to be good at something to accurately know that you are good at it. In other cases, the two skill sets don’t overlap. In which case, maybe Dunning and Kruger need not apply.
Well, not so fast
If the skills required to self-evaluate and the skill under evaluation do not overlap, then performance in one should not predict performance in the other.
It seems to suggest that you really do need to have some skill in the area to evaluate what constitutes good performance.
Education and work
One of the scary things about these findings is that we often rely on self-evaluation in education and work. What this tells us is that, for both the best and the worst performers (the very people you most want to identify), you are not going to get an accurate impression.
Tasks like management involve reading people. But this study shows that you have to be good at reading people in order to evaluate whether you are good at reading people. What’s more, evaluating general performance is largely a matter of reading people and trying to figure out what they are capable of.
The results of research performed by Dunning, Kruger, Ames, and Kammrath tell us something that every one of us has suspected at one time or another: the incompetent readily escape detection by the people who count.
What about science communication?
The implications of the findings are far more insidious than the realization that, yes, we all have weak points.
The Dunning-Kruger paper shows that, with training, self-evaluation accuracy improves. If you teach people logical reasoning, they become better able to evaluate their own performance in logical reasoning. The critical message is that the right feedback at the right time has an impact.
If you are aware that everyone (including you and me) is likely to overestimate our abilities, does this have an influence?
Dunning believes there are two key issues. First, critical thinking skills, applied to your own knowledge as well as everything else, are vital. Second, if you don’t exercise those skills, they will fade, leaving you with a false impression of your own abilities.