
the Dunning-Kruger effect

It's always hilarious to me when people are so confidently and loudly wrong about what the Dunning-Kruger effect is.

The graph shown has nothing to do with the actual study performed by Dunning and Kruger -- or any study. It's simply a vibe someone decided to illustrate. https://t.co/8q1dJFzXIH

Here's an actual graph from the study that spawned this common misconception that ignorant people think they are more competent than informed people.

The researchers gave participants a series of questions to answer, then sorted them into quartiles based on their performance. https://t.co/kFYOTOlBoD They then asked participants to estimate how well they did, both in terms of raw score and relative to the other participants.

Notice that, although the bottom quartile does overestimate their performance, they don't overestimate to the point of believing they outperformed the top quartile. In fact, the bottom quartile only believed that they did a bit better than the average.

The better-than-average effect is a well-documented phenomenon, wherein a majority of people don't necessarily think they're great at a given task, but they do think they're above average. Typically, this is a consequence of a cognitive bias to underemphasize the impact of unknown factors and overemphasize what's in front of you. In other words, an individual's own performance is always front of mind, and easier to conceptualize than a nebulous "average ability."

Getting back to Dunning-Kruger, the study itself has since been disputed, due to some glaring design flaws. The effect it demonstrated, now presented as a "profound psychological truism," has been recapitulated using randomly generated data; it falls out of statistical artifacts like regression toward the mean. The actual empirical result of this and various follow-up studies is pretty simple: people tend to think they perform slightly better than a vague "average person" -- unless they know what the average is. And the further an individual's actual score sits from the average, the greater the apparent disparity between self-estimate and performance.

Other studies have examined how people rate their performance on various tests in more objective contexts, using less statistically noisy analysis, and found that, although people tend to slightly overestimate their performance, they assess their own skill pretty well.

For example, you might give participants a 20-question test, then ask them how they did. Someone who got 7 questions right might say, "Hm... that was pretty hard. I think I got 8 or 9 questions right." You might then ask them how they think others did, and they might say, "Well, like I said, it was pretty hard. I'm guessing the average result was maybe 6 or 7, and the top score was probably 12." They think they did better than average, but they don't think they did THE BEST.
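To make the "randomly generated data" point concrete, here's a minimal sketch of the artifact (assuming numpy; the participant count, the uniform distributions, and the random seed are illustrative choices, not anything taken from the study or its critiques). It draws actual percentiles and self-estimated percentiles completely independently of each other, then sorts people into quartiles by actual score the way the study did:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000  # number of simulated "participants" (arbitrary)

# Actual test percentile and self-estimated percentile, drawn
# independently of one another: these simulated people have zero
# insight into how they actually did.
actual_pct = rng.uniform(0, 100, n)
estimated_pct = rng.uniform(0, 100, n)

# Sort into quartiles by ACTUAL performance, as in the study,
# then average both numbers within each quartile.
edges = np.percentile(actual_pct, [25, 50, 75])
quartile = np.digitize(actual_pct, edges)  # 0 = bottom, 3 = top

for q in range(4):
    in_q = quartile == q
    print(f"Quartile {q + 1}: "
          f"actual percentile ~{actual_pct[in_q].mean():.0f}, "
          f"self-estimate ~{estimated_pct[in_q].mean():.0f}")
```

In a run like this, the bottom quartile's average self-estimate sits near the 50th percentile while its average actual percentile is down around the low teens, and the top quartile shows the mirror image. The familiar crossing chart appears even though no one in the simulation has any self-knowledge at all; it comes from conditioning on a noisy variable, not from a psychological effect.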

Of course, none of this actual science really matters to the liberals who smugly post the same picture of a misinterpreted conclusion from an iffy study. They would never let something as silly as material reality get in the way of a satisfying narrative.