Tetlock and Gardner, after discussing Wehrmacht training and tactics to illustrate distributed decision-making:
“There is no divinely mandated link between morality and competence.”
This is awesome. It dovetails nicely with having to realize that one's enemies will attack you where you are weakest, not where you are strongest and have invested the most in defense.
“Forecasters who see illusory correlations and assume that moral and cognitive weakness run together will fail when we need them most. We don’t want intelligence analysts to assume jihadist groups must be inept or that vicious regimes can’t be creatively vicious.” —Tetlock/Gardner.
It worries me how many people see their enemies as subhuman in intelligence.
“interstate wars have been declining since the 1950s and civil wars have been declining since the end of the Cold War in the early 1990s. This is reflected in the number of battle deaths per year, which, with a few blips, declined throughout the period.” —Tetlock/Gardner.
What absolute balderdash. Pure high-grade fertilizer 💩. Tetlock really should know better than to parrot this pernicious Pinker thesis, but alas, even a researcher on analytical thinking has to fail sometimes.
There have been a few places in this book where I feel the authors forgot their own message. Did JFK’s team really improve between the Bay of Pigs catastrophe and its success during the Cuban Missile Crisis? Wouldn’t you need to test their performance over several geopolitical scares to establish that?
Did this forecaster really let his values misguide his prediction? Wouldn’t you need to run a specific experiment to verify it wasn’t any number of different things?
Well, maybe we can forgive some lapses 😒
Ah, Tetlock acknowledges this here!!:
‘There is not such well-developed research literature on how to measure the quality of questions.’
‘“What does a clash in the South China Sea between the Philippines and the Chinese Navy tell us about Chinese geopolitical intent?” Well, perhaps not that much. What you’re looking for is creating clusters of resolvable indicators, each of which makes a significant incremental contribution to the resolution of a bigger question.’
"That's a very interesting conundrum that we encounter, which is that some forecasters can be radically wrong in the short-term, but radically right in the long-term. You need mechanisms for factoring that possibility into your decision calculus if you're an organization relying on forecasting tournaments for probability inputs into decisions."
☝️ this. This ☝️ is key for fat-tailed Black Swan risks+predictions—earthquakes, nuclear accidents, pandemics, etc., anything with systemic roots.
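The conundrum above comes down to proper scoring rules. Here's a minimal sketch, with invented numbers (the `brier` helper and the data are my assumptions, not from the book or the transcript), of how a forecaster's mean Brier score could look terrible on short-horizon questions yet excellent on long-horizon ones:

```python
# Hypothetical illustration: mean Brier scores for one forecaster over
# short- and long-horizon questions. Lower is better; the numbers are
# invented to show the short-wrong/long-right pattern, nothing more.

def brier(forecasts):
    """Mean Brier score: average of (p - outcome)^2 over all forecasts."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Each pair is (probability assigned, actual outcome as 0 or 1).
short_term = [(0.9, 0), (0.8, 0), (0.7, 1)]   # confident, mostly wrong
long_term  = [(0.9, 1), (0.85, 1), (0.8, 1)]  # the big calls come true

print(f"short-horizon Brier: {brier(short_term):.3f}")  # high (bad)
print(f"long-horizon Brier:  {brier(long_term):.3f}")   # low (good)
```

An organization scoring only on quick-resolving questions would cut this forecaster; one that also scores the slow-resolving questions would keep them. That's the "mechanism for factoring that possibility into your decision calculus."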
A lot of the time you can see a situation is bad, like a car going too fast in the snow that starts spinning out of control: you don't know whether it'll crash through the guardrail and off the cliff, just break the left headlight, or come out with no damage at all.
Or a dry forest. You know a fire is imminent. You don't know which tree will be hit first. You don't know which of the ~ten sources will start it. But you know the system has evolved to a catastrophic state.
“Tetlock: It is interesting how many of the superforecasters were quite public-spirited software engineers. Software engineers are quite overrepresented among superforecasters.”
Interesting indeed!
Jacquet: But is that just an opt-in bias? They’re on their computer all day & then they’re willing to click a bunch of questions.
Tetlock: Yes, I think that’s right. They also find it very interesting, this question here about the limits of probability. What are the limits of quantification? They find that an engaging question. They like to compete. They’re competitive. They’re curious about what the limits of quantification are…it becomes a kind of existential mission for some of them.
“Kamangar: Are superforecasters better at predicting how well they’re going to do?
“Mellers: No, they are terrible. That’s amazing.”
Y—yes, this is good, this is very good:
"Edge Master Class 2015 with Philip Tetlock—A Short Course in Superforecasting"
https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-i (1 of 5! very long transcripts)
I love Phil Tetlock's work because
1. it invites deep thinking, then
2. it invites deep action, then returns you to #1.
Tetlock's research and writings are making me start to make bets, set up news alerts, set up automatic trading algorithms, etc. Action! Too much reading makes you a dull couch potato, just like too much TV.