“I see Kahneman’s and Taleb’s critiques as the strongest challenges to the notion of superforecasting.” —Tetlock and Gardner, *Superforecasting: The Art and Science of Prediction*
Specifically, Kahneman’s doubt that cognitive biases can be reliably enough overcome, and Taleb’s doubt that only inconsequential predictions can be made with any accuracy.
If that’s it, superforecasting is in good shape, enough to make money :)
“Kamangar: Are superforecasters better at predicting how well they’re going to do?”
“Mellers: No, they are terrible. That’s amazing.”
“Jacquet: But is that just an opt-in bias? They’re on their computer all day & then they’re willing to click a bunch of questions.”
“Tetlock: Yes, I think that’s right. They also find it very interesting, this question here about the limits of probability. What are the limits of quantification? They find that an engaging question. They like to compete. They’re competitive. They’re curious about what the limits of quantification are…it becomes a kind of existential mission for some of them.”
“Tetlock: It is interesting how many of the superforecasters were quite public-spirited software engineers. Software engineers are quite overrepresented among superforecasters.”
Interesting indeed!
Of course all cultures have ways of indicating honorable protagonists, and by themselves these just establish Hiyori and Bishamon as good people worth emulating.
But it's one of those things that, after it's pointed out to you (as Buruma's hard-to-find but 1000% *completely* worthwhile book does, showing how the pattern is older than Edo kabuki), it's hard to unsee. He also has awesome chapters on father figures (oyabun/daimyo/emperors), and much more.
One thing I don't super-appreciate about #Noragami is its classic Japanese-media representation of "good women" as "good mothers".
The high schooler Hiyori, a badass astral fighter, spends most of her time as a caregiver, including sitting with Yukine (the Yato god's weapon) as he does his homework: about as 母もの ("mother thing") as you can get.
Even the supreme war deity Bishamon has a huge spirit family because she cannot abandon a lost soul.
See Ian Buruma's *Behind the Mask* for 🤯.
Digital designers (of CPUs)—is Moore's Law really a self-fulfilling prophecy? Or is it just a business thing, where the only way to get people to buy a new computer is to make it faster (or "faster")?
Like, we don't have Moore's Law for cars—cars might have some dimension that could see exponential growth for decades (fuel economy? speed? um, size?) but probably not.
But is there a Moore's Law for financial transactions? Visa, bank wires, stock/futures/options exchanges, M-Pesa, etc.?
A lot of times you can see a situation is bad, like a car going too fast in the snow that starts spinning out of control: you don't know whether it'll crash through the guardrail and off the cliff, or just break the left headlight, or if absolutely nothing bad happens.
Or a dry forest. You know a fire is imminent. You don't know which tree will be hit first. You don't know which of the ~ten sources will start it. But you know the system has evolved to a catastrophic state.
"That's a very interesting conundrum that we encounter, which is that some forecasters can be radically wrong in the short-term, but radically right in the long-term. You need mechanisms for factoring that possibility into your decision calculus if you're an organization relying on forecasting tournaments for probability inputs into decisions."
☝️ this. This ☝️ is key for fat-tailed Black Swan risks+predictions—earthquakes, nuclear accidents, pandemics, etc., anything with systemic roots.
Y—yes, this is good, this is very good:
"Edge Master Class 2015 with Philip Tetlock—A Short Course in Superforecasting"
https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-i (1 of 5! very long transcripts)
I love Phil Tetlock's work because
1. it invites deep thinking, then
2. it invites deep action, then go to #1.
Tetlock's research and writings are pushing me to start making bets, setting up news alerts, setting up automatic trading algorithms, etc. Action! Too much reading makes you a dull couch potato, just like too much TV.
Ah, Tetlock acknowledges this here!!:
‘There is not such well-developed research literature on how to measure the quality of questions.’
‘“What does a clash in the South China Sea between the Philippines and the Chinese Navy tell us about Chinese geopolitical intent?” Well, perhaps not that much. What you’re looking for is creating clusters of resolvable indicators, each of which makes a significant incremental contribution to the resolution of a bigger question.’
Going back to this: I think Tetlock falls for WYSIATI (What You See Is All There Is, via Kahneman) because the Good Judgment Project/IARPA challenge has short timescales that completely disrupt fat-tailed risk predictions. "Will China have a military engagement in the South China Sea in 2013?" wound up being a "yes" in late December IIRC, but if it'd happened in early January instead, how would you evaluate those predictions? And the real question could be "Will China fight a Real War in the South China Sea?"
And those war “statistics” may ignore things like the Rwandan and Guatemalan civil wars (where hundreds of thousands of lives were lost). That might be why Tetlock cites civil wars separately from “regular” wars, the former having only started “declining” (presumably in a linear regression sense?) since the 1990s. The nineties! Thirty years of declining civil wars! They’re well on their way to extinction! 😖😫🙄
Oh, about that stupid idea that war-related deaths have been decreasing for a handful of decades: I'm really thinking about publishing an alternative analysis every year, asking how many lives an event would have to cost this year to totally invert those silly linear regressions. I bet it wouldn't be that big.
That is, I’d guess if 25k deaths happened in an official war in 2018, none of those reports would be able to say “war has declined since 1950s!” without serious data cherry picking 🍒.
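Concretely, here's a minimal sketch of that calculation. The assumptions are all mine: the yearly battle-death totals below are made-up illustrative numbers, "invert the regression" is taken to mean flipping the OLS slope over the whole series from negative to non-negative, and the function names are hypothetical.

```python
import numpy as np

# Hypothetical yearly battle-death totals: a fake, noisily declining series.
years = np.arange(1990, 2018)
rng = np.random.default_rng(0)
deaths = np.linspace(120_000, 40_000, len(years)) + rng.normal(0, 5_000, len(years))

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x, in deaths per year."""
    return np.polyfit(x, y, 1)[0]

def toll_that_flips_the_trend(years, deaths, new_year, step=10_000):
    """Smallest single-year death toll in `new_year` that makes the fitted slope non-negative."""
    toll = 0
    while ols_slope(np.append(years, new_year), np.append(deaths, toll)) < 0:
        toll += step
    return toll

print("current slope:", round(ols_slope(years, deaths)), "deaths/year")
print("one-year toll that flips it:", toll_that_flips_the_trend(years, deaths, 2018))
```

Swap in a real battle-deaths series (whichever dataset those "decline of war" analyses actually use) and you'd get the actual number for the yearly post.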
There have been a few places in this book where I feel the authors forgot their own message. Did JFK’s team really improve between the Bay of Pigs catastrophe and its win during the Cuban Missile Crisis? Wouldn’t you need to test their performance over several geopolitical scares to establish that?
Did this forecaster really let his values misguide his prediction? Wouldn’t you need to run a specific experiment to verify it wasn’t any number of different things?
Well, maybe we can forgive some lapses 😒
“interstate wars have been declining since the 1950s and civil wars have been declining since the end of the Cold War in the early 1990s. This is reflected in the number of battle deaths per year, which, with a few blips, declined throughout the period.” —Tetlock/Gardner.
What absolute balderdash. Pure high-grade fertilizer 💩. Tetlock really should know better than to echo this pernicious Pinker thesis, but alas, even a researcher on analytical thinking has to fail sometimes.
“Forecasters who see illusory correlations and assume that moral and cognitive weakness run together will fail when we need them most. We don’t want intelligence analysts to assume jihadist groups must be inept or that vicious regimes can’t be creatively vicious.” —Tetlock/Gardner.
It worries me how many people see their enemies as having subhuman intelligence.
Tetlock and Gardner, after discussing the Wehrmacht's training and tactics to illustrate distributed decision-making:
“There is no divinely mandated link between morality and competence.”
This is awesome. It dovetails nicely with having to realize that one's enemies will attack you where you are weakest, not where you are strongest and where you've invested the most in defense.