The Scipionic Circle 83: Decision-Making, Why Reality Isn’t as Logical as We Think, and a Comprehensive Guide to Probabilistic Thinking
"There is no sin in finding out there is evidence that contradicts what we believe. The only sin is in not using that evidence as objectively as possible to refine that belief going forward."
Hello, friend.
Welcome to another issue of The Scipionic Circle — I hope you find something of value.
In addition to this newsletter, I spend hours meticulously crafting summaries of every great book I read to ensure I glean every nugget of wisdom concealed within their pages. If you would like access to these 'Compendiums', as I call them, please consider supporting this publication.
Food for Thought
I. Decision-Making: A Comparison Between Chess and Poker
"In The Ascent of Man, scientist Jacob Bronowski recounted how von Neumann described game theory during a London taxi ride. Bronowski was a chess enthusiast and asked him to clarify. 'You mean, the theory of games like chess?' Bronowski quoted von Neumann’s response: ‘No, no,’ he said. ‘Chess is not a game. Chess is a well-defined form of computation. You may not be able to work out the answers, but in theory there must be a solution, a right procedure in any position. Now, real games,’ he said, ‘are not like that at all. Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.’
"The decisions we make in our lives—in business, saving and spending, health and lifestyle choices, raising our children, and relationships—easily fit von Neumann’s definition of 'real games.' They involve uncertainty, risk, and occasional deception, prominent elements in poker. Trouble follows when we treat life decisions as if they were chess decisions. Chess contains no hidden information and very little luck. The pieces are all there for both players to see. Pieces can’t randomly appear or disappear from the board or get moved from one position to another by chance. No one rolls dice after which, if the roll goes against you, your bishop is taken off the board. If you lose at a game of chess, it must be because there were better moves that you didn’t make or didn’t see. You can theoretically go back and figure out exactly where you made mistakes. If one chess player is more than just a bit better than another, it is nearly inevitable the better player will win (if they are white) or, at least, draw (if they are black). On the rare occasions when a lower-ranked grand master beats a Garry Kasparov, Bobby Fischer, or Magnus Carlsen, it is because the higher-ranked player made identifiable, objective mistakes, allowing the other player to capitalize.
"Chess, for all its strategic complexity, isn’t a great model for decision-making in life, where most of our decisions involve hidden information and a much greater influence of luck. This creates a challenge that doesn’t exist in chess: identifying the relative contributions of the decisions we make versus luck in how things turn out. Poker, in contrast, is a game of incomplete information. It is a game of decision-making under conditions of uncertainty over time. (Not coincidentally, that is close to the definition of game theory.) Valuable information remains hidden. There is also an element of luck in any outcome. You could make the best possible decision at every point and still lose the hand, because you don’t know what new cards will be dealt and revealed. Once the game is finished and you try to learn from the results, separating the quality of your decisions from the influence of luck is difficult.
"In chess, outcomes correlate more tightly with decision quality. In poker, it is much easier to get lucky and win, or get unlucky and lose. If life were like chess, nearly every time you ran a red light you would get in an accident (or at least receive a ticket). If life were like chess, the Seahawks would win the Super Bowl every time Pete Carroll called that pass play. But life is more like poker. You could make the smartest, most careful decision in firing a company president and still have it blow up in your face. You could run a red light and get through the intersection safely—or follow all the traffic rules and signals and end up in an accident. You could teach someone the rules of poker in five minutes, put them at a table with a world champion player, deal a hand (or several), and the novice could beat the champion. That could never happen in chess.
"If we want to improve in any game—as well as in any aspect of our lives—we have to learn from the results of our decisions. The quality of our lives is the sum of decision quality plus luck. In chess, luck is limited in its influence, so it’s easier to read the results as a signal of decision quality. That more tightly tethers chess players to rationality. Make a mistake and your opponent’s play points it out, or it is capable of analysis afterward. There is always a theoretically right answer out there. If you lose, there is little room to off-load losing to any other explanation than your inferior decision-making. You’ll almost never hear a chess player say, 'I was robbed in that game!' or, 'I played perfectly and caught some terrible breaks.'
"That’s chess, but life doesn’t look like that. It looks more like poker, where all that uncertainty gives us the room to deceive ourselves and misinterpret the data. Poker gives us the leeway to make mistakes that we never spot because we win the hand anyway and so don’t go looking for them, or the leeway to do everything right, still lose, and treat the losing result as proof that we made a mistake. Resulting, assuming that our decision-making is good or bad based on a small set of outcomes, is a pretty reasonable strategy for learning in chess. But not in poker—or life." — From Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts by Annie Duke.
II. Crime, Fiction and Post-Rationalism: Or Why Reality Isn’t Nearly as Logical as We Think
"Think of life as like a criminal investigation: a beautifully linear and logical narrative when viewed in retrospect, but a fiendishly random, messy and wasteful process when experienced in real time. Crime fiction would be unreadably boring if it accurately depicted events, because the vast majority of it would involve enquiries that led nowhere. And that’s how it’s supposed to be – the single worst thing that can happen in a criminal investigation is for everyone involved to become fixated on the same theory, because one false assumption shared by everyone can undermine the entire investigation. There’s a name for this – it’s called ‘privileging the hypothesis’.
"A recent example of this phenomenon emerged during the bizarre trial of Amanda Knox and Raffaele Sollecito for the murder of Meredith Kercher in Perugia, Italy. It became impossible for the investigator and his team to see beyond their initial suspicion that, after Kercher had been killed, the perpetrator had staged a break-in to ‘make it look like a burglary gone wrong’. Since no burglar from outside would need to stage a break-in, their only conclusion was that the staging took place to divert attention from the other flatmates and to disguise the fact that it was an inside job. Unfortunately, the initial suspicion was incorrect.
"I sympathise a little with their attachment to the theory. After all, the break-in did, at first glance, look as though it might have been faked: there was some broken glass outside the window and an absence of footprints. But the theory of an inside job staged to look like a botched burglary was so doggedly held that all subsequent contradictory evidence was either suppressed or not shared with the press, and the result was a nonsense.
"The break-in did look rather absurd at first glance – why would you break into a flat from a relatively exposed upstairs window? – until you realise that the purpose of breaking a window was not to gain access to the house, but to make a hell of a lot of noise while standing in a place from which an easy escape was possible. It thus helped the perpetrator ascertain with some confidence that there was no one around; if you smash a window and nobody intervenes, you can be fairly sure no one is going to notice you climbing through the same window five minutes later, but if a light goes on and a dog starts barking, you can simply leg it.
"This example goes to the heart of how we see the world. Do we look at things from a single perspective, where you do one thing to achieve another, or do we accept that complex things are rather different? In a designed system, such as a machine, one thing does serve one narrow purpose, but in an evolved or complex system, or in human behaviour, things can have multiple uses depending on the context within which they are viewed.
"The human mouth allows you to eat, but if your nose is blocked, it also allows you to breathe. In a similar way, it seems illogical to break into a building using the noisiest means possible, until you understand the context in which the offender is operating. It is not appropriate to bring the same habits of thought that we use to deal with things that have been consciously designed to understanding complex and evolved systems, with second-order considerations." — From Alchemy: The Surprising Power of Ideas That Don't Make Sense by Rory Sutherland.
III. The Value and Practice of Quantifying Confidence in Beliefs: A Comprehensive Guide to Probabilistic Thinking
"When we express our beliefs (to others or just to ourselves as part of our internal decision-making dialogue), they don’t generally come with qualifications. What if, in addition to expressing what we believe, we also rated our level of confidence about the accuracy of our belief on a scale of zero to ten? Zero would mean we are certain a belief is not true. Ten would mean we are certain that our belief is true. A zero-to-ten scale translates directly to percentages. If you think the belief rates a three, that means you are 30% sure the belief is accurate. A nine means you are 90% sure. So instead of saying to ourselves, 'Citizen Kane won the Oscar for best picture,' we would say, 'I think Citizen Kane won the Oscar for best picture but I’m only a six on that.' Or 'I’m 60% that Citizen Kane won the Oscar for best picture.' That means your level of certainty is such that 40% of the time it will turn out that Citizen Kane did not win the best-picture Oscar. Forcing ourselves to express how sure we are of our beliefs brings to plain sight the probabilistic nature of those beliefs, that what we believe is almost never 100% or 0% accurate but, rather, somewhere in between.
"In a similar vein, the number can reflect several different kinds of uncertainty. 'I’m 60% confident that Citizen Kane won best picture' reflects that our knowledge of this past event is incomplete. 'I’m 60% confident the flight from Chicago will be late' incorporates a mix of our incomplete knowledge and the inherent uncertainty in predicting the future (e.g., the weather might intervene or there might be an unforeseen mechanical issue).
"Incorporating uncertainty into the way we think about our beliefs comes with many benefits. By expressing our level of confidence in what we believe, we are shifting our approach to how we view the world. Acknowledging uncertainty is the first step in measuring and narrowing it. Incorporating uncertainty in the way we think about what we believe creates open-mindedness, moving us closer to a more objective stance toward information that disagrees with us. We are less likely to succumb to motivated reasoning since it feels better to make small adjustments in degrees of certainty instead of having to grossly downgrade from 'right' to 'wrong.' When confronted with new evidence, it is a very different narrative to say, 'I was 58% but now I’m 46%.' That doesn’t feel nearly as bad as 'I thought I was right but now I’m wrong.' Our narrative of being a knowledgeable, educated, intelligent person who holds quality opinions isn’t compromised when we use new information to calibrate our beliefs, compared with having to make a full-on reversal. This shifts us away from treating information that disagrees with us as a threat, as something we have to defend against, making us better able to truthseek.
"When we work toward belief calibration, we become less judgmental of ourselves. Incorporating percentages or ranges of alternatives into the expression of our beliefs means that our personal narrative no longer hinges on whether we were wrong or right but on how well we incorporate new information to adjust the estimate of how accurate our beliefs are. There is no sin in finding out there is evidence that contradicts what we believe. The only sin is in not using that evidence as objectively as possible to refine that belief going forward." — From Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts by Annie Duke.
Quotes to Ponder
I. Plato on understanding control:
"There are two things a person should never be angry at, what they can help, and what they cannot."
II. Friedrich Nietzsche on the necessity of adaptation:
"The snake which cannot cast its skin has to die. As well the minds which are prevented from changing their opinions; they cease to be mind."
Did you enjoy this email? Please consider buying me a coffee to caffeinate my reading sessions.
Thank you for reading,
Matthew Vere