The Axiomatic Fallacy Fallacy

A Commentary on Radical Uncertainty: Decision-Making Beyond the Numbers by John Kay & Mervyn King

by Dr. Sam L. Savage

Which one does not belong?

[Three airplane images: Airplane_RadicalUncertainty1.jpg, Airplane_RadicalUncertainty2.png, Airplane_RadicalUncertainty3.png]

Answer: The one in the middle, because it does not fly.

The Ludic Fallacy

In The Black Swan [i], author Nassim Nicholas Taleb describes seemingly implausible occurrences that are easy to explain after the fact. The classic example is the black swan, assumed by Europeans to be impossible until explorers discovered one in Australia in 1697. In the book, Taleb defines the Ludic Fallacy as “the misuse of games to model real-life situations,” that is, “basing studies of chance on the narrow world of games and dice.” Ludic comes from the Latin ludus, a game or sport. I agree that it is naïve to model complex phenomena like economies, weather, and cyber-attacks on such simple uncertainties, but …

The Ludic Fallacy Fallacy

I define the Ludic Fallacy Fallacy as “attempting to model real-life situations without understanding the narrow world of games and dice.” These teach us truths about actual economies, weather, and cyber-attacks just as paper airplanes teach us truths about aerodynamic forces.

Radical Uncertainty

Radical Uncertainty: Decision-Making Beyond the Numbers by John Kay & Mervyn King [ii] is the Ludic Fallacy on steroids. It is a 500-page critique, not of “the narrow world of games and dice,” but of the narrow axiomatic approach to decision making under uncertainty, which has been widely adopted in economics, finance, and decision science. In keeping with Taleb, I will call this the Axiomatic Fallacy. Since my father, Leonard Jimmie Savage, was one of the founders of the approach, and proposed the pertinent axioms in his 1954 book, The Foundations of Statistics, I was eager to see what Kay and King had to say.

The Axiomatic Approach

My father framed the issue as follows:

The point of view under discussion may be symbolized by the proverb, “Look before you leap,” and the one to which it is opposed by the proverb, “You can cross that bridge when you come to it.”

Looking before leaping requires advance planning in the face of uncertainty, for which my father sought a formal approach under idealized circumstances. Interestingly, Radical Uncertainty and my own book, The Flaw of Averages, both quote the same passage from my father’s work, in which he describes the application to practical problems:

It is even utterly beyond our power to plan a picnic or to play a game of chess according to this principle.

By this, my father meant that the axiomatic approach applied to making optimal choices only in what he called “small worlds,” in which you could enumerate all the bridges you might encounter, along with the chances of encountering them. According to Kay and King, both my father and his Nobel Prize-winning student Harry Markowitz, who applied the theory to investments and invented Modern Portfolio Theory, were careful not to claim “large world” results. But the authors complain that for years many economists and others have pushed the theory beyond its intended limits.

The book makes extensive use of the “small world” vs. “large world” motif. The authors blame the failures of macroeconomic models on “large world” radical uncertainties such as recessions, wars, technological breakthroughs, and things we have not dreamt of yet. These are the sorts of models that did NOT predict the personal computer revolution, the recession of 2008, Brexit, Trump, etc. I myself would go further and argue that even in a perfectly deterministic world, many of the large models used in macroeconomics would collapse chaotically under their own weight due to their inherent non-linearity.

I agree that it is naïve to believe you can model “large worlds” in the same way that you can model “small worlds.” But that does not mean that small worlds are irrelevant. As the late energy economist Alan Manne said, “To get a big model to work, you must start with a little model that works, not a big model that doesn’t work.” Thus, to create an airliner, you are better off starting with a paper airplane than an attractive likeness made of plastic blocks.

My role model for bridging the “small world” of theory and the radical uncertainty of the “large world” is William J. Perry, former US Secretary of Defense. Here is a man with a bachelor’s, a master’s, and a PhD in mathematics, who has nonetheless had a remarkably practical career devoted to preventing nuclear war. I once attended an after-dinner speech of his at which someone asked if he had ever built a mathematical model to solve a thorny problem while at the Pentagon. “No,” he responded, “There was never enough time or data to do that. But because of my training I think about things differently.” Amen. Some may see Radical Uncertainty as a refutation of probabilistic modeling. But I see it as an affirmation of Bill Perry’s approach of understanding probability and knowing when and when not to build a model.

The problem is that a book about unsuccessful mathematical modeling is a little like a book about bicycle crashes. If you don’t know how to ride a bicycle, you certainly won’t want to learn after reading about broken skulls, and you will not have learned about the joy and benefits of bicycles. If, on the other hand, you do ride, then you are already aware of the risks and rewards and are not likely to alter your behavior. In either case I believe the authors could have accomplished their goal in fewer than 500 pages.

I share many of the authors’ misgivings about large models, and in fact, similar concerns motivated the creation of the discipline of probability management as I will discuss below. But first I want to address the non-modelers, who may wrongly take the book as a call to just talk about problems through “Narratives,” as suggested by the authors, instead of analyzing them.

An example I use to make this distinction is basic arithmetic (small world) vs. accounting (large world). Just because you know that 1+1 equals 2 does not mean you can be an accountant. On the other hand, you could not be an accountant if you didn’t know that 1+1 equaled 2. But the accountant must also be aware of the radical uncertainties of fraud, money laundering, etc. that appear in the real world of accounting.

I was shocked years ago to discover how many statisticians and economists (and a much larger fraction of graduate students in analytical fields) do not know the equivalent of 1+1=2 in the arithmetic of uncertainty [iii]. That is, when asked to build the simplest of models in the smallest of worlds involving game-board spinners, they come off like accountants who can’t add 1+1. So radical uncertainty can’t take all the glory for bad models, with chaos theory running neck and neck, and ludic stupidity nipping at its heels.
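As a minimal sketch of what “1+1=2” looks like in the arithmetic of uncertainty (my illustration, not an example from the book), consider adding two uniform game-board spinners. Each spinner alone is flat, but their sum is triangular, a fact many would-be modelers miss:

```python
import random

random.seed(0)
TRIALS = 100_000

# Two hypothetical game-board spinners, each uniform on [0, 1).
a = [random.random() for _ in range(TRIALS)]
b = [random.random() for _ in range(TRIALS)]

# "1 + 1" in the arithmetic of uncertainty: add the spinners trial by trial.
total = [x + y for x, y in zip(a, b)]

# Each input is flat, but the sum is triangular: outcomes near 1.0 are far
# more likely than outcomes near 0.0 or 2.0.
middle = sum(1 for t in total if 0.75 <= t <= 1.25) / TRIALS
tails = sum(1 for t in total if t < 0.25 or t > 1.75) / TRIALS
print(f"P(0.75 <= sum <= 1.25) = {middle:.3f}")   # roughly 0.44
print(f"P(sum < 0.25 or sum > 1.75) = {tails:.3f}")  # roughly 0.06
```

Anyone who expects the sum of two uniform spinners to be uniform has, in effect, failed the 1+1=2 of uncertainty.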

Kay and King do expose a number of valid shortcomings of complex stochastic models, but their main remedy appears to be the power of “Narrative.” As a storyteller I am all for narrative, but I will define the use of narratives that are not informed by probabilistic principles as the Axiomatic Fallacy Fallacy. Below are concrete suggestions for some of the issues they raise.

What is Going On Here?

A repeated theme of the book is the inability of models based on past data to determine “What is going on here?” Several concepts are embedded in this theme, and the black swan is an example that comes to mind. No amount of data on white swans could ever be extrapolated to create a black one. But there is more to it than that. By way of parable: when learning to fly sailplanes, I, like most novice pilots, focused on the “small world” measurements provided by the instruments. However, I was unable to control the plane until my instructor made me focus on the “large world” by looking out the windshield. Only then did I learn to fly by the seat of my pants and assess what was going on. Given the choice between instruments and a windshield in an airplane, I would take the windshield hands down. But I prefer both, and coordinating them requires connecting the seat of the intellect to the seat of the pants.

Years later, when PCs became so fast that they could perform interactive simulation with thousands of calculations per keystroke, I discovered that interactive simulation could similarly provide a gut feel, a view out the windshield, for the underlying relationships being modeled. I refer to this approach as Limbic Analytics, because the limbic system is the structure that connects the reptilian brain (the seat of the pants) with the rest of the stuff between our ears (the seat of the intellect). John Sterman [iv] of MIT has also had great success in using interactive simulation to teach managers how to make better decisions in the face of uncertainty.

The real issue is expecting models to tell you What is Going on Here in the first place. Successful modeling is not a destination but a journey, in which an evolving family of models eventually “tell you something you didn’t tell them to tell you,” as consultant Jerry Brashear puts it. And at that point, if you are lucky, the modeling effort yields the right question, which may lead to What is Going on Here.

Decomposing large problems into smaller problems for which solutions are known or can be calculated

Kay and King contrast unsuccessful models in macroeconomics with the successful engineering models of aircraft and satellite trajectories. They describe how such models are solved through decomposition into smaller models. This is the issue that motivated the discipline of probability management. Deterministic models may be easily decomposed, because the numeric results of submodels may simply be aggregated using arithmetical operations. This is not true of models of uncertainty. It is common practice to perform arithmetic on the “averages” of the uncertainties, which famously leads to the Flaw of Averages. In probability management, uncertainties are represented as data that obey both the laws of arithmetic and the laws of probability [v]. The data elements, called SIPs (Stochastic Information Packets), are essentially arrays of Monte Carlo realizations, which may be operated on with vector arithmetic to add or multiply uncertainties together. For example, we could subtract the SIP of costs from the SIP of revenue to get the SIP of profit. The result is another array to which probabilistic operators may be applied: for example, the average profit is $1 million, or the chance that profit will be less than $800,000 is 30%.
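Here is a sketch of this SIP arithmetic in plain Python. The revenue and cost distributions below are made up for illustration; real SIPs would be read from a shared SIPmath library rather than generated on the fly:

```python
import random
import statistics

random.seed(0)
TRIALS = 10_000

# Hypothetical SIPs: arrays of Monte Carlo realizations of revenue and cost.
revenue_sip = [random.gauss(2_000_000, 400_000) for _ in range(TRIALS)]
cost_sip = [random.gauss(1_000_000, 200_000) for _ in range(TRIALS)]

# Vector arithmetic on SIPs: subtract trial by trial to get the SIP of profit.
profit_sip = [r - c for r, c in zip(revenue_sip, cost_sip)]

# Probabilistic operators applied to the resulting SIP.
avg_profit = statistics.mean(profit_sip)  # roughly $1,000,000
p_shortfall = sum(1 for p in profit_sip if p < 800_000) / TRIALS  # roughly a third
print(f"average profit: ${avg_profit:,.0f}")
print(f"P(profit < $800,000): {p_shortfall:.0%}")
```

Because the subtraction is done trial by trial, any statistical relationship between the inputs would be preserved automatically, which is exactly what arithmetic on averages throws away.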

The authors emphasize the need for a pluralism of models

Kay, King, and I completely agree on the impossibility of anyone building a macro model of the economy. Then again, no single person could build the real economy either. This explains the disaster of centrally planned economies and the success of decentralized ones. The authors call for a pluralism of models, which I refer to as decentralized modeling. Again, this is easy with deterministic models, but was nearly impossible with stochastic models before the open SIPmath Standard allowed SIP libraries generated by one model to be shared with many other models. Consider multiple business units of an integrated health care provider operating in the environment of an uncertain pandemic. One should be able to access SIP libraries of uncertain infection growth from any number of competing contagion models. These could then in theory drive the economic models of the business units, producing a second level of SIP libraries. Finally, these secondary libraries could feed a portfolio model that displayed the risks and returns of various combinations of business units.

[Figure: RadicalUncertainty4.png]

This allows multiple decentralized “small world” stochastic models, developed independently, to be aggregated into larger stochastic models. The traditional alternative is to aggregate stochastic models into large monolithic applications, which, like sandcastles, eventually collapse under their own weight. The decentralized approach is more like Lego blocks, in which individual blocks may be replaced as the world evolves. Will this approach take us all the way to “large world” models? I doubt it, but I have found that the narratives it drives are more compelling than narratives not based on models.
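To make the pipeline concrete, here is a toy sketch of the health care example. Every distribution, scaling factor, and business unit is hypothetical; the point is only the plumbing: one SIP library drives two downstream models, whose outputs are then aggregated trial by trial in a portfolio model:

```python
import random
import statistics

random.seed(1)
TRIALS = 5_000

# Stage 1 -- a hypothetical contagion model publishes a SIP library:
# a single array of Monte Carlo realizations of infection growth.
growth_sip = [random.lognormvariate(0.0, 0.3) for _ in range(TRIALS)]

# Stage 2 -- two hypothetical business-unit models consume the SAME trials,
# so their outputs stay coherently related to the shared driver.
clinic_sip = [500_000 * g for g in growth_sip]    # revenue rises with infections
elective_sip = [400_000 / g for g in growth_sip]  # revenue falls with infections

# Stage 3 -- a portfolio model aggregates the second-level SIPs trial by trial.
portfolio_sip = [c + e for c, e in zip(clinic_sip, elective_sip)]

# Because the units respond in opposite directions to the shared uncertainty,
# the portfolio is far less risky than either unit alone -- a natural hedge
# that would be invisible if each model were summarized by its average.
print(f"mean portfolio revenue: ${statistics.mean(portfolio_sip):,.0f}")
print(f"clinic risk (std dev):  ${statistics.stdev(clinic_sip):,.0f}")
print(f"portfolio risk:         ${statistics.stdev(portfolio_sip):,.0f}")
```

The design choice that makes this work is that every model reads the same trial indices from the shared library, so correlations survive the handoffs between independently built models.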

All Models are Wrong

The statistician George Box said that “All models are wrong, but some are useful.” Dwight Eisenhower, Supreme Allied Commander in WWII, said that “Plans are nothing; planning is everything.” I say that models are nothing; modeling is everything, because it will help you be more like William J. Perry and figure out what is going on here.

On a final note, after my father introduced the two proverbs quoted above, he went on to write: 

When two proverbs conflict in this way, it is proverbially true that there is some truth in both of them, but rarely, if ever, can their common truth be captured by a single pat proverb.

In spite of my father’s warning, I will nonetheless try.

The more options you have in place for crossing bridges before you come to them, the less looking you need to do before you leap.

References

[i] Taleb, Nassim (2007). The Black Swan. New York: Random House. p. 309. ISBN 1-4000-63

[ii] Kay, John & Mervyn King. Radical Uncertainty: Decision-Making Beyond the Numbers (p. 399). W. W. Norton & Company. Kindle Edition.

[iii] Savage, Sam L., “Statistical Analysis for the Masses,” in Statistics and Public Policy, Bruce D. Spencer (ed.), Clarendon Press, 1997.

[iv] http://jsterman.scripts.mit.edu/Management_Flight_Simulators_(MFS).html

[v] https://www.probabilitymanagement.org/s/Probability_Management_Part1s.pdf

© Copyright 2020, Sam L. Savage