Virtual Summit on Chance-Informed Readiness

in Defense, Pandemic Modeling, and Infrastructure

Wednesday March 30, 8:30 AM - 4:30 PM PDT
Just Added: Thursday March 31, 9:00 AM - 12:00 PM PDT

Description

Join the livestream as a select group of thought leaders gather in person to discuss chance-informed readiness across multiple disciplines. For a fee of $200, attendees will have the opportunity to participate in multiple Q&A sessions with speakers and will receive a copy of the Kindle version of Dr. Savage’s new book, Chancification: How to Fix the Flaw of Averages. The schedule, abstracts, and speaker bios are listed below.

Schedule (all times PDT)

Day 1: Wednesday March 30

Session 1: SIPmath Standard

8:30 AM - 9:00 AM: Chancification - Sam Savage
9:00 AM - 9:30 AM: Introduction to Metalog Distributions - Tom Keelin
9:30 AM - 10:00 AM: SIPmath Support in Analytic Solver - Dan Fylstra
10:00 AM - 10:15 AM: Q&A for Online Audience

Session 2: Defense

10:15 AM - 10:45 AM: From Ready or Not to How Ready for What - Connor McLemore
10:45 AM - 11:15 AM: Probability Management at Lockheed Martin - Phil Fahringer
11:15 AM - 11:45 AM: Chance-Informed Aircraft Fleet Management - Steve Roemerman
11:45 AM - 12:00 PM: Q&A for Online Audience

12:00 PM - 1:00 PM: Lunch Break

Session 3: Expert Opinion / Healthcare

1:00 PM - 1:30 PM: The FrankenSME: Synthesizing Expert Opinion - Doug Hubbard
1:30 PM - 2:00 PM: Making CDC Forecasts Actionable - Eng-Wee Ethan Yeo
2:00 PM - 2:30 PM: Building Chance-Informed Capability in Healthcare - Justin Schell
2:30 PM - 2:45 PM: Q&A for Online Audience

Session 4: Infrastructure

2:45 PM - 3:15 PM: Explaining Cyber Insurance to Municipalities - Shayne Kavanagh
3:15 PM - 3:45 PM: Stochastic Libraries in Infrastructure Planning and Risk Management - Sam Savage
3:45 PM - 4:15 PM: "Risk-Spend Efficiency"? How Utilities Can Use It - Max Henrion
4:15 PM - 4:30 PM: Q&A for Online Audience

Day 2: Thursday March 31

9:00 AM - 9:20 AM: Teaching Your Organization to Teach Itself - Sam Savage and Justin Schell
9:20 AM - 9:40 AM: Unleashing DCMA Data for Modeling & Readiness - Max Disla, Program Integrator, DCMA Boston Office
9:40 AM - 10:00 AM: A Data-Driven Framework for Readiness Management - Alejandro de la Puente, Chief Data Scientist in the Office of the Deputy Assistant Secretary of Defense for Force Readiness
10:00 AM - 10:20 AM: Putting the Squeeze on Python: How MS Excel is Flexing its Programming Muscles with Dynamic Arrays and Lambda Functions - Matthew Raphaelson
10:20 AM - 12:00 PM: Informal discussions

Abstracts (in order of appearance)

Sam Savage: Chancification

The discipline of probability management represents uncertainties as data structures called SIPs (Stochastic Information Packets) that obey both the laws of arithmetic and the laws of probability. This enables analysts with statistical training to estimate probability distributions for use by non-experts in decision analysis applications, much as experts generate electricity for use by non-experts in lightbulbs and other appliances.
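For readers new to the concept, here is a minimal sketch (illustrative only, with hypothetical numbers; not part of the SIPmath Standard itself) of why arrays of Monte Carlo trials obey the laws of arithmetic and probability: trial i of each array describes the same scenario, so the arrays can be combined element-wise, and any statistic of the result remains valid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two SIPs: arrays of Monte Carlo trials in which trial i of each
# array belongs to the same simulated scenario (hypothetical data).
demand = rng.lognormal(mean=3.0, sigma=0.4, size=10_000)   # units
price  = rng.normal(loc=20.0, scale=2.0, size=10_000)      # $/unit

# SIPs obey the laws of arithmetic: operate on them element-wise...
revenue = demand * price

# ...and the laws of probability: any statistic of the result is valid.
print("Mean revenue:            ", revenue.mean())
print("P(revenue < 300):        ", (revenue < 300).mean())
print("95th percentile revenue: ", np.percentile(revenue, 95))
```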

The current SIPmath Standard is based on storing potentially millions of Monte Carlo trials. It has now been enhanced by three complementary breakthroughs in simulation technology: the Metalog distribution from Tom Keelin, the HDR portable random number generator from Doug Hubbard, and the new Open SIPmath™ 3.0 Standard from ProbabilityManagement.org. Just as the shift from direct to alternating current in the 1890s led to widespread electrification, the increased efficiency of these open technologies could lead to widespread “Chancification,” providing organizations with a more coherent approach to modeling risks and opportunities.

Tom Keelin: Introduction to Metalog Distributions

Introduced in 2016, the metalog distributions are now the most flexible and easiest-to-use family of probability distributions. Metalogs make it simple to model expert assessments with a smooth, continuous distribution, simulate from it, and gain otherwise-unavailable insights from simulated or empirical data. With a single set of simple, closed-form equations, metalogs can displace traditional distributions (such as the Normal, Lognormal, Beta, Student t, and Gamma) by being more flexible, easier to use, easier to interpret, and faster to simulate. Metalogs allow users to choose among unbounded, semi-bounded, and bounded forms; are fit to data with ordinary least squares; and enable Bayesian inference in closed form. This presentation provides an introduction to the metalog system, including strengths, limitations, and practical-use guidelines.
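As a concrete illustration of the closed-form equations and ordinary-least-squares fit mentioned above, the sketch below implements the four-term unbounded metalog from Keelin (2016) on hypothetical data; it is a minimal sketch, omitting the bounded forms, higher-order terms, and feasibility checks of the full metalog system.

```python
import numpy as np

def metalog_basis(y, k=4):
    """First four metalog basis terms (Keelin 2016); higher terms omitted."""
    L = np.log(y / (1.0 - y))                       # logit of cumulative probability
    cols = [np.ones_like(y), L, (y - 0.5) * L, (y - 0.5)]
    return np.column_stack(cols[:k])

def metalog_fit(x, k=4):
    """Fit a k-term (k <= 4 here) unbounded metalog by ordinary least squares."""
    x = np.sort(np.asarray(x, dtype=float))
    y = (np.arange(1, len(x) + 1) - 0.5) / len(x)   # empirical probabilities
    coef, *_ = np.linalg.lstsq(metalog_basis(y, k), x, rcond=None)
    return coef

def metalog_quantile(coef, y):
    """Closed-form quantile function: plug in uniforms to simulate."""
    return metalog_basis(np.asarray(y), len(coef)) @ coef

rng = np.random.default_rng(1)
data = rng.gamma(shape=2.0, scale=3.0, size=500)          # hypothetical observations
a = metalog_fit(data)
samples = metalog_quantile(a, rng.uniform(size=10_000))   # fast simulation
```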

Dan Fylstra: SIPmath Support in Analytic Solver

Frontline Systems’ Analytic Solver add-in for Microsoft Excel, our RASON cloud platform and modeling language, and our Solver SDK for developers have each offered integrated support for the SIPmath 3.0 Standard from ProbabilityManagement.org since August 2021. This includes support for Doug Hubbard’s HDR random number generator; for Tom Keelin’s Metalog distributions, with a powerful facility to fit your data to the full family of bounded, semi-bounded, and unbounded Metalogs; and for the SIPmath 3.0 encoding of shared probability distributions. SIPmath distributions may be saved anywhere in tool-independent form as a file, but Analytic Solver also provides one-click cloud hosting of these distributions via the RASON service, where they are “first-class objects” that can be managed and governed. This presentation will review, with quick demos, the practical use of SIPmath and the 3.0 Standard in this software.

Connor McLemore: From Ready or Not to How Ready for What

Although the purpose of the Department of Defense (DoD) is broadly accepted to be “to provide ready and sustainable military forces to protect the nation’s vital interests,” the meaning of that statement rests largely on the definition of the word “ready.” Yet it is generally unclear what it means to be ready. Ready for what? How ready? By when? To address this problem, we recommend the DoD adopt a simple, interpretable, and actionable data framework using stochastic scenario libraries that permit calculation of the probabilities of military readiness for specified missions at uncertain future times, across unit types and military branches. The framework is based on the concept of auditable stochastic scenario libraries long in use in financial engineering and the insurance industry. If implemented by the military, it could allow mathematically coherent readiness estimates that better communicate “how ready for what” combinations of military assets are. Additional details can be found in our paper, “Military Readiness Modeling: Changing the Question from ‘Ready or Not?’ to ‘How Ready for What?’,” Military Operations Research, Vol. 26, No. 1 (2021).
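As a hedged illustration of the kind of calculation such a scenario library permits (the unit counts, availability rates, and mission requirement below are hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stochastic scenario library: rows are simulated future scenarios,
# columns are units; entries are True if the unit is mission-capable.
# (Hypothetical per-unit availability rates, for illustration only.)
n_scenarios, availability = 10_000, [0.85, 0.90, 0.70, 0.60, 0.80]
library = rng.random((n_scenarios, len(availability))) < availability

# "How ready for what?": probability that at least 3 of these units
# are simultaneously capable of a specified mission.
ready = library.sum(axis=1) >= 3
print(f"P(ready for mission): {ready.mean():.2%}")
```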

Phil Fahringer: Probability Management at Lockheed Martin

Lockheed Martin faces numerous forecasting and decision-making challenges that are vital to our business, our shareholders, and our customers: forecasting revenue, inventory requirements for the fielded equipment we support, maintenance requirements for that equipment, and production and sustainment costs, and deciding how best to invest in product and service improvements to maximize customer and business value from the available budget. All of these forecasts and decisions are layered with multiple levels of uncertainty around the assumptions and factors that influence the outcomes. Traditionally, we have relied on simple average values, hedged with some arbitrary level of additional uncertainty applied to the final outcomes. To be sure, we have applied Monte Carlo simulation in certain circumstances, such as inventory requirements analysis, but even this is heavily dependent on average assumptions.

Averages have prevailed primarily because they are well understood; easily computed, updated, managed, and incorporated into analysis; and easily explained, stored, and shared across the user community. It is well understood that using appropriate probability distributions for the underlying assumptions and critical factors in our forecasts and decision-making processes would yield superior insights. Unfortunately, probability distributions have had none of the ease-of-use characteristics of averages, and so they have not been adopted in many instances where they could add tremendous value. This is changing with the help of ProbabilityManagement.org and the vision of probability management. In this presentation my colleagues and I will profile several use cases where we are employing and expanding the application of probability management, and we will discuss the anticipated value of these efforts.
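To see why average-based plans mislead, consider a minimal, hypothetical inventory sketch: because holding and shortage costs are nonlinear in demand, the cost evaluated at the average demand understates the average of the costs, the Flaw of Averages in action.

```python
import numpy as np

rng = np.random.default_rng(3)

demand = rng.lognormal(mean=4.0, sigma=0.5, size=100_000)  # hypothetical units
stock = demand.mean()                                      # plan on the average

def cost(d, s, hold=2.0, short=10.0):
    """Asymmetric, nonlinear cost: leftover stock vs. unmet demand."""
    return hold * np.maximum(s - d, 0) + short * np.maximum(d - s, 0)

print("Cost at average demand:", cost(demand.mean(), stock))  # exactly zero
print("Average of costs:      ", cost(demand, stock).mean())  # the real number
```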

Steve Roemerman: Chance-Informed Aircraft Fleet Management

Managing a fleet of assets requires a difficult blend of deterministic and probabilistic thinking. It might be possible to agree that 15% of an aging fleet will need engine replacement next year, but it is very difficult to correctly choose which specific aircraft should be overhauled. The difficulty lies in the high dimensionality of the problem: deployment needs, unplanned failures, aircraft upgrade plans, congressional whims, and aircraft retirement plans. At the end of the F-111 program, the USAF flew newly upgraded Aardvarks from Grumman straight to AMARG (the Aerospace Maintenance and Regeneration Group), “the boneyard.” Careful study concluded that prior USAF decisions had made this the most economical alternative. Sadly, such problems are not isolated to the F-111 or the Air Force.

This presentation will discuss a “chance-informed” approach, with illustrations from several Navy efforts, primarily the management of the F-18 fleet. The benefits of considering the full span of uncertainty, as opposed to point estimates, will be discussed, including improved accuracy, better risk information for decision making, reduced human bias, and reduced organizational conflict.
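A hypothetical sketch of the underlying difficulty: even when the fleet-wide replacement rate is agreed to be 15%, per-aircraft failure probabilities vary, so a chance-informed ranking can differ sharply from a deterministic quota (all figures below are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical fleet: per-aircraft engine-failure probabilities for next year.
p_fail = rng.beta(a=1.5, b=8.5, size=200)        # mean ~0.15 (the agreed "15%")
print("Expected fleet failures:", p_fail.sum())  # ~30 of 200 aircraft

# Deterministic view: overhaul "15% of the fleet," chosen arbitrarily.
# Chance-informed view: overhaul each aircraft whose expected avoided
# failure cost exceeds the cost of the overhaul itself.
overhaul_cost, failure_cost = 1.0, 6.0           # relative cost units
worth_overhaul = p_fail * failure_cost > overhaul_cost
print("Aircraft worth overhauling:", worth_overhaul.sum())
```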

Doug Hubbard: The FrankenSME: Synthesizing Expert Opinion

How do you combine the estimates of multiple experts? How do you measure the performance of individuals, teams, and methods in estimating and decision making? How do we know what works? The most popular methods of team estimation are worse than simply averaging the individual estimates or picking the best individual estimator. But improved algorithmic and behavioral methods exist that can make the estimates and decisions of a team better than those of its best individual.

Using over 60,000 trivia question responses gathered from training individuals to estimate probabilities, over 30,000 estimates of cybersecurity risks, and a review of extensive previous research, we analyze the performance of various methods for combining estimates of teams of individuals.

We build on this to propose an approach to the most important metrics, which are almost never measured: the performance and ROI of decision-making methods and of metrics programs themselves.
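One widely studied algorithmic approach, shown here as a hedged sketch rather than the specific method of the talk, is to average expert probabilities in log-odds space instead of averaging the probabilities directly, which pushes the combined estimate away from 50% when experts agree:

```python
import numpy as np

def mean_log_odds(probs):
    """Combine expert probability estimates by averaging their log-odds."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-6, 1 - 1e-6)
    z = np.mean(np.log(p / (1 - p)))   # average in log-odds space
    return 1 / (1 + np.exp(-z))        # back to a probability

experts = [0.70, 0.80, 0.75]                         # hypothetical estimates
print("Simple average:  ", np.mean(experts))         # 0.75
print("Log-odds average:", mean_log_odds(experts))   # ~0.752, slightly more extreme
```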

Eng-Wee Ethan Yeo: Making CDC Forecasts Actionable

Managing healthcare resources under uncertain COVID-19 surges is difficult, and it is tempting to plan for surges in demand based on the averages or best guesses of contagion forecast models. Unfortunately, this leads to the systematic errors induced by the Flaw of Averages. This presentation provides the impetus, and an approach, for transforming the uncertainty in the CDC’s daily COVID-19 hospitalization forecasts into actionable data for chance-informed healthcare resource decisions.
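As an illustrative sketch (the newsvendor-style logic and all numbers below are assumptions, not necessarily the presenter’s approach): when the cost of under-capacity far exceeds the cost of over-capacity, a chance-informed plan covers a high percentile of the forecast distribution rather than its mean.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for a probabilistic hospitalization forecast (hypothetical trials).
hospitalizations = rng.lognormal(mean=5.0, sigma=0.6, size=10_000)

# Newsvendor logic: cover the critical fractile Cu / (Cu + Co),
# where Cu = cost per unmet bed-day and Co = cost per idle bed-day.
Cu, Co = 9.0, 1.0
fractile = Cu / (Cu + Co)   # 0.90

print("Plan on the mean:    ", hospitalizations.mean())
print("Chance-informed plan:", np.quantile(hospitalizations, fractile))
```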

Justin Schell: Building Chance-Informed Capability in Healthcare

Healthcare organizations offer a unique opportunity to build a robust chance-informed capability. Skill sets in actuarial science, statistics, and intervention-based decision making abound in healthcare organizations. All that is needed is a program that harnesses these skills with the right tools, analysis, and language to make chance-informed decisions intuitive to leaders. This presentation will walk through how healthcare organizations can tap their talent and begin to build their own chance-informed capabilities.

Shayne Kavanagh: Explaining Cyber Insurance to Municipalities

Cyberattacks are a clear and present danger for all organizations, but local governments are particularly vulnerable. A 2020 study showed that local governments are more likely to be the target of a ransomware attack than any other kind of organization: 44% of ransomware attacks targeted local governments in 2020, a proportion similar to 2019. Cyberattacks are also expensive. Cities like Atlanta and Baltimore have made headlines for the extreme cost of a cyberattack; each is reported to have incurred over $15 million, including data recovery costs, downtime, and lost revenue. These potentially extreme consequences have caused many local governments to turn to cyber insurance: given the potential losses from an attack, transferring the risk to the insurance market can be an attractive proposition. However, cyber insurance is a relatively new and potentially unstable risk management tool. The purpose of this session is to demonstrate a model that helps local governments weigh their options for spending on security controls, self-insurance, and/or commercial insurance.
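A minimal sketch of the kind of model in question (every probability, loss parameter, and price below is hypothetical): simulate annual cyber losses, then compare the resulting total-cost distributions under self-insurance, added security controls, and a commercial policy.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

def annual_loss(p_attack, size=n):
    """Hypothetical annual loss: chance of an attack times a heavy-tailed cost."""
    attack = rng.random(size) < p_attack
    return attack * rng.lognormal(mean=14.0, sigma=1.0, size=size)  # dollars

base = annual_loss(p_attack=0.30)                # self-insure, no controls
controls = annual_loss(p_attack=0.15) + 250_000  # controls cut attack odds, cost $250k
# Commercial policy: premium plus retained deductible; insurer pays up to the limit.
premium, deductible, limit = 150_000, 100_000, 5_000_000
insured = premium + np.minimum(base, deductible) + np.maximum(base - (deductible + limit), 0)

for name, c in [("Self-insure", base), ("Controls", controls), ("Insurance", insured)]:
    print(f"{name:12s} mean ${c.mean():>12,.0f}   99th pct ${np.percentile(c, 99):>12,.0f}")
```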

Sam Savage: Stochastic Libraries in Infrastructure Planning and Risk Management

Stochastic libraries of cost and schedule information can assist in project planning, especially when performance clauses impose nonlinear penalties or rewards. Libraries of risk-event outcomes, both pre- and post-mitigation, allow risk to be aggregated across a large portfolio of infrastructure assets.
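In sketch form (all occurrence rates and severities below are hypothetical): storing each risk event’s simulated outcomes as trial arrays lets the portfolio total be computed trial-by-trial, so tail statistics that summary numbers would lose remain available.

```python
import numpy as np

rng = np.random.default_rng(7)
trials, assets = 10_000, 50

# Library of risk-event outcomes per asset, pre- and post-mitigation
# (hypothetical: occurrence probability times a random severity, in $M).
occurs = rng.random((trials, assets)) < 0.05
severity = rng.gamma(shape=2.0, scale=1.5, size=(trials, assets))
pre = occurs * severity
post = occurs * severity * 0.4   # mitigation cuts severity by 60%

# Aggregate across the portfolio trial-by-trial, then read off any statistic.
for name, port in [("Pre-mitigation ", pre.sum(axis=1)), ("Post-mitigation", post.sum(axis=1))]:
    print(f"{name}: mean ${port.mean():.1f}M, P(total > $12M) = {(port > 12).mean():.1%}")
```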

Max Henrion: "Risk-Spend Efficiency"? How Utilities Can Use It

Some California utilities have been in the headlines for all the wrong reasons, including the 2010 San Bruno gas pipeline explosion that killed eight people; the 2015-16 leak from the Aliso Canyon gas storage facility that released over 100,000 tonnes of gas; and the 2018 Camp Fire, sparked by a faulty transmission line, which destroyed the town of Paradise and led to PG&E’s bankruptcy, conviction for manslaughter, and a $13 billion settlement. The California Public Utilities Commission requires utilities to systematically assess all the risks they manage, including gas explosions, wildfires, and cyberattacks. Its framework includes risk-spend efficiency (RSE), a metric for prioritizing mitigations by how much risk they reduce per dollar spent, using a multi-attribute value function to combine outcomes including fatalities, reliability, and cost. I’ll review how utilities are using this scheme, along with some improvements suggested by Sam Savage, myself, and others. I’ll also discuss how RSE and related ideas could help other organizations that must manage and mitigate a wide variety of risks.
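In sketch form, RSE is the risk reduction, as measured by the multi-attribute value function, divided by the mitigation’s cost; a portfolio of mitigations can then be prioritized by sorting on it (the mitigations and scores below are hypothetical):

```python
# Hypothetical mitigations: multi-attribute risk scores before/after, and cost.
# The risk score stands in for a weighted combination of outcomes
# (safety, reliability, financial) from the value function.
mitigations = [
    # (name, risk_before, risk_after, cost in $M)
    ("Replace aging pipeline segment", 120.0, 40.0, 25.0),
    ("Vegetation management program",   90.0, 30.0, 10.0),
    ("Grid cybersecurity upgrade",      60.0, 35.0,  5.0),
]

# Risk-spend efficiency: risk reduced per dollar spent.
ranked = sorted(mitigations, key=lambda m: (m[1] - m[2]) / m[3], reverse=True)
for name, before, after, cost in ranked:
    print(f"{name:35s} RSE = {(before - after) / cost:5.2f} per $M")
```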

Speaker Bios