What defines success in the age of COVID-19?
For most, this is a simple question with an even simpler answer – preventing lives lost.
Until you consider that a certain portion of the population, once infected with COVID-19, faces dim prospects regardless of how much intervention or preventive care they received ahead of time.
So you switch your metric for success from preventing lives lost to preventing unnecessary lives lost. But in adding just one word, you have turned a metric for success into a point of reference – what is unnecessary for one person is necessary for another. A mandate for masks and vaccines may be acceptable to one portion of the population, but an abhorrent invasion of personal liberties to another. No matter how you distill the argument, narrow it or expand upon it, the argument hinges on a subjective interpretation of the word, unnecessary.
The qualifier becomes the focal point of the metric, and as you go on evaluating the utility of the metric from different perspectives, you eventually realize the focal point becomes the metric itself.
Success, it follows, lies in agreeing upon an understanding of the term unnecessary. In the simplest sense, you can divide anything into two categories, necessary and unnecessary. Some furniture is necessary, and some is unnecessary – some arguments are necessary, and some are unnecessary. But this breakdown becomes overly simplistic for more complex concepts.
In quantitative finance, analysts view risk as a combination of systematic risk and unsystematic risk. Systematic risk is uncontrollable by nature, whereas unsystematic risk is controllable. Systematic risk comes from long term, market-wide forces that are unresponsive to short term fluctuations. Unsystematic risks are caused by factors specific to an individual asset, which can be controlled or reduced in a relatively short time. And last, but most importantly, systematic risk can only be mitigated, never eliminated, whereas unsystematic risk can be all but eliminated outright through a well-diversified investment strategy.
And the ratio of systematic risk to unsystematic risk depends upon the amount of risk you are willing to assume. In the financial world, risk is a ratio.
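The diversification principle behind this distinction can be sketched numerically. A minimal illustration, not a real pricing model: it assumes every asset carries the same shared market (systematic) variance plus its own idiosyncratic (unsystematic) variance, and the numbers are hypothetical.

```python
# Hedged sketch: hypothetical variances, equally weighted portfolio of
# identical, independent assets. Not investment advice or a real model.
sigma_market = 0.04   # systematic (market) variance, shared by all assets
sigma_idio = 0.10     # idiosyncratic (unsystematic) variance, per asset

def portfolio_variance(n_assets):
    """Variance of an equally weighted portfolio of n such assets."""
    # The systematic component is common to every asset, so it never
    # shrinks; the idiosyncratic component averages away as 1/n.
    return sigma_market + sigma_idio / n_assets

for n in (1, 10, 100):
    print(n, round(portfolio_variance(n), 4))
```

As the portfolio grows, total variance falls toward the systematic floor of 0.04 – the unsystematic portion is diversified away, but the systematic portion can only ever be managed, not removed.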
Applying this analogy to our current topic, we start to envision COVID-19 success as a ratio as well – minimizing the number of nonpreventable deaths while eliminating all unnecessary deaths. Or, stated more broadly, the ratio of unavoidable adverse outcomes, which we can only minimize, to unnecessary adverse outcomes, which we should eliminate.
This all may seem unnecessarily complex, even frivolous, until you start to delve into the details of how we have attributed success to COVID-19 interventions in the past. Whether it was positivity rates, or true positivity rates, or death rates, or excess death rates – the data has always been subject to interpretation, and any data point that appears to project success can just as easily project failure when viewed in another context. Data is fundamentally incomplete in this regard. And the continued reliance on data alone to measure success will simply churn the wheels of past incompetence forward into the future. Behavioral economists call this false attribution.
Defining success and failure in an age of false attribution has become more of an art than a science of late. Art is subjective and open for debate, with no one argument truly prevailing as every argument is inherently a matter of personal preference. Science begins subjectively, and encourages debate, but eventually the arguments are weighed against each other through an objective framework, and from that framework, one leading argument generally prevails.
Again, applying this logic to our current conversation, we can define metrics for success as subjective ratios that are then benchmarked against one another, with the objectivity lying in the relationship between the subjective metrics. As long as we agree on how we compare the ratios, we should have no shortage of objective metrics for success that we can use to glean aberrations and trends. We can customize success metrics for targeted, localized populations based on the prevalence or incidence of COVID-19 cases, determining focused policy changes for select regions to address safety concerns unique to each region – the principles of effective, targeted herd immunity espoused in the Great Barrington Declaration.
Most of the models project three key variables impacting COVID-19 incidence and mortality: the use of face masks, compliance with social distancing mandates, and the use of vaccines. And sensitivity analyses of these models adjust the variables broadly across large geographic regions instead of locally, where they would shift with the underlying demographic changes that take place as you move from town to town or county to county. As a result, the projections smooth out inherent fluctuations – which is one of the reasons the long term projections were wrong.
A problem remedied by creating shorter projections and selectively aggregating them, which is what the CDC has done in recent months – creating one month projections by selectively aggregating measures from different models. This can plausibly improve accuracy and help us define success relative to a forecasting standard. But it also transfers the subjectivity from any one model to the process through which all the models are aggregated.
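To see where that subjectivity re-enters, consider a minimal sketch of aggregation – not the CDC's actual method, and with hypothetical model names and figures. The choice of aggregation rule (median, mean, weighted average) is itself a subjective decision that shapes the headline number:

```python
import statistics

# Hedged sketch: hypothetical one-month death projections from four
# hypothetical models. Not real data or the CDC's actual procedure.
projections = {
    "model_a": 12000,
    "model_b": 15000,
    "model_c": 13500,
    "model_d": 21000,  # an outlier model
}

# A median is one simple aggregation rule that dampens outliers;
# a mean would let model_d pull the estimate substantially higher.
ensemble = statistics.median(projections.values())
print(ensemble)  # 14250.0
```

Swapping the median for a mean here yields 15,375 – a different "objective" forecast from identical inputs, which is exactly the transferred subjectivity described above.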
We recommend a different approach. Rather than aggregating data, we recommend referencing different data sets against one another, creating an entire ensemble of ratios that can be benchmarked against one another. This way, data from less accurate sources will produce ratios that are more variable – ratios that can then be normalized against more accurate data by discounting the inherent volatility. If there is systematic volatility across all the ratios, then we can predict a pending trend or change in COVID-19 and adjust our interventions – and our definition of success – accordingly. If there is unique volatility in just one ratio, or a small subset of ratios, then we can identify factors unique to those ratios that may be causing the aberrations.
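The flagging step in this ratio-ensemble idea can be sketched in a few lines. A minimal illustration with hypothetical region names and numbers: compute each region's ratio volatility over time, then flag any ratio whose volatility stands far apart from the ensemble's norm.

```python
import statistics

# Hedged sketch: hypothetical weekly "success ratios" per region
# (illustrative values only, not real surveillance data).
ratios = {
    "region_a": [0.80, 0.82, 0.79, 0.81],
    "region_b": [0.75, 0.77, 0.76, 0.74],
    "region_c": [0.70, 0.40, 0.85, 0.55],  # unusually volatile
}

# Volatility of each region's ratio series over the observed weeks.
vols = {region: statistics.stdev(series) for region, series in ratios.items()}
median_vol = statistics.median(vols.values())

# Unique volatility: ratios far above the ensemble norm warrant a check
# for data errors or a genuine local outbreak. The 3x threshold is an
# arbitrary illustrative cutoff.
flagged = [region for region, v in vols.items() if v > 3 * median_vol]
print(flagged)  # ['region_c']
```

If instead every series showed elevated volatility, no single region would stand out against the median – the systematic case, signaling a pending change in the pandemic itself rather than a local aberration.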
Are there errors in the data underlying the ratios? Is the region of concern represented by the ratios uniquely changing due to a COVID-19 outbreak?
Critical questions that can target our responses and effectively allocate resources. A concern the Institute for Health Metrics and Evaluation highlighted back in October when projecting the demand for ICU medical equipment in the months of December, January, and February.
A concern that, when it arose in the past, led to a blame game of 'too much' or 'too little' equipment allocated, which devolved into political banter on media outlets across the country. Governors complained the federal administration provided too few resources, and federal officials said governors were inflating demand. A concern that can be resolved by referencing data sets against one another among localized populations to trend variations in ratios across fundamentally different local populations – determining where true demand surges exist, and the extent of the excess demand – while taking into account baseline differences among those populations.
Ratios are powerful instruments that allow us to diversify away extraneous information that may bias our thinking, while surfacing trends that we otherwise may not glean. The Italian economist Vilfredo Pareto understood this when he developed his famous 80-20 rule. But Pareto never framed the ratio as the static relationship we perceive it to be; rather, he envisioned the ratio as dynamic, ever changing, but changing through that fixed proportion. Which is how we should structure the data sets when we create ratios that model localized COVID-19 trends – unique to each population, but changing in a consistent manner relative to changing trends in the pandemic.
Creativity is generally considered a useless trait unless it appears in abundance. Then we call it genius. Applying a limited number of success metrics will produce incomplete definitions of success that will be subject to endless debate with fundamentally incomplete, self-serving arguments. Integrating a model of success that incorporates a vast number of success metrics structured against one another will produce a comprehensive database of information that we can evaluate together – and use to decide upon an optimal course of action specific to the unique populations throughout the country.
Hopefully we all can call that a success.