Preparedness Insanity: Why We Need To Think Differently About How To Measure Preparedness

Publication Type: Other Writing
Publication Date: July 6, 2015

By: Terry Hastings and Brian Nussbaum

Since 9/11, billions of federal grant dollars have been awarded to state and local governments, accompanied by an ongoing quest to measure preparedness. Congress, the Department of Homeland Security (DHS), the Federal Emergency Management Agency (FEMA), the media and the American public have all expressed a desire to understand precisely how prepared we are, and how prepared we need to be. Yet attempts to measure preparedness continue to fail, and homeland security officials still struggle to explain and assess preparedness.

It has been said that the definition of insanity is doing the same thing over and over and expecting a different result. This is exactly what has occurred with many of our national preparedness measurement efforts to date, as we have continued to seek a single system that computes preparedness from a series of data inputs. This flawed approach began with the ill-fated Cost-to-Capability program (launched in 2009), which aimed to measure the impact of grant dollars, and extends to the current approach used for the Threat and Hazard Identification and Risk Assessment (THIRA)/State Preparedness Report (SPR) requirements.

Like Cost-to-Capability, the THIRA/SPR relies on a series of data and information inputs to articulate preparedness levels. Both initiatives require a tremendous amount of time and effort for very little return on investment (ROI), yet we continue to charge down the path of a single solution, as each of these approaches seeks one system to measure everything. It is ironic that the very quantitative measures pursued in the name of analyzing ROI should themselves deliver such a limited return.

Measuring preparedness against the broad array of threats and hazards we face does not lend itself to simple, objective statistics in the way that measuring specific reductions in vehicle accidents, residential fire deaths or workplace injuries does. We need to think differently about how to measure preparedness, and this thinking must begin to embrace the notion of subjectivity.

When it comes to any sort of evaluation or assessment, subjectivity is generally viewed as a negative concept. Evaluations should be objective or based on a standard set of criteria. That is true when the topic of the evaluation is well defined or understood and there are solid metrics to consider. However, preparedness is a topic that lacks comprehensive standards or solid metrics, and “being prepared” often means different things to different people.

There are other fields in which complex concepts and risks are assessed using subjective but rigorous processes. For example, in the realm of cyber security – a notoriously tough area in which to find objective measures that can be compared across firms, sectors and jurisdictions – several pioneering information security experts have created a very well respected, but nonetheless subjective, tool for measurement.

Dan Geer (a computer security analyst and risk management specialist recognized for raising awareness of critical computer and network security issues before the risks were widely understood) and Mukul Pareek (a risk professional who has worked extensively in audit, advisory and risk management) have created the Index of Cyber Security, a survey-based tool in which experts in the field subjectively assess the changing levels of cyber risk. They argue, quite rightfully, that, "Subjectivity in determining an index does not erode credibility so long as transparency and consistency are maintained."
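To make the idea concrete, a survey-based index of this kind can be sketched in a few lines of code. The sketch below is purely illustrative and is not the authors' method or the actual methodology behind Geer and Pareek's index; the response categories, weights and sensitivity parameter are all hypothetical. It shows how subjective expert answers ("risk has risen", "risk is static", and so on) can still feed a transparent, consistently computed number.

```python
# Hypothetical sketch of a survey-based risk index. Each month a panel of
# experts reports whether they perceive risk as rising, static, or falling;
# the answers are aggregated into a diffusion-style score (50 = no net
# change) and chained into an index level. All names and weights here are
# illustrative assumptions, not the real Index of Cyber Security method.

RESPONSE_WEIGHTS = {
    "risen sharply": 1.0,   # strong perceived increase in risk
    "risen": 0.75,
    "static": 0.5,          # no perceived change
    "fallen": 0.25,
    "fallen sharply": 0.0,  # strong perceived decrease
}

def monthly_score(responses):
    """Diffusion-style score in [0, 100]; above 50 means net rising risk."""
    if not responses:
        raise ValueError("need at least one response")
    total = sum(RESPONSE_WEIGHTS[r] for r in responses)
    return 100.0 * total / len(responses)

def chain_index(base, scores, sensitivity=0.002):
    """Chain monthly scores into index levels starting from `base`.

    A score above 50 nudges the index up, below 50 nudges it down;
    `sensitivity` is an arbitrary modeling choice for this sketch.
    """
    level = base
    levels = []
    for s in scores:
        level *= 1.0 + sensitivity * (s - 50.0)
        levels.append(level)
    return levels

# Example: three months of hypothetical expert panels.
months = [
    ["risen", "risen", "static", "fallen"],          # mild net increase
    ["risen sharply", "risen", "risen", "static"],   # strong increase
    ["static", "static", "fallen", "fallen"],        # mild decrease
]
scores = [monthly_score(m) for m in months]          # [56.25, 75.0, 37.5]
index = chain_index(1000.0, scores)
```

The point of the sketch is Geer and Pareek's own: the inputs are subjective judgments, but the aggregation rule is published, fixed and repeatable, so readers can see exactly how the number was produced and compare it consistently over time.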

Read the full article at Homeland Security Today