What Gets Measured, Gets Done - The Long and Winding Road of Preparedness Measurement

For the state and local organizations that have been involved in federal efforts, or efforts of their own, to measure preparedness, that task is not taken lightly.  There have been many top-down, federally directed efforts to measure preparedness.  Most of these were initiated at the direction of Congress or the Government Accountability Office (GAO) in an effort to determine the nation’s current state of preparedness, the gaps in that preparedness, and the effectiveness of federally funded preparedness programs.

These efforts have been undertaken in good faith: those who have been given responsibility for preparedness programs care about the preparedness of the nation and want to improve those programs, not impose unreasonable work requirements on those participating in preparedness assessments.

However, that does not mean that these efforts have not been burdensome.  The measurement pendulum has swung widely between what should be assessed and how to assess it (subjectivity vs. objectivity), and the result has been a half dozen or so approaches that: (1) have yielded very little in answering the question “How prepared is the nation?”; and (2) have produced a great deal of frustration.  That frustration is shared by those – from Congress to the Executive Branch, and from the police station to the firehouse – who are most involved in the assessment process.  In short, although a great amount of effort has been expended, these assessment efforts have still not produced a comprehensive report on National Homeland Security Preparedness.

In addition to problems with the data itself, analysis efforts have been lacking.  In fact, the analytic efforts to provide something useful to the federal government, and back to the states and local communities involved, have only just begun.  Even if all the data provided were reliable, the current system of self-assessment is a static process that is not comprehensive by any definition and therefore provides only a brief snapshot – one that quickly fades – of the current state of preparedness.

The What, How, and Why of Measurement Parameters

The field of preparedness measurement, however, is rapidly evolving.  While self-assessment will continue to be a part of the measurement framework, other ideas and methods are being developed to tackle the seemingly intractable problems of what to assess, and how.  Many current discussions are grounded in three simple principles: (1) measure only what really matters; (2) what is measured should be just as relevant and meaningful to operational personnel as it is to political leaders; and (3) how it is measured should lead to an understanding of predicted performance, not simply produce an exhaustive inventory of operational assets and activities undertaken.

Adherence to these principles should lead to a preparedness assessment process that does not attempt to measure everything.  Measurement should focus primarily, if not exclusively, on the critical enabling capabilities and, within that broad field, only on the key indicators of performance – with special attention to those capabilities that are not regularly used or practiced.

Risk analysis can help focus the determination of the specific critical capabilities that an area may need.  It seems clear, however, that without certain general capabilities as defined in the Target Capabilities List (TCL) – e.g., incident management, planning, communications, and information sharing – having specific site- and/or team-based capabilities may not matter.  If the sophisticated teams involved in a large-scale response are unable to communicate, they also may be unable to operate effectively during the incident.  Efforts therefore should focus on reaching consensus on the subset of “make it or break it” capabilities that the nation needs and that will, at a minimum, have jurisdictions prepared not to fail.  Tightly focusing measurement on the most critical activities creates an opportunity for a comprehensive national approach to preparedness measurement that is not only meaningful but manageable as well.
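
To make the idea concrete, the following is a minimal sketch of how a risk-weighted prioritization might work.  The capability names, weights, and scores are entirely hypothetical placeholders; actual capability sets come from the TCL, and actual weights would come from a jurisdiction’s own risk analysis.

```python
# Hypothetical sketch: rank capabilities by risk-weighted criticality,
# favoring capabilities that matter most but are exercised least.
# All names and numbers below are illustrative, not official values.

# Each entry: (capability, risk_relevance 0-1, exercise_frequency 0-1)
capabilities = [
    ("Incident Management", 0.95, 0.40),
    ("Communications",      0.90, 0.35),
    ("Information Sharing", 0.85, 0.30),
    ("Mass Care",           0.60, 0.55),
    ("HazMat Response",     0.50, 0.70),
]

def priority_score(risk_relevance: float, exercise_frequency: float) -> float:
    """Score higher the capabilities that matter most and are practiced least."""
    return risk_relevance * (1.0 - exercise_frequency)

ranked = sorted(capabilities,
                key=lambda c: priority_score(c[1], c[2]),
                reverse=True)

for name, risk, freq in ranked:
    print(f"{name:22s} priority = {priority_score(risk, freq):.2f}")
```

The design choice in the scoring function mirrors the principle above: a capability that is highly risk-relevant but rarely practiced rises to the top of the measurement agenda.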

Three Approaches, Capability Models, and Meaningful Evaluations

Comprehensive preparedness measurement should take advantage of three approaches: self-assessments, quantitative measurement, and performance-based evaluations. The first approach, self-assessments, can generate very important and useful data because the information developed comes from those who know their individual circumstances best. It should be recognized, though, that there have been several problems with self-assessments in the past – overly burdensome tools, for example, as well as tight time frames, unreliable technology, inadequate guidance, and “gaming” of the assessments – all of which resulted in, at best, questionable results. In order for reliable, accurate, and useful data to be generated, capability assessments must, first of all, be meaningful to those being assessed. The goal of future self-assessments and the collection of data must therefore be to support operational planning and programmatic decision-making at the level of those who are being assessed.

Quantitative capability models can be developed both to assist with planning and resource allocation and to help determine capability gaps. Such models can provide an independent baseline estimate – based upon national averages, demographic information, and risk criteria – of the levels of capability required for a given jurisdiction.  The same models can use quantitative data to inform investment decisions: (a) by determining the scalability of a capability to a given scenario, thus generating capability calculators; and (b) by estimating the full life-cycle costs of achieving a given level of a particular capability, identifying capability gains from investments, and optimizing the placement of new operational teams and capacity at all levels.
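
A minimal sketch of such a model appears below.  It assumes a simple staffing-rate baseline scaled by a risk index, plus a life-cycle cost projection for closing the resulting gap; every rate, cost, and formula here is a hypothetical placeholder rather than an official DHS model.

```python
# Hypothetical quantitative capability model: estimate required capability
# from demographics and risk, then project life-cycle cost of the gap.
# All rates and costs are illustrative assumptions.

def required_teams(population: int, risk_index: float,
                   national_rate_per_100k: float = 0.5) -> int:
    """Baseline estimate: a national average staffing rate, scaled by risk."""
    baseline = population / 100_000 * national_rate_per_100k
    return round(baseline * risk_index)

def life_cycle_cost(teams_needed: int, teams_on_hand: int,
                    startup_cost: float, annual_cost: float,
                    years: int = 10) -> float:
    """Full life-cycle cost of the capability gap: acquisition plus sustainment."""
    gap = max(teams_needed - teams_on_hand, 0)
    return gap * (startup_cost + annual_cost * years)

# Example: a higher-risk jurisdiction of 1.2 million residents.
needed = required_teams(population=1_200_000, risk_index=1.4)
cost = life_cycle_cost(needed, teams_on_hand=4,
                       startup_cost=750_000, annual_cost=200_000)
print(f"Teams required: {needed}; 10-year cost to close the gap: ${cost:,.0f}")
```

Even a toy model of this kind illustrates the payoff the paragraph describes: an independent baseline against which a jurisdiction’s self-reported capability, and the cost of any investment to close the gap, can be compared.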

The evaluation of exercises and real-world events should be used to assess actual performance.  An effective performance-testing program at the national level would not only gather consistent data but also analyze after-action reports to determine what happened (and why it happened), and compare findings across different exercises and events to identify trends and common points of failure.  Moreover, it would assess holistically how capabilities integrate both horizontally and vertically. Past experience – Hurricane Katrina is perhaps the best example – has shown that the national response system frequently breaks down in complex events at the horizontal and vertical seams between capabilities.  Consequently, performance evaluations must include the analysis not only of individual capabilities themselves but also of the connections across capabilities as a portfolio.  All of this should be done not in an effort to judge or cast blame but, rather, to understand priority issues as quickly and directly as possible.
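
The kind of cross-event trend analysis described above can be illustrated with a short sketch.  The report records and capability labels below are hypothetical; a real program would draw on a standardized national corpus of after-action reports.

```python
# Hypothetical sketch: finding common points of failure across
# after-action reports from multiple exercises and real-world events.

from collections import Counter

# Each after-action report reduced to the capability areas observed to fail.
after_action_reports = [
    {"event": "Hurricane exercise 2023", "failures": ["Communications", "Logistics"]},
    {"event": "Rail incident 2023",      "failures": ["Communications", "Incident Command"]},
    {"event": "Flood response 2024",     "failures": ["Logistics", "Public Information"]},
]

# Count how often each capability area failed across all events.
failure_counts = Counter(
    failure
    for report in after_action_reports
    for failure in report["failures"]
)

# A capability that fails across independent events suggests a systemic
# seam in the national response system, not a one-off local problem.
for capability, count in failure_counts.most_common():
    print(f"{capability}: failed in {count} of {len(after_action_reports)} events")
```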

Without a comprehensive approach for measuring preparedness, the nation will continue to struggle to understand the current state of preparedness across all regions, and for all hazards.  Some areas may over-prepare relative to their true risk-based capability needs; others may under-prepare; and still others may prepare for the wrong things altogether. 

Timothy Beres

Timothy Beres is Vice President and Director of the Safety and Security Division of CNA, a not-for-profit research organization. Prior to joining CNA, he held senior leadership positions in the Department of Homeland Security and the Department of Justice. He received a Bachelor of Arts degree from Virginia Tech, is a public speaker, and has authored numerous articles in the field of homeland security. In 2005, he received the National Grants Management Association's Distinguished Service Award.
