Saturday, April 04, 2009

Boiling the O.C.E.A.N.

Metrics projects that are intended to consolidate and report on the state of security for an organization rarely fail for a lack of measures. Information technology systems, processes and projects all throw off an impressive amount of data that can be captured and counted. The Complete Guide to Security and Privacy Metrics suggests over 900 metrics, and NIST Special Publication (SP) 800-55 Rev. 1, Performance Measurement Guide for Information Security extends this analysis from the system level to an executive view by providing a framework for summarizing the results.

So given all of the measures, structure and guidance available, why is it so tough to be successful? The silent killer in this space is often a lack of focus: too many metrics, too much aggregation, and too little analysis connected to business problems and goals to provide useful insight.

Instead, it's better to start with the stakeholders and focus on fully understanding their goals and decisions, without limiting the conversation with assumptions about what is or isn't going to be measurable.

Consider this subset of stakeholders, and some of their goals:
Executive management – financial health and strategic direction of the organization. Are we profitable and are we executing effectively in the markets we serve?
Risk governance / Security management – are we keeping risk at an acceptable level? Are we making the best use of the security resources we have?
Line Management – are we achieving operational goals, and aligning with strategic initiatives?
These questions become an effective filter for removing the measures that don’t matter, and for finding common measures that, with analysis, can serve many different purposes. Here’s where it may be useful to classify measures from a stakeholder perspective in terms of the types of decisions that they enable:
Output measures – what is the primary deliverable from a given team?
Coverage measures – how many locations, systems or groups are covered by a given process or policy?
Exposure measures – what proportion of the environment stores or processes regulated information?
Activity measures – how many requests have been received during a given reporting period? Addressed?
Null measures – which teams have not provided data?
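The categories above can be sketched with a toy example. The team names and report fields here are hypothetical, just to show how one small set of submissions can yield coverage, activity, and null measures at once:

```python
# Toy illustration of measure categories from team-submitted reports.
# Team names and numbers are hypothetical.
teams = {
    "network":  {"systems_covered": 180, "systems_total": 200,
                 "requests_received": 40, "requests_addressed": 35},
    "endpoint": {"systems_covered": 950, "systems_total": 1000,
                 "requests_received": 120, "requests_addressed": 110},
    "appsec":   None,  # no data submitted this period
}

# Null measure: which teams have not provided data?
null_teams = [name for name, data in teams.items() if data is None]

# Coverage measure: share of systems covered, per reporting team.
coverage = {name: data["systems_covered"] / data["systems_total"]
            for name, data in teams.items() if data is not None}

# Activity measure: requests received but not yet addressed this period.
backlog = {name: data["requests_received"] - data["requests_addressed"]
           for name, data in teams.items() if data is not None}

print(null_teams)  # ['appsec']
print(coverage)    # {'network': 0.9, 'endpoint': 0.95}
print(backlog)     # {'network': 5, 'endpoint': 10}
```

Note that the null measure falls out of the same data structure as the others; the absence of a report is itself a data point.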

The last category is an important one, as it highlights the difference between a measure and a metric. A measure is an observation that increases your understanding about a situation and improves the quality of decision making; a metric is a standardized measurement. Inconsistent, incomplete and missing data from key teams or groups are an important measure of program maturity. Sometimes it’s what you can’t count that counts.

Above all else, resist the pressure to measure everything. A few well-chosen measures will allow for versatile and powerful analysis. There are literally dozens of ways to analyze and present a limited number of well-chosen data points. And when captured consistently over time, the correlations between seemingly unrelated activities offer the opportunity to surprise.


dearista said...

I can tell you're not into advice by your leading quote, but hopefully you're into dialogue :)

Per your comment:

"A measure is an observation that increases your understanding about a situation and improves the quality of decision making; a metric is a standardized measurement."

A measure is like saying there were 5 fires in my precinct. A metric usually factors two or more measures, e.g., there were 20% fewer fires this year than last, and we have 70% fewer fires than another precinct per capita. These other two metrics require measurements of historical fire incidents and population size. You get the idea.
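A minimal sketch of that arithmetic, with made-up fire counts and populations filled in where the example leaves them unstated:

```python
# dearista's example: metrics derived from raw measures.
# All numbers are hypothetical, chosen so the ratios come out as stated.
fires_this_year = 20       # measure: incidents this year
fires_last_year = 25       # measure: incidents last year
population      = 100_000  # measure: our precinct's population

other_fires      = 100     # measures for the comparison precinct
other_population = 150_000

# Metric 1: year-over-year change in fire count.
yoy_change = (fires_this_year - fires_last_year) / fires_last_year

# Metric 2: per-capita rate relative to the other precinct.
per_capita       = fires_this_year / population
other_per_capita = other_fires / other_population
relative         = per_capita / other_per_capita - 1

print(f"{yoy_change:.0%}", f"{relative:.0%}")  # -20% -70%
```

Each metric standardizes two or more measures into a comparison; the raw count of 5 (or 20) fires says nothing on its own.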

Jeff Reava said...

Hi dearista - the Phil Carret quote? I apply that to security decision making, investing and public policy.

My working distinction between measure and metric comes from Doug Hubbard (How to Measure Anything), Andrew Jaquith (Security Metrics) and dictionary definitions of both terms.

Is it sufficient? I'm not sure yet because they haven't failed in the situations where I've applied them. But that doesn't mean they're robust enough.

Per your example, I think the big question is "what is a fire?" A campfire? 5 acres burning? Fires remain the measure, but the number of acres burned (standard measure) becomes the metric.

Is this closer to where you were headed?