Friday, April 24, 2009

Security policy pest control: Exterminate weasel words

Do your security policies suffer from an infestation of “weasel words”? If so, they need to be captured and destroyed. If that seems inhumane, they can also be recycled and sold to professional politicians, United States Federal Reserve chairmen, or used in ready-to-make waffle mix.

What are weasel words, why don’t they belong in a security policy, and why are they associated with “waffling”? In the information security policy space, weasel words fall into two basic categories: undefined terms and inherently vague phrases. For example:

Undefined terms:
“Shall be limited to authorized personnel…”
“…only IT-approved software may be installed”
“…must be restricted.”

Inherently vague phrases:
“…where possible…”
“…where feasible…”
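
Because the common offenders form a short, recognizable list, even a crude scan can flag them during policy review. Here is a minimal sketch in Python; the phrase list is illustrative rather than exhaustive, and a reviewer still has to judge every hit in context.

```python
"""A minimal sketch of a policy "pest scan": flag lines that contain common
weasel phrases. The phrase list is illustrative, not exhaustive."""

import re

WEASEL_PHRASES = [
    "authorized personnel", "it-approved", "must be restricted",
    "where possible", "where feasible", "as appropriate",
]
pattern = re.compile("|".join(map(re.escape, WEASEL_PHRASES)), re.IGNORECASE)

policy = [
    "Access to the data center shall be limited to authorized personnel.",
    "Backups must be encrypted with AES-256 and tested quarterly.",
    "Logs will be reviewed where feasible.",
]

for number, line in enumerate(policy, start=1):
    for match in pattern.finditer(line):
        print(f"line {number}: weasel phrase {match.group(0)!r}; define it or remove it")
```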

So what’s the problem? Left unchecked, weasel words weaken an information security program by:
1. Generating an excessive number of consulting requests for the security team. Scarce analyst time is consumed answering questions about the meaning of security requirements instead of advising on how to implement them.
2. Creating uncertainty for functional teams. If the requirements aren’t clear, team leaders won’t know how to prepare for audits, or how they will fare when examined, because the boundaries of compliance are undefined.
3. Allowing inconsistent implementation of security controls. Unspecified requirements are not requirements: the phrases used must constrain action in some way. Otherwise, you’ll see 20 different interpretations of each requirement, and no consistency across organizational boundaries. And as the 2008 and 2009 Verizon Data Breach Investigations Reports and the recent Joint Strike Fighter intrusion show, successful attacks gravitate to the areas of weakest security.
4. Leading to weak enforcement. What is the boundary between authorized and unauthorized? Where and how is IT approval granted? Without specifics, enforcement isn’t possible.
5. Causing ineffective reporting. If there isn’t a clear threshold for when a requirement is “met” or “not met,” then how can you report on the state of security? If each control allows for a wide span of interpretation, a list of “met” controls doesn’t tell the whole story. One caveat here: Fusion Risk Management has a great solution to this issue; when assessing current implementations, their processes assign a maturity level to each control implementation, which gives greater context than a simple “met” or “not met.” But even in this setting, defined thresholds separate each level of maturity, and that is the key to visibility and continuous improvement (a generic sketch follows below).
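
To illustrate the difference, without speaking for Fusion’s actual model, here is a minimal sketch of maturity-level reporting next to the binary view it replaces. The five-level scale, the controls, and the threshold are all invented for this example.

```python
"""A generic sketch of maturity-level reporting versus a binary met/not-met
view. The scale, thresholds, and assessment results are invented examples,
not Fusion Risk Management's actual model."""

MATURITY = {  # defined thresholds separate each level
    0: "not performed",
    1: "ad hoc",
    2: "documented",
    3: "consistently implemented",
    4: "measured",
    5: "optimizing",
}

controls = {  # hypothetical assessment results
    "access reviews": 2,
    "patch management": 4,
    "media disposal": 0,
}

BINARY_THRESHOLD = 3  # where a met/not-met report would draw the line

for control, level in sorted(controls.items(), key=lambda kv: kv[1]):
    binary = "met" if level >= BINARY_THRESHOLD else "not met"
    print(f"{control}: level {level} ({MATURITY[level]}); binary view: {binary}")
```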

Policies are an opportunity to set direction for an organization at a high level. What is the intent of management? It’s important to be flexible, but vague is not the same as “high-level.”

The appeal of inherently vague phrases is that they can be inserted quickly at draft time, and at first glance they appear to allow for flexibility. The intent is to account for the give-and-take between risk and cost at the policy origination stage, since organizations do not have the resources to evaluate the cost of dozens (or hundreds) of controls across a wide range of teams, departments and business groups.

But weasel words are not a substitute for meaningful security governance. If a control is too restrictive, or isn’t clear, it needs to be reviewed by leadership and aligned with the needs and capabilities of the organization. And if there are substantial differences between units, then how that risk will be handled needs to be explicitly documented. A well-designed ISO 27001 Information Security Management System (ISMS) accounts for exactly this.

When documenting a security requirement, follow this simple rule: if the organizational impact of a requirement isn’t clear enough to specify management intent in a given category, then leave it out until that impact is known.

Good security hygiene requires a pest-free environment. Find and exterminate all weasel words, and use governance to weigh risks and costs in a planned approach. This will help you trap them before they get back in. Catch and release …

Sunday, April 19, 2009

Getting the most out of virtual teams

Most of the big challenges in information security require a multi-disciplinary approach. It takes specialized knowledge and input from many different areas for leaders to successfully balance costs to the business against the expected benefits of reducing risk while ensuring that operational goals are reached.

In global organizations, this usually involves virtual teams working with a mix of collaboration tools, with relatively few opportunities for face-to-face interaction. These matrixed teams often feature a more diverse mix of countries, cultures, educational backgrounds and perspectives. But their value can be easily lost if one or more dominant voices crowd out the rest.


To keep that from happening, there are several decision-making tools that work well in a virtual setting and encourage collaborative and creative development within a project structure.

Spiral Development Methodology
If the goal of the project is to develop a process or internal service offering under tight timelines, and if role definitions and/or project deliverables carry a significant amount of ambiguity, it may make sense to use the spiral development approach to ensure that a working process is implemented right away. While they don’t label it a “spiral” methodology, Kevin Behr, Gene Kim and George Spafford detail the essential steps for establishing control over change management in their book The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps.

In contrast to traditional development methodologies, which take a top-down approach that begins with fully specified requirements and ends with a final product, the spiral approach cycles through these steps:
1. Plan – specify requirements in as much detail as possible
2. Design – design the solution based on known requirements
3. Prototype – build a working process / solution and deploy it
4. Evaluate – compare prototype performance against expected performance; have the initial goals been met? Identify lessons learned and new requirements, and repeat steps 1-4 as needed.
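
In rough Python, the feedback loop looks something like the sketch below. The goals, requirement sets, and four stub functions are all invented placeholders; the point is the plan-design-prototype-evaluate cycle, not the stubs.

```python
"""A minimal, hypothetical sketch of the spiral loop. The goals, requirement
sets, and stub functions below are invented placeholders for illustration."""

def plan(requirements, lessons):
    # 1. Plan: fold lessons learned back into the requirement set.
    return requirements | lessons

def design(requirements):
    # 2. Design: stand-in for real design work.
    return sorted(requirements)

def prototype(design_spec):
    # 3. Prototype: stand-in for building and deploying a working process.
    return {"implements": set(design_spec)}

def evaluate(proto, goals):
    # 4. Evaluate: compare performance against expectations; surface new requirements.
    return goals - proto["implements"]

goals = {"log review", "change approval", "rollback plan"}
requirements = {"change approval"}  # all we can fully specify up front
lessons = set()

for iteration in range(1, 5):
    requirements = plan(requirements, lessons)
    proto = prototype(design(requirements))
    lessons = evaluate(proto, goals)
    print(f"Iteration {iteration}: unmet goals = {lessons or 'none'}")
    if not lessons:  # a working process that meets current goals
        break
```

The exit condition matters: the loop ends early once evaluation surfaces no unmet goals, which is what lets the team ship a working process right away and refine it in later passes.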

By taking an iterative approach, the team can deliver a working solution that meets immediate operational and/or regulatory requirements while gaining experience that will be helpful in refining and improving the solution.

Improving decision making in virtual teams
As typically implemented, brainstorming in a team setting involves a facilitator documenting alternatives in the order in which they are most loudly, and frequently, repeated. Because they’re generated one at a time, some ideas get lost along the way, and at a certain point the list seems “long enough” and that’s the end of the input.

Even in a motivated team with good interpersonal relations, the “tyranny of the enthusiastic” may unwittingly crowd out other options. One way to prevent this is the nominal group technique:
1. Before the meeting, each team member writes down their own ideas on the problem: requirements, design issues, and solution approaches.
2. The team meets:
a. Each member presents one idea to the group; no discussion takes place until all ideas have been recorded.
b. The team asks questions to each presenter to ensure that their approach is clearly understood, and then evaluates it.
3. Each team member ranks the ideas presented and sends their “votes” to the facilitator. A final decision is based on the highest aggregate ranking.
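
As a concrete illustration of the final step, here is a minimal sketch of the tally. The ideas, members, and ballots are invented; a rank of 1 means first choice, so the lowest rank total is the highest aggregate ranking.

```python
"""A minimal sketch of the nominal group technique tally (step 3). The ideas,
members, and ballots are invented; rank 1 = first choice, so the lowest rank
total is the highest aggregate ranking."""

ideas = ["harden baseline images", "quarterly access reviews", "segment the lab network"]

ballots = {  # each member ranks every idea
    "alice": {"harden baseline images": 1, "quarterly access reviews": 2, "segment the lab network": 3},
    "bob":   {"harden baseline images": 2, "quarterly access reviews": 1, "segment the lab network": 3},
    "carol": {"harden baseline images": 1, "quarterly access reviews": 3, "segment the lab network": 2},
}

totals = {idea: sum(ballot[idea] for ballot in ballots.values()) for idea in ideas}
winner = min(totals, key=totals.get)
print(totals)               # {'harden baseline images': 4, ...}
print("Decision:", winner)  # lowest total wins
```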

While this involves more pre-work and coordination than the typical “brainstorming” approach, the advantage is a much fuller reflection of the capabilities of the team. And since all team members must present, it makes “social loafing” much less likely as everyone is expected to provide input.

Another approach, originally pioneered by RAND as a forecasting tool, is the “Delphi” method:
1. Each member provides a written forecast, along with supporting arguments and assumptions.
2. The facilitator edits, clarifies, and summarizes the data.
3. Data is returned as feedback to the members, along with a second round of questions.
4. The process continues, usually for about 4 rounds, until a consensus is reached.
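
The mechanics are simple enough to sketch. In the hypothetical example below, consensus is declared when the spread (standard deviation) of the anonymous forecasts falls below a threshold; the forecasts, the threshold, and the four-round limit are all invented for illustration.

```python
"""A minimal sketch of the Delphi feedback loop: anonymous numeric forecasts
are summarized and fed back each round until they converge. The forecasts
and the convergence threshold are invented for illustration."""

from statistics import mean, stdev

def delphi(rounds_of_forecasts, threshold=0.5):
    for round_number, forecasts in enumerate(rounds_of_forecasts, start=1):
        spread = stdev(forecasts)
        print(f"Round {round_number}: mean={mean(forecasts):.2f}, spread={spread:.2f}")
        if spread <= threshold:  # forecasts have converged: consensus
            return mean(forecasts)
    return None  # no consensus within the allotted rounds

# e.g., four rounds of "how many incidents do we expect next quarter?"
rounds = [[4, 9, 6, 12], [5, 8, 6, 9], [6, 7, 6, 8], [6, 7, 7, 7]]
print("Consensus:", delphi(rounds))
```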

Sometimes it’s possible to just throw people on a conference call and hash it out. But other times, you need all of the creativity, engagement and effort that a matrixed team can muster, all on a very short deadline. In those circumstances, an ounce of smart structure can yield a pound of results.

Saturday, April 04, 2009

Boiling the O.C.E.A.N.

Metrics projects that are intended to consolidate and report on the state of security for an organization rarely fail for a lack of measures. Information technology systems, processes and projects all throw off an impressive amount of data that can be captured and counted. The Complete Guide to Security and Privacy Metrics suggests over 900 metrics, and NIST Special Publication (SP) 800-55 Rev. 1, Performance Measurement Guide for Information Security extends this analysis from the system level to an executive view by providing a framework for summarizing the results.

So given all of the measures, structure and guidance available, why is it so tough to be successful? The silent killer in this space is often a lack of focus: too many metrics, too much aggregation, and too little analysis connected to business problems and goals to provide useful insight.

Instead, it’s better to start with the stakeholders and focus on fully understanding their goals and decisions, without limiting the conversation with assumptions about what is or isn’t going to be measurable.

Consider this subset of stakeholders, and some of their goals:
Executive management – financial health and strategic direction of the organization. Are we profitable and are we executing effectively in the markets we serve?
Risk governance / Security management – are we keeping risk at an acceptable level? Are we making the best use of the security resources we have?
Line Management – are we achieving operational goals, and aligning with strategic initiatives?
These questions become an effective filter for removing the measures that don’t matter, and for finding common measures that, with analysis, can serve many different purposes. Here’s where it may be useful to classify measures from a stakeholder perspective in terms of the types of decisions that they enable:
Output measures – what is the primary deliverable from a given team?
Coverage measures – how many locations, systems or groups are covered by a given process or policy?
Exposure measures – what proportion of the environment stores or processes regulated information?
Activity measures – how many requests have been received during a given reporting period? Addressed?
Null measures – which teams have not provided data?

The last category is an important one, as it highlights the difference between a measure and a metric. A measure is an observation that increases your understanding of a situation and improves the quality of decision making; a metric is a standardized measurement. Inconsistent, incomplete and missing data from key teams or groups are an important measure of program maturity. Sometimes it’s what you can’t count that counts.
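
As a small worked example of the classification above, and of why missing data is itself informative, consider one reporting period. The team names and counts are invented; the point is that the same small data set yields activity, coverage, and null measures.

```python
"""A small worked example of one reporting period. Team names and counts are
invented; one small data set yields activity, coverage, and null measures."""

expected_teams = {"network", "desktop", "datacenter", "appdev"}

submissions = {  # activity measures: requests received / addressed this period
    "network":    {"received": 42, "addressed": 40},
    "desktop":    {"received": 110, "addressed": 91},
    "datacenter": {"received": 17, "addressed": 17},
}

coverage = len(submissions) / len(expected_teams)   # coverage measure
null_teams = expected_teams - submissions.keys()    # null measure

print(f"Coverage: {coverage:.0%} of teams reporting")
for team, counts in submissions.items():
    print(f"{team}: {counts['addressed']}/{counts['received']} requests addressed")
print("No data from:", sorted(null_teams))          # what you can't count
```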

Above all else, resist the pressure to measure everything. A few well-chosen measures will allow for versatile and powerful analysis. There are literally dozens of ways to analyze and present a small set of carefully selected data points. And when captured consistently over time, the correlations between seemingly unrelated activities offer the opportunity to surprise.