Saturday, March 28, 2009

Security Policy as concept car

In the JMU Information Security MBA program, the main assignment for the second class is to put together an information security policy manual. During the lectures we spent most of our time focusing on frameworks and sources of policy content such as ISO 27001, COBIT, ITIL, NIST, and SANS. Thankfully, we also spent time working through some themes from The Design of Everyday Things by Donald Norman.

My favorite takeaway from the class was the realization that "fit" is an important concept in information security; so much so that it should be explicitly recognized in the policy framework. Policies must fit the security requirements, cost constraints, culture and capabilities of an organization.

At the risk of leaving out a number of "must haves" in my policy manual, I wound up putting together a Concept Car for security -- a collection of statements and requirements oriented around three questions:
* What does your business need?
* What can you execute?
* What can you afford?

They're not complete, but hopefully reflect a decent start in each of the categories that they address. I've also included links to all reference sources for more detail:

Information Security Strategy and Architecture
Information Security Charter
Acceptable Use Policy
Data Owner Security Policy
System Owner Security Policy
Platform Infrastructure Security Policy
Messaging Security Policy
Network Security Policy

Tuesday, March 24, 2009

Information Supply Chain Security

Abraham Maslow once wrote “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” But what if your toolbox has everything except a hammer? At the very least, it limits what you can build.

Last week at the University of Maryland I had the opportunity to be a part of a workshop to develop a Cyber-Supply Chain Assurance Reference Model, sponsored by the RH Smith School of Business and SAIC. Looking at the security challenges that organizations are now facing, the old toolbox seems about half empty.

Prior to the workshop I was very comfortable with confidentiality, integrity, availability, authenticity, and non-repudiation along with risk management definitions of loss expectancy as the basic language of information assurance. But after a few hours of looking at information technology in the context of a cyber-supply chain, it became apparent that we need better tools to characterize and manage emerging risks. There were a number of different perspectives represented at the meeting, but here’s my take:

Traditionally, assets are assessed individually and independently as part of the information assurance process. For internally facing systems with limited or explicit interdependencies, this isn’t a bad representation. But for organizations where boundaries with suppliers and customers are blurring, the interdependencies among these systems eclipse the value of the data they hold. From a risk perspective, Verizon’s 2008 Data Breach survey shows how attacks against vendors and suppliers become the entry point into “secure” organizations because of trust relationships. And from a financial perspective, high confidentiality requirements can make it difficult to ensure high availability in a cost-effective way.

Existing risk frameworks such as COBIT and ISO 27001 can describe these issues, but they are not designed to model the trade-offs in a way that helps security leaders optimize.

This is the point where the information security toolbox needs to draw on research capabilities from other disciplines. The Supply-Chain Operations Reference Model (SCOR) provides a proven framework for analysis that captures these dependencies.

The information supply chain analyst asks: where is information captured (created) and processed? What are the storage and delivery requirements? Risk, cost and the traditional “CIA” triad are variables in a business decision, rather than optimization goals on their own.

In contrast, infrastructure protection often takes an asset-centric view that attempts to identify the intrinsic value of an application or environment, separate from its role within an extended system. This makes the connection to business value more difficult to express, and to optimize.

The reference model will be published in April. In the meantime, there are still a few details that are being … hammered out …

Wednesday, March 18, 2009

Securing the organization, despite management’s best efforts to stop you

Looks like the abstract below is going to get the green light for the May 2009 Grand Rapids ISSA meeting. Ok, so the title is a bit of "red meat" for a largely technical audience, but the straw man here isn't management ... or security: it's the "ivory tower" textbook description of how security is supposed to work.

In reality, the most effective leaders that I have seen have been the ones who are pragmatic, patient, and unconcerned about "style points" when it comes to building an effective program. They just make sure that the number and severity of incidents keep trending in the right direction, even if the drivers of that success come from other parts of the organization.

Hopefully, I'll capture some of that in the slides that go with this presentation:

"Every text on information security says “be sure to get executive management support” before you start. But what should you do when that support is less than what you need, as is often the case in today’s cost-conscious environment? Management isn’t really out to stop you, although at times it may seem that way because of the contradictory pressures that affect the entire business.

Meanwhile, threats to information security are recession-proof: they don’t have layers of approval to contend with, and they’re not going to go away any time soon. Information security professionals need to respond to these threats regardless of the organizational challenges, and in the process build that support by demonstrating the value of the work they do. They also need to be strategic when it comes to requesting additional resources and support. The purpose of this presentation is to build on the concepts introduced in the Harvard Business Review “Managing Up” article collection, presenting the impact of security with management-centric measures and analysis that build the case for improving security by highlighting facts rather than fear, uncertainty and doubt."

Suggestions, success stories and one-line management rebuttals are welcome.

Something to the effect of: "Enabling the business / Serving customers / earning a profit ... despite security's best effort to stop you ..."

Monday, March 16, 2009

Making the right call

Cloud computing, or on-premises: which is more secure, and which is the better option for your organization?

It seems like a simple question, and yet it shows just how much further security risk management needs to mature in order to command the stature of marketing or finance in driving company strategy. This isn’t to suggest that security is less important to the organization; it just hasn’t made as much progress in formalizing and defending its decision-making processes. Financial analysis tools can help here, so long as they’re not applied too literally.

For example, the “cloud vs. onsite” decision shares some important similarities with the “lease vs. buy” decisions that finance supports all the time. Finance uses a very simple decision rule to choose between alternatives: accept the decision that maximizes the net present value of the investment. Specifically: what is the sum of all cash flows (i.e. investments, expenses and revenues generated) and what discount rate should be applied to reflect the rate of return that is appropriate for this kind of investment decision?
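As a hedged sketch of that decision rule, the same NPV comparison can be written in a few lines of Python. The cash flows and the 10% discount rate below are invented for illustration only; real figures would come from the finance team.

```python
# Sketch of a "lease vs. buy" (or "cloud vs. on-site") NPV comparison.
# All cash flows and the discount rate are made-up numbers.

def npv(rate, cash_flows):
    """Net present value of cash flows, where cash_flows[0] occurs today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Buy: large up-front investment, lower ongoing cost (years 0-4).
buy = [-100_000, -5_000, -5_000, -5_000, -5_000]
# Lease (or cloud subscription): no up-front cost, higher recurring cost.
lease = [0, -30_000, -30_000, -30_000, -30_000]

rate = 0.10
npv_buy, npv_lease = npv(rate, buy), npv(rate, lease)

# Decision rule: accept the alternative with the higher (less negative) NPV.
best = "buy" if npv_buy > npv_lease else "lease"
print(f"NPV buy:   {npv_buy:,.0f}")
print(f"NPV lease: {npv_lease:,.0f}")
print(f"Prefer: {best}")
```

The point of laying it out this way is that every assumption (each cash flow, the discount rate) is visible and can be challenged by the decision maker.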

Often the underlying assumptions and analysis are as important to decision makers as the final recommendation, so transparency is essential.

Given the rate of change in most organizations, security isn’t often asked to weigh in on a single investment choice in isolation. Usually, the decision involves picking the best course among alternatives, so it just needs to be clear, based on a consistent set of evaluation criteria, which alternative is comparatively better. And just as with the “lease vs. buy” scenario, decision makers need to see the analysis as well as the recommendation.

To compare alternatives, objectively, from a security perspective:
* Compare architectures. Which has greater complexity, and why? Higher complexity works against high availability.
* Compare security models: count the number and severity of exposures in each environment to attack.
* Compare control strength, using a common framework such as COBIT or ISO 27001: which environment provides greater defense in depth? What controls must perform effectively in order to ensure the security of systems and critical processes?

So long as both alternatives are assessed with standard, open frameworks the analysis will provide both a recommendation and a basis for evaluating all of the essential underlying assumptions. The intent is not to reduce the inherent variability of threat behavior into a single score that can be applied to both environments, or to conduct an expensive, overly detailed exercise. If there is a significant difference among the alternatives, it will begin to appear with a basic review of high level architectures and security models. If there isn’t much difference, then the decision threshold for security is likely to be met by either environment, and the decision rightly shifts to an evaluation of business benefits.
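As a minimal sketch of that comparison, here is a side-by-side scoring of two alternatives. The three criteria and the 1-5 scores are invented for illustration; in practice the scores would come from a review against a common framework such as COBIT or ISO 27001.

```python
# Hedged sketch: scoring two alternatives against shared security criteria.
# Criteria names and all scores below are hypothetical.

criteria = ["architecture complexity", "attack surface", "control strength"]

# Higher score = better security posture for that criterion (1-5 scale).
scores = {
    "cloud":   {"architecture complexity": 3, "attack surface": 2, "control strength": 4},
    "on-site": {"architecture complexity": 4, "attack surface": 4, "control strength": 3},
}

def total(alternative):
    """Sum the per-criterion scores for one alternative."""
    return sum(scores[alternative][c] for c in criteria)

for alt in scores:
    print(alt, total(alt))
```

The recommendation ships with the per-criterion scores, not just the totals, so decision makers can see and challenge the underlying assumptions.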

It only becomes difficult when you’re trading off performance and risk. But there’s a way to deal with that as well …

Sunday, March 08, 2009

Strategy-based Bracketology

In the information economy, it’s important to cross-train on select skills from other fields: there’s Operations Management for MBAs, Finance for Senior Managers, and perhaps the most important of all, Bracketology for Information Security Risk Managers.

Managing bracket risk
In the NCAA tournament, on average, the higher seed wins about 70% of the time. Most bracket pools score the results of each round the same, with 32 possible points for picking all of the winners in that round. There are six rounds, so the maximum possible score is 192. If you follow a high-seed strategy (i.e. pick the higher-ranked team) you’ll likely wind up with a score that's better than average.

Of course, if you pick straight seeds, you can expect the following:
* You’ll do well in tournament years that feature exceptionally strong top teams.
* You’ll be ridiculed by your friends for having no imagination and playing it safe.
* In a bracket pool of any size, your odds of winning are very, very low.

Everyone else picks upsets. Most people get most of them wrong, but a few get lucky, and the lucky ones come out on top. To have a shot at winning against your friends, prognosticators, or the masses on Pickmanager, you have to go with some underdogs. Each year there are usually a bunch of upsets, and the more you pick, the higher your potential score will be--at least in theory.

(As an aside, this perspective sheds a little light on how the current Wall Street mess started, and why it was so hard to stop: to attract investors, you have to produce top returns. And you’re not going to get top returns by always playing conservative.)

Start with history
Obviously, seeds are a strong indicator of performance, so it doesn’t make sense to just pick upsets at random. It’s good to look at the historical performance of each seed as a starting point. There are some upsets that happen every year, and the conventional wisdom is that they are “safe” to pick. For example, let’s look at the 5 v. 12 first-round matchup. Historically, the 5 seed wins 67% of these games, which works out to an average of 1 to 2 upsets per year.

If you pick all 5 seeds, you’ll usually get 3 out of 4 possible points in the first round from those matchups. Sometimes they’ll all win and you’ll get 4 points; other times there will be two upsets and you’ll only get 2.
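The arithmetic behind those expectations can be checked with a small binomial model, under the simplifying assumption that the four 5 v 12 games are independent and the 5 seed wins each with the historical 67% rate:

```python
# Hedged sketch: distribution of 12-over-5 upsets in one tournament year,
# assuming four independent games with a 33% upset probability each.
from math import comb

p_upset = 1 - 0.67  # 0.33

def p_exactly(k, n=4, p=p_upset):
    """Binomial probability of exactly k upsets in n games."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

for k in range(5):
    print(f"{k} upsets: {p_exactly(k):.1%}")

expected_upsets = sum(k * p_exactly(k) for k in range(5))
print(f"expected upsets per year: {expected_upsets:.2f}")
```

Under this model, one upset is the single most likely outcome, and picking all four 5 seeds yields an expected 4 * 0.67 = 2.68 points, consistent with "usually 3 out of 4."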

So putting your risk management hat on, which is the best approach? Without any additional information, what strategy will give you the highest payoff? Consider the 2008 tournament 5 v 12 pairings:

(5) Notre Dame v (12) George Mason
(5) Clemson v. (12) Villanova
(5) Michigan State v. (12) Temple
(5) Drake v. (12) Western Kentucky

The left column on the chart below shows the 16 possible outcomes, with the historical probability of each. To see which one has the highest payoff, compare the columns to the right for each strategy: no upsets, 1 upset, or 2 upsets. (In the table, 2008 team names are listed instead of scenarios for clarity.)
| Outcome | Hist. Prob. | All high seeds win (Max Pts / Exp. Val.) | Notre Dame upset (Max Pts / Exp. Val.) | MSU and Drake upset (Max Pts / Exp. Val.) |
|---|---|---|---|---|
| All high seeds win | 20.2% | 4 / 0.81 | 3 / 0.60 | 2 / 0.40 |
| Notre Dame upset | 9.9% | 3 / 0.30 | 4 / 0.40 | 1 / 0.10 |
| Clemson upset | 9.9% | 3 / 0.30 | 2 / 0.20 | 1 / 0.10 |
| Michigan State upset | 9.9% | 3 / 0.30 | 2 / 0.20 | 3 / 0.30 |
| Drake upset | 9.9% | 3 / 0.30 | 2 / 0.20 | 3 / 0.30 |
| MSU and Drake upset | 4.9% | 2 / 0.10 | 1 / 0.05 | 4 / 0.20 |
| Clemson and Drake upset | 4.9% | 2 / 0.10 | 1 / 0.05 | 3 / 0.15 |
| Clemson and MSU upset | 4.9% | 2 / 0.10 | 1 / 0.05 | 2 / 0.10 |
| Notre Dame and Drake upset | 4.9% | 2 / 0.10 | 3 / 0.15 | 2 / 0.10 |
| Notre Dame and MSU upset | 4.9% | 2 / 0.10 | 3 / 0.15 | 2 / 0.10 |
| Notre Dame and Clemson upset | 4.9% | 2 / 0.10 | 3 / 0.15 | 2 / 0.10 |
| Clemson, MSU and Drake upset | 2.4% | 1 / 0.02 | 0 / 0.00 | 3 / 0.07 |
| Notre Dame, MSU and Drake upset | 2.4% | 1 / 0.02 | 1 / 0.02 | 3 / 0.07 |
| Notre Dame, Clemson and Drake upset | 2.4% | 1 / 0.02 | 1 / 0.02 | 1 / 0.02 |
| Notre Dame, Clemson and MSU upset | 2.4% | 1 / 0.02 | 1 / 0.02 | 1 / 0.02 |
| All low seeds win | 1.2% | 0 / 0.00 | 1 / 0.01 | 2 / 0.02 |
| Expected value (number of wins) | 100.0% | 2.68 | 2.27 | 2.15 |
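The same comparison can be regenerated programmatically. This sketch assumes the four games are independent with a 0.33 upset probability each; the strategy ranking matches the table above, though individual totals may differ slightly from the hand-tallied figures.

```python
# Hedged sketch: expected value of a 5 v 12 pick strategy, enumerating all
# 16 outcomes under an independence assumption (0.33 upset rate per game).
from itertools import product

games = ["Notre Dame", "Clemson", "Michigan State", "Drake"]
p_upset = 0.33

def expected_value(picked_upsets):
    """Expected correct picks for a strategy that picks the given upsets."""
    ev = 0.0
    # Enumerate all 16 outcomes; True means the 12 seed pulled the upset.
    for outcome in product([False, True], repeat=4):
        prob = 1.0
        correct = 0
        for game, upset in zip(games, outcome):
            prob *= p_upset if upset else (1 - p_upset)
            if (game in picked_upsets) == upset:
                correct += 1
        ev += prob * correct
    return ev

print(f"no upsets:        {expected_value(set()):.2f}")
print(f"Notre Dame upset: {expected_value({'Notre Dame'}):.2f}")
print(f"MSU + Drake:      {expected_value({'Michigan State', 'Drake'}):.2f}")
```

The "no upsets" strategy comes out on top at 2.68 expected points, and every upset you pick without additional information lowers the expectation.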

Focus on specific outcomes, not typical results
It seems that if you know that one high seed is going to lose, you should pick at least one upset -- and yet picking only the high seeds has the highest expected payoff (2.68). So what’s going on here?

Across the 16 possible outcomes of the four 5 v 12 games, a “no upset” strategy for this particular matchup ensures that the most likely scenario gives you the highest possible payoff, and the least likely scenario is the one that would leave you with the lowest possible payoff. (It does not hold true in the 8 v 9 case.) Knowing that 33% of the number 12 seeds are going to come out on top doesn’t help you pick the right ones. (Clemson and Drake were knocked out in the first round last year.)

The moral of the story: historical averages are important, but there’s a world of difference between knowing what typically happens and predicting what will specifically happen. You need a much higher level of confidence about specific outcomes (i.e. risks) in order to be more effective than just playing the odds.

How much more confident? Working backwards, if you adjust the probability of the scenario you think is most likely (e.g. MSU and Notre Dame as the only 5 seed winners) you can see what level of confidence you need in your prediction to justify making that choice.
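A minimal sketch of that back-calculation, under the same independence assumption (0.33 upset rate per game) and a simple belief model: your favored scenario gets probability q, and the remaining outcomes keep their relative base rates, rescaled to sum to 1 - q.

```python
# Hedged sketch: how confident must you be that Clemson and Drake are the
# only 5 seeds to lose before picking that exact scenario beats straight
# seeds? Belief model and upset rate are simplifying assumptions.
from itertools import product

games = ["Notre Dame", "Clemson", "Michigan State", "Drake"]
p_upset = 0.33
picked = {"Clemson", "Drake"}  # scenario: MSU and Notre Dame survive

outcomes = list(product([False, True], repeat=4))

def base_prob(outcome):
    p = 1.0
    for upset in outcome:
        p *= p_upset if upset else (1 - p_upset)
    return p

def correct(outcome):
    return sum((g in picked) == u for g, u in zip(games, outcome))

scenario = tuple(g in picked for g in games)
p0 = base_prob(scenario)

# Expected score over the non-scenario outcomes, at their base rates.
rest = sum(base_prob(o) * correct(o) for o in outcomes if o != scenario)

def ev(q):
    """Expected correct picks if you believe the scenario with probability q."""
    return q * 4 + (1 - q) / (1 - p0) * rest

# Straight seeds score 4 * 0.67 = 2.68 on average; find the breakeven q.
straight = 4 * (1 - p_upset)
q = 0.0
while ev(q) < straight:
    q += 0.001
print(f"breakeven confidence: about {q:.0%} (base rate {p0:.1%})")
```

Under these assumptions you would need to believe your scenario at several times its historical base rate before the upset picks pay off, which is the gap between knowing the averages and predicting the specifics.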

Getting to that level of confidence requires research; knowing that you’ve reached it takes practice ...

Saturday, March 07, 2009

March Madness and Risk Management Strategy

Every vice, if it hangs around long enough, starts attracting self-justifying quotes. Ben Franklin came up with one of my favorites: “Beer is proof that God loves us and wants us to be happy.” I don’t necessarily agree, but I can empathize with anyone looking for ways to reduce their own cognitive dissonance. I also have a vice that I find virtuous: "March Madness," the annual NCAA college basketball tournament.

Each year, along with about 2 million other people, I sign up for Yahoo’s College Basketball Tournament Pick’em to see how many I can get right. Personal obsessions and Izzomania aside, I will proclaim with all sincerity that the skills you need to consistently make good picks in the NCAA tournament will also make you better at security risk management. Both risk management and tournament bracketology are based on making risk choices under uncertainty; both involve the judicious use of outside experts, rich statistical data, and intangibles. They also share the trait that over the short term, it’s really tough to tell the difference between luck and skill.

March Madness 101
The single-elimination tournament is played in six rounds, with 64 teams seeded in four regions. In the first round, teams are paired with the highest seed playing the lowest seed: number 1 plays number 16, number 2 goes against number 15, all the way down to number 8 against the 9th-seeded team. Winners advance, so assuming that the high seed wins each game, in the second round the number one seed would play the number eight team in the region, the two seed would play number seven, and so on. Of course, the high-seed teams are regularly upset by lower seeds with a randomness and regularity that is … maddening.

Points are awarded during each round for correct picks as follows:

| Round | Points per correct pick | Number of games | Possible points |
|---|---|---|---|
| 1 | 1 | 32 | 32 |
| 2 | 2 | 16 | 32 |
| 3 ("Sweet 16") | 4 | 8 | 32 |
| 4 ("Elite 8") | 8 | 4 | 32 |
| 5 ("Final Four") | 16 | 2 | 32 |
| 6 (National Championship) | 32 | 1 | 32 |
| Maximum possible | | | 192 |

So there are 63 decisions to make before the first game begins, and the goal is to predict the winner of each game, in each round, in such a way as to maximize your total score:

Score for the round = points per correct pick * number of correct picks

This equation bears a very strong resemblance to the standard information risk equation below, which is used to calculate loss expectancy as part of the risk assessment process. Both equations define a payoff as the product of something you know quite a bit about (impact) and something that you can estimate to some level of confidence but not perfectly predict:

Risk exposure = risk impact * event probability
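A toy illustration of the parallel between the two formulas, with invented figures on the security side:

```python
# Hedged sketch: the bracket-score and risk-exposure payoffs side by side.
# The incident impact and probability below are made-up numbers.

def expected_round_score(points_per_pick, picks, p_correct):
    """Expected bracket score for a round: payoff * how often you're right."""
    return points_per_pick * picks * p_correct

def risk_exposure(impact, probability):
    """Loss expectancy: risk impact * event probability."""
    return impact * probability

# Round 1: 32 picks at 1 point each, ~70% hit rate with a high-seed strategy.
print(expected_round_score(1, 32, 0.70))

# A hypothetical incident: $250,000 impact, 10% annual likelihood.
print(risk_exposure(250_000, 0.10))
```

In both cases the payoff is a known impact scaled by an estimated probability, which is why practice at one sharpens intuition for the other.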

So if you get pushback for following the tournament in minute detail, obsessing over your picks and constantly checking your rankings every time there’s an update, take heart: It's not just a tournament, it’s a huge learning opportunity. Decision making in a dynamic, competitive situation with limited information and lots of uncertainty is a great environment for building your risk optimization skills.

Wednesday, March 04, 2009

Organizational Agility

It seems that 2009 is stacked against just about everyone trying to get new security initiatives off the ground. First we saw the waves of cuts and layoffs, with information security budgets left largely intact. But now the freeze is turning into cuts for security departments as well.

If only the threats to our environment were also struggling with the pressures of downsizing. But they’re not, so we have to stand up the most robust set of administrative, technical and physical controls we can muster with the resources we have.

Security departments aren’t the only teams that have to figure out how to win under these circumstances. Hockey teams are used to playing outnumbered for short periods of time. When a player is sent off to the penalty box, their team must carry on short-handed until the penalty time expires.

During this “power play,” the penalized team changes its defensive stance. They still directly challenge the attacking player with the puck, and maintain a depth of defenders in front of the goal to take away any open shots. But the defense can’t cover everything, and so they do their best to recognize and respond quickly as their opponent constantly shifts the point of attack.

Until the economy rebounds and budgets recover, many organizations won’t be able to fully staff every function and administer every control. It might take a year or two, but for now we’re in “penalty kill” mode. Situational awareness and the ability to respond quickly and cohesively are going to be especially important.

So how agile is your organization, and how does that agility impact your short-handed security strategy in a “power play” environment?

Measuring agility
Organizational agility is the ability of groups and teams to react to change in a way that benefits the overall organization. Agile business organizations observe market conditions, analyze opportunities, decide on a course of action and execute those plans effectively. (Well, in theory anyway. As military strategists like to say: “No plan survives contact with the enemy.”)

An organization with staff overburdened with responsibilities isn’t agile. So before trying to press on with a labor-intensive approach to security, it’s important for management to assess the organizational capacity to carry it out.

A good indicator of staff workload is meeting availability. So to measure agility, pick 30 people at random across the company and draft a meeting invitation without sending it. See how many are available during 2 or 3 different time slots this week. Then push it out 2 weeks and choose a few more time slots. Then push it out a month. With a random spot sample of time availability, you can get a sense of the organization’s capacity to support key security initiatives.
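A rough sketch of that spot check, using randomly generated stand-in data in place of real free/busy calendar lookups:

```python
# Hedged sketch: estimate organizational slack from a random availability
# sample. is_free() is a stand-in for a real calendar free/busy query, and
# the ~40% availability rate is an arbitrary assumption.
import random

random.seed(42)
people = [f"person_{i}" for i in range(30)]
slots = ["this week A", "this week B", "in 2 weeks A", "in 2 weeks B",
         "in 1 month A", "in 1 month B"]

def is_free(person, slot):
    """Stand-in for a calendar lookup: True means the person is free."""
    return random.random() < 0.4

availability = {
    slot: sum(is_free(p, slot) for p in people) / len(people) for slot in slots
}

for slot, rate in availability.items():
    print(f"{slot}: {rate:.0%} free")
```

A consistently low free rate across the sampled slots suggests the organization lacks the slack to absorb labor-intensive initiatives like awareness training or information classification.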

If you find that the capacity is there, then labor-intensive activities such as security awareness training, information classification and risk assessment work can be sustained with a good chance of uptake and success. But if the calendar space isn’t there, it’s likely that your strategy will need to change. It may be better to focus on delivering technical security controls to your organization, instead of expecting as much from them.