Showing posts with label security policy.

Saturday, March 28, 2009

Security Policy as concept car

In the JMU Information Security MBA program, the main assignment for the second class is to put together an information security policy manual. During the lectures we spent most of our time on frameworks and sources of policy content such as ISO 27001, COBIT, ITIL, NIST, and SANS. Thankfully, we also spent time working through some themes from The Design of Everyday Things by Donald Norman.

My favorite takeaway from the class was the realization that "fit" is an important concept in information security; so much so that it should be explicitly recognized in the policy framework. Policies must fit the security requirements, cost constraints, culture and capabilities of an organization.

At the risk of leaving out a number of "must haves" in my policy manual, I wound up putting together a concept car for security: a collection of statements and requirements oriented around three questions:
* What does your business need?
* What can you execute?
* What can you afford?

The policies aren't complete, but they hopefully reflect a decent start in each of the categories they address. I've also included links to all reference sources for more detail:

Information Security Strategy and Architecture
Information Security Charter
Acceptable Use Policy
Data Owner Security Policy
System Owner Security Policy
Platform Infrastructure Security Policy
Messaging Security Policy
Network Security Policy
Physical Security Policy

Thursday, February 05, 2009

Assessing Enterprise Risk with forensic tools

There’s no need for FUD (fear, uncertainty and doubt) or guesswork when making the case to management for improving the protection of sensitive information. A serious incident or close call is often the most effective form of persuasion, but it’s not the most desirable. Ironically, forensic investigation tools can be just as useful in preventing incidents as they are in responding to them. But the key is how they’re used. To make the case for change, build on a foundation of reasonably sized data samples, transparent criteria for characterizing results, and a clear focus on the decisions these data are intended to support.

For example: in the 2008 Global State of Information Security Survey, authored by CSO Magazine, CIO Magazine and PricewaterhouseCoopers, 54% of executives surveyed admitted that they did not have “an accurate inventory of where personal data for employees and customers is collected, transmitted or stored.”

Organizations that don’t normally handle personal data in the course of business might not put the risk of sensitive information loss high on their priority list. Businesses that routinely process high volumes of sensitive information may reach the same conclusion if they feel confident that all systems are consistently protected with highly restricted access. But in either case, without knowing how many copies of these records have been created and shared across end user systems over the course of several years, a blind decision to either accept or mitigate this risk is likely to be off the mark.

Enter the forensic investigator: often overworked, with relatively little down time to spare. Armed with forensic tools and a basic understanding of what and how much to measure, an investigator can provide a compelling case for decision makers without the expense of a huge data gathering exercise.

With sample results from 30 systems chosen at random, using predefined search strings applied the same way to each system, you can get a good feel for the scale of the problem with a reasonable margin of error, where reasonable means: “precise enough to support a decision, while maintaining confidence in your conclusions and credibility with your audience.”
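As a minimal sketch of the sampling step (the inventory list and hostname scheme here are made-up placeholders; in practice the population would come from an asset inventory or directory export):

```python
import random

# Hypothetical asset inventory; in practice this would be pulled from a
# CMDB or directory export. Hostnames are invented for illustration.
inventory = [f"WKS-{i:05d}" for i in range(40000)]

random.seed(2009)  # fixed seed so the draw is reproducible for an audit trail
sample = random.sample(inventory, 30)  # 30 systems, drawn without replacement

print(sample[:5])
```

Drawing without replacement and recording the seed keeps the selection defensible if anyone later questions whether the sample was cherry-picked.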

Consider a company of 40,000 employees, with no prior formal assessment of how much sensitive information is on its end user systems. Even a basic estimate would be a huge improvement in understanding the problem. Using output from this online calculator, the table below shows the confidence interval for sample proportions that range from 0 to 6 out of 30, and an estimate of the fraction of the 40,000 that these results most likely represent:

[Table not preserved: confidence intervals for sample proportions of 0 to 6 hits out of 30, and the corresponding estimates out of 40,000 systems.]
So if it turns out that 5 of the 30 systems from across the company contained sensitive information, you could reasonably conclude that up to 12,000 systems are affected. Is this too much risk? Depending on the threats and current protection capabilities, it could be. It may justify putting more education and enforcement behind a records retention policy, strengthening access controls and account reviews, or implementing a data loss prevention (DLP) solution.
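The arithmetic behind an estimate like this can be sketched with the Wilson score interval for a sample proportion (the online calculator the post refers to may use a different method, so its bounds won't match exactly):

```python
import math

def wilson_interval(hits, n, z=1.96):
    """95% Wilson score confidence interval for a sample proportion."""
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    spread = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - spread, center + spread

population = 40000
lo, hi = wilson_interval(5, 30)  # 5 of 30 sampled systems had hits

print(f"{lo:.1%} to {hi:.1%}")                          # roughly 7% to 34%
print(f"{lo*population:,.0f} to {hi*population:,.0f}")  # scaled to 40,000 systems
```

Scaling the interval's upper bound to the population is what turns "5 of 30" into a worst-case figure on the order of the 12,000 systems mentioned above.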

One word of caution: while the initial sample showing 5 out of 30 may make the case for an awareness campaign, a second random test several months later with another small sample may not definitively show that things are improving. If the second sample shows 6 out of 30 (20%) still contain sensitive information, this sample proportion is within the margin of error of the first assessment (9% to 31%). That is, with a population of 40,000 end users, you’re about as likely to get 6 out of 30 as you are to get 5 out of 30 in a random draw. However, if you get zero out of 30 – then you’re much more likely to have achieved a (statistically) significant improvement.

How much more likely? To test against a threshold, use this calculator: http://www.measuringusability.com/onep.php.
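A rough sketch of that threshold test, using an exact binomial tail probability (this assumes the baseline rate really is 5 in 30; the linked calculator may use a different approximation):

```python
from math import comb

def binom_tail_le(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): exact lower-tail probability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

baseline = 5 / 30  # first assessment: 5 of 30 systems had sensitive data

# A follow-up result of 6/30 is unsurprising if nothing changed...
print(binom_tail_le(6, 30, baseline))  # well above 0.05

# ...but 0/30 would be very unlikely under the old rate
print(binom_tail_le(0, 30, baseline))  # about 0.004, a significant drop
```

In other words, a second sample of 6/30 can't distinguish "no change" from "slightly worse," while 0/30 is strong evidence the awareness campaign worked.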

Saturday, January 17, 2009

Four things to do with computer forensic tools (besides forensics)

When staffing an internal computer forensics capability for an organization, management needs to determine how to balance capacity with demand. At the extremes, you either have a backlog of cases waiting on available investigators, or investigators waiting on requests for support.

Even under the best of circumstances, the investigative caseload won’t follow a regular schedule and some amount of downtime is inevitable. Forensic analysts will need to spend some of that time putting together hash sets, updating scripts, evaluating new tools and doing all of the other arcane tasks that go along with keeping pace with the changing needs of the function. But for an IT risk manager, if you can tap into it, unused forensic capacity is an asset that can be extremely helpful in other contexts as well. Here are just a few examples:

1. Identify the prevalence of sensitive information on end user systems. Because they’re fast, thorough, minimally disruptive and often support remote data capture, forensic tools can help determine the “hit rate” of confidential documents across a randomly selected cross-section of the end user environment.
2. Measure the compliance rate against system usage policies. A scan of Internet usage can show the proportion of systems accessing content that poses a risk to the organization and/or its users. Over time, that proportion should decrease if training and awareness efforts are having an effect.
3. Estimate the amount of data at risk that is not being backed up. Depending on the architecture, this may be a bit more difficult to determine. An inventory of data files created or edited locally but excluded from backup routines will give a good sense of the amount of work lost each time a hard drive crashes or a laptop is stolen.
4. Identify the level of unauthorized configuration changes. How long is the screen saver timeout supposed to be? What applications or changes are not allowed on a standard system build? This is less of an issue in organizations where IT has locked down the desktop. But where this is a contested issue, actually quantifying the impact can show the best tradeoff between control and usability for a given department or organization.

It goes without saying that nobody likes to be investigated. If the purpose, scope, approach and usage of this information aren't spelled out in advance (i.e., a good-faith, random, anonymous survey, not a warrantless wiretap) and communicated with the proper level of support, it'll be the last time you get to use forensic capabilities to tune security policies and practices.

But let’s face it – all of the critical data in any business either originates or is viewed from an end user system, which is often the least-defended part of the environment. Attackers realize this, and end user systems will always be a popular target. Unless you know what your exposure is, you won’t have a good understanding of what your policies and protection capabilities should be.