Thursday, February 05, 2009

Assessing Enterprise Risk with Forensic Tools

There’s no need for FUD (fear, uncertainty and doubt) or guesswork when making the case to management for improving the protection of sensitive information. A serious incident or close call is often the most effective form of persuasion, but it’s hardly the most desirable. Ironically, forensic investigation tools can be just as useful in preventing incidents as they are in responding to them; the key is how they’re used. To make the case for change, build on reasonably sized data samples and transparent criteria for characterizing results, and keep the focus on the decisions those data are intended to support.

For example: in the 2008 Global State of Information Security Survey, conducted by CSO Magazine, CIO Magazine and PricewaterhouseCoopers, 54% of executives surveyed admitted that they did not have “an accurate inventory of where personal data for employees and customers is collected, transmitted or stored.”

Organizations that don’t normally handle personal data in the course of business might not put the risk of sensitive information loss high on their priority list. Businesses that routinely process high volumes of sensitive information may reach the same conclusion if they feel confident that all systems are consistently protected with highly restricted access. In either case, though, without knowing how many copies of these records have been created and shared across end-user systems over the course of several years, a blind decision to either accept or mitigate this risk is likely to be off the mark.

Enter the forensic investigator, often overworked, with relatively little downtime to spare. Armed with forensic tools and a basic understanding of what and how much to measure, an investigator can build a compelling case for decision makers without the expense of a huge data-gathering exercise.

With sample results from 30 systems chosen at random, using predefined search strings applied the same way to each system, you can get a good feel for the scale of the problem with a reasonable margin of error, where reasonable means “precise enough to support a decision, while maintaining confidence in your conclusions and credibility with your audience.”
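
In practice, the predefined criteria can be a short list of patterns run identically against every sampled system. The patterns, file coverage, and mount paths below are illustrative assumptions only, not a vetted detection rule set; a minimal sketch in Python:

```python
import re
from pathlib import Path

# Illustrative patterns only: a real assessment would tune these to the
# organization's data types and validate matches (e.g., a Luhn check on
# candidate card numbers) to keep false positives down.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def contains_sensitive_data(root):
    """Apply the same predefined patterns to every text file under `root`.

    Returns True if any pattern matches, i.e. this sampled system counts
    as containing sensitive information.
    """
    for path in Path(root).rglob("*.txt"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        if any(p.search(text) for p in PATTERNS.values()):
            return True
    return False

# Hypothetical usage: tally hits across 30 randomly chosen disk images.
# sample_roots = ["/mnt/image01", "/mnt/image02"]  # ...up to 30 mount points
# hits = sum(contains_sensitive_data(r) for r in sample_roots)
# print(f"{hits} of {len(sample_roots)} sampled systems contained sensitive data")
```

Whatever tool actually runs the search, the point is consistency: the same patterns, the same file coverage, and the same pass/fail rule on every system in the sample.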

Consider a company of 40,000 employees with no prior formal assessment of how much sensitive information sits on its end-user systems. Even a basic estimate would be a huge improvement in understanding the problem. Using output from this online calculator, the table below shows the confidence interval for sample proportions ranging from 0 to 6 out of 30, along with an estimate of how many of the 40,000 systems each result most likely represents:

[Table: confidence intervals for sample proportions of 0 to 6 out of 30, and the corresponding estimated number of affected systems out of 40,000]

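The post doesn’t specify which formula the calculator uses, so the sketch below assumes an adjusted Wald (Agresti-Coull) interval at 95% confidence, a common recommendation for small-sample proportions; its bounds won’t exactly match the original table, but it reproduces the same picture:

```python
import math

Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def adjusted_wald(hits, n, confidence=0.95):
    """Adjusted Wald (Agresti-Coull) interval for a sample proportion.

    Assumption: the original calculator may use a different method or
    confidence level, so treat these bounds as illustrative.
    """
    z = Z[confidence]
    n_adj = n + z ** 2
    p_adj = (hits + z ** 2 / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

POPULATION = 40_000
SAMPLE = 30

for hits in range(7):  # sample proportions from 0/30 through 6/30
    low, high = adjusted_wald(hits, SAMPLE)
    print(f"{hits}/{SAMPLE}: {low:5.1%} to {high:5.1%}  "
          f"(~{int(low * POPULATION):,} to {int(high * POPULATION):,} systems)")
```

For 5 hits out of 30, this version of the interval runs from roughly 7% to 34%, the same neighborhood as the estimate of up to 12,000 affected systems used below.
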
So if it turns out that 5 of the 30 systems from across the company contained sensitive information, you could reasonably conclude that up to 12,000 systems are affected. Is this too much risk? Depending on the threats and current protection capabilities, it could be. It may justify putting more education and enforcement behind a records retention policy, strengthening access controls and account reviews, or implementing a data loss prevention (DLP) solution.

One word of caution: while the initial sample showing 5 out of 30 may make the case for an awareness campaign, a second random test several months later with another small sample may not definitively show that things are improving. If the second sample shows that 6 out of 30 (20%) still contain sensitive information, that proportion is within the margin of error of the first assessment (9% to 31%). In other words, with a population of 40,000 end users, you’re about as likely to get 6 out of 30 as you are to get 5 out of 30 in a random draw. If the second sample comes back at zero out of 30, however, you’re much more likely to have achieved a statistically significant improvement.

How much more likely? To test against a threshold, use this calculator: http://www.measuringusability.com/onep.php.
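
If you’d rather script the check than use the calculator, a one-sided exact binomial test answers the same kind of question. The 5-in-30 baseline below is taken from the example above and serves only to illustrate the threshold idea:

```python
from math import comb

def one_sided_p_value(hits, n, baseline_rate):
    """P(X <= hits) for X ~ Binomial(n, baseline_rate): the chance of seeing
    this few hits (or fewer) if the true rate had not improved at all."""
    return sum(comb(n, k) * baseline_rate ** k * (1 - baseline_rate) ** (n - k)
               for k in range(hits + 1))

baseline = 5 / 30  # ~16.7%, the rate observed in the first assessment

for hits in (6, 5, 2, 0):
    p = one_sided_p_value(hits, 30, baseline)
    print(f"{hits}/30 hits in the follow-up sample: one-sided p = {p:.3f}")

# 6/30 gives p of about 0.78: no evidence of change, matching the caution above.
# 0/30 gives p of about 0.004: strong evidence the rate really has dropped.
```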
