Saturday, January 10, 2009

Getting privileged accounts under control: spend less time finding, more time fixing

Are there too many privileged accounts on the business-critical systems in your organization? If you suspect so, how would you find out, and how would you energize the leadership in your organization to act? And once you get management endorsement, what number would you set as the maximum allowable number of accounts per system, as a benchmark for non-compliant system owners to shoot for? You'll want all owners to verify compliance, but would a positive response from 50% of those owners justify the call to action?

Perhaps most important of all, after driving this change and moving on to the next problem, will you have the time and resources needed to follow up later in the year and make sure that the problem hasn’t reappeared?

As with any security issue, a small amount of effort should go into finding the problem, and the majority into solving it. To paraphrase Tom Clancy from Into the Storm: “The art of command is to husband that strength for the right time and the right place. You want to conduct your attack [in this example, on the problem] in such a way that you do not spend all your energy before you reach the decisive point” (p. 153).

Using a tool like DumpSec for Windows, it doesn’t take long to pull group memberships remotely from any given system. But if you’re dealing with hundreds or even thousands of systems, well, that’s a lot of energy to spend before reaching the decisive point, i.e. the moment when system owners start removing excessive accounts.
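
If you’d rather script the collection than run DumpSec by hand, here’s a minimal sketch using pywin32’s NetLocalGroupGetMembers call. The host names are placeholders, the group is assumed to be the local Administrators group, and a nested domain group still counts as a single entry:

# Count members of the local Administrators group on remote Windows systems.
# Minimal sketch, assuming pywin32 is installed and you have rights to query
# the remote hosts; HOSTS is a placeholder inventory.
import win32net

HOSTS = ["server01", "server02"]

def count_local_admins(host, group="Administrators"):
    members, resume = [], 0
    while True:
        data, total, resume = win32net.NetLocalGroupGetMembers(host, group, 3, resume)
        members.extend(entry["domainandname"] for entry in data)
        if not resume:
            break
    return len(members)

for host in HOSTS:
    print(host, count_local_admins(host))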

Intuitively, it makes sense that you wouldn’t want to poll every system in a large environment. Instead, you’d take a sample. But how big of a sample is needed for you – and senior management – to be confident that you know the current state?

It turns out that you (and your boss) can be 90% confident of knowing the median number of privileged accounts across the server population if you start with a randomly selected sample of 18 systems. And because the median is by definition the middle value, you know that half of the systems have at least that many privileged accounts. If that value is too high for the risk requirements of the environment, you can set a compliance goal such as “reduce the number of privileged accounts on each Windows system to X by the end of the year.”

To find the median, follow these steps:
1. Pick 18 systems at random across the system population. Dump the list of users with privileged access from each system.
2. Arrange the counts from fewest to most.
3. Throw out the lowest six and the highest six values, and keep the middle six.

The median number of privileged accounts will be between the low value and the high value of the middle six numbers out of the sample of 18.
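
Here’s a short sketch of those three steps in Python. The inventory list and the count function are stand-ins for however you actually collect the data (DumpSec exports, the script above, etc.):

# Estimate the interval likely to contain the median privileged-account count:
# sample 18 systems, sort the counts, drop the six lowest and six highest,
# and report the range spanned by the middle six.
import random

def estimate_median_interval(all_systems, get_admin_count, sample_size=18, trim=6):
    sample = random.sample(all_systems, sample_size)      # step 1: random sample
    counts = sorted(get_admin_count(s) for s in sample)   # step 2: fewest to most
    middle = counts[trim:-trim]                           # step 3: keep the middle six
    return middle[0], middle[-1]

# e.g. low, high = estimate_median_interval(inventory, count_local_admins)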

For example, if I dumped the local admins group across a set of systems, I might get a result like this (once sorted, the “middle six” values are 17, 17, 23, 25, 28, and 29):

49, 23, 17, 33, 17, 16, 28, 14, 29, 40, 12, 44, 34, 12, 25, 9, 10, 32**

So based on this sample, the median number of privileged accounts across all systems is 90% likely to be between 17 and 29. Granted, due to the architecture, certain accounts may be present on every system, and other factors may help determine whether 29 is too high … or 17. But once you decide, you have a baseline value that defines the boundary between acceptable risk and excessive access, and that can be communicated across the organization.
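
If you want to double-check which values end up in the middle six, the same trim applied to those exact numbers gives:

# Sort the 18 sampled counts from the example and keep the middle six.
counts = sorted([49, 23, 17, 33, 17, 16, 28, 14, 29, 40, 12, 44, 34, 12, 25, 9, 10, 32])
middle_six = counts[6:-6]
print(middle_six)                     # [17, 17, 23, 25, 28, 29]
print(middle_six[0], middle_six[-1])  # 17 29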

Once you’ve gotten buy-in and communicated the requirement, each system owner who wasn’t sampled can compare their own count and confirm that they comply. And in keeping with Clancy’s principle above, only a fraction of your time was spent identifying the problem and communicating it; the rest goes into helping fix it.

But why does 18 work? Where does the 90% confidence come from, and why throw out the bottom six and top six values?

Doug Hubbard explains it in Chapter 3 of his book “How to Measure Anything.” And while this particular example isn’t in the text, he does cover a lot of intriguing applications to information security.

Hubbard introduces the idea of finding the median from a small sample as “the rule of five”:

“When you get answers from five people, stop… Take the highest and lowest values in the sample… There is a 93% chance that the median of the entire population … is between those two numbers.” Why? “The chance of randomly picking a value above the median is, by definition, 50% -- the same as a coin flip resulting in ‘heads.’ The chance of randomly selecting five values that happen to be all above the median is like flipping a coin and getting heads five times in a row” (pp. 28-29).

In other words: 0.5 x 0.5 x 0.5 x 0.5 x 0.5 = 0.03125. With a random sample of five, there’s only a 3.125% chance of landing above the median all five times, and the same 3.125% chance of landing below it all five times. Subtract both from 100% and you get 93.75%, or roughly 93%: each time you take five random samples, you’ll get values on both sides of the median about 93% of the time -- the median will very frequently be between your lowest and highest value.
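
The same arithmetic in a couple of lines of Python, if you want to convince yourself:

# Chance the median falls between the lowest and highest of five random samples:
# one minus the chance that all five land on the same side of the median.
p_all_above = 0.5 ** 5                 # 0.03125
p_all_below = 0.5 ** 5                 # 0.03125
print(1 - p_all_above - p_all_below)   # 0.9375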

So if five samples give you 93% confidence, why take 18 samples? From the example above, if you picked the first five at random and stopped, you would have found this:

49, 23, 17, 33, 17

With 93% confidence, you’d be able to assert that the median number of privileged accounts across all systems is between 17 and 49. With small samples randomly chosen, high confidence comes at the expense of intervals that are often quite wide, and in this case the interval may be too wide to be useful. But picking more samples and tossing out the six lowest and six highest values retains roughly the same level of confidence in the middle six, with the advantage of a much smaller range between the low and high values. And it’s that smaller range that lets you understand the state of the environment and set a credible level of improvement that the organization can meet.
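
Using the example numbers again, the contrast between the two intervals is easy to confirm:

# Interval from just the first five sampled values
first_five = [49, 23, 17, 33, 17]
print(min(first_five), max(first_five))  # 17 49 -- far wider than the 17 to 29 above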

More info I found useful:
How to Measure Anything: http://www.howtomeasureanything.com/ (lots of gems on the site; check out the PowerPoint on measuring unobserved intrusions in information systems).

Confidence intervals for a median, with different size samples: http://www.math.unb.ca/~knight/utility/MedInt95.htm

**These numbers were generated by Excel; try it out for yourself. For this example I used the formula =5+(40*RAND()) to give a higher starting value than just "1."
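
If you’d rather generate test numbers in Python than in Excel, a rough equivalent of that formula (rounded to whole accounts) is:

# Rough Python equivalent of Excel's =5+(40*RAND())
import random
test_counts = [round(5 + 40 * random.random()) for _ in range(18)]
print(test_counts)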
