
Saturday, January 10, 2009

Getting privileged accounts under control: spend less time finding, more time fixing

Are there too many privileged accounts on the business critical systems in your organization? If you suspect so, how would you find out, and how would you energize the leadership in your organization to act? And once you get management endorsement, what number would you set as the maximum allowable number of accounts on a system as a benchmark for non-compliant system owners to shoot for? You'll want all owners to verify compliance, but would a positive response from 50% of those owners justify the call to action?

Perhaps most important of all, after driving this change and moving on to the next problem, will you have the time and resources needed to follow up later in the year and make sure that the problem hasn’t reappeared?

As with any security issue, a small amount of effort should go into finding the problem, and the majority into solving it. To paraphrase Tom Clancy from Into the Storm: “The art of command is to husband that strength for the right time and the right place. You want to conduct your attack [in this example, on the problem] in such a way that you do not spend all your energy before you reach the decisive point." (page 153)

Using a tool like DumpSec for Windows, it doesn’t take long to pull group memberships remotely from any given system. But if you’re dealing with hundreds or even thousands of systems, that’s a lot of energy to spend before reaching the decisive point, i.e. the point when system owners start removing excessive accounts.

Intuitively, it makes sense that you wouldn’t want to poll every system in a large environment. Instead, you’d take a sample. But how big of a sample is needed for you – and senior management – to be confident that you know the current state?

Turns out, you (and your boss) can be 90% confident of knowing the median number of privileged accounts across the server population if you start with a randomly selected sample of 18 systems. And because by definition the median is the middle value, you know that half of the systems fall above it. If this value is too high based on the risk requirements of the environment, you can set a compliance goal such as “reduce the number of privileged accounts on each Windows system to X by the end of the year.”

To find the median, follow these steps:
1. Pick 18 systems at random across the system population. Dump the list of users with privileged access from each system.
2. Arrange the counts from fewest to most accounts.
3. Throw out the lowest five and the highest five values, and keep the middle eight.

With 90% confidence, the median number of privileged accounts will fall between the low value and the high value of the middle eight numbers out of the sample of 18. (Discarding six from each end and keeping only the middle six would drop the confidence to roughly 76%.)
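These steps can be sketched in a few lines of Python. Keeping the middle eight of 18 values (discarding five from each end) is what the binomial arithmetic puts at roughly 90% confidence; the sample counts below reuse the example values from this post, and the actual account dump (e.g. via DumpSec) is left out:

```python
def middle_values(counts, drop):
    """Sort the sampled account counts, discard `drop` values from each
    end, and return the (low, high) bounds of what remains."""
    ordered = sorted(counts)
    kept = ordered[drop:len(ordered) - drop]
    return kept[0], kept[-1]

# Privileged-account counts dumped from 18 randomly chosen systems.
sample = [49, 23, 17, 33, 17, 16, 28, 14, 29, 40, 12, 44, 34, 12, 25, 9, 10, 32]

# Keep the middle eight of 18: a ~90% confidence interval for the median.
low, high = middle_values(sample, drop=5)
print(f"median is likely between {low} and {high}")  # between 16 and 32
```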

For example, if I dumped the local admins group across a set of systems, I might get a result like this. (The “middle eight” values are marked in brackets):

49, [23], [17], 33, [17], [16], [28], 14, [29], 40, 12, 44, 34, 12, [25], 9, 10, [32]**

So based on this sample, the median number of privileged accounts across all systems is 90% likely to be between 16 and 32. Granted, due to the architecture, certain accounts may be present across all systems. And other factors may help determine whether 32 is too high … or 16 too low. But once you decide, you have a baseline value that defines the boundary between acceptable risk and excessive access, which can be communicated across the organization.

Once you’ve gotten buy-in and communicated the requirement, each system owner who wasn’t sampled can compare and confirm that they comply. And in keeping with Clancy’s principle above, only a fraction of your time was spent identifying the problem and communicating it: the rest goes into helping fix it.

But why does 18 work? Where does the 90% confidence come from, and why throw out the bottom five and top five values?

Doug Hubbard explains it in Chapter 3 of his book “How to Measure Anything.” And while this isn’t a specific example from the text, he covers plenty of intriguing applications to information security.

Hubbard introduces the idea of finding the median from a small sample as “the rule of five:”

“When you get answers from five people, stop… Take the highest and lowest values in the sample… There is a 93% chance that the median of the entire population … is between those two numbers.” Why? “The chance of randomly picking a value above the median is, by definition, 50% -- the same as a coin flip resulting in ‘heads.’ The chance of randomly selecting five values that happen to be all above the median is like flipping a coin and getting heads five times in a row.” (pp. 28-29)

In other words: 0.5 x 0.5 x 0.5 x 0.5 x 0.5 = 0.03125. With a random sample of five, there’s only a 3.125% chance of being above the median all five times, and the same 3.125% chance of being below the median all five times. So each time you take five random samples, you’ll get values on both sides of the median 93.75% of the time -- the median will very frequently fall between your lowest and highest values.
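The same coin-flip arithmetic generalizes to any sample size: the number of sampled values falling below the population median is binomially distributed, which is where these confidence figures come from. A small sketch in Python:

```python
from math import comb

def median_confidence(n, low_rank, high_rank):
    """Probability that the population median lies between the
    low_rank-th and high_rank-th order statistics of an n-value sample.
    The count of sample values below the median is Binomial(n, 0.5)."""
    return sum(comb(n, k) for k in range(low_rank, high_rank)) / 2 ** n

# Rule of five: median between the lowest and highest of 5 samples.
print(median_confidence(5, 1, 5))             # 0.9375

# Sample of 18, keeping the middle eight (ranks 6 through 13): ~90%.
print(round(median_confidence(18, 6, 13), 3))  # 0.904

# Keeping only the middle six (ranks 7 through 12) drops to ~76%.
print(round(median_confidence(18, 7, 12), 3))  # 0.762
```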

So if five samples give you 93% confidence, why take 18 samples? From the example above, if you picked the first five at random and stopped, you would have found this:

49, 23, 17, 33, 17

With 93% confidence, you’d be able to assert that systems contain between 17 and 49 privileged accounts. With small samples randomly chosen, high confidence comes at the expense of intervals that are often quite wide. And in this case, it may be too wide to be useful. But picking more samples and tossing out five of the lows and five of the highs retains roughly the same level of confidence in the middle eight, with the advantage of a much smaller range between the low and high values. And it’s the smaller range that allows you to understand the state of the environment, and set a credible level of improvement that the organization can meet.

More info I found useful:
How to Measure Anything http://www.howtomeasureanything.com/ Lots of gems on the site; check out the PowerPoint on measuring unobserved intrusions in information systems.

Confidence intervals for a median, with different size samples: http://www.math.unb.ca/~knight/utility/MedInt95.htm

**These numbers were generated by Excel; try it out for yourself. For this example I used the formula =5+(40*RAND()) to give a higher starting value than just "1."

Sunday, January 04, 2009

Security career snapshot - January 2, 2009

Now that the holiday break has ended and everyone is heading back to work, it seems like a good time for information security professionals at every level to take stock of available opportunities and chart a course for the new year.

Is it safer to stay put, or move?

While there's an abundance of forecasts predicting where 2009 is headed, most are discouraging, few will turn out to be correct, and there doesn’t seem to be a method for sorting the good estimates from the bad that’s any more trustworthy than the estimates themselves.

Instead, I'd argue that it makes more sense to take a second look at the current role, the financial health of the organization, external opportunities, and the stability of the regional and national economy ... and plan according to current actualities.

To cut through that uncertainty, I spent some time over the break going through online job postings to compile a snapshot of security jobs that are currently open and available. I looked at job titles, years of experience required, expected regulatory / compliance background, certifications, and the most active hiring locations. This snapshot won’t show hiring trends for 2009, but my hope is that it’ll at least make a decent starting point for figuring out where the holes in the resume are, and which types of work assignments today may open doors for the next role.

I started with a query of security jobs using an aggregator site, and randomly selected a subset of 200 for analysis. I downloaded each full post directly from the offering website and parsed them locally using some scripts. Below are some of the high points. The margin of error on the survey should be plus or minus 7%. If you want a detailed look at the approach, or the data itself, just drop me a line.
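The ±7% figure is consistent with the standard worst-case margin of error for a sample proportion at 95% confidence. A quick check (the sample size is the 200 postings described above):

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) margin of error for a sample proportion,
    at ~95% confidence (z = 1.96)."""
    return z * sqrt(p * (1 - p) / n)

print(round(margin_of_error(200) * 100, 1))  # 6.9 (percentage points)
```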

Here’s what I found:

Most common job titles
A bit less than half of all security job openings are for the role of engineer, analyst, or administrator. Manager jobs appear less than 5% of the time, and director-level only 1%.

Without more information it's tough to be definitive, but the numbers could imply a couple of things: first, that security organizations may be flattening right now as managers hire more staff; and second, that “individual contributor” roles may have more mobility across organizations than leadership positions. It’s also possible that management roles are filled through other means (internal candidates, etc.) more frequently than staff positions are.

Position title     Number of postings     Percent
Engineer           45                     22.5%
Analyst            30                     15.0%
Administrator      14                     7.0%
Manager            9                      4.5%
Consultant         9                      4.5%
Architect          5                      2.5%
Director           2                      1.0%


Years of experience expected for each role
Across all positions, five years was the median level of experience required. Only 30% of positions expected two or fewer years of prior relevant work history. One interesting fact: out of 41 postings with a specific requirement, that requirement was described 21 different ways (e.g. 1 to 4 years, 2 or more, 4-6 years, etc.). It seems the industry has generally standardized on which certifications and skills are expected, but not on the level of experience that represents an appropriate minimum requirement.

Years of experience required     Number of job postings
0 to 1                           3
2 or more                        10
3 or more                        3
4 or more                        2
5 or more                        12
6 or more                        3
7 or more                        2
8 or more                        1
9 or more                        1
10 or more                       5


Most common regulatory / compliance keywords
Not every posting specifically cited regulatory requirements or security framework experience. But for those that did, the following are the most commonly listed:

Regulatory or governance requirement                                            Number of postings
Federal Information Security Management Act (FISMA)                             14
Code of practice for information security management (ISO 17799/27001/27002)    12
Sarbanes-Oxley (SOX 404)                                                        12
Payment Card Industry Data Security Standard (PCI DSS)                          12
Health Insurance Portability and Accountability Act (HIPAA)                     7
Gramm-Leach-Bliley Act (GLBA)                                                   3


Most common certifications
As of early 2009, candidates with a security certification have an edge over non-certified candidates, but certification is not usually a make-or-break requirement. Less than half (47%) of the security job postings examined mentioned certification at all, and only around 20% described certification as “required” or “highly desirable.”

CISSP is the most commonly listed credential, although it is often given as one of several acceptable options, e.g. “Professional security certification such as CISSP, CISM, GIAC, CCNA, CCSP, CCNP, MCSE, Security+, Network+.”

Security certification (n=94)                                  Number of postings     Percent
Certified Information Systems Security Professional (CISSP)    48                     52.7%
Other (Cisco, etc.)                                            12                     13.2%
Certified Information Security Manager (CISM)                  11                     12.1%
Certified Information Systems Auditor (CISA)                   10                     11.0%
SANS Global Information Assurance Certification (GIAC)         10                     11.0%


Most active hiring locations
Finally, the top ten hiring locations (nine states plus Washington D.C.), listed by frequency of job posting:

State              Number of postings (n=200)
California         32
Virginia           32
Maryland           24
Washington D.C.    17
Texas              11
Massachusetts      11
New York           8
New Jersey         6
Illinois           6
Pennsylvania       4


So if you're a Security Engineer with a CISSP and five or more years' experience in your current role, with a strong background in FISMA, SOX and ISO 17799, who lives in the Washington D.C. area ... relax ... even in the midst of this economic mess, it looks like the world is still beating a path to your door. For the rest of us, though, we probably have some work to do.

Best of luck to everyone trying to improve their skills and find the right organizational fit in 2009. I hope this was helpful; if you have questions about specific skills, opportunities or regions not listed in this overview that you haven't been able to ferret out using the job search engines - let me know and I'll help if I can.

Friday, December 05, 2008

Risk metrics should drive security, without dictating it

How precise do risk measures need to be in order to be of value to an organization? Is it necessary to calculate an annual loss expectancy (ALE) for each type of information security risk in order to justify security decisions? For better or worse, most organizations have settled on a security budget that is a fraction of the overall IT budget, which in mature companies remains a steady proportion of annual revenue.

Given the challenge of putting together credible loss numbers across the range of identified threats against the organization, it doesn’t make much sense to try to optimize budgets purely against a risk forecast. Instead, security is best treated as a constraint in decisions to optimize revenue, operating costs, profit or other key measures. Protection for critical assets needs to cross an “adequacy” threshold. Conversely, when changes stress or stretch protection capabilities to the point of exposing critical assets to threats, the information security function begins raising the case for change.

So if risk management is more about being on the right side of a threshold, as is literally specified in the EU Privacy Directive / US Safe Harbor guidance, then precision is not nearly as important as confidence. Polling organizations such as Gallup work to a margin of error of 2% because the difference between winning and losing a contest is often very close. Safety- and security-based decisions, by contrast (i.e., “we need to act, now”), can become clear with margins of 10-15% or more. As an example, if the brakes on the family minivan squeak and start slipping, it's time to get them replaced.

With the help of a few reasonable, simplifying assumptions, it is possible to make trustworthy risk-based decisions based on just two critical metrics: security control coverage, and information asset exposure.

These assumptions are as follows:
1. The impact of security incidents is best characterized in financial terms, i.e. information security incidents have the potential to affect current and/or future costs, and current and/or future sales. (Health- and safety-critical environments are an exception and should be treated differently.)
2. The value that IT security provides to an organization comes from decreasing the frequency and severity of security incidents by:
a. Preventing incidents from occurring whenever possible
b. Detecting relevant events where and when they occur, and mobilizing an effective response to minimize the damage and restore normal operation as quickly as possible.
3. Security control coverage is a leading indicator of risk to information systems, business processes and data.

Based on these assumptions, two key metrics for decision makers can persuasively frame the security “threshold” decision without requiring an unreasonable level of precision:
1. Information asset exposure: a measure of the relative contribution of that asset to the current and future revenue of the organization.
2. Security control coverage: a measure of the number and type of industry best practice recommendations implemented independently as layers of protection on each asset and process owned or used by the organization to serve its customers and stakeholders.

As an example, consider a company with $120 million in annual sales, $150 million in assets, 500 employees, tens of thousands of current and former customers, Market capitalization of $110 million, and an operating margin of about 18%. Based on these estimates, here’s a quick back-of-the-envelope estimate of the scale involved in information protection decisions:

$120 million in annual sales works out to about $330,000 per day, or between $10,000 and $25,000 per hour. So to this company, the loss of several hours of downtime from a key system or systems, plus incident handling costs and lost worker time, can run between $150,000 and $200,000.

According to a 2006 report from the Association of Certified Fraud Examiners, the median fraud loss for asset misappropriation (skimming, payroll fraud or fraudulent invoicing) is $150,000.

Forrester estimates that a privacy breach costs between $90 and $305 per record to address; the Ponemon Institute provides a similar number. Based on those estimates, losing personal information on 5,000 customers would result in costs of roughly $450,000 to more than $1.5 million.

Asset exposure, described as a fraction of revenue, is a linear function: the longer the downtime, or the more records exposed, the higher the cost. But as described in an earlier post, security is not linear. In a population of systems connected by trust relationships, a failure in server A can cascade into a compromise of servers B, C, D and on down the line.

Earlier this year, Verizon published a Data Breach Investigations Report based on follow-up of over 500 cases across a four-year period. While there’s much to take away from the results, two measures stand out in terms of shaping risk decisions: 85% of identified breaches were the result of opportunistic attacks, and 87% were considered avoidable through reasonable controls. That is, security control coverage provides a strong leading indicator of the likelihood of experiencing a security breach.

So, given an operating margin of 18% (roughly average for the S&P 500) it could take $5 to $6 of additional revenue to make up for each dollar lost due to a security incident.
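Pulling the example's numbers into one back-of-the-envelope sketch (all inputs come from the figures above; the per-record breach costs are the Forrester range):

```python
def exposure_sketch(annual_sales, margin, records, cost_per_record):
    """Back-of-the-envelope exposure figures for the example company."""
    daily_revenue = annual_sales / 365
    breach_low = records * cost_per_record[0]
    breach_high = records * cost_per_record[1]
    recoup = 1 / margin  # revenue needed to replace each dollar lost
    return daily_revenue, breach_low, breach_high, recoup

daily, low, high, recoup = exposure_sketch(120e6, 0.18, 5000, (90, 305))
print(f"~${daily:,.0f} in revenue per day")                      # ~$328,767
print(f"breach of 5,000 records: ${low:,.0f} to ${high:,.0f}")   # $450,000 to $1,525,000
print(f"${recoup:.2f} of new revenue to offset each $1 lost")    # $5.56
```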

Against these measures, determining levels of acceptable risk becomes a much more straightforward exercise without the need for precise risk forecasting. Instead, it becomes a question of risk tolerance: will the extensions to the customer-facing systems generate enough new revenue to justify exposure to some of the scenarios listed above?

Metrics can frame the issues, but ultimately the business has to drive it.

Sunday, October 26, 2008

Can you afford bad security?

Amid the current economic turmoil and uncertainty, it's becoming clear that the global economy is slowing, pressuring organizations of all sizes to compete more intensely for revenue while taking an even harder look at reining in costs. These concerns cascade through the overall project portfolio to IT and security in the form of two very basic questions: What do we need? What can we afford?

In a company fighting for its survival, talking to management about improvements in information security may seem as relevant as changing the locks on a burning building. Naturally, fire is an immediate threat to an asset and its contents, but over a longer time horizon so is the risk of theft … or foreclosure.

Bottom line, some organizations can afford bad security. Others can’t. In some situations, immediate survival concerns will temporarily trump long term protection goals. But as the market meltdown in the United States in 2008 is showing us, it is just as plausible to see that relaxing key control requirements for short term profitability puts entire companies, and even markets, at risk.

The only way to get this right is to view security in light of the survival needs of the firm, and hold it to the same standard as every other investment. In the past, information security hasn’t been held to this standard, mostly due to measurement challenges. Hopefully, for the good of the profession as well as the entities we protect, those days are over and we can take up the challenge of proving our value more accurately and more persuasively than we have in the past.

“What the CEO wants you to know”
In 2001 Ram Charan wrote a gem of a book called “What the CEO Wants You to Know,” distilling business acumen into the effective management of five core measures of business health: cash, margin, velocity, growth and customers. Charan: “Cash generation is the difference between all the cash that flows into the business and all the cash that flows out of the business in a given time period … it is a company’s oxygen supply” (pp. 30-31).

Margin is the difference between the price and cost of goods sold, while velocity is the rate at which those goods are sold. Growth includes expansion (more sales) and extension (new markets) while the Customers category represents how well the organization responds and aligns with market demands.

Naturally, some of these needs can become tactical and immediate while others are more strategic in nature. But all must be functioning effectively for a company to succeed, and any threat to these measures ultimately threatens the health of the company.

“What the CISO wants you to know”
If the five factors above represent the keys to a successful business, then good security is important to a company only to the extent that it affects those factors. If there’s no impact on customers, growth, etc. then there’s no value to security. Or, as your CFO probably read in school:

“A potential project creates value for the firm’s shareholders if and only if the net present value of the incremental cash flows from the project is positive.” [Brigham and Ehrhardt, Financial Management: Theory and Practice, 11th Edition, p.389]
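As a minimal illustration of that textbook rule (the cash-flow numbers here are hypothetical, invented for the sketch):

```python
def npv(rate, cash_flows):
    """Net present value: cash_flows[0] occurs now (typically a negative
    up-front cost); each later entry is one period further out."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A security project: $100k up front, then $30k/year of avoided
# incident losses for five years, discounted at 10%.
flows = [-100_000] + [30_000] * 5
print(round(npv(0.10, flows)))  # 13724 (positive, so it adds value)
```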

Security issues expressed in terms of cash, margin, velocity, growth and customers, and measured in terms of net impact to the company have the best chance of resonating with decision makers.

Gordon and Loeb propose a three-dimensional cybersecurity cost grid as a tool for building that business case. The authors suggest that failures of confidentiality, integrity and availability be analyzed in terms of direct and indirect costs, as well as explicit and implicit costs.

For me, the distinction between indirect and implicit didn’t seem as compelling as the difference between a net positive and a net negative effect of security, so I started segmenting the effect of security across Charan’s five categories this way:

[Table: Charan’s five measures (cash, margin, velocity, growth, customers), each segmented by net positive vs. net negative security effects]

Of course, measuring it is the real trick. But there are quite a few resources available to help with that...