Friday, May 08, 2009

A FAIR measure of defense in depth

Recently, the owners of a system containing sensitive information where I work began planning an upgrade to the latest available version. In addition to performance improvements and bug fixes, the new release also modified authentication and authorization processes. Compared to the current model, these changes would offer significant cost improvements in administration and support. But before flipping the switch, business and security stakeholders wanted to know “which configuration is more secure?”

To provide an objective answer to that question, we defined “more secure” as the configuration and support processes (i.e. security controls) that would result in the smallest amount of residual risk to the organization.

Initially, this looked like a simple consulting request from a business unit. Normally, the security team reviews the proposed security architecture and provides a recommendation. But long after the upgrade, system owners and administrators would face new decisions that affect the security of the system. They needed advice and a full knowledge transfer on how the security controls work together.

The Factor Analysis of Information Risk (FAIR) methodology makes this kind of analysis transparent by decomposing it into lower levels of detail only as needed. In situations where the alternatives are very similar, FAIR allows an analyst to identify and focus only on the relevant differences.

FAIR defines risk as “The probable frequency and probable magnitude of future loss.” Because we were comparing two possible configurations of the same system, the underlying information value was equivalent in both cases and the expected threats were the same. Any difference in risk therefore had to come from the controls, not from the underlying information. And because the FAIR framework structures the analysis into a hierarchy of loss frequency and loss magnitude factors, an analyst can isolate the comparison to the subset of factors that actually differ. In this case, the focus was on control strength and control depth against the range of expected threats.
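To make that decomposition concrete, here is a minimal sketch of the roll-up in Python. The factor names follow the FAIR taxonomy, but the simple multiplication is an illustrative simplification rather than the full methodology; the point is only that factors identical in both configurations drop out of the comparison.

```python
# A minimal sketch of the FAIR roll-up described above. The factor names
# follow the FAIR taxonomy, but the simple multiplication used here is an
# illustrative simplification, not the full methodology.

def threat_event_frequency(contact_frequency, probability_of_action):
    """How often a threat agent acts against the asset (events per year)."""
    return contact_frequency * probability_of_action

def loss_event_frequency(tef, vulnerability):
    """How often threat events become loss events (vulnerability is 0..1)."""
    return tef * vulnerability

def risk(lef, probable_loss_magnitude):
    """Probable frequency times probable magnitude of future loss."""
    return lef * probable_loss_magnitude

# Because asset value and threat contact are the same for both configurations,
# probable_loss_magnitude and contact_frequency drop out of the comparison;
# only vulnerability (control strength and depth) differs.
```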

Using FAIR, we looked at the type and frequency of contact that threats would have with the system, their probability of action, and the number and strength of controls applied in each case.
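Applied to this comparison, a hypothetical sketch might look like the following. The control names, resistance values, and threat capability figure are invented for illustration, and modeling vulnerability as “the attacker defeats every layer in the path” is a simplification of how FAIR derives vulnerability from control strength and threat capability.

```python
# Hypothetical comparison of two configurations. Control names and resistance
# values (0..1) are invented for illustration; treating vulnerability as the
# chance an attacker defeats every layer is a deliberate simplification.

THREAT_CAPABILITY = 0.9  # assumed skill level of the expected threat community

current_config  = {"local passwords": 0.6, "os file permissions": 0.7}
upgraded_config = {"central directory auth": 0.8,
                   "role-based authorization": 0.7,
                   "os file permissions": 0.7}

def vulnerability(controls, threat_capability=THREAT_CAPABILITY):
    """Probability that a threat event defeats every control layer in its path."""
    p_defeat_all = 1.0
    for resistance in controls.values():
        p_defeat_layer = max(0.0, threat_capability - resistance) / threat_capability
        p_defeat_all *= p_defeat_layer
    return p_defeat_all

for label, controls in (("current", current_config), ("upgraded", upgraded_config)):
    print(f"{label}: {len(controls)} layers, vulnerability ~ {vulnerability(controls):.3f}")
```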

In the end, the analysis objectively determined which configuration had enabled more layers of security controls that an attacker could not circumvent. (As an example of circumventing a control: logon banners may be required for legal reasons on certain systems, but an attacker may never interact with the system through the interfaces that display them, sidestepping the control entirely. The AOL mumble attack is another example.) And the threat-oriented focus provided a context for evaluating future system changes: owners, auditors, and security team members now share a common understanding of how control changes add or remove layers of protection between threats and assets.

Eventually, we wound up with a reasonably portable security metric for comparison: defensive layers per threat vector.
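A rough sketch of how that metric could be tracked is below; the threat vectors and the controls mapped to them are hypothetical placeholders, not the actual system’s design.

```python
# Sketch of tracking "defensive layers per threat vector". The vectors and
# the controls mapped to them are hypothetical placeholders.

layers_by_vector = {
    "interactive logon":     ["network acl", "directory auth", "role-based authz"],
    "direct db connection":  ["network acl", "db account", "table grants"],
    "backup media handling": ["media encryption"],
}

for vector, layers in sorted(layers_by_vector.items()):
    print(f"{vector}: {len(layers)} layers an attacker cannot sidestep")

# The vector with the fewest non-circumventable layers bounds the overall
# protection of the system.
weakest_vector = min(layers_by_vector, key=lambda v: len(layers_by_vector[v]))
print("weakest vector:", weakest_vector)
```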

It’s not the number of controls or compliance gaps that determines the security of a system, but the strength and depth of the protection that attackers can’t sidestep.
