Saturday, March 04, 2017

Sharp Sticks and Small Spaces: The DISH Threat Hunting Reference Model

Predator isn’t just a good action movie. It’s a great action movie. An advanced alien hunts humans for sport, picking off elite soldiers one by one. After each attack, our hero learns the strengths and weaknesses of his adversary. And in the end, with sharp sticks and small spaces, he initiates contact with a superior foe on the only terms that give him a chance to survive.

In other words, a day in the life of cyber security operations.

If we sit back behind our consoles and wait for attackers to trip alerts, we too run a high risk of getting picked off, one by one. We have to constrain adversary behavior as much as possible, then actively search out signs of malicious activity to engage where we have a relative advantage.


Regardless of the threat landscape and organizational capabilities, we need to be threat hunting as effectively as possible. To get there, we have to answer some basic questions:
  1. What can we do that will be effective today?
  2. How do we determine our threat hunting needs?
  3. What is our capability roadmap - what should we buy and what should we build? What skill sets are strategic for these capabilities, and how do we attract and retain them?
  4. How do we measure effectiveness and adjust as needed?
There are a number of high-quality resources available to answer question #1. And Sqrrl's Threat Hunting Reference Guide makes progress on items 2 and 3 by organizing proven processes and analysis techniques into a coherent service that can be defined and managed.

But what about the last one - how do we know that we're effective? Well, the short answer is if we're finding malicious content, and we don't have any data breaches...that's a good sign.

Good - but not ideal. The "hit rate" of malicious content only captures the numerator of a fraction: how much we find. The denominator is how much is actually there. If the numerator is the only thing we can observe, we can't tell whether we're getting better or worse, regardless of how the number moves.
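A toy example makes the point. The numbers here are hypothetical - and in practice the denominator is exactly what we can't see, which is why the raw count tells us so little:

```python
# Hypothetical numbers: the same rise in findings can mean two very
# different things depending on the denominator we can't observe.

def hit_rate(found, total_present):
    """Fraction of malicious activity actually detected."""
    return found / total_present

# Quarter 1: we find 10 intrusions; 20 actually occurred.
q1 = hit_rate(found=10, total_present=20)   # 0.50

# Quarter 2: we find 15 intrusions - looks like improvement -
# but 60 actually occurred, so effectiveness dropped.
q2 = hit_rate(found=15, total_present=60)   # 0.25

print(f"Q1 hit rate: {q1:.0%}, Q2 hit rate: {q2:.0%}")
```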

Counting breaches is more accurate ... but it's an outcome (lagging) measure. You only get the data when something happens. See Predator, above. Not Being Alive is a terrible way to find out that our tactics needed adjustment.

To be sure, we still need these process and outcome measures. But a threat hunting reference model needs to define the problem space in a way that enables leading measures of effectiveness, and keeps hunt activities in the context of the total operations picture.

D I S H

The DISH Threat Hunting Reference Model characterizes activity on four dimensions:
  • Depth: The number of observable attributes examined for malicious activity. Examples: processes, services, scheduled tasks. MITRE lists 127 different identifiable entities in the Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework for describing late-stage activities in the attack lifecycle (kill chain). Start with all attack techniques, subtract the items covered by secure configurations or high-confidence security alert use cases, and the remainder is the "visibility gap" to be evaluated as hunting candidates based on risk (a rough coverage sketch appears at the end of this post).
  • Intensity: Hunt frequency - how often data is collected and analyzed from each environment. Based on exposure and potential impact, some systems may be hunted weekly, while others are reviewed daily or more often. Automated security alerts from SIEM correlation rules represent a special hunt case where frequency = continuous. This makes sense, because SIEM was designed to automate log review 'hunting' in the first place. Looking at SIEM and response automation as part of the hunting continuum lets leadership evaluate hunting and alerting activities as complementary parts of a unified capability. To the extent that automated detection forces attackers to move slowly, it has the potential to increase the effectiveness of hunting activities.
  • Scope: The number and type of entities to be included in hunting activity. Examples include all endpoints, servers, DMZ hosts, websites, outbound connections, user sessions, etc. Scope, depth, and intensity define the volume of data needed to support hunting activities.
  • Heuristics: The methodology used to proactively find malicious activity. Examples: simple searches for Indicators of Compromise (IOCs) such as known file names, hashes, IP addresses, registry values, etc.; high-fidelity alerting on known malicious content; and tool-track detection - observation of the known system changes that occur when malicious tools are used. Slightly more complex methods include data stacking, where all instances of a given entity are captured across a hunt scope and then rare occurrences are examined (see the sketch after this list). At the high end of the heuristic category are machine-assisted processes such as behavior and entity analytics that attempt to alert on outliers identified through time-series analysis or machine learning.
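To make the stacking idea concrete, here's a minimal sketch. It assumes we already have an inventory of (hostname, value) pairs pulled from the hunt scope - for example, the image path of every running service on every endpoint. The field names, paths, and threshold are illustrative assumptions, not output from any particular EDR or SIEM product:

```python
def stack(observations, rare_threshold=3):
    """Count how many hosts exhibit each value and return the rare ones.

    observations: iterable of (hostname, value) pairs
    rare_threshold: values seen on this many hosts or fewer become hunt leads
    """
    hosts_per_value = {}
    for host, value in observations:
        hosts_per_value.setdefault(value, set()).add(host)

    # Rare values - present on only a handful of hosts - are the leads to review.
    return {value: sorted(hosts)
            for value, hosts in hosts_per_value.items()
            if len(hosts) <= rare_threshold}

# A service image path seen on one host out of thousands is worth a closer look.
# (Tiny sample here, so flag only single-host values.)
leads = stack([
    ("HOST-001", r"c:\windows\system32\svchost.exe"),
    ("HOST-002", r"c:\windows\system32\svchost.exe"),
    ("HOST-042", r"c:\temp\svch0st.exe"),   # rare -> review
], rare_threshold=1)
print(leads)
```

The same pattern applies to any stackable attribute - autoruns, scheduled tasks, loaded drivers - as long as the collection covers the full hunt scope so that "rare" actually means rare.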
Combined with a vulnerability management program to reduce attack surface and SIEM alerting to automate correlation, this model lets most organizations establish a reasonably manageable hunt space. Even with limited depth and scope, we can make meaningful progress very quickly.
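To show how the Depth dimension can feed a leading measure of effectiveness, here is a rough sketch of the visibility-gap bookkeeping as a simple set difference. The attribute names and coverage lists are hypothetical placeholders, not drawn from the actual ATT&CK matrix:

```python
# Assumes three lists are maintained: observable attributes in scope (e.g. drawn
# from ATT&CK), those neutralized by secure configuration, and those covered by
# high-confidence alert use cases. All names below are hypothetical placeholders.

attack_observables = {"processes", "services", "scheduled_tasks",
                      "registry_run_keys", "wmi_subscriptions", "powershell_logs"}

covered_by_hardening = {"registry_run_keys"}            # e.g. disabled by policy
covered_by_alert_use_cases = {"processes", "services"}  # e.g. SIEM rules exist

# The visibility gap: what remains to be prioritized as hunting candidates.
visibility_gap = attack_observables - covered_by_hardening - covered_by_alert_use_cases

coverage = 1 - len(visibility_gap) / len(attack_observables)
print(f"Hunting candidates: {sorted(visibility_gap)}")
print(f"Observable coverage (a leading measure): {coverage:.0%}")
```

Tracked over time alongside Intensity and Scope, a coverage number like this gives us a leading indicator of hunting effectiveness that doesn't depend on waiting for a breach.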
