
What on Earth is hunting anyway?

TL;DR --> Cyber threat hunting is a real capability. Read my definition near the bottom.

The Real Problem

If you’ve been around information security just in the past year or so, you’ve probably heard a new buzzword making its way into our lingo: “hunting” AKA “proactive hunting” AKA “cyber threat hunting” AKA "threat hunting." It usually conjures up this feeling of elite capability that only a few can do properly. To some extent, that’s true. But like everything else on the planet, it depends on what you mean by it.

I believe there’s a somewhat comprehensive way of describing what hunting is without contradicting what different people mean by it. Spoiler alert: they’re almost all true. But how? Let me explain...

#1

First, I need to address an issue. This issue is what I like to call the core security problem. That problem is judging the intent of code. Or, worse yet, judging the intent of a string of ones and zeroes that are next in the execution path. You see, computers were designed to process binary bits and then do something with them - something they were programmed to do. Most of the time that programming has good intent - make the computer do exactly as the user wishes. Sometimes, it has not-as-good intent - make the computer do exactly as the user wishes...but a user who shouldn’t have access to that computer. How is a computer to know the difference? Great question. If you come from a country that has a well-intentioned justice system, you already know that it can never perfectly judge people’s intents even though there is ample opportunity to question evidence, culprits, accomplices, victims, and bystanders. What if you’re trying to judge the intent of ones and zeroes that can’t answer your questions about where they came from or where they’re going? It’d be nice to ask computers and/or programs to judge the intent of other programs or executions, but we kind of already know where that’ll lead. This is why, I believe, it’s become intuitive to ask people to get involved in judging intent.

#2

This is where we get to the second layer of the security problem. It’s people. We came to believe, and rightfully so IMHO, that a human’s ability to judge intent is greater than a programmed bot’s ability. However, I think we did it wrong. What we did was build an apparatus that passively identifies network signatures of potentially ill-intended computer activity. These brittle signatures result in thousands to millions of alerts that need action. Who or what takes that action? People. We created security operations centers (SOCs) as a best-practice construct to organize a large group of people to handle the thousands to millions of alerts. When they’re done with those, they can do it again. And again. And also again. We want these SOC analysts to effectively analyze, judge, and respond to tons of low-context data points that may or may not be indicative of malicious activity, activity that usually occurs at a millisecond timescale (I have more to say on this...perhaps in a later blog). We’re so scared of the false negative (that is, no alert for real malicious activity) that we open the aperture of our detection solutions to emit as much alert activity as possible. Now we’ve created a false positive problem (that is, alerts for non-malicious activity) that floods our SOC analysts. The result? There are so many alerts to wade through, and so little context to prioritize them, that SOC analysts cannot effectively analyze, judge, and respond to every alert. Let me say it a different way:

Our false positive problem has created the false negative scenario we feared.

Or, maybe this makes more sense to you: 

People make poor bots.

We wanted people involved for the right reasons, but I think we got it wrong.

One more comment, and then I promise I’ll summarize the problem that hunting is intended to solve.

#3

There’s a third layer to our security problem. This layer is that of a major detection capability gap. While the traditional signature-based sensor, or the automated detection solution in general, is decent at detecting the exact activity we already know to be bad, it is not very good at detecting novel threats. It is somewhat trivial for a moderately-willed human attacker to evade automated detection solutions. And when their new tactics and tools are eventually discovered, how does the passive monitoring apparatus keep up? It fundamentally can’t.

So, here’s the problem I think we’re running to “threat hunting” to solve:

Passive monitoring (automated detection solution + SOC) doesn’t cut it.  We need something proactive.

An Answer

Okay, this is where we get into what hunting really is. I wanted to first paint the picture of what problem we're trying to solve. Something outside of passive monitoring, something more proactive and on-demand, something that can discover threats in a new or different way...this is what I think people are calling “hunting.” Hunting, IMHO, is a reaction to a real problem.

Let me be more specific. Hunting generally takes three forms:

  1. Retrospective Discovery
  2. Artifact Discovery
  3. Pattern Discovery

Retrospective Discovery

From my perspective, this is the most common form of hunting...by far. The idea of this hunting form is that you are searching for known threats, retrospectively. A common scenario: you have new indicators from a threat intel feed, and you want to search your historical data for them. This is a real, worthwhile problem that can’t be solved by simply throwing the new intel into your passive monitoring apparatus. Another common scenario: in your digital forensics and incident response (DFIR) efforts, you identify some new bad hashes. Again, you can’t just put these in your IDS to find already-existing malfeasance on your network. You need a capability to look retrospectively at your data.

The most common way people perform retrospective discovery is by leveraging existing DFIR processes and people. Namely, collecting data from targets and pulling it back for analysis, blacklist matching, and perhaps some rule matching. These methods are generally IOC-based and scan-based, as opposed to behavioral, real-time, or activity-based.
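
To make that concrete, here’s a minimal sketch of what IOC-based retrospective matching can look like, assuming you’ve already pulled files back from targets into a local directory and have a list of known-bad SHA-256 hashes from your intel feed or DFIR work. The file and directory names below are placeholders, not a prescription of any particular tooling.

```python
import hashlib
from pathlib import Path

# Hypothetical inputs: a text file of known-bad SHA-256 hashes (one per line) and a
# directory of files previously collected from targets. Adjust to your own collection.
IOC_FILE = Path("known_bad_sha256.txt")
COLLECTED_DIR = Path("collected_files")

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def retrospective_hash_sweep() -> None:
    """Report every collected file whose hash matches a known-bad indicator."""
    known_bad = {line.strip().lower() for line in IOC_FILE.read_text().splitlines() if line.strip()}
    for path in COLLECTED_DIR.rglob("*"):
        if path.is_file() and sha256_of(path) in known_bad:
            print(f"MATCH: {path}")

if __name__ == "__main__":
    retrospective_hash_sweep()
```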

Artifact Discovery

I believe this to be the next most common form of hunting. It consists of discovering novel threat artifacts. Artifacts, in this case, refer to data points that exist on disk or in memory on a computer. Examples are process metadata taken from memory, registry hives, file tables, DLLs, hashes, network statistics, and the like. Analyzing these artifacts could reveal new tools being used by attackers.

There are many ways to perform this style of hunting, but the most common are frequency or outlier analysis against a variety of computer artifacts. This is sometimes called “stacking.” For example, “process stacking” is performing frequency analysis of processes run across a host or an environment to bubble up outliers, which are by definition anomalous and worth looking into. In some cases, such as when frequency analysis is automated and operationalized, this can be considered anomaly detection. Again, it’s generally IOC-based (not behavioral) and sometimes scan-based (as opposed to real-time or activity-based). Other times, it can be query-based, meaning the needed artifact data is already aggregated and just needs to be queried. Sometimes you can find machine learning efforts in this area as well.
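
As a rough illustration of process stacking, here’s a small sketch that assumes your collection tooling can export a process inventory as a CSV with hostname and process path columns. The file name, column names, and rarity threshold are assumptions you’d adapt to your own data.

```python
import csv
from collections import Counter

# Hypothetical input: a CSV exported from your collection tooling, one row per observed
# process, with (assumed) columns "hostname" and "process_path".
INVENTORY_CSV = "process_inventory.csv"
RARE_THRESHOLD = 3  # processes seen on this many hosts or fewer bubble up for review

def stack_processes(csv_path: str, threshold: int) -> list[tuple[str, int]]:
    """Count how many distinct hosts each process path appears on and return the rare outliers."""
    hosts_per_process: dict[str, set[str]] = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            hosts_per_process.setdefault(row["process_path"].lower(), set()).add(row["hostname"])
    counts = Counter({proc: len(hosts) for proc, hosts in hosts_per_process.items()})
    # Reverse most_common() so the rarest processes come first.
    return [(proc, n) for proc, n in counts.most_common()[::-1] if n <= threshold]

if __name__ == "__main__":
    for proc, host_count in stack_processes(INVENTORY_CSV, RARE_THRESHOLD):
        print(f"{host_count:>4}  {proc}")
```

In practice you’d stack across more than just process paths (autoruns, services, scheduled tasks, loaded DLLs), but the shape of the analysis is the same: count, sort, and look at the rare end.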

Pattern Discovery

I consider this to be the rarest form of hunting, primarily due to the difficulty in acquiring the right kind of data and then aggregating, processing, and storing it behind a queryable interface. Similar to artifact discovery, pattern discovery seeks to uncover novel threats. What’s different is the level of discovery. Pattern discovery reveals the behavior, method, or tactic of attacker activity, not just the artifacts left behind by an attack. In an increasing number of instances, attackers leave very few traces on computers, cleaning up their tracks and/or running entirely in memory. Let me give you an example…

Say you’re trying to hunt down a particular family of ransomware that’s abusing vssadmin.exe to delete Volume Shadow Copies before encrypting files. An artifact-based approach to hunting could try to discover the ransomware implant itself, perhaps through outlier analysis (artifact discovery) of processes running on the machine or through matching all known hashes (retrospective discovery) of that ransomware family against binaries found on a computer. But what if no implants were installed during the attack; rather, a memory-only variation of the ransomware was utilized? Pattern discovery would be a boon here. Rather than looking for queryable artifacts left on a computer, pattern analysis would look for abnormal system behavior. In this example, a worthy pattern to hunt on could be abnormal parent processes of vssadmin.exe. Regardless of whether this ransomware relies on on-disk implants or in-memory ones, that behavioral pattern could bring to light all anomalous Volume Shadow Copy manipulations where vssadmin is utilized. Pattern discovery is, therefore, more behavior- and activity-based by nature, using data points that are emitted as a result of system activity rather than relying on a collection or query of historical artifacts. EDR solutions tend to provide these data points, sometimes generally referred to as “process metadata.” Or, if you’re cheap or cost-conscious, take a look at a free alternative here.
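
To make the vssadmin example concrete, here’s a toy sketch that assumes you have process-creation events (from an EDR or similar telemetry) already normalized into records with image, parent image, and command line fields. The sample events, the field names, and the list of “expected” parent processes are all assumptions you’d tune to your environment.

```python
# Hypothetical process-creation events, e.g. normalized from an EDR or Sysmon-style feed.
# Field names here are assumptions; map them to whatever your telemetry actually emits.
events = [
    {"host": "ws01", "image": r"C:\Windows\System32\vssadmin.exe",
     "command_line": "vssadmin delete shadows /all /quiet",
     "parent_image": r"C:\Users\bob\AppData\Local\Temp\invoice.exe"},
    {"host": "srv02", "image": r"C:\Windows\System32\vssadmin.exe",
     "command_line": "vssadmin list shadows",
     "parent_image": r"C:\Windows\System32\cmd.exe"},
]

# Parent processes you consider normal for vssadmin in your environment (an assumption to tune).
EXPECTED_PARENTS = {"cmd.exe", "powershell.exe", "svchost.exe"}

def suspicious_vssadmin(events: list[dict]) -> list[dict]:
    """Flag vssadmin executions whose parent process is not on the expected list."""
    hits = []
    for e in events:
        if e["image"].lower().endswith("vssadmin.exe"):
            parent = e["parent_image"].lower().rsplit("\\", 1)[-1]
            if parent not in EXPECTED_PARENTS:
                hits.append(e)
    return hits

if __name__ == "__main__":
    for hit in suspicious_vssadmin(events):
        print(f'{hit["host"]}: vssadmin spawned by {hit["parent_image"]} -> {hit["command_line"]}')
```

The point isn’t this specific rule; it’s that the hunt keys on behavior (who spawned vssadmin, and why) rather than on any artifact the ransomware leaves on disk.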

A definition

If you consider the aforementioned forms of hunting, I believe a solid hunting definition hypothesis could be:

Hunting is the discovery of malicious artifacts, patterns, or detection methods not accounted for in passive monitoring capabilities.

An Alternative

So when I originally formed these thoughts over a year ago, I was trying to level-set on what was meant by the term “hunting.” I hadn’t seen anything at that point that satisfied my desire for a more comprehensive explanation. I later discovered that, around the same time, Gartner had produced a pretty good overview of hunting here. I actually like Gartner’s breakout of the three forms of hunting. I documented my thoughts above since they fit better into my stream of consciousness, but I do think Gartner’s perspective is well done. Perhaps that language appeals to you more than mine.

Last Word

At Vector8, we perform all three forms of hunting on behalf of our customers. We can also teach you how to do this. If you'd like to know more, please get in touch with us below!

Written by
Kris Merritt