Threats and heuristics in enterprise risk management (infosec)

Welcome, Spaf’s students. If you take an interest in information risk management, there are a variety of pages here on the topic. There is a dramatic tension between those who want to apply numbers to risk and those who think that is impossible. Numerous conferences and journals look at this. Have fun reading.

 

When trying to assess enterprise risk and the threat vectors that create risk, there are standard models or derivations of frameworks found in the literature, such as NIST and OCTAVE Allegro. The current practice is to take the various simplistic risk frameworks, whether single loss expectancy (SLE = AV * EF) or annualized loss expectancy (ALE = ARO * SLE), then derive from them the set of threats and then vulnerabilities, giving an indication of loss or harm. We’re going to try to go a bit further down the path than these formulas usually allow us to advance.
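As a minimal sketch, the two loss-expectancy formulas above can be computed directly. The function names and all the dollar figures below are illustrative, not from any framework:

```python
# Standard loss-expectancy formulas: SLE = AV * EF and ALE = ARO * SLE.

def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = AV * EF: expected loss from a single occurrence of a threat event."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(annual_rate_of_occurrence, sle):
    """ALE = ARO * SLE: expected loss per year."""
    return annual_rate_of_occurrence * sle

# Made-up numbers: a $100,000 asset with 25% of its value exposed per event,
# and the event expected to occur twice a year.
sle = single_loss_expectancy(100_000, 0.25)   # 25000.0
ale = annualized_loss_expectancy(2.0, sle)    # 50000.0
print(sle, ale)
```

The arithmetic is trivial; the hard part, as the rest of this post argues, is that the inputs (exposure factor, rate of occurrence) are rarely knowable with any confidence.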

We are used to talking about threats and vulnerabilities as if they were synonymous, but the various formulas are ways of showing that though they are linked they are distinct elements of the risk equation. A threat is an agent, entity, element, behavior, or actor upon the enterprise. The threat can be internal or external, and can be purposeful or inadvertent.

A vulnerability is some element of an operating system, application, or information asset within the enterprise that is subject to exploitation. A buffer overflow is a form of vulnerability. Patches and other forms of configuration controls are used to mitigate vulnerabilities at the system level. Information awareness campaigns and policy mechanisms are enterprise-level efforts to mitigate vulnerabilities. Vulnerabilities can be categorized, counted, and evaluated based on the type and scope of the behavior the vulnerability exposes. A vulnerability does not increase risk unless there is a valid threat.

The risk equation suggested by Dan and Julie Ryan (retrieved from http://www.danjryan.com/Risk.htm)

This isn’t really an equation that you would put numbers into, as some of its elements aren’t really quantifiable. There is no particularly good way to evaluate all of the threats, since so many are unknown. However, you could model elements of the equation, like threats, a few different ways. Starting with threats, you could think of them as humans, nature, and hybrids. This kind of taxonomic breakout has a tendency to hide some of the details of threats, but it follows a fairly standard practice for considering threats. Another way to analyze a threat would be to look at the relationship of threat to cause as inadvertent (accidental) or purposeful (somebody is out to get you). This simple change in perspective breaks out a few other elements of risk so they can be talked about. As an example, an intentional act does not have to be malicious, and an unintentional threat could be a systemic or programming error. The following image depicts how this might be evaluated.
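The two breakouts just described (by origin and by intent) can be captured as a small taxonomy. This is a sketch only; the class names and example threats are mine, not part of any standard model:

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    # First breakout: what kind of thing the threat is.
    HUMAN = "human"
    NATURE = "nature"
    HYBRID = "hybrid"

class Intent(Enum):
    # Second breakout: the relationship of the threat to its cause.
    PURPOSEFUL = "purposeful"    # somebody is out to get you (not necessarily malicious)
    INADVERTENT = "inadvertent"  # accidental, e.g. a systemic or programming error

@dataclass
class Threat:
    name: str
    origin: Origin
    intent: Intent

threats = [
    Threat("disgruntled insider", Origin.HUMAN, Intent.PURPOSEFUL),
    Threat("programming error", Origin.HUMAN, Intent.INADVERTENT),
    Threat("hurricane", Origin.NATURE, Intent.INADVERTENT),
]

# The same threat list can be sliced along either axis of the taxonomy.
inadvertent = [t.name for t in threats if t.intent is Intent.INADVERTENT]
print(inadvertent)
```

Note that the two axes are independent: a purposeful human act and an inadvertent one sit in different cells even though the origin is the same.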

Standard Threat Model (Click to make larger)

 

The standard conceptual models do not necessarily fulfill the need for a comprehensive understanding of threats to the enterprise. For dealing with information assurance and security within the enterprise, John McCumber gave us a fairly detailed model that fills out the other models slightly better.

 

Advanced threat model (Click to make larger)

Using McCumber to inform us, we can create a much more detailed map of threats. With this model we’ve jumped from merely talking about the threats to talking about what the threats are against. As an example, threats are against confidentiality, integrity, and availability, and they are matrixed (cross-linked) with humans, nature, and technology as vectors of the threat. We’ve added the technology piece to let us conceptualize the different malicious and non-malicious attack vectors more easily. Linked to our central theme of threats at the top of the previous diagram you will see “hybrid” listed in a red bubble. This is so we don’t forget that this diagram is comprehensive in nature and any or all of the vectors depicted could be valid at any time.
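The cross-linking described above can be sketched as a simple matrix of security properties against threat vectors. The structure is the point here; the example entries are illustrative and not drawn from McCumber's model itself:

```python
from itertools import product

properties = ["confidentiality", "integrity", "availability"]
vectors = ["human", "nature", "technology"]

# Every (property, vector) cell is a potential threat pairing; start each
# cell empty and fill in concrete threats as they are identified.
threat_matrix = {cell: [] for cell in product(properties, vectors)}

# Illustrative (made-up) entries:
threat_matrix[("confidentiality", "human")].append("insider data theft")
threat_matrix[("availability", "nature")].append("flooding of a data center")
threat_matrix[("integrity", "technology")].append("faulty patch corrupts records")

print(len(threat_matrix))  # 9 cells: 3 properties x 3 vectors
```

The value of the matrix form is that an empty cell is visible: it prompts the question of whether that pairing has genuinely been considered or merely overlooked.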

Various other threats are depicted, but another bubble off the bottom in red is “mission”. That is to represent that the activity an information asset use case represents might itself be a threat vector. On the obvious side of the ledger, a missile guidance system, if successful in guiding the missile to a conclusion, will no longer be available after the bang. In another example, a combatant commander might decide that delivering ordnance on target is more important than securing an information asset (e.g., data sharing among artillery batteries).

Another interesting element is that we can now see that policy and procedures are exposed as threats. What we do with an information asset may inherently be a risk, and how we protect that information asset can increase risk. Long, convoluted enterprise patch management strategies exist to ensure that only trusted patches arrive at the information system. There are a variety of threats that this process is meant to protect against.

We have evidence to suggest that patches released by vendors in the past have had unexpected consequences. These are rare cases, and some made it through the patch process, so they were not mitigated by the process meant to protect against the threat. Further, there are known threat vectors that look at the patch window as an opportunity to exploit particular vulnerabilities. In that case the processes and procedures in place increase the risk: the impact increases while the mitigation strategy fails.

Why would the policy or procedure be a threat and the patch mechanism be a vulnerability? The threat is the operation or entity working on or exploiting the vulnerability, which brings up a further threat issue. By itself the process, procedure, or policy is not going to be a threat, as it is benign; it is dormant until it is acted upon. So the threat is an actor working through the system's processes to exploit a vulnerability. The policy and procedure of patching is not itself a vulnerability, as no direct exploitation of it is possible. What you can exploit is the vulnerability the process has not yet addressed: the patching policy and procedure allow a third party to act on, or exploit, the underlying un-patched or late-to-be-patched system.

Why is this threat analysis important? Why is the discussion of value? By analyzing the threat and vulnerability issues, feedback mechanisms from mitigation strategies can be evaluated that could be having the opposite of the expected effect. The opportunity cost of certain mitigations subtracts directly from the countermeasures taken. As an example, where countermeasure effort impacts mission effectiveness, it subtracts directly from mitigation efforts.

Option A for showing opportunity costs as a factor subtracting from countermeasures (click to make larger)

This formulation isn’t necessarily the best one we can use for this problem set. There is an argument to be made that opportunity cost is actually a feature of impact and belongs inherently inside impact. We could depict this as follows:

Option B for risk opportunity cost (click to make larger)
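The difference between the two options can be sketched numerically. This assumes a Ryan-style heuristic of the form risk ≈ (threat × vulnerability × impact) / countermeasures, treated purely as a discussion aid; all values below are made up, on an arbitrary 0–1 scale:

```python
# Option A: opportunity cost subtracts directly from countermeasures.
# Option B: opportunity cost is folded into impact.
# Neither is a real calculation -- just a way to see where the term lands.

def risk_option_a(threat, vuln, impact, countermeasures, opportunity_cost):
    # Opportunity cost erodes the countermeasures term; floor it so the
    # heuristic stays defined when mitigations are fully consumed.
    effective_cm = max(countermeasures - opportunity_cost, 0.01)
    return threat * vuln * impact / effective_cm

def risk_option_b(threat, vuln, impact, countermeasures, opportunity_cost):
    # Opportunity cost inflates the impact term instead.
    return threat * vuln * (impact + opportunity_cost) / countermeasures

# Illustrative values only.
args = dict(threat=0.6, vuln=0.4, impact=0.5,
            countermeasures=0.8, opportunity_cost=0.2)
print(risk_option_a(**args), risk_option_b(**args))
```

With the same inputs the two options produce different numbers, which is the point: where you place opportunity cost in the heuristic changes how sensitive the result is to it, even though the underlying concern is identical.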

What does this have to do with threats? For a risk evaluation, the threat and vulnerability pairing determines the probability of a loss, and the impact determines its severity. It is not usually explicitly discussed that opportunity cost is important to understanding the risk equation. Calling it out serves the small purpose of showing the balance of threats. We can do the same for several other elements, creating a substantially more detailed heuristic that serves no real purpose other than to help discuss the original heuristic risk formulation better.

This is a risk heuristic blown out for explanatory purposes *updated* (click to make larger)

With this background in threats we can start to describe some other elements of risk, like vulnerabilities and impacts. At some point we’ll talk about risk using real values and play with the stock market a little bit, working with confidence intervals. A special thank-you to Dr. Daniel Ryan for talking me through several of these points. All errors, omissions, mistakes, and ignorance are completely my fault. His assistance and reading list have allowed me to advance faster than I could have on my own.

As always, the blog serves as my work product as I play with and advance different concepts I’m looking at. If you want to talk about or discuss elements on the blog, let me know. I’m a professor; I love talking about stuff. The crowd sourcing and commentary are invaluable for helping me figure out interesting problems.
