Attribution of cyber adversaries

Key Points:

  • Attribution has three distinct layers: political, technical, and forensic, each with its own confidence levels and analysis strategies
  • Adversaries must interact with systems to exploit them, and this interaction creates evidence and anomalies that can be used for attribution
  • Evidence can be tampered with, but corresponding sensing systems that remain outside adversary control can be used for audit
  • The root cause of an incident is rarely a single technical implementation or user action, but attribution remains the way to identify the threat



Three phases of attribution and corollary levels of confidence in attribution

By far the most common case in digital forensics is child exploitation material. Examiners spend an inordinate amount of time chasing this crime and evaluating the provenance of the data. Building a timeline, assessing collection behaviors, determining scope and scale, and finally documenting the evidence are all part and parcel of the digital forensics discipline. Chasing attribution of adversaries in cyberspace is no different, and yet it is also wholly different. Each case uses facts, evidence, and supposition to establish a level of confidence in the attribution of an adversary.

The analysis of cyber intrusions differs depending on the tools, techniques, and procedures of the investigators. There is also a fundamental mission orientation that colors, or skews, the investigator's understanding of attribution. If the investigator is a law enforcement officer, attribution rises to a criminal standard of factual assessment; if the investigator is an intelligence officer, it may fall into a hierarchy of low, medium, or high confidence. Somewhere in between are network defenders, who need enough evidence to declare an event an incident and then bin and identify the indicators of compromise that can be used for network defense.

Attribution is key to all parties in this discussion. You cannot make an accurate assessment of risk without understanding the threat, and that requires attribution. Attribution is more than identifying a nation state or criminal organization. Even absent the names or locations of the adversary, identifying the tactics and techniques within the scope of a campaign allows defenders, law enforcement agencies, and intelligence agencies to track activity and create indicators for watch and warning. Finding ongoing campaigns is important for scoping budgets, prioritizing resources, and achieving an understanding of risk. Immature information security organizations ignore it at their peril.


Motive, means and opportunity

Authors have skirted this topic in the literature, but attribution comes in three basic forms.

Attribution Charts

Political attribution is the identification of the motive, means, and opportunity of the adversary that would gain from exploitation of an information asset. This is the fastest form of attribution and the least likely to assuage leadership concerns. It is usually made in a geo-political context, and within the scope of that context it can be strong enough, and arrive fast enough, for a leadership policy decision.

Technical attribution is the identification of common artifacts that can be associated with one or a few adversarial groups. A lot of needles in a haystack look alike, but given enough needles you can differentiate between them. These artifacts often come in the form of indicators of compromise, tool choices, stolen cryptographic certificates that are not widely known, coding or compile artifacts in malware, internet protocol address space, timing and dwell times, and the word order of dictionary attacks. A key point is that the adversary provides all of the data points for the attribution, whether knowingly or unwittingly.
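One way to picture technical attribution is as a similarity problem over artifact sets. The sketch below is a minimal, hypothetical illustration (the group names, indicators, and Jaccard scoring are my own assumptions, not any analyst's actual method): an incident's observed artifacts are compared against previously profiled adversary groups, and the group with the greatest overlap is the leading candidate.

```python
# Hypothetical sketch: scoring overlap between an incident's technical
# artifacts (IOCs, tool names, certificate fingerprints, dwell patterns)
# and previously profiled adversary groups. All names and data are
# illustrative, not real indicators.

def jaccard(a: set, b: set) -> float:
    """Similarity between two artifact sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Notional artifact profiles for two adversary groups.
known_actors = {
    "GROUP-A": {"203.0.113.7", "mimikatz", "cert:ab12", "dwell:90d"},
    "GROUP-B": {"198.51.100.9", "plink", "cert:ff09", "dwell:7d"},
}

# Artifacts recovered from the current incident.
incident_artifacts = {"203.0.113.7", "mimikatz", "dwell:90d", "cert:9c3e"}

scores = {name: jaccard(profile, incident_artifacts)
          for name, profile in known_actors.items()}
best = max(scores, key=scores.get)   # leading candidate by overlap
```

In practice an analyst would weight artifacts by rarity (a stolen, little-known certificate says far more than a common tool), but the principle is the same: the adversary supplied every data point being scored.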

Forensic attribution is the most time intensive and the most difficult to process. Attribution at this level can take months or even years to accomplish, as much of the relevant data must be evaluated. The principles are to handle both exculpatory and inculpatory information. Hypotheses are developed and refuted. Counter-hypotheses are challenged. Often this level includes multi-mode sensing, where human observation is combined with the technical observations.


Evidence of adversary intrusion comes in many forms

Contrary to much of the press suggesting intrusions leave a paucity of information, they come with plenty of artifacts and evidence that can be sifted. When I say evidence I mean something very specific: evidence is factual information that relies on scientific principles and can be effectively reproduced. Network logs, system logs, netflow, DNS caches, DNS logs, passive DNS systems monitoring the Internet, ISP logs, firewall logs, and much more contain the breadcrumbs of adversaries.

Adversaries within information systems must leave breadcrumbs of their intrusions behind. When chasing malware in volatile memory, a common mantra is “Malware can hide, but it must run.” Similarly, an adversary can hide, but they must interact with the systems they are trying to exploit. As such, they are subject to surveillance. It is up to defenders to know their systems and to instrument them so that they can detect an adversary or adversarial activity; once detected, they can track and watch the adversary to ascertain the pace and breadth of the intrusion.
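What "instrumented" can mean in practice is illustrated by the sketch below: a defender who knows their network can flag hosts whose outbound connections fire at suspiciously regular intervals, a classic symptom of implant beaconing. The threshold, timestamps, and heuristic are illustrative assumptions, not tuned production values.

```python
# Hypothetical sketch: flagging beacon-like outbound traffic by measuring
# how regular a host's connection intervals are. Thresholds and data are
# illustrative assumptions only.
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.1):
    """Return True when inter-connection gaps are nearly constant
    (coefficient of variation below max_jitter)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:                     # too few samples to judge
        return False
    return pstdev(gaps) / mean(gaps) < max_jitter

implant = [0, 300, 601, 900, 1199]   # ~5-minute beacon with slight jitter
human = [0, 40, 500, 520, 2200]      # bursty, irregular browsing
```

Real detections correlate many such signals (netflow, DNS, proxy logs) rather than relying on one heuristic, but the point stands: the adversary's mandatory interaction with the system is what makes them observable.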

The basic TTP of adversaries is to gain access to systems (penetration), increase their privileges (get root), gain an understanding of the systems they have access to, move laterally toward targets of opportunity or information they desire, and finally remove or exploit information or create effects on the system.
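Each stage of that TTP sequence leaves its breadcrumbs in different places. A minimal sketch of that mapping (the stage names follow the paragraph above; the log-source lists are illustrative, not exhaustive):

```python
# Hypothetical sketch: mapping the adversary TTP stages described above to
# the evidence sources where each stage typically leaves breadcrumbs.
# The mapping is illustrative, not an authoritative taxonomy.
EVIDENCE_MAP = {
    "penetration":          ["firewall logs", "IDS alerts", "web server logs"],
    "privilege escalation": ["system/auth logs", "endpoint telemetry"],
    "reconnaissance":       ["DNS logs", "netflow", "command history"],
    "lateral movement":     ["authentication logs", "netflow", "SMB/RDP logs"],
    "exfiltration/effects": ["proxy logs", "netflow", "DLP alerts"],
}

def sources_for(stage: str) -> list:
    """Evidence sources worth pulling for a given TTP stage."""
    return EVIDENCE_MAP.get(stage, [])
```

A table like this is also a coverage check: any stage whose sources are not actually collected is a blind spot in the instrumentation.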


Evidence is volatile and subject to attacks of integrity

Similar to other assumptions about attribution of adversary activity is the common criticism that proxies and other anti-forensics mechanisms exist. This criticism illuminates two very substantial realizations.

First, most non-technical people, and even still many technical people, do not understand the methods and mechanisms of intrusion investigation. Many examples exist, such as not understanding the correlation effort between memory analysis, disk-based analysis, and network analysis of adversary activity. For example, memory fragments can lead to disk artifacts that uncover network traffic. Systems such as passive DNS can lead back not just to the adversary infrastructure but to the adversary themselves. Operational security (OPSEC) has long been a problem, as the same hubris that infects domestic network defenders exists within nation state adversaries.

Second, the real reason you want highly trained and very savvy investigators is that even the companies that build the technologies do not understand the artifacts their systems create. In a modern operating system such as Windows or Linux, a file fragment can be logged, written, touched, or otherwise manipulated during operation, creating different artifacts more than 64 times. Each adversary interaction creates a shadow or fingerprint of how a system was manipulated.

This kind of depth of understanding is hard to communicate or translate into a scalable capability, because it does not follow normal hierarchical chains: this kind of knowledge is peered, not necessarily transferable through standard dissemination models. Peered information is context, content, and capability derived information that requires prior knowledge to understand. The observation from hacker conventions, security conventions, and similar events is that presentations give context but do not transfer knowledge. It is the abysmally maligned “LobbyCon” where content and capability are glued to context, and the price of admission is a substantial amount of prior understanding. The same knowledge transfer mechanisms appear in university problem-based learning methods, within the environment of peer-to-peer learning in group projects.

When put into the context of attribution using technical and observable artifacts, there are direct integrity attacks an adversary can attempt, such as exploitation of log files and system cache records. There are indirect attacks on integrity that adversaries create through the use of encryption and proxies. There are passive defender strategies to unmask attackers, such as passive DNS, passive sensor systems, and cryptographic solutions for log files. Finally, defenders should know their network and have behavioral recognition capability for anomalous behavior.
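One concrete form of the "cryptographic solutions for log files" mentioned above is a hash chain, where each record's digest commits to the record before it, so tampering with any earlier entry invalidates every later digest. A minimal sketch (the record format and genesis value are my own illustrative choices):

```python
# Hypothetical sketch of hash-chained logging: each entry's SHA-256 digest
# covers the entry plus the previous digest, so altering an early record
# breaks the whole chain from that point on.
import hashlib

GENESIS = "0" * 64  # illustrative starting value for the chain

def chain_logs(entries):
    """Return (entry, digest) pairs forming a tamper-evident chain."""
    prev, chained = GENESIS, []
    for entry in entries:
        digest = hashlib.sha256((prev + entry).encode()).hexdigest()
        chained.append((entry, digest))
        prev = digest
    return chained

def verify(chained):
    """Recompute every digest; any mismatch means the log was altered."""
    prev = GENESIS
    for entry, digest in chained:
        if hashlib.sha256((prev + entry).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```

The chain only proves tampering after the fact; pairing it with sensors outside adversary control (the audit systems the key points mention) is what keeps the evidence trustworthy.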


APT hunting for managers or whom to blame

The current information security paradigm and information technology ecosystems came about through the collision of capability overreach and budget under-execution. On the one hand, information technology has made organizations hugely more profitable and efficient in the execution of their missions. Information technology has literally recreated the economic base of the nation. On the other hand, the dual failures of operating without a resilient context and of the absence of information security as a risk inhibitor have created consequent realized risk.

Information technology is a system of systems with tendrils reaching into all facets of an organization. The information technology operations context is a set of warring capabilities that run counter to the best practices any information systems student is taught. The reality is that information technology operations are often under-resourced, poorly strategized, and horribly deficient in technical capability, even before you add in the requirements of legacy system maintenance and external mandates imposed absent holistic understanding. Few pundits understand the internal and external incongruity of keeping a network and its associated applications running for tens of thousands or hundreds of thousands of users.

Information security has never actually been a primary consideration. Within the context of mission and budget, the information security discipline is often a set of compromises that degrade the overall security posture to meaninglessness. Information security is seen as a cost center, often resourced at 5% or less of the information technology operations budget, and usually subservient to mission. In an immature organization, information security is seen as a barrier to getting the job done. Mature managers and leaders can guide, but often become frustrated by the contradictory and culturally manifested barriers to accomplishing their roles. These incongruities are further inculcated as edge cases drive executive decisions.


The path of an attributive event

Attribution is rarely the top concern when a breach happens. Following the time-honored leadership pattern of panicking and finding somebody to blame, evidence that might make attribution easier is mishandled by operations or incident responders. Less catastrophically, a well-maintained incident response process may collect evidence rapidly, and leadership may understand that stopping the bleeding is necessary while also keeping an eye out for secondary adversary activity that has not yet been detected. Common responses to incidents, like shutting down systems, can have more impact than the adversary was already having. If data is going out the door, leadership may want to segment completely from the Internet. All of this is reason for gaming scenarios and running table-top exercises with leadership to identify thresholds and response patterns for common situations.

When an event does happen, there are some common patterns to what actually occurs beyond the modeled responses from SANS or NIST. A lot of wheels start spinning if an event becomes an incident. The lawyerly war of definitions and policy structures set up before an event can actually hamper the technical response mechanisms to the point of becoming specific and actual risks to the network. Prior budget decisions made outside the security systems engineering practice (usually imposed) will start to surface rapidly. Keeping a lid on the costs of an incident response is alchemy.


The generalized path of an event:

  1. An event or act takes place that appears to be hostile in nature and may be part of a broader campaign affecting numerous victims (day zero)
  2. Internal or external entities identify time and place of act and notify network owners (can be instantaneous or years later)
  3. Triage and analysis to corroborate the event occurs. On large networks this can take weeks. On poorly documented networks (even if small) it can take months
  4. Victim notifications commence, requests for information are generated, writs, warrants, national security letters, and contracts of engagement are delivered as necessary (can take weeks to negotiate unless previous contacts and agreements are in place)
  5. Data from logs, networks, operating systems, networked devices, service providers and others are compiled. (can take weeks or months on sophisticated adversary investigations)
  6. Analysis and assessment of information that drives facts and timeline are created. Further investigative leads are determined and other victims may be identified (can take weeks or months)
  7. This is usually where the investigation of large corporations or government agencies leak to the press. The external collaboration and external data requirements drive some people to expose the “breach”. I often say the reason we have public affairs officers is because some people just like to see other people have a bad day.
  8. Determination of investigative direction is usually made on whether expenditure of time and resources are appropriate. This can be reversed sometimes years later if other incidents occur, other victims are identified, or adversary campaigns become apparent (usually a milestone for leaders).
  9. Early attributive statements can be made if adversary intrusion characteristics are known from previous events and incidents (milestone; in coordination with leadership this may predate the investigative-direction milestone).
  10. Patterns of adversary activity within an incident and based on historical activities are evaluated for consistency and changes in behaviors to inform the investigation and provide inculpatory and exculpatory facts (can take months).
  11. In the best case (law enforcement, intelligence agency, highly trained professionals) an intrusion analysis with adversary behavior, time line, infrastructure, and capability report can be peer reviewed and collaboratively conflicts in analysis can be identified before public scrutiny applied (can take weeks or months for collaboration).
  12. Finally, an intrusion activity can be attributed to a known actor set, or a pseudo actor capability and traits with estimative language. The caveat of estimative language is missed by most pundits (milestone).



Unfortunately, even though every network owner will willingly state that they assume breach, few executives in the boardroom or political leaders share that assumption. It would not be the first time a CIO or CISO was fired during an incident, especially one who points out the incongruity of budget decisions, the actual portion of the budget they really control, and the operational and security contexts of mission assumption and exemption they have been forced to accept. The CIO and CISO aren't fired because of the breach but because they were unable to make the case to keep the breach from happening. Thus, even though a user may click on the wrong link, a manager may accept the wrong dialog on a web page, a router might be misconfigured, or a piece of vetted software may actually contain a Trojan horse, the root cause is rarely that proximate act.

The blame rests fully on the executives of an organization for the decisions, compromises, efforts, and capabilities they budgeted and required to gain the profit and efficiencies of the information systems they are responsible for. Attribution is key to understanding the threats and capabilities of adversaries and informing all of those decisions and compromises on how to spend finite resources. Attribution and the path to attribution is a business executive board level and political leadership concern.


Further Reading

  • Boebert, Earl, “A Survey of Challenges in Attribution”, Proceedings of a Workshop on Deterring Cyberattacks: Informing Strategies and Developing Options for U.S. Policy, National Academies Press, 2010
  • Caltagirone, Sergio; Pendergast, Andrew; Betz, Christopher, “The Diamond Model of Intrusion Analysis”, DoD document released 2013
  • Clark, David; Landau, Susan, “Untangling Attribution”, Proceedings of a Workshop on Deterring Cyberattacks: Informing Strategies and Developing Options for U.S. Policy, National Academies Press, 2010
  • Locard’s Exchange Principle
  • Rid, Thomas; Buchanan, Ben, “Attributing Cyber Attacks”, The Journal of Strategic Studies, Vol. 38, No. 1-2, pp. 4-37
  • Yamamoto, Teppei, “Understanding the Past: Statistical Analysis of Causal Attribution”, American Journal of Political Science, Vol. 0, No. 0, 2011, pp. 1-20 (pre-print copy used)

