Considerations of defense in depth


Can we accept that the security of information is not the same as the security of systems? A Venn diagram of the two would show an intersection, but neither would encompass the other. Or would it? It is possible that this discontinuity is the reason that information assurance and security of information may not be a solvable problem under the current paradigm. Whether we reflect upon the information within a system, or consider that systems interpose themselves upon information, either framing raises direct issues.

It may be that a Thomas Kuhn style paradigm shift is required for information security. Consider the following: the knowledge, skills, and abilities required to hire information security professionals rely heavily on technology attributes; vendor solutions are implicitly if not explicitly technology related; and the relation of technology to the security domain is treated nearly exclusively as a computer issue. All of this may produce a substantial examination of the problem, but it leaves the human factors area untended.

If systems (devices, computers, routers, switches) are not the answer to security, why then is the human factor ignored? Is it possible that the systems paradigm of defense in depth is part of the problem? Inherent in the application of devices to the security solution is the realization of defense in depth as a primary architecture. Defense in depth is often depicted as a two-dimensional representation (a drawing) of a three-dimensional object (e.g., a castle), of a four-dimensional issue (e.g., security), and it rarely reflects the multi-dimensional issues of an information sphere.


What is to be done about defending information where control of the system or transmission medium is untrusted? This is a problem that faces governments and private organizations across the world. Though not currently articulated as a grand challenge, the prospect remains that information in a contested domain must still be protected at some level. A standard practice is to discuss defense in depth, but the public cloud model breaks this relatively insufficient tactic. This should be expected, as the cloud model inherently turns over control of the infrastructure in exchange for increased flexibility. An issue has arisen where the "if all you have is a hammer, everything is a nail" school of thought has become the hallmark of defense in depth. Other strategies and techniques should be considered for securing information, but they have not been given credence, even as the concept of defense in depth has eroded.

Defense in depth is a principle that has been with the information assurance and security discipline for a long time. Instantiated in government policy, the principle has guided information security asset acquisition since at least the mid-1990s. There is an issue, though, with using a metaphorical principle of combined arms military conflict in a highly transient technological atmosphere. The first issue is that the metaphor is not understood. The second is that it is unable to convey the difficulties of the actual underlying principle. The third is that it is wholly inappropriate for the domain in which the metaphor or analogy is used.

This paper will discuss defense in depth. It will attempt to answer whether defense in depth, considered as an aspect of information security, is actually decreasing the security postures of organizations. This paper is not an empirical analysis of defense in depth, and it is not positioning against defense in depth as a singular principle. It will try to show that defense in depth is used incorrectly, that the focus of the metaphor as a principle could be explored further, and that there are appropriate, if rarely included, places to use the principle.

Defense in depth as a learning strategy may be an excellent method of instructing students on the principles of layered defenses. Smith (2003) discussed in detail how the principle of defense in depth gave students of security a better understanding of providing resiliency. Smith describes defense in depth as having the principles of deterrence, detection, and delay. Within his research those elements were then mapped to the protection mechanisms of psychological barriers, electronic barriers, physical barriers, and procedural barriers. This is an example of the metaphor advancing the discipline of information assurance and security.

It has been suggested that though information assurance professionals are well aware of defense in depth, there are still other principles at work. David Dittrich (2008), while discussing curricula for future "cyber warriors," specifically pointed out that the Committee on National Security Systems (CNSS) had considered defense in depth and defense in breadth as the primary principles of information security. This discussion of the metaphors used is important. As Dittrich points out, it will take over ten years to train a computer security expert, and the path needs to be relevant to the particular subdisciplines they may take later. Using the totality of the metaphorical tool kit is important. When considering the significant leaps technology will take in that time period, metaphors as patterns would be an important way to prepare students for technologies that we likely cannot even imagine yet. This is also why teaching students specific technologies instead of principles robs them of their education.

Why should we give credence to the critics of defense in depth? Weckert (2010), in discussing the precautionary principle, is instructive on what could be the ethical conundrum. The construction by Weckert plays out as positive duties, negative duties, and intermediate duties. The positive duty is that defense in depth may provide better security to information systems. The negative duty is that not teaching the defense in depth approach may do harm. The intermediate duty may be positioned as follows: unsecured information systems could result from not teaching defense in depth. This obviously means we keep teaching defense in depth. The remaining question, though, is "are we actually teaching it?"


One of the problems with the Anglo-Saxon understanding of conflict was defined by Luttwak (1980) as the failure of western militaries to have a word for the operational aspects of war. There is the word strategy to define theater-wide events and the word tactics to define lower-level specific techniques. However, there is a spectacular missing component in the planning and activities of the operational level. This is the chasm in which many forms of defense and offense are found, such as defense in depth, defense in breadth, and insurgency, even though each form may have theater-wide implications and unit or tactical implementation techniques.

The advocacy of large-unit armored warfare in depth was proposed by Fuller, Liddell Hart, and other authors specifically because of technology (Luttwak, 1980). This is a specific point that allows us to bring the technique forward to the realm of information technology today. Luttwak and others are saying that defense in depth is exactly what a technology-leaping conflict spectrum should be considering. At the heart of the defense in depth approach to American warfare is the principle of attrition warfare (Luttwak, 1980, p. 63). In other words, the information assurance and security metaphor of defense in depth is about failure of assets and loss of information control. Considered carefully, defense in depth is about mitigated failure, though that is not immediately obvious. Attrition warfare allows economic principles and other business-oriented techniques to be used as measurement tools (Luttwak, 1980, p. 64). As such, defense in depth allows the adherent to measure the effectiveness of their failure, and then declare it a success.

To further advance the understanding of what defense in depth means for the information assurance professional, consider the principle of attrition more closely. The principle of attrition means that strength versus strength is the primary focus of offense and defense. The counter-strategy to this technique is the avoidance of the enemy's strength and the dissolution of their ability to wage a successful defense or offense (Luttwak, 1980, pp. 64-65). Defense in depth should fail gracefully as attrition takes place, and systems disruption should have minimal cascading effects. When we later consider the asymmetry of defense and offense, this succession of failures becomes more important. Inherent in the prospects of defense in depth is the continuous turnover of information assets.

There are downsides to this specific set of tools. We have to understand that defense in depth received its first test in World War I, and was only fully defined as a strategic technique (with mixed results) in World War II. The often-repeated castle analogy for defense in depth is mostly an aberration; it is not even where the term was spawned. The information assurance professor drawing the castle on the whiteboard is making a multiplicity of errors: a two-dimensional representation, of a three-dimensional object, of a four-dimensional model, attempting to capture a multi-dimensional problem space, and drawn from the wrong era, strategy, and events. A much better example of defense in depth would be the much-maligned Maginot Line.

Current Perspective

One element of defense in depth not often explained to students is exactly what it is expected to do. The explanation is usually flawed when it takes the direction of what defense in depth is rather than what it does. As previously discussed, defense in depth does not ensure that an entity is kept out of the network. As Tirenin and Faatz (1999) detail, defense in depth is about giving the appropriate authority the time to choose a course of action in defining a successful defense. In other words, even in the information technology world, the defense in depth strategy is a knowing process of managed failure. This is inherently counter to the understanding of most information technology leaders, but absolutely consistent with the historical principles of layered and attrition-based military strategies. The asymmetric disadvantage of the defender inherently requires some way to provide perimeter and succeeding barriers to incursion as increasing qualitative and quantitative strategies of defense (Tirenin & Faatz, 1999).

Empirical tests of defense in depth on military systems, though not conclusive, give some startling conclusions. Though defense in depth may work to secure systems against intrusion and slow attackers by increasing their work factor, the same could be said of the defender. In an attempt at a DARPA challenge, Rubel et al. constructed a system to the challenge requirements. The large distributed publish/subscribe/query system for the challenge was implemented using the Joint BattleSpace Infosphere (Rubel, Ihde, Harp, & Payne, 2005). When implemented as defense in depth, it appears to have generated requirements three times larger (more equipment) and a work factor sixteen times larger for the defender than the reference baseline. This was described as "labor intensive." The results were not available at the time of their writing, so it is not known whether this effort even survived the test. It is offered primarily to show the scope of an ideal defense in depth solution.

Chen and Leneutre (2009) discussed the perspective of workload and investigated the mathematical boundaries of defender effectiveness in a defense in depth scenario. Primarily interested in the work factor of defenders in detecting intrusions, they came to the startling conclusion that as system complexity increased beyond a simple system, it became nearly impossible to detect all attacks if there was more than one attacker. Even adding additional detection entities (people) would appear to have minimal impact, while rapidly increasing the resource burden.
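The flavor of that result can be illustrated with a deliberately crude toy model. Everything here is invented for illustration, not Chen and Leneutre's game-theoretic formulation: assume each attack is detected independently, and that the per-attack detection probability dilutes as defender attention spreads over a larger system.

```python
# Toy model (invented here, not Chen & Leneutre's mathematics): each
# attack is caught independently, and per-attack detection probability
# dilutes once attention is spread over more than ten nodes.
def p_detect_all(attackers: int, nodes: int, base_p: float = 0.95) -> float:
    """Probability that *every* attack is caught."""
    per_attack = min(base_p, base_p * min(nodes, 10) / nodes)
    return per_attack ** attackers

print(round(p_detect_all(1, 5), 3))   # small system, single attacker
print(round(p_detect_all(2, 50), 3))  # larger system, two attackers
```

Under these assumed numbers, one attacker on a small system is tractable, while a second attacker on a larger system already makes "detect everything" improbable, which is the shape of their conclusion.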

One of the issues with attempting to implement defense in depth is complexity. There are solutions and automated tools available that will allow a logical diagram (a network "attack graph") to be completed for a specific network (Lippmann et al., 2006). These types of tools, though, are primarily logical constructs and may ignore physical elements that hamper or decrease security. This is a continuing problem in evaluating the relevancy of defense in depth as a metaphor for securing networks. It is imperative that defense in depth solutions be considered from the computer systems out to the perimeter. Byres and Lowe (2004) concluded that firewalls (as an example) have little effect once attackers are past that level. As such, defense in depth must consider the resilience of the systems themselves. This is especially true in SCADA, where the cost may not be information but lives.
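A minimal sketch of what such an attack-graph tool computes (hypothetical hosts and exploit edges, not the Lippmann et al. implementation) is plain graph reachability:

```python
from collections import deque

# Hypothetical hosts and exploit edges (not Lippmann et al.'s data):
# an edge u -> v means "an attacker on u can exploit v".
attack_graph = {
    "internet":   ["dmz_web"],
    "dmz_web":    ["app_server"],
    "app_server": ["database"],
    "wifi_ap":    ["app_server"],  # access point added after the design
    "database":   [],
}

def reachable(graph, start):
    """Breadth-first reachability: every host the attacker can occupy."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable(attack_graph, "internet")))
print(sorted(reachable(attack_graph, "wifi_ap")))
```

Starting the same traversal at the hypothetical wifi_ap illustrates the Byres and Lowe point: once inside the perimeter, the database is reachable without ever touching the firewall layer.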

Defense in depth must be considered at all layers and in all domains to be effective as a systemic approach. Tools like the one Lippmann et al. (2006) describe provide excellent visibility into logical cascading failures within networks built on defense in depth. The illumination of previously unexpected attack paths is a significant contribution toward restoring a logical form of defense in depth.

When looking at the layering approach and considering the physical medium within the totality of the information sphere, it is obvious there are techniques to secure it and ensure that security remains intact. Consider the work by Shaneman and Murphy (2007) to integrate the SIPRNET and JWICS networks through fiber connections. The differences between extrinsic (external) and intrinsic (internal) monitoring techniques were substantial. External monitoring rarely detected attacks against confidentiality and availability until the networks had been compromised, whereas intrinsic monitoring provided earlier and substantially more alarms. This points out a corollary of other results: defense in depth has many tools that make it a worthwhile technique. It also points out that errors in the choice of technique can have rapidly escalating deleterious effects on security. Both intrinsic and extrinsic monitoring would be considered defense in depth strategies, but only intrinsic monitoring would detect attacks in a timely manner.


If defense in depth works in many cases, why would there be reason to criticize it? As discussed earlier, a period of leaping technology is the principal area where defense in depth is tactically at its best. Why then is it such a bad idea? Equipment or functionality added to the network can instantly have deleterious effects. The addition of tools like switches or routers can instantly negate defense in depth as a strategy, leapfrogging the attack into the softest center of the network (Talbot, Frincke, & Bishop, 2010). This is akin less to the unexpected use of armor at the Maginot Line, and more like finding modern air power supporting it. The wireless access point reflects such a rapid development in technology. Thankfully, there are structures that allow for a secure implementation, but early attempts proved to be risk intensive (Suarez, 2003). At issue, though, is whether the principle of defense in depth is even valid.

Kewley and Lowry empirically set out to study whether defense in depth, or a layered defense, fulfilled the proposition of increasing security. As Kewley and Lowry state, as components and systems become more complex they become more difficult to secure because: 1) to secure something you must thoroughly understand how it behaves; 2) you must understand where the vulnerabilities are; and 3) you must be able to plug the holes (Kewley & Lowry). Their results are instructive. It would be hard to argue that the complexity issue is not an inherent risk in most corporate and government networks. Kewley and Lowry created a military scenario, and so their criticism of defense in depth is specifically relevant to government readers.

The Kewley team built a laboratory experiment using the approved Information Design Assurance Red Team (IDART) methodology. The result of their first scenario indicated that the work factor for the actual exploit remained constant for each additional layer, while the work factor out of band increased. The additional layers actually decreased the resiliency of the system to availability attacks, and the work factor for confidentiality was far from a linear relationship (Kewley & Lowry). Their second scenario had similar results, but was hampered by several issues (defense in breadth was not considered, and the clients remained vulnerable). Kewley and Lowry conclude that adding security layers does not necessarily guarantee increased assurance. In fact, they suggest it may decrease security as the number of control surfaces, and likely vulnerabilities, increases.
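Their non-linear finding can be caricatured with a simple probability sketch. The numbers are assumed for illustration, not Kewley and Lowry's data: if every layer both filters attacks and introduces its own exploitable flaws, total compromise probability stops falling and then rises as layers are stacked.

```python
# Assumed numbers, not Kewley & Lowry's data: each layer stops an
# attacker with probability 0.5 but carries its own exploitable flaw
# with probability 0.05 (independence assumed throughout).
def p_compromise(layers: int, pass_p: float = 0.5, bug_p: float = 0.05) -> float:
    """Attacker wins by passing every layer, or via any layer's own flaw."""
    p_pass_all = pass_p ** layers
    p_no_layer_bug = (1 - bug_p) ** layers
    return 1 - (1 - p_pass_all) * p_no_layer_bug

for n in range(1, 7):
    print(n, round(p_compromise(n), 3))
```

Under these assumed parameters the compromise probability bottoms out around four layers and then climbs, consistent with the non-linear relationship the experiment reports.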

The work by Kewley and Lowry is supported by that of Bensinger and Johnson (2000). They tested a more succinct set of boundaries, but were able to identify an interesting risk point that would seem obvious to some. Though the layers increased the work factor for the adversary, the attacker's tendency is then to attack the client directly. The layered approach crumbles when the attack can penetrate to the heart of the system, as with almost any web-browsing computer. There are defense in "further" depth approaches to assist in protecting the client from direct attack. Locasto and his fellow authors (2009) examined the end-point problem conclusively: without secure operating systems, the principles of defense in depth and transactional communication will always be suspect. Consider the metaphor closely and the issue of having untrusted agents as the last link in the chain may not be obvious. The metaphor crystallizes when we realize the untrusted client is what suspends the rest of the chain from the ceiling. The client is not the last link but the first, and therefore the most critical.

Issues with defense in depth are actually nothing new. Through omission and primarily obfuscation, the information assurance and security discipline has increasingly relied on technological solutions for primarily human problems. The reasons for this are wholly outside the scope of this paper, but it could be said that who chooses to do information technology tasks may have driven the solution space irreparably. One need only jump to another domain that uses defense in depth with technology to see the wildly swinging pendulum of human factors. Muschara (2002) states that the controls of nuclear plants are built not only into administrative processes but also into training programs, indoctrination, and socialization norms. Muschara continues to say that defense in depth is a two-edged sword, where organizational issues collect and are hidden by the processes in place. It takes investigation and continual awareness to identify weaknesses that have been instantiated in the process by administrative procedures.

Consider a specific example of defense in depth gone horribly wrong. If seven characters for a password are good, then obviously fourteen or more characters must be significantly better protection. This doubling up is not necessarily associated with a better security posture. In fact, after a certain level (dependent on the user population), adding more characters to a password decreases the security posture significantly (Holstein, 2009). A related issue is then blaming the user for violating security controls. Holstein (2009) says it succinctly: "The bottom line is that one needs to be careful about pushing for higher security assurance by increasing complexity or through defense in depth mechanisms. The result could be increased risk and vulnerability."
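A rough entropy calculation makes the point concrete. The figures are illustrative assumptions, not Holstein's analysis: a mandated fourteen-character password can hold fewer effective guesses than a random seven-character one, because users satisfy the mandate with predictable patterns.

```python
import math

# Illustrative figures (not Holstein's): a random 7-character password
# over 94 printable symbols, versus the 14-character password users
# actually produce under a length mandate -- assumed here to be two
# common words (20,000-word vocabulary each) plus a digit and a symbol.
random_7 = 7 * math.log2(94)
mandated_14 = 2 * math.log2(20_000) + math.log2(10) + math.log2(32)

print(round(random_7, 1))     # effective bits, short random password
print(round(mandated_14, 1))  # effective bits, long patterned password
```

Under these assumptions the "stronger" fourteen-character password carries roughly nine fewer bits of effective entropy, before even counting the post-it notes the mandate tends to produce.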

This may seem counterintuitive to the technologist, but that is likely related to their abject ignorance of the social animal using their systems. As if the human factors issue itself were not enough to create angst, there is also the technology. Defense in depth through additive technological layering is already an issue due to complexity (as described previously). There is also the factor that additional security layers can have unintended consequences where they compete and cause failures of the system. Network connectivity can be disturbed; security agents on computers can erode effectiveness, parasitically paralyze processors, and even crash systems into complete uselessness (Locasto et al., 2009). A computing system destroyed accidentally by the security administrators is no less useless than one destroyed by hostile entities.

Other Options

As discussed earlier, there are techniques that allow a defense in depth approach to work much better if considered early in the design phase. Simply layering security mechanisms can have obvious deleterious consequences, but using good systems analysis and risk analysis to inform the security design process can create significant opportunities for success (Bakolas & Saleh, 2010). Control systems theory is a formalization of state machine mechanisms, and when added to the aforementioned concepts it can create a safety-diagnosability principle to assist the information-theoretical components of controllability, observability, and diagnosability (Bakolas & Saleh, 2010). Unfortunately, state machines and other systems formalization techniques seem to have been abandoned by information technology and information security curricula.
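As a gesture toward what such formalization buys (a hypothetical toy automaton, not the Bakolas and Saleh model), even a trivial state machine makes every unmodeled transition observable, and therefore diagnosable:

```python
# Hypothetical lock/unlock automaton (not Bakolas & Saleh's formal
# model): the transition table is the complete specification, so any
# event outside it surfaces as a diagnosable fault.
TRANSITIONS = {
    ("locked",   "authenticate"): "unlocked",
    ("unlocked", "timeout"):      "locked",
    ("unlocked", "logout"):       "locked",
}

def step(state: str, event: str) -> str:
    nxt = TRANSITIONS.get((state, event))
    if nxt is None:
        # An unmodeled transition is observable, hence diagnosable.
        raise ValueError(f"diagnosable fault: {event!r} illegal in {state!r}")
    return nxt

state = step("locked", "authenticate")  # -> "unlocked"
state = step(state, "timeout")          # -> "locked"
```

The design choice is the point: a layered network whose legal behaviors are enumerated this way can tell the defender *that* something undefined happened, rather than silently absorbing it.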

An interesting option for creating workable defense in depth is to apply the layers dynamically. Considering the earlier commentary on the problems of having "too much defense in depth" and the deleterious consequences on security due to user behavior, it may be important to activate defense countermeasures as needed. This is consistent with the inadequacies of current countermeasures in the face of unknown threats and zero-day exploits. Inherently dynamic response may answer many of the criticisms of defense in depth if done correctly (Winkler, O'Shea, & Stokrp, 1996).

The qualitative versus quantitative strategies of risk management can increase the security posture of an organization, which in turn can assist in the procurement of technology and processes that significantly increase that posture. When looking at the desired results of defense in depth, too often unlimited resources is the underlying assumption of the techniques in play. That is rarely the truth a security administrator is likely to find. Optimization of security controls in a risk management framework can have significant impacts within the budgetary constraints and security posture of an organization (Bass & Robichaux, 2001).
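The optimization point can be sketched as a toy budgeted selection. All controls and figures here are invented for illustration, in the spirit of the risk-driven approach rather than drawn from Bass and Robichaux: rank candidate controls by risk reduction per dollar and buy down the list.

```python
# Invented controls and figures (not Bass & Robichaux's data): each
# control has a cost and an expected annual loss reduction.
controls = [
    ("patch management",  20_000, 90_000),
    ("IDS appliance",     50_000, 60_000),
    ("security training", 10_000, 40_000),
    ("second firewall",   30_000, 10_000),
]

def select(controls, budget):
    """Greedy pick by risk reduction per dollar until the budget is spent."""
    chosen, spent = [], 0
    ranked = sorted(controls, key=lambda c: c[2] / c[1], reverse=True)
    for name, cost, _benefit in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

print(select(controls, budget=40_000))
```

Notably, under these invented numbers the reflexive defense in depth purchase, a second firewall, never makes the cut: cheaper human-factor and maintenance controls dominate on risk reduction per dollar.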

There is an entire discipline that looks at recovery-oriented computing as a method of resilience, instead of simply fighting symmetrically against an asymmetric adversary. Much like the adage, the willow bends in the wind while the oak shatters. This is a conceptual leap beyond the normal computing practices we see instantiated in the traditional computer enterprise. Patterson and his fellow authors (2002) looked at a variety of mechanisms and patterns from other industries that could be indicative of solutions to the information assurance paradigm. It is an interesting and likely revolutionary concept (as defined by Thomas Kuhn (1996)). Moving beyond the defense in depth strategy into the resilient and recovery-oriented paradigm is not totally new to the information technology world. Database failover systems and disk arrays (e.g., RAID 5) are examples of this type of thinking.
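RAID 5 makes the recovery-oriented idea concrete: parity is simply the XOR of the data blocks, so the array is designed on the assumption that a disk will be lost and rebuilt. A minimal sketch:

```python
# RAID 5 in miniature: parity is the XOR of the data blocks, so any
# single lost block can be rebuilt from the survivors plus parity --
# resilience through designed-for failure rather than prevention.
def parity(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"disk0", b"disk1", b"disk2"]
p = parity(data)

# "Lose" disk1, then recover it: d0 ^ d2 ^ (d0 ^ d1 ^ d2) == d1.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == b"disk1"
```

The failure is not prevented; it is priced in, which is exactly the posture recovery-oriented computing generalizes beyond storage.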


This paper set out to show that there are significant issues with the current considerations of defense in depth, while at the same time showing there are solutions to the inherent issues of information assurance and security. Most assuredly, the paper showed that protecting systems is not the same as protecting information. There are too many examples of securing systems without securing the information on those systems. Defense in depth fails when the mechanisms can be leaped by new technology, or bypassed by a user or adversary to attack a client directly. The current paradigm of systems security as instantiated in a defense in depth environment is an issue. Specifically, the technology-centric view ignores the human aspect almost entirely, while only superficially viewing the technology as a security mechanism. The ignorance of most information technology professionals of advanced concepts like state machines is an excellent place to focus future work on defense in depth.

There are contradictory viewpoints to the current erroneous defense in depth discussions. The literature is full of two-dimensional representations of defense in depth analogs that have little to do with each other and appear to be literally contradictory. As resilience rises as a principal security service, the defense in depth paradigm may be deprecated or fall back to its original roots. It is especially interesting that, with the rise of risk management principles, we are perhaps seeing this erosion happen alongside the associated managerial expectations of profit and return on investment.

The conclusion remains that defense in depth is often depicted as a two-dimensional representation (a drawing) of a three-dimensional object (e.g., a castle), of a four-dimensional issue (e.g., security), and that this rarely reflects the multi-dimensional issues of an information sphere. As such, it is likely that errant interpretations of the concept of defense in depth are decreasing security. Simply put, the many facets of information assurance and security are not well served by conflict-based security models that ignore the humanity involved in the conflict at the expense of systems inherently flawed by their creators. Defense in depth is a poorly understood set of concepts used by information security practitioners, concepts that are not even representative of the true operational aspects of conflict in real wars.



Bakolas, E., & Saleh, J. H. (2010). Augmenting defense-in-depth with the concepts of observability and diagnosability from Control Theory and Discrete Event Systems. Reliability Engineering & System Safety.

Bass, T., & Robichaux, R. (2001). Defense-in-depth revisited: qualitative risk analysis methodology for complex network-centric operations. Paper presented at the IEEE Military Communications Conference.

Bensinger, L., & Johnson, D. M. (2000). Layering boundary protections: an experiment in information assurance. Paper presented at the 16th Annual Computer Security Applications, New Orleans, LA , USA.

Byres, E., & Lowe, J. (2004). The myths and facts behind cyber security risks for industrial control systems. Paper presented at the VDE Kongress.

Chen, L., & Leneutre, J. (2009). A game theoretical framework on intrusion detection in heterogeneous networks. IEEE Transactions on Information Forensics and Security, 4(2), 165-178.

Dittrich, D. (2008). On Developing Tomorrow’s Cyber Warriors. Paper presented at the 12th Colloquium for Information Systems Security Education, University of Texas Dallas.

Holstein, D. K. (2009). A Systems Dynamics View of Security Assurance Issues: The Curse of Complexity and Avoiding Chaos. Paper presented at the 42nd Hawaii International Conference on System Sciences, Big Island, HI.

Kuhn, T. S. (1996). The structure of scientific revolutions: University of Chicago press.

Lippmann, R., Ingols, K., Scott, C., Piwowarski, K., Kratkiewicz, K., Artz, M., et al. (2006). Validating and restoring defense in depth using attack graphs.

Locasto, M. E., Bratus, S., & Schulte, B. (2009). Bickering in-depth: Rethinking the composition of competing security systems. Security & Privacy, IEEE, 7(6), 77-81.

Luttwak, E. N. (1980). The Operational Level of War. International Security, 5(3), 61-79.

Muschara, T. (2002). A dual human performance strategy: error management and defense-in-depth. Paper presented at the IEEE 7th Conference on Human Factors and Power Plants.

Patterson, D., Brown, A., Broadwell, P., Candea, G., Chen, M., Cutler, J., et al. (2002). Recovery-oriented computing (ROC): Motivation, definition, techniques, and case studies (No. CSD-02-1175). Berkeley, CA: UC Berkeley.

Rubel, P., Ihde, M., Harp, S., & Payne, C. (2005, 5-9 December). Generating policies for defense in depth. Paper presented at the 21st Computer Security Applications Conference, Tucson, AZ.

Shaneman, S., & Murphy, C. (2007, 29-31 October). Enhancing the Deployment and Security of SIPRNET and JWICS Networks using Intrinsic Fiber Monitoring. Paper presented at the IEEE Military Communications Conference, Orlando, FL.

Smith, C. L. (2003, 14-16 October). Understanding concepts in the defence in depth strategy. Paper presented at the IEEE 37th Annual International Carnahan Conference on Security Technology.

Suarez, G. (2003, 27-31 January). Challenges affecting a defense-in-depth security architected network by allowing operations of wireless access points (WAPs). Paper presented at the Symposium on Application and the Internet Workshops, Orlando, Florida.

Talbot, E. B., Frincke, D., & Bishop, M. (2010). Demythifying Cybersecurity. Security & Privacy, IEEE, 8(3), 56-59.

Tirenin, W., & Faatz, D. (1999, 31 October – 03 November). A concept for strategic cyber defense. Paper presented at the IEEE Military Communications Conference, Atlantic City, NJ.

Weckert, J. (2010, 7-9 June). In defence of the precautionary principle. Paper presented at the IEEE International Symposium on Technology and Society, Wollongong, NSW.

Winkler, J., O'Shea, C., & Stokrp, M. (1996). Information warfare, INFOSEC, and dynamic information defense.



