If I am a major corporation and I suspect, or know, that I have been a target, I want some idea of how to reduce my risk of attack. The consistent hum and noise of the Internet at my back door is enough to keep me aware of the threat actors working to exploit holes in my defenses, but what about the less obvious actors? What is the class of threat considered an advanced persistent threat?
This class of actor is usually described as a nation state or other highly sophisticated actor who, instead of kicking in the door to your network, gains access quietly and remains for a long time. They are not immediately obvious, and as a result they tend to be able to exfiltrate information (documents, records, emails) and data (network topologies, network connections, system configurations) over a long period of time.
What does that actually mean to the regular folk who don’t use “exfiltrate” in a sentence unless watching James Bond? Instead of an all-out assault against your information systems, this is a more subtle method of gaining access. Traditional intrusion detection systems and anti-virus are not going to protect against this type of assault. It is likely to use an unknown zero-day vulnerability to exploit a weakness in the information systems. A lot of press has played up the “nation state” factor in the equation of advanced persistent threats. As an example, the Stuxnet exploit was said to be possible only for a nation state right up until a young researcher reproduced it in his apartment working with a few other people. It was good enough to earn a kindly request from the Department of Homeland Security to pull the presentation on his research.
Consider that we can assume nation states are involved in sophisticated espionage activities. There is a litany of cyber events that can be traced back to nation states over time. However, nation-state resources aren’t necessarily required.
There are over 40,000 vulnerabilities in the CVE database, and each one of them was a zero-day exploit at some point. There are 25 common programming errors consistently identified as a problem. The window between a vulnerability being disclosed and being patched in corporate and government enterprises can be months. Even if you patched the instant a fix was approved to operate, you would still need some amount of time to test and roll out the patch.
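The arithmetic of that exposure window is worth making explicit. A minimal sketch, with invented dates, of the two gaps that matter: the time before a patch exists at all, and the time between the patch shipping and it actually being deployed:

```python
from datetime import date

# Hypothetical timeline for a single vulnerability (all dates are
# made up for illustration; real windows vary widely).
disclosed = date(2011, 1, 10)   # vulnerability publicly disclosed
patch_out = date(2011, 2, 14)   # vendor ships a patch
deployed  = date(2011, 5, 2)    # patch finally rolled out after testing

zero_day_window = (patch_out - disclosed).days   # no fix exists yet
lag_window      = (deployed - patch_out).days    # fix exists, not applied

print(zero_day_window, lag_window)  # 35 77
```

Over a hundred days of exposure in this made-up case, and only the first third of it is the part that requires any sophistication from the attacker.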
This paints a much larger picture of the threat landscape for information systems. It doesn’t take a genius to look at the current vulnerabilities that haven’t been patched. There is also a significant difference between what the CVE lists and what open source tracking groups like BugTraq report; comparing the two creates a clearer view of the vulnerability landscape.
One of the problems of mass media reporting is the assumption that hackers and criminals are just sitting around swilling Mountain Dew in mom’s basement. The mystique of hacking is strong, and the lack of deodorant at certain hacker conventions does cause some riotous behavior. The reality is that adversaries utilizing technology are not hampered by societal biases and can adapt to the environment much faster than others. The idea that hacker groups are not capable of sophisticated intelligence analysis seems to be a social bias of the media and government rather than a conclusion drawn from current events.
A large-scale exploitation may start with targeting a particular organization or industry through substantial research. Knowing what tools and techniques an organization uses gives good insight into a host of attack vectors. Looking through the literature of specific companies or their SEC filings will give a good idea of what they are doing. Press releases from companies doing business with them will provide invaluable intelligence. Workers traveling with asset-tagged equipment will identify the organization and reveal its current platforms.
Knowing that most companies are going to customize security hardening around NIST STIGs and OEM STIGs gives someone a place to start building an exploit matrix. Examine a tool like Stuxnet for some insight on human factors and jumping the air-gap. Basically, the reconnaissance phase is about determining the security posture of an organization, determining the available exploit paths, and then choosing the path with the highest likelihood of a successful exploit.
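An exploit matrix of the kind described can be as simple as a table mapping what reconnaissance observed about the target to candidate attack paths, then picking the most likely one. A toy sketch; every path name, requirement, and score here is invented for illustration:

```python
# Toy "exploit matrix": candidate attack paths, what each requires
# to be present at the target, and a rough likelihood score.
exploit_matrix = [
    {"path": "spear-phish with office macro", "requires": "desktop suite",   "likelihood": 0.6},
    {"path": "USB drop on traveling staff",   "requires": "removable media", "likelihood": 0.4},
    {"path": "unpatched public web app",      "requires": "public web app",  "likelihood": 0.7},
]

# Facts gathered during reconnaissance (SEC filings, press releases,
# asset tags on traveling laptops, and so on).
observed = {"desktop suite", "public web app"}

# Keep only the paths whose precondition was actually observed,
# then choose the one with the highest likelihood.
viable = [e for e in exploit_matrix if e["requires"] in observed]
best = max(viable, key=lambda e: e["likelihood"])
print(best["path"])  # unpatched public web app
```

The point is not the scores themselves but the shape of the reasoning: the adversary filters by what reconnaissance confirmed and commits effort to a single high-probability path, which is exactly why the attack does not look like a broad scan.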
Now think about your information assurance and security professional. They are used to a risk management approach, thinking about security like this: there are systems, that have vulnerabilities, that are exploitable by threats. The adversary is not constrained by that logic chain. They are the threat, and they are thinking about which targets fit their exploits. The adversary is in a much shorter observe, orient, decide, act loop than the security administrators. Chances are the security administrators do not have situational awareness of other environments that might mirror their own. Attack failures may not be correctly attributed, but the adversary now understands what doesn’t work, out of a very small set of what might work.
I recently read a blog post by a lawyer who said all the cyber hysteria was phooey. He went on to say that there were no significant threats and that basically all the cyber security stuff was a hyped version of Y2K. So he was ignorant of both Y2K and cyber. Lawyers: you can hang them, but they breed like rats. The adversary’s principle of evaluation is not much different from the selection process used by a car thief: skip the ones with alarms, the ugly ones, and the ones owned by guys with guns, and go after the easy targets. This is the principle of asymmetry in motion.
Once a target has been selected and an attack initiated, the next step is hardening the exploit. Thinking back to a recent posting by LulzSec around their Bethesda exploit, it appears another party had entered Bethesda’s systems and LulzSec detected that entrance. In other words, they had more situational awareness of the victim’s systems than the victim’s system administrators did. It would be interesting to confirm that point. Once an exploit has been engineered and triggered, the premise is then to protect that exploit. It is like a weed growing under the noses of the system administrators until the roots are nice and firm; only then do you (maybe) see it pop above the ground.
What isn’t going to happen is an enumeration attack where every port and every Internet protocol address is probed, as is taught in most security courses. This is going to be highly targeted, and if done correctly it will involve one simple strike that exploits a system and then moves through the network without being detected. Think about the common issue with intrusion detection systems: the false positive rate sets the filter, but the real problem is the false negatives. Not knowing what you don’t know will harm you much more than anything you already know. Human ingenuity turns off the constant alarms. I read that this may have partially caused the Deepwater Horizon debacle: the alarms going off constantly were eventually ignored. A significant amount of information security education amounts to cyber infantry tactics, but the advanced persistent threat is the sniper of cyber conflict.
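The false-positive problem can be made concrete with a quick base-rate calculation: when real attacks are rare, even a sensor that looks accurate on paper buries its operators in false alarms, which is exactly why the alarms get tuned down or ignored. A sketch with purely illustrative numbers:

```python
# Base-rate sketch: with rare attacks, a "99% accurate" sensor
# still produces mostly false alarms. All numbers are illustrative.
events_per_day = 1_000_000
attack_rate    = 1e-5    # ~10 truly malicious events per day
true_pos_rate  = 0.99    # sensor catches 99% of attacks
false_pos_rate = 0.01    # and misfires on 1% of benign traffic

attacks = events_per_day * attack_rate
benign  = events_per_day - attacks

true_alerts  = attacks * true_pos_rate        # ~9.9 real alerts
false_alerts = benign * false_pos_rate        # ~9,999.9 false alarms
missed       = attacks * (1 - true_pos_rate)  # the false negatives

# Fraction of alerts that are actually attacks.
precision = true_alerts / (true_alerts + false_alerts)
print(round(precision, 4))  # 0.001 -- roughly one alert in a thousand is real
```

With a thousand bogus alerts for every real one, turning the console volume down is a predictable human response, and that is precisely the gap the single quiet strike walks through.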
After hardening the system behind them, the adversary infiltrates deeper, moving beyond the exterior of the network or from clients to servers. The next principle is moving data out of the network. Wait. How do we jump air gaps again? Well, there is always the ubiquitous USB key, the iPod, the smartphone, backup tapes, wireless networks, “sometimes connected” systems, and infecting DHCP/DNS systems to help jump from one place to another. An exhaustive list of the exploit paths would also have to include building facility systems, such as electrical and “burglar” alarms, that may have connections to wireless systems. The fact is a dedicated adversary sitting around brainstorming is going to come up with far more than I can in a blog post.
Consider the adversary now harvesting your data. Like some kind of cyber burglar, they are gathering it up. How do they know what is important? What is your business? What are you working on today? Tomorrow? Your behavior patterns and your file metadata are going to identify the likely data objects to take first, though simply downloading email archives will work too. Encrypting your email archives may help, but where do you store the keys? Look at a fictional business information environment. What do your data objects look like? Drawings, letters, email, receipts, services; what makes up the information objects on the systems? Are the information systems centralized into an office productivity suite where the data objects are housed in databases that can be copied in total, without needing the data repository applications? Are email systems secured against wholesale copying by their functional specification, or are you relying on the good graces of a system administrator who has just been superseded by an adversarial entity? How many of your systems are protected only by the social controls you place on a network administrator? Protecting passwords and information security mechanisms will likely fail when an adversary has gone around all of those traditional mechanisms and has that level of control within the target environment.
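One way to see your own data the way a harvesting adversary might is to rank files by how recently they were touched; recent activity is a strong signal of what matters to the business right now. A defensive sketch using only the standard library (the function name and parameters are my own invention; point it at a directory you control):

```python
import os

def recently_touched(root, top_n=10):
    """Return the top_n most recently modified files under root.

    This is the same metadata signal (modification times) an adversary
    could use to decide which data objects to copy first.
    """
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                candidates.append((os.path.getmtime(path), path))
            except OSError:
                continue  # unreadable or vanished file; skip it
    candidates.sort(reverse=True)  # newest modification time first
    return [path for _mtime, path in candidates[:top_n]]
```

If the top of that list is exactly the material you would least like copied, you have just reproduced the adversary’s prioritization with a dozen lines of code and no special access at all.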
The final stage is covering their tracks. A silent withdrawal is likely the current practice, while leaving command and control bots behind just in case. On the surface this pattern of intrusion looks like what most people have been taught to look for in their information assurance courses. However, there is a significant set of differences:
- Enumeration is dispensed with.
- The use of zero-day exploits is nearly undetectable with traditional tools.
- The systems and security administrators can do everything correctly under current state-of-the-art practices and still be exploited.
- Current tactics and strategies are wholly unprepared to deal with the threat vector.
- Most security mechanisms will not detect malicious activity, as it will be discounted as administrator use.
I don’t like the euphemism advanced persistent threat. The techniques are not new, nor advanced, but they are most assuredly persistent. The fallacy is that in the “old days” the APT pattern described above was simply how an elegant hack was accomplished. Using systems as designed, for wholly unexpected results, was the epitome of elegance. Information security education, though, has instantiated specific techniques like enumeration as the primary techniques for penetration testers. I think what has happened is that tool creators have become so few, and tool users so many, that the people who write attack tools have risen above the fray to APT status.
In general, most attacks that create the noise of the Internet are done by tool users who couldn’t code their way out of a box given Visual Basic. The people who could code up a log file obscuration script on the fly, so a log “window” over-writes the log with current time stamps (not detectable on separately hosted log file servers), are fairly rare.