In anti-forensics, attackers attempt to erase or disguise the traces of their intrusion and to make the investigator’s work as difficult as possible. Techniques used to elude computer forensics operations range from the simple deletion of log files to the installation of advanced, nearly invisible rootkits at the system level. An important work in this area is Eoghan Casey’s Association for Computing Machinery (ACM) article discussing sophisticated intrusions and examining the relationship between anti-forensic techniques and national security concerns. His article, “Investigating sophisticated security breaches,” describes how sophisticated intruders operate, the weaknesses they exploit, and some methods to overcome or prevent certain anti-forensic techniques.
Lab 7 is the culmination of all our previous labs. Building upon what we have learned in those labs, we now have the opportunity to put that knowledge to the test. Lab 7 gives us the chance to use the tools we have learned about to defend against an attack from an adversary as well as to attempt an attack against one of our adversaries. We will carefully create a plan for exploiting our assigned adversary and will report back to that team whether the exploit was successful or defended against. Professor Liles will also be notified.
This week’s review of published literature builds upon the information presented, and knowledge gained, from the articles and exercises of our previous six labs. All of the articles discussed in this literature review relate directly to what we have been learning in those labs. The common thread tying the lab 7 articles together is the prevention of security breaches and the ability to identify the perpetrators who exploit systems. In addition, a couple of articles tie in nicely with what this course has been all about: a lab-based setting in which we as students learn about tools used to defend against and/or exploit systems, all in a safe and controlled environment. The last article presented this week relates directly to the exercise in lab 7, in which we attempt to defend against and exploit each other’s systems.
Network intrusions are the most challenging computer crimes to investigate, especially when the intruders are highly sophisticated and motivated (Casey, 2006, p. 49). In his article “Investigating Sophisticated Security Breaches,” Eoghan Casey gives us insight into the difficulties investigators encounter when dealing with network intrusions. Acting quickly is essential to preserving evidence before it is lost or destroyed by the attacker. Sophisticated intruders gain entry to networks through known vulnerabilities and conceal their presence while obtaining valuable data, using strong encryption tools to cloak their activities by encrypting data before they steal it (Casey, 2006, p. 49). Highly skilled investigative teams with specialized tools for preserving hard drives, physical memory, and network traffic are needed to catch sophisticated intruders. The ideal team for investigating an intrusion should include experts in information security, digital forensics, penetration testing, reverse engineering, programming, and behavioral profiling (Casey, 2006, p. 54). Improved training, tools, and data-gathering techniques are needed to help investigators. In our previous lab exercises we have examined all of the above. This by no means makes us experts; however, it does help us gain insight into just how important understanding these concepts is for investigating network intrusions.
Mobile agent technology offers a new computing paradigm in which a program, in the form of a software agent, can suspend its execution on a host computer, transfer itself to another agent-enabled host on the network, and resume execution on the new host (Jansen & Karygiannis, 1998, p. 35). In their article “A Protocol Preventing Blackbox Tests of Mobile Agents,” Fritz Hohl and Kurt Rothermel present a protocol for preventing testing attacks against blackbox-protected mobile agents. There are a few problems associated with mobile agents. First, they allow programs to run on computers that are not controlled by the owners of those programs. Second, computer owners fear receiving viruses, worms, and Trojans that will ultimately damage their systems (Hohl & Rothermel, 1999). Blackbox protection can be used to prevent most malicious attacks; however, there is still the threat of blackbox testing. Blackbox testing attacks can be prevented using a protocol that requires only a small service, a registry, on a trusted node. The assumption, though, is that the participating agents have the blackbox property to protect against modification attacks (Hohl & Rothermel, 1999).
Computer security is of interest to everyone. Whether it is your own personal computer at home or the one you use for work or school, there is great concern for protecting it against viruses and malicious attacks. Courses like TECH 581-Computer Operations, involving hands-on lab participation, allow us to learn how to protect our PCs and networks against such attacks in a controlled and safe environment. In their article “Defense Against the Dark Arts,” Mark W. Bailey, Clark L. Coleman, and Jack W. Davidson introduce us to their method of teaching computer security. Much like our class, the authors focus their course on defending against viruses and introduce their students to tools that enable them to detect and defend against attacks. Using compiler tools allows the authors to accomplish two objectives at once: while teaching their students about viruses, worms, and software vulnerabilities, they also present concepts of computer science through the study of program analysis (Bailey, Coleman, & Davidson, 2008, p. 315). The motivation for teaching their course was to make students aware that threats to computers such as viruses and worms are a serious problem, and that computer science students should understand malware schemes, how to detect them, and how to defend against malicious software.
Lab-based, hands-on courses in computer security are an effective way to teach technology students about malware and hacking attacks. This article, like the one prior, relates to all of our lab exercises in that we were able to set up a network in an isolated, controlled environment and then, using the exploit tools we researched, perform penetration testing and ultimately attack a predetermined team. In her article “Cyberattacks: A Lab-Based Introduction to Computer Security,” Amanda M. Holland-Minkley introduces us to the course she developed. Her course, appropriately named “Cyberattacks,” was created to teach her students about programs that are developed to defend against exploits rather than to exploit users (Holland-Minkley, 2006, p. 39). Even though her course was designed for non-majors and introductory-level students, it is still a good start at teaching students about computer network security. As she continues to develop her program, in time it will become more like the program and labs we have taken in TECH 581-Computer Operations.
Red team-blue team exercises take their name from their military antecedents. The idea is simple: one group of security pros, the red team, attacks something, and an opposing group, the blue team, defends it. Originally, the exercises were used by the military to test force readiness. In the ’90s, experts began using red team-blue team exercises to test information security systems (Mejia, 2008).
In their article “Breaking Blue: Automated Red Teaming Using Evolvable Simulations,” Stephen C. Upton, Sarah K. Johnson, and Mary L. McDonald discuss a supplement to manual red teaming that they have begun to develop, called AutoRedTeaming. Red teaming has been used for some time in the military defense community to uncover system vulnerabilities (Upton, Johnson, & McDonald, 2004, p. 1). With AutoRedTeaming, the authors automate the vulnerability discovery process using a combination of algorithms and agent-based simulations to “break blue” (Upton, Johnson, & McDonald, 2004, p. 2). The authors conducted their first tests of the AutoRedTeaming concept; the test run of one of their algorithms against the test scenario found ways to “break blue” and obtain significant volumes of data (Upton, Johnson, & McDonald, 2004, p. 2).
For the first part of the lab, an operating system had to be chosen for the machine. Because of the strong security reputation of Linux operating systems, the group decided to use Debian Linux. When choosing security measures for the virtual machine, the group decided on the “iptables” firewall software for Linux. The iptables rules were configured as follows:
# Generated by iptables-save v1.3.6 on Tue Jul 14 14:10:20 2009
*filter
:INPUT ACCEPT [6:468]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
-A INPUT -j LOG --log-level 4
-A INPUT -j DROP
COMMIT
# Completed on Tue Jul 14 14:10:20 2009
This configuration allows connections only on port 22, for remote access. We decided to use SSH2 as the method for allowing remote logins to the machine. Remote connectivity was part of the requirement of the lab; otherwise, we would not have allowed it. While SSH2 does have a vulnerability associated with it, it can only be exploited under certain circumstances. The iptables configuration also drops all packets that are not part of an established connection; the machine will therefore not reply to the source host, even with a reject packet. This reduces the effectiveness of port and ping scans. The firewall is also configured to log dropped input in order to record any attack attempts. Once the machine had been configured, it was left on during the specified time window, and the IP address “184.108.40.206” was given to Team 5.
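As a complementary host-side measure, the SSH daemon itself can be locked down so that only protocol 2 is accepted, removing the SSH1 downgrade surface. The fragment below is an illustrative sketch for OpenSSH’s sshd_config, not our actual configuration:

```
# /etc/ssh/sshd_config (fragment) -- illustrative values
# Refuse the SSH1 protocol entirely, blocking downgrade attacks
Protocol 2
# Force logins through an unprivileged account
PermitRootLogin no
# Limit authentication attempts per connection
MaxAuthTries 3
```

After editing the file, the sshd service would need to be restarted for the settings to take effect.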
Once our machine had been configured, we had to plan our attack against Team 2. Team 2 had notified us that their IP address was 220.127.116.11. Before an attack can be launched, one must identify the purpose of the target and of the attack. The point of the lab is not necessarily to successfully exploit the system but rather to conceal all traces of the attack attempts. Ideally, a successful exploit would be desired, but the overall stealth of the attack is the goal. The first part of the attack is reconnaissance. What is the target operating system? Are there any open ports on the target system? The ideal method of obtaining this information is a passive scan of the system. However, this method requires certain environmental characteristics that are not present in this scenario; these characteristics will be covered in greater detail later.
The next possible method of reconnaissance is active scanning. This method makes it easier to gather information about the target but increases the risk that the attacker will be detected. Therefore, certain precautions must be taken. Nmap was used to scan the target. The command and arguments used to perform the scan were as follows: “nmap -e eth0 -T Sneaky -p1-1024 -O -sS --source_port 999 -S 18.104.22.168 -PN 22.214.171.124”. The timing for this scan was set to “Sneaky” in order to reduce the likelihood of detection; however, this timing increases the duration of the scan. To reduce that duration, only ports 1024 and below were scanned. The results from these ports alone can give us an idea of the security measures in place, without having to scan all 65,535 ports. The scan’s source port and source IP address were also spoofed in order to avoid detection. The source IP was set to 126.96.36.199, which is the IP of the target. The actual IP address of the attacking machine was 188.8.131.52. The scan was also set to perform OS detection and to use SYN packets (for a quieter scan). After some time, the scan completed. The results indicated that all 1024 ports were filtered. Therefore, no exploits could be used and the attack was over, because no operating system version or open ports could be detected.
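The “filtered” result follows from how a SYN scan interprets the reply (or lack of one) to each probe. The sketch below is a simplified Python illustration of that classification logic, not Nmap’s actual implementation:

```python
# Rough sketch of how a SYN ("half-open") scan classifies port state.
# This is a simplification of Nmap's logic, not its actual code.

def classify_port(response):
    """Map the reply to a SYN probe onto a port state.

    response: None if no reply arrived before the timeout, otherwise a
    string naming the TCP flags (or ICMP error) observed.
    """
    if response is None:
        # No reply at all: a firewall silently DROPped the probe.
        return "filtered"
    if response == "SYN/ACK":
        # The target began the handshake: a service is listening.
        return "open"
    if response == "RST":
        # The target actively refused: port is reachable but closed.
        return "closed"
    if response.startswith("ICMP"):
        # e.g. an ICMP port-unreachable error (a REJECT rule).
        return "filtered"
    return "unknown"

# All 1024 probes against Team 2 timed out, so every port came back filtered.
print(classify_port(None))       # filtered
print(classify_port("SYN/ACK"))  # open
print(classify_port("RST"))      # closed
```

Because our rules end in DROP rather than REJECT, probes fall into the first branch, which is exactly what Team 2’s firewall did to our scan.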
During the attack phase, the firewall logs for iptables were checked. The logs indicated evidence of a port scan from the IP address 184.108.40.206. The /root folder was checked for the required exploit file from Team 5, but no file existed. No other evidence of attacks from any other sources existed. Once the attack window had passed, the machine was powered off. Team 5 reported a successful exploit against the IP address 220.127.116.11. However, in their description, they indicated that the machine attacked was a Windows XP SP0 machine. A possible reason for this is that another machine pulled the same IP address from DHCP. Of course, these tests were all run on a subnet that contained many different attacks happening simultaneously. It is also possible that other attacks changed the results, such as ARP poisoning or other attacks that could have redirected Team 5’s traffic to another machine. Of course, the point of the lab is to detect the attacks, and a port scan was detected. We cannot be certain that the attacking machine was in fact Team 5’s, both because of avoidance methods such as IP spoofing and because Team 5’s scan may have been performed only on the Windows XP SP0 machine that shared the same IP address.
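The iptables LOG target writes one kernel-log line per dropped packet, with fields such as SRC and DPT. The Python sketch below shows one way such lines could be parsed when reviewing the logs; the sample line and its addresses are made up for illustration, not taken from our actual logs:

```python
import re

# Illustrative iptables LOG line. The field names (IN=, SRC=, DPT=, ...) are
# the standard ones the LOG target emits; the addresses are hypothetical.
sample = ("Jul 14 14:12:03 debian kernel: IN=eth0 OUT= "
          "MAC=00:0c:29:aa:bb:cc:00:0c:29:dd:ee:ff:08:00 "
          "SRC=10.0.0.99 DST=10.0.0.5 LEN=44 TOS=0x00 TTL=49 "
          "PROTO=TCP SPT=999 DPT=80 SYN")

def parse_drop(line):
    """Extract (source IP, destination port) from an iptables LOG line."""
    src = re.search(r"SRC=(\S+)", line)
    dpt = re.search(r"DPT=(\d+)", line)
    if not (src and dpt):
        return None
    return src.group(1), int(dpt.group(1))

print(parse_drop(sample))  # ('10.0.0.99', 80)
```

Collecting these tuples across the log and counting distinct destination ports per source address is what makes a port scan stand out: one source probing many ports in a short window.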
The attack against Team 2 was unsuccessful because of the security measures that they put into place. It seems that Team 2 followed a method of securing their machine similar to our group’s. Since all of the ports on Team 2’s machine were filtered, Nmap was able to detect that the machine was dropping the packets rather than accepting or even rejecting them. Because of this, Nmap was unable to determine an operating system or version, as the fingerprint matched too many OS types. Even though this firewall rule set can defeat active OS detection, detection can still be performed; however, this environment does not allow for it. As stated earlier, passive scanning would be the ideal method of scanning this host. With passive scanning, the operating system can be detected because the attacker is not sending uninvited traffic but rather analyzing invited traffic between the target and other hosts. The problem with this method here is that there is no traffic to analyze.
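Passive fingerprinting tools infer the operating system by matching fields of traffic they observe, such as initial TTL and TCP window size, against signature databases. The Python sketch below is a toy illustration of the idea; the signature values are common defaults used only as examples, far simpler than what real tools use:

```python
# Toy passive-fingerprinting sketch: match observed SYN fields against a tiny
# signature table. Real tools use far richer signatures; the (initial TTL,
# TCP window size) pairs below are common defaults, shown only as examples.
SIGNATURES = {
    (64, 5840):   "Linux 2.6 (likely)",
    (128, 65535): "Windows XP (likely)",
    (255, 4128):  "Cisco IOS (likely)",
}

def guess_os(ttl, window):
    """Guess the sender's OS from initial TTL and TCP window size.

    The observed TTL is rounded up to the nearest common initial value
    (32, 64, 128, 255) to undo the per-hop decrements along the path.
    """
    for initial in (32, 64, 128, 255):
        if ttl <= initial:
            ttl = initial
            break
    return SIGNATURES.get((ttl, window), "unknown")

# A SYN observed with TTL 49 implies an initial TTL of 64, a Linux default.
print(guess_os(49, 5840))
```

The key point for this lab is that the method needs packets to examine; a machine that generates no traffic, like our dormant virtual machines, gives a passive tool nothing to match.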
When considering the security of a system, ideally one would want the highest amount of security and the highest amount of usability out of the system. Of course, in computer security, security is traded for usability and vice versa, and there must be a comfortable balance between the two. Luckily for all of the teams involved in this lab, almost no usability is required of the machines. This brings us back to why leaving SSH2 open on our machine was not a major concern. While it is an entryway into the machine, the only commonly known attacks require the SSH2 connection to be intercepted and renegotiated down to SSH1, after which the encryption is brute-forced. This, of course, requires someone actually using the system. Since the machines were administered via VMware, SSH connections were not used. Simply because these virtual machines provide no functions and merely lie dormant on the network waiting for an attack attempt, the majority of possible exploits are ruled out.
While there are many types of exploits that can be performed remotely and do not require the system to be generating traffic, some ports still need to be open for the attack. Therefore, even unpatched systems can still become very resilient to uninvited attacks simply by using strong firewalls. Of course, the opposite is true too. While strong firewalls are important, invited traffic can still be used to exploit a system. For instance, if an attacker can get a victim to visit a malicious web site, exploit code can be executed on the machine and bypass the firewall because the user requested the exploit code.
Even though Team 5 exploited another system, the results of the lab have not changed. From an anti-forensics standpoint, the port scan was still detected, though it remains unknown whether the attack actually originated from Team 5. Even if the team did attack the correct machine, exploitation is almost impossible under the circumstances, because every team is using a fully patched system. Considering the information that was covered in lab 6, even if the team had used an aggressive scanner such as Nessus, the results would have been the same and the likelihood of detection would have been much higher. For a successful remote attack, the attacking team would have to exploit a new or unknown vulnerability in the operating system that allowed full write access to the machine. These types of exploits are rare unless they target third-party software, which would also have to be running as a service or being used by a user. The combination of the absence of services or user activity on the machines and the fact that the machines are almost completely blocked off by firewalls makes the possibility of exploitation very slim. In fact, most of the systems might as well have been unplugged from the network altogether. Therefore, the lesson learned from this lab is that the usability of a system directly affects the likelihood of an exploit.
The only issue involved in the lab was the fact that another system claimed the IP address of our virtual machine. Whether that machine pulled the address from the DHCP server itself or whether another attack (such as ARP poisoning) was used is unknown. However, we feel that it ultimately did not affect the final outcome of the lab.
While the team was not able to exploit another team’s machine, other teams were not able to exploit our machine either. The team does not believe that the purpose of the lab was truly to exploit another machine, but rather to learn how to protect our own systems. This lab showed that by following simple guidelines a system can be secured against a large number of exploits. The reason so many systems are easily exploited is that not everyone secures their systems properly. If everyone followed the recommended guidelines, there would be more secure systems, which would make everyone safer in the long run.
Bailey, M. W., Coleman, C. L., & Davidson, J. W. (2008). Defense against the dark arts. SIGCSE ’08, 315-319.
Casey, E. (2006). Investigating sophisticated security breaches. Communications of the ACM, 49(2), 48-55.
Hohl, F., & Rothermel, K. (1999). A Protocol Preventing Blackbox Tests of Mobile Agents.
Holland-Minkley, A. M. (2006). Cyberattacks: A lab-based introduction to computer security. SIGITE ’06, 39-45.
Jansen, W., & Karygiannis, T. (1998). Mobile agent security (NIST Special Publication 800-19). National Institute of Standards and Technology.
Mejia, R. (2008, April 27). Red team, blue team: How to run an effective simulation. Network World. Retrieved July 28, 2009, from http://www.networkworld.com/news/2008/042508-red-team-blue-team-how.html?page=1
Upton, S. C., Johnson, S. K., & McDonald, M. L. (2004). Breaking Blue: Automated Red Teaming Using Evolvable Simulations. Presented at the Genetic and Evolutionary Computation Conference 2004, 1-3.