The purpose of this exercise is to emulate a realistic attack/defense scenario in a virtual test environment, and to use forensics to determine the pattern of an attack. First, relevant literature is consulted and reviewed. Next, the team develops a plan for the defense of a virtual machine, and configures the machine according to this plan. In parallel, a plan of attack is developed through reconnaissance and research, with an attempt being made across a network to penetrate a machine defended by another team. Finally, an analysis is made of attacks on the team’s own machine, and the results are reported.
It is an oft-repeated phrase: “the best defense is a good offense.” While this may seem counter-intuitive with respect to this lab exercise, as the target for attack is a different team than the one invading us, there is merit in the concept. From the initial planning stages, our team realized that any proposed offensive approach must necessarily be addressed in the defensive posture: there was every reason to anticipate that the same methods might be employed against us directly. It was found that, from the very beginning of the planning phase, a large benefit was derived from creating a unified plan that was not strictly delineated into areas of defense and attack.
It is with this noted that we set our research goals. First, we propose to develop a connected plan of defense and offense, each aspect leveraging the other. Second, we attempt to use this unified plan to mount an effective defense of a virtual machine against a network-based threat (namely, team two), all the while pressing a pointed attack of our own across the network against a remote target (specifically, team four). Finally, we analyze the effectiveness of our defense using forensic methods.
This week’s assigned readings follow the pattern of previous weeks, relating directly to the current lab, to aspects of penetration testing, or to class structure. The group read an overview of forensics in sophisticated “high stakes” security breaches, two different approaches to security education, a proposed methodology for automated red teaming, and a software design concept for defense against “blackbox testing.”
While the majority of this week’s activity dealt with attacking other teams’ systems, the real intent of the lab experiment was to expose the students to anti-forensics, the art of hiding one’s tracks. To be proficient with this skill, one must understand the principles of forensics. Eoghan Casey’s article “Investigating Sophisticated Security Breaches” gives the reader an overview of best practices in digital forensics, as well as a brief look at some of the techniques used to obfuscate attacks (Casey, 2006, p. 48). Casey cites the growing sophistication and organization of operators attacking networks, and their ability to conceal attacks, as motivation to take computer crime more seriously (pp. 48-49). He postulates that the best way to combat these breaches is to combine the skills of the digital investigator and the security professional in order to efficiently and legally gather evidence once a security breach has been detected (p. 49). Casey does a good job of covering the more common techniques used to cover one’s tracks in an attack, as well as providing evidence gathering techniques and tools. He also points out areas where investigators must be cautious, lest they produce false evidence or end up in violation of computer crime laws themselves (p. 53).
Casey’s article is well written and gives a good overview of his topic for his target audience. He implies, rightly so, that a security breach is inevitable, and that good forensics practice may help locate the parties who carried out the attack. While it is mentioned, Casey does not emphasize the idea that these sophisticated attackers may hide behind several layers of communication, and that an investigation may only detect the point of entry to the network, not the actual root attacker. Casey also avoids any discussion of preventative measures, though this is likely due to the focus of the article rather than an unintentional omission.
While Casey provides us with post-attack evidence gathering methods, Fritz Hohl and Kurt Rothermel (1999) propose a method for preventing penetration of the system from occurring in the first place. In “A Protocol Preventing Blackbox Tests of Mobile Agents,” they propose a method of validating input to web applications to prevent the use of random input generation as a scheme for vulnerability detection (p. 1). The authors see this method of probing as a major vulnerability in current application design (p. 2). They propose the addition of a registration value to application code to track instances of interaction, and require that inputs from each agent remain the same (p. 5). The idea is to prevent multiple inputs in rapid succession that would be indicative of a blackbox probing attempt.
Hohl and Rothermel (1999) propose an innovative method for preventing this particular type of attack, though even they admit that there is work to be done: the additional code creates quite a bit of overhead in the application (p. 11), and the method requires manual editing of existing code, providing no stand-alone mechanism for validation (p. 12). The authors unintentionally provide the students with an excellent example of applying blackbox testing principles, normally used in application testing, to penetration testing. This methodology would be useful for attacking systems that serve a web-based or remotely available application.
In a similar vein, Stephen Upton, Sarah Johnson, and Mary McDonald (2004) presented “Breaking Blue: Automated Red Teaming Using Evolvable Simulations.” Rather than focus on one type of attack in the IT world, Upton et al. examine the concept of red teaming in general (p. 1). The authors state that red teaming is “manually intensive” and requires the work of dedicated experts. They hypothesize that automated, simulation based red teaming could be used to gather information that could assist with the manual process and reduce “surprise” (p. 1).
The group was unable to find the actual presentation that the document summarizes, so it would be unfair to criticize the brevity or quality of the work. It is notable that the authors establish the idea of automated testing, in this case for what appears to be military or law enforcement tactical simulations. Even at this early stage, they find that simple simulations only produce the results of preset input, and postulate that artificial intelligence or learning systems are needed in order to fully realize the potential of automation (Upton et al., 2004, p. 2). Based on the results of the assignments in this class, it is possible to extend the ideas that Upton et al. present to build a red teaming or penetration testing engine that would be very capable in aiding system testing.
In order to use such a tool, however, the operator would first have to understand the basic concepts of security. “Cyberattacks: A Lab-Based Introduction to Computer Security” by Amanda M. Holland-Minkley (2006) describes a course dedicated to teaching these basic concepts (p. 39). Holland-Minkley states that there is a growing need for information security education, and that while it is crucial for information technology students, those in other majors should be taught as well; she proposes a course designed to meet this need (p. 39). The proposed curriculum teaches both the theoretical side of the topic, such as ethics, history, and policy, and the applied side, with labs that demonstrate malware, viruses, and other exploits, with the goal that students become more aware of the potential dangers and capable of taking steps to mitigate the threat (p. 39).
Holland-Minkley (2006) declares the program to be a success. While her methods appear to be sound, her statistics show little or no positive change, and in some cases what the group feels is actually a step backward (p. 44). In spite of this, the author presents a solid program of study for undergraduates in non-technical majors who wish to increase their knowledge of the topic. While most of the skills are transferable, the content appears to be a bit elementary for graduate students in technology majors.
Another spin on teaching information security technology comes from Mark W. Bailey, Clark C. Coleman, and Jack W. Davidson (2008) in their drolly named “Defense Against the Dark Arts.” The authors state that rapidly declining enrollment in computer science classes is a major issue. They propose to combat this loss of enrollment (and presumably, therefore, billable hours) by creating a class that “today’s student considers relevant.” To this end, they propose a class that teaches programming by focusing on malicious and “anti-virus” code (p. 315). The rest of the paper is a detailed description of the curriculum. There is nominally a section on ethics in the first week of the class; the rest of the content teaches engineering and reverse engineering of code, using viruses and other exploits as examples (p. 316).
Bailey et al. (2008) call the class a success, and in that it appears to have met the goal of increased enrollment in computer science courses, the team has to agree (p. 318). While the class the authors describe is similar to the one we are taking now in that it teaches controversial and potentially dangerous skills, the team finds the authors’ motivation suspect. If the goal was to prepare well trained developers in methods to combat poor code writing practices and malicious software, we would have no problem with it. However, Bailey et al. blatantly state that the primary concern was drawing in students, which translates to watching the bottom line (p. 315). Using “edgy” dangerous material for financial gain smacks of unethical behavior.
Methodology and Procedure
Plan of Defense
Foremost, the team decided to use a Windows XP Service Pack Three machine, as prior research had shown this to be a securable setup. Additionally, it was believed that though a Linux machine could be properly configured for the exercise, by a large majority the team members were more proficient and comfortable in a Microsoft NT kernel based environment. It was also felt that extensive forensics, if necessary, would be easier to accomplish for the aforementioned reasons. To secure this system, the Microsoft Specialized Security, Limited Functionality (SSLF) Workstation group policy for XP SP3, as described in the “Security Compliance Management Toolkit: Windows XP” (the document from lab five), was applied. The firewall was also enabled, with only a single port opened for Secure Shell (SSH). As no files were being shared, the Microsoft file sharing service was disabled on the outward-facing network interface. Finally, all existing user accounts were disabled, including the default “Administrator,” and two accounts were added, one with administrative privileges and one without, both with high-complexity passwords to deter brute-force cracking attempts. Two network interfaces were configured: one which connected to the “outside” laboratory network, and another which was accessible only from the local subnet.
To facilitate the network login requirement of the exercise, the team installed the OpenSSH server for Windows package (http://sshwindows.sourceforge.net/). One other SSH package was evaluated (http://www.freesshd.com/), but it was found to be prone to crashes and did not appear to interface properly with the native Windows NT authentication system. The OpenSSH server was installed, the proper commands were run (mkgroup, mkpasswd) to generate a permissions mapping for a non-administrative user, and the service was started. The non-standard high port number of 22000 was chosen for the service to listen on; while this does very little to deter detection, it might not be apparent on a quick Nmap scan, which typically interrogates only about a thousand common low-numbered ports.
The team elected to use this SSH server for two specific reasons. First, as no graphical user interface was present, many local escalation exploits were unlikely to function; for instance, Internet Explorer could not be run from the command line, and other Graphics Device Interface (GDI) elements could not be compromised. Second, as the main remote connection mechanism of Microsoft Windows (Terminal Services, a.k.a. Remote Desktop) is unable to verify the identity of the remote host on connection (unlike SSH, which uses a fingerprinting system), well known “man in the middle” attacks exist by which to exploit this flaw. The team believed the installation of this SSH server application to be within the bounds of the exercise, as Microsoft documentation directly recommends the OpenSSH server for use with its Windows Services for UNIX (SFU), since Windows provides no SSH functionality natively (Russel, 2004). We did not consider this to be “artificially protected,” as it is a standard usage pattern; moreover, no complaints were raised by the attacking team in this regard.
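The host fingerprinting that distinguishes SSH from Remote Desktop can be sketched as a simple key-pinning scheme. This is our own illustrative model, not the actual OpenSSH implementation: the first connection records a fingerprint of the server’s public key, and any later mismatch signals a possible man in the middle (host names and key bytes below are invented).

```python
import hashlib

class KnownHosts:
    """Sketch of SSH-style host-key pinning: trust on first use, then pin."""
    def __init__(self):
        self.pins = {}

    def verify(self, host: str, host_key: bytes) -> bool:
        fp = hashlib.sha256(host_key).hexdigest()
        if host not in self.pins:
            self.pins[host] = fp      # first connection: record the fingerprint
            return True
        return self.pins[host] == fp  # later: any change suggests a MITM

kh = KnownHosts()
print(kh.verify("vm.lab", b"server-public-key"))    # True (first use, pinned)
print(kh.verify("vm.lab", b"server-public-key"))    # True (matches the pin)
print(kh.verify("vm.lab", b"attacker-public-key"))  # False (MITM suspected)
```

Remote Desktop of this era offered no equivalent check, which is precisely the gap the “man in the middle” attacks mentioned above exploit.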
With regard to other security measures, after discussion and reconnaissance of other teams’ shared network folders, it was decided to run the VM for the exercise from the temporary disk storage allocated to the Citrix session, as this appeared completely unavailable to other users. After further consideration, it was thought that a dropped Citrix session might cause the entire VM to be erased, and so it was copied to the presumably private “Students” U: network drive. The passwords on this working copy were changed, and the original VM labeled “Lab 7” was left in the shared folder, with fake run-lock folders added in an attempt at misdirection. This was done to obscure the actual identity of the target machine, and to throw off any attack which involved circumventing file permissions and “stealing” the VM’s virtual disk for password file cracking purposes.
The team was forced to modify its defense policy when, on the second day of the exercise, attackers (team two) successfully logged on to the team’s machine. Evidence of intrusion was gathered, and permission for an hour of off-line time was obtained. It was believed that the laboratory instructions indicated that modification to the team’s machine was allowed during the “down time,” and so the account passwords were changed. This action was later disputed by the attacking team as “cheating,” although it was ultimately judged by the highest authority to be a reasonable interpretation of the rules, if not of their original intent, and so was allowed to stand without remedial action.
Because of the nature of this intrusion, it was suspected that one or more team resources were compromised. A new login password was emailed to the contest judge in an encrypted zip archive, with previously known privately shared information used as the decryption pass phrase. Additionally, the new account passwords were not emailed to other team members, but were held only by the person maintaining the team’s VM. The SSH client software was reconfigured to refuse any connection that did not use SSH protocol version two; the reason for this will be examined further in the forensic analysis section. Finally, the Citrix environment was treated as compromised, and so no logins were initiated into the team’s VM for the rest of the exercise. These defensive steps ultimately proved successful, as no other attempts were made to enter the team’s machine.
Plan of Attack
Before the red teaming exercise began, we started passive reconnaissance by inspecting the previous laboratory reports that team four had submitted to Professor Liles’ blog. We used these to look for information leakage concerning operating system type, passwords, and security methods used in previous labs. Laboratory five required each team to choose an “as-is” operating system and attempt to exploit it; the teams were then required to select a security document pertaining to the operating system they chose. For lab five, team four chose the Windows XP SP3 operating system and NIST document SP 800-68 to secure it. We believed it unlikely that team four would put in the extra time and effort required to select a different operating system and vendor documentation for lab seven. We also located the user name and password that they had assigned to their systems in lab one. Although we felt it unlikely that they would reuse the same user name and password, it was worth noting.
Once the red teaming exercise began, we started conducting scans for open ports on the target system using both Nessus and Nmap. To avoid detection, we conducted the scans late at night, when the system was unlikely to be monitored. Scans conducted on team four’s system late Wednesday night and on Thursday showed that there were no open ports. Since this was our only avenue of attack at the application layer, we began exploring other options for compromising the target.
One of our ideas for compromising the system involved making a copy of team four’s virtual disk file. The virtual disk file could be browsed using VMware Workstation so that the SAM file could be extracted. Once that was done, a password cracking tool such as Ophcrack, John the Ripper, or Cain and Abel could be used to brute force the password from the SAM file. Using the recovered user name and password, the virtual machine could be booted with VMware Workstation and any other information necessary to breach their system could be obtained. This method proved untenable, since the folder permissions did not allow us to copy the virtual disk file from team four’s folder.
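The offline attack envisioned above can be illustrated with a toy dictionary attack. This is a hedged sketch, not what Ophcrack or John the Ripper actually do internally: a real SAM entry holds an NT hash (MD4 over the UTF-16LE password), but since MD4 is unavailable in many modern hashlib builds, MD5 stands in here purely to show the mechanics. The wordlist and target are invented for illustration.

```python
import hashlib

def toy_hash(password: str) -> str:
    # Stand-in for the NT hash (really MD4 over UTF-16LE); MD5 is used
    # here only so the sketch runs on any Python build.
    return hashlib.md5(password.encode("utf-16-le")).hexdigest()

def dictionary_attack(target_hash: str, wordlist):
    """Return the first candidate whose hash matches the target, else None."""
    for candidate in wordlist:
        if toy_hash(candidate) == target_hash:
            return candidate
    return None

# Hypothetical example: recover a password from a hash pulled out of a
# copied SAM file.
wordlist = ["letmein", "Pa88word", "hunter2"]
stolen_hash = toy_hash("Pa88word")
print(dictionary_attack(stolen_hash, wordlist))  # -> Pa88word
```

Tools like Ophcrack speed this loop up enormously with precomputed rainbow tables, but the underlying guess-hash-compare cycle is the same.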
We also investigated vulnerabilities within VMware itself. We were able to find a few reported vulnerabilities associated with VMware; however, only one could possibly assist us, as the others were simply denial of service attacks (CVE-2009-1146, CVE-2008-3761) (“VMware Products,” 2009). A path traversal vulnerability within VMware could allow a user within the virtual machine unauthorized access to the host file system (CVE-2009-1147) (“VMware Workstation,” 2007). By creating a shared folder with the host operating system, a user within the VM could craft a specially constructed path name to gain access to the host folders. This could be used to access the virtual disk file of our target (Jackson, 2008). Unfortunately, this method of privilege escalation did not work when running Windows XP in the virtual environment. We attempted to mount the shared folder using nUbuntu; however, it deadlocked VMware during two consecutive attempts.
A network scan of the target system on Friday showed that the results had changed from “no open ports” on Thursday to “all ports are filtered” on Friday. The change in scan results indicated that team four might have activated a firewall sometime between the two scans. We were hopeful that team four would also activate their remote login as required by the lab specifications. Before accusing team four of not following the laboratory instructions, we needed to be certain that a scan would not reveal the open port; that is, that it was not somehow hidden by Windows firewall. According to Microsoft documentation, Windows firewall uses a stateful packet filtering system. When a network request is made from the system to a remote computer, a port is opened to allow for a response. Windows firewall monitors these requests and only accepts packets on that port whose source IP address matches the destination IP address in the initial request. This keeps unsolicited packets from entering through that port. To allow for outside connections, however, it is necessary to create an exception within the firewall, which opens a single port to requests from outside traffic. Windows firewall allows an exception to be scoped by port and source IP address, but not by protocol (Davies, 2005). We tested this hypothesis by configuring Windows firewall on a Windows XP SP3 system with a configured exception, then ran a port scan to ensure that the open port was visible when a specific IP address had not been included in the exception. Since the instructor had not publicly furnished an IP address, we believed that either no exception had been made or no remote login service had been configured.
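The stateful behavior described in the Microsoft documentation can be modeled as a small state machine. This is our own illustrative sketch, not Microsoft’s implementation: an outbound request opens a return slot for that specific peer, and inbound packets are accepted only if they match a slot or a configured exception (the addresses and port numbers are invented).

```python
class StatefulFirewall:
    """Minimal model of stateful packet filtering with port exceptions."""
    def __init__(self, exceptions=None):
        self.exceptions = set(exceptions or [])   # ports open to any source
        self.pending = set()                      # (remote_ip, local_port) slots

    def outbound(self, remote_ip, local_port):
        # An outgoing request opens a return slot for that peer only.
        self.pending.add((remote_ip, local_port))

    def inbound_allowed(self, source_ip, dest_port):
        if dest_port in self.exceptions:
            return True
        return (source_ip, dest_port) in self.pending

fw = StatefulFirewall(exceptions=[22000])
fw.outbound("10.0.0.5", 3135)
print(fw.inbound_allowed("10.0.0.5", 3135))   # True: matching reply
print(fw.inbound_allowed("10.0.0.9", 3135))   # False: unsolicited packet
print(fw.inbound_allowed("10.0.0.9", 22000))  # True: exception port
```

This is why our scans could tell the difference: an exception port answers any scanner, while purely stateful filtering shows nothing listening.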
We emailed Professor Liles and team four to verify whether team four did, in fact, provide a remote login; Professor Liles confirmed that they had not. He emailed back a short time later advising that team four had by then sent him login information; however, he was unable to log in. We verified through a port scan that team four still did not have any listening ports on their virtual machine. Another scan a short time later revealed a listening port at number 3389 (ms-terminal-services). A simple Google search showed that port 3389 is the default port for Windows Remote Desktop, and a quick login attempt from a Windows XP VM on the network confirmed this.
Our initial login attempt to team four’s Remote Desktop was made using the user name “administrator” and the password “Pa88word,” which team four had provided in their lab one write-up. We were unable to log in using these credentials, and so began researching known vulnerabilities associated with Remote Desktop. Two vulnerabilities were found; the first involved using a buffer overrun to create a denial of service attack (“Microsoft Security Bulletin,” 2005). A denial of service attack would not offer us any benefit, since our objective was to breach the system. The second concerned possible information leakage (“Microsoft Security Bulletin,” 2002). Both of these vulnerabilities were quite old and likely already patched. We decided to attempt to capture the user name and password during a login using the security tool Cain and a man-in-the-middle attack. We did this by performing ARP poisoning of our target’s MAC address while listening for a login attempt.
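The ARP poisoning underlying the Cain attack works by flooding forged ARP replies that bind a victim’s IP address to the attacker’s MAC. As a sketch of the mechanics, the snippet below builds the 28-byte ARP reply payload by hand (the addresses are invented for illustration); a tool like Cain simply sends frames like this at both victims until their ARP caches point at the attacker.

```python
import struct

def forge_arp_reply(sender_mac: bytes, sender_ip: bytes,
                    target_mac: bytes, target_ip: bytes) -> bytes:
    """Build the 28-byte payload of an ARP reply (opcode 2) for Ethernet/IPv4."""
    htype, ptype, hlen, plen, oper = 1, 0x0800, 6, 4, 2
    return struct.pack("!HHBBH6s4s6s4s", htype, ptype, hlen, plen, oper,
                       sender_mac, sender_ip, target_mac, target_ip)

# Poisoning sketch: claim the gateway's IP (10.0.0.1) with the attacker's
# MAC, so the victim routes gateway-bound traffic to the attacker instead.
attacker_mac = bytes.fromhex("aabbccddeeff")
gateway_ip   = bytes([10, 0, 0, 1])
victim_mac   = bytes.fromhex("112233445566")
victim_ip    = bytes([10, 0, 0, 20])
pkt = forge_arp_reply(attacker_mac, gateway_ip, victim_mac, victim_ip)
print(len(pkt))  # 28
```

Because ARP has no authentication, nothing in the protocol stops a host from “answering” for an address it does not own; that is the entire basis of the interception.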
We also began researching tools to brute force the Remote Desktop login. Hydra appeared to be the most promising: it uses tables of possible login credentials and works with many different remote login protocols. A closer look, however, showed that it did not work with Remote Desktop. In fact, searches for online password crackers that work on Remote Desktop revealed nothing useful. Since Remote Desktop uses the same protocol as Terminal Services, we searched for tools to brute force a Terminal Services login. Two programs looked promising, TSCrack and TSGrinder (Gates, 2009). Only one site could be found from which to download TSCrack, and the attempted installation crashed the VM, making it unbootable. Once we copied a new VM and reconfigured it, we turned our focus toward TSGrinder. After installing TSGrinder, we attempted to run it against our target system; on numerous attempts, it reported that it could not get a handle to the remote login page.
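What tools like Hydra and TSGrinder attempt reduces, at its core, to an online guessing loop. Below is a hedged sketch against a mock login function; the credentials are invented, and a real Remote Desktop endpoint adds protocol handshakes, throttling, and account lockout policy that this toy version ignores entirely.

```python
from itertools import product

def brute_force_login(try_login, usernames, passwords):
    """Try every (user, password) pair until the service accepts one."""
    for user, pw in product(usernames, passwords):
        if try_login(user, pw):
            return user, pw
    return None

# Mock of a remote login endpoint, purely for illustration.
def mock_rdp_login(user, pw):
    return (user, pw) == ("administrator", "Pa88word")

hit = brute_force_login(mock_rdp_login,
                        ["administrator", "guest"],
                        ["password", "Pa88word", "letmein"])
print(hit)  # ('administrator', 'Pa88word')
```

The practical difficulty we ran into was not this loop but the protocol layer: TSGrinder could not even obtain a handle to the login screen, so no guesses were ever submitted.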
We continued our attempt to capture the Remote Desktop login using Cain with ARP poisoning; at this point, however, we believed that we had exhausted all other feasible options. There were some options that we believe would have worked, but that would have been unethical. For example, we could have captured the Citrix login information of team four members using Cain. Once we cracked the password hashes, we could have logged into Citrix with their credentials and had access to their target VM’s virtual disk. As a proof of concept, we captured the NT hashes of two of our own team members and were able to brute force the passwords within about five minutes each. We also considered brute forcing their student email to capture communication between the team members and the instructor. Another option involved crafting a spoofed email that appeared to be from Professor Liles; the email could direct them to visit a malicious web site from their target system, or simply advise them to log in using Remote Desktop (so that we could capture their login credentials). This too would have fallen outside the scope of acceptable behavior.
Forensic Analysis
While the team’s VM was not compromised according to the rules of the exercise (no files were placed on C:), the team felt a forensic investigation of the unexpected password compromise to be important, both from a defensive standpoint and as a fulfillment of the laboratory exercise. This proved somewhat challenging, as the compromised elements seemed to lie outside our domain of control, and so only best guesses could be made.
The initial intrusion was noticed through a number of artifacts present on the team’s VM. Repeated login actions, both successful and unsuccessful, were found in the log files. Additionally, a number of zombie processes were noted, including “notepad.exe,” which most likely resulted from starting this application from a command line with no graphical user interface present (the team was able to replicate this scenario exactly by this method, and so assumes this is what occurred). Most telling, however, was a text file named “team2.txt” left on the non-privileged user’s desktop. The team was unsure whether this was actually an attempt by team two or a test by Professor Liles, as the blatant lack of stealth did not seem in accord with the laboratory goals for an attacking team. All of these are illustrated by Figure 1, which was submitted as evidence toward the team’s previously mentioned “down time” request. In actuality, uncertainty existed even at this point that team two was involved, and it was fully expected that Professor Liles would reply that this had been a test for rules conformance. When the request for offline time was granted, it became more certain that this was the work of team two; this was ultimately confirmed by the “cheating” complaint of the next day.
It was after this incident that the team increased its level of network reconnaissance. Extensive examination of the network was done, and it became obvious that massive ARP poisoning, likely by multiple parties, was taking place. A simple “arp -a” executed from the Windows console at different times during the exercise illustrates the changing MAC addresses (Figure 2). A series of “nslookup” commands recorded at this time revealed the machines from which traffic was being redirected (if DNS could even be trusted at this point):
M:\Program Files\Support Tools>nslookup 220.127.116.11
M:\Program Files\Support Tools>nslookup 18.104.22.168
M:\Program Files\Support Tools>nslookup 22.214.171.124
Additionally, Wireshark was used to analyze ARP traffic, and IP conflicts were reported.
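The comparison we performed by eye on successive “arp -a” listings can be automated: take snapshots of the IP-to-MAC table at intervals and flag any IP whose MAC changes, which (barring legitimate hardware swaps or DHCP churn) is a classic ARP poisoning indicator. The snapshots below are invented for illustration.

```python
def changed_bindings(before: dict, after: dict):
    """Return IPs whose MAC address differs between two ARP table snapshots."""
    return sorted(ip for ip in before
                  if ip in after and before[ip] != after[ip])

snapshot_1 = {"10.0.0.1":  "00-1a-2b-3c-4d-5e",
              "10.0.0.20": "00-11-22-33-44-55"}
snapshot_2 = {"10.0.0.1":  "aa-bb-cc-dd-ee-ff",   # gateway MAC changed
              "10.0.0.20": "00-11-22-33-44-55"}
print(changed_bindings(snapshot_1, snapshot_2))  # ['10.0.0.1']
```

A defensive extension of the same idea is to pin static ARP entries for key servers, so that forged replies cannot overwrite the bindings at all.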
While we were uncertain as to the exact function of the machines which were compromised on the network, we assumed from the host names that one was a web portal, another possibly the Citrix application server, and the third an authentication or session database of some sort. We inferred this from web searches on Citrix based deployments, typified by sites such as: http://www.petri.co.il/record-audit-terminal-citrix-rdp-sessions-observeit-product-overview.htm . The member of the team doing this research was relatively unfamiliar with Citrix deployments, and cannot be certain the assumptions made were accurate. Although not noticeable in this specific instance, it was realized that the file server for network shares appeared to be affected by traffic redirection as well.
With these details in mind, the team became extremely paranoid about all data which could, in theory, cross the poisoned network. It was at first assumed that the SSH login connection had been compromised, as research from an offensive angle revealed that programs such as Ettercap and Cain and Abel are capable of man in the middle attacks on SSH connections by requesting an SSH protocol version lower than two: previous protocol versions are susceptible to this type of attack. It was noted that PuTTY, the client side program used, will by default downgrade from connection protocol version two if requested by the party being connected to. The PuTTY settings were modified to eliminate this “downgrade” vulnerability, and the assumption was made that this was the means by which team two acquired the login name and password. However, an analysis of connection logs to the team’s VM via SSH revealed that no connection was ever made to the machine over the “outside” network using the compromised account up to the point that the break-in occurred. Hence no interception could have occurred up to the time of this event.
Further along this line of inquiry, the login records appeared to indicate, by both behavior and IP address, that the same party which initiated the very first login of the attack also planted the file on the desktop (although this cannot be known for certain from the IP address alone, as it can be spoofed). By behavior, we note that the first hour of login attempts unsuccessfully employed the login name non_root, when the actual login was not_root. We think it likely that this was due to the attacking team misreading a capture file, or to errors induced by the verbal transmission of information. It might also indicate that the name was transcribed from short term memory, such as with over-the-shoulder peeking. A reverse IP lookup anomaly was noted by the SSH server in the logs during this attack period, but it occurred after the first successful login by the aforementioned attacking address: so even if a session was intercepted at this anomaly point, that interception was not the root cause of compromise. It became apparent that the probability of this being the compromising means was rather low.
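The near-miss between non_root and the real login not_root supports the transcription-error hypothesis: the two names differ by a single character. A quick edit-distance check makes this concrete.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

print(levenshtein("non_root", "not_root"))  # 1
```

A distance of one is exactly what a misread capture file, a misheard word, or a half-remembered glance over a shoulder would produce, whereas a cracked password file would have yielded the name verbatim.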
A possible means of compromise was considered in the use of email based communication. It was unknown whether university email was capable of being intercepted, although encryption precautions were taken on this paranoid notion. A more likely scenario is the use of sniffed and cracked network credentials to directly access the relevant email accounts (which in this case have the same login usernames and passwords), although we did not believe team two would utilize this questionable technique. Therefore, we ruled this a low probability source of compromise as well.
Further, the method of using sniffed credentials to access the team’s account file shares, and copying the running VM’s main virtual disk, was examined. The team verified that it is possible to copy the virtual disk while the VM is running (this was tried successfully in an experiment). Also, it was certain that some of the team members’ credentials were available: the usernames were advertised, and the login passwords used were the default assignments, which were of a well known format and were very weak (the hashes were cracked in about five minutes, as noted in the attack plan section). However, as the infamous LanManager hash vulnerability was eliminated by security policy, a brute force approach was presumably the only viable method of attacking the team’s VM configuration (Gilman, 2004). Since the compromised password was con2->sys6=TRUE; it is nearly certain that this was not cracked, owing to its complexity and the short time span of the exercise; and so this was ruled a very low probability means of compromise.
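A rough keyspace calculation supports the “nearly certain” judgment above. The guess rate below is an assumption chosen for illustration; the point is the relative magnitudes, not the exact figures.

```python
def years_to_exhaust(length: int, alphabet: int, guesses_per_sec: float) -> float:
    """Worst-case time to brute force the full keyspace, in years."""
    keyspace = alphabet ** length
    return keyspace / guesses_per_sec / (3600 * 24 * 365)

RATE = 1e9  # assumed: one billion offline guesses per second

# A short, lowercase-only default password vs. a 16-character password
# drawn from the full printable ASCII set (95 symbols), in the style of
# con2->sys6=TRUE;
weak   = years_to_exhaust(8, 26, RATE)
strong = years_to_exhaust(16, 95, RATE)
print(weak < 1)      # True: the weak keyspace falls in minutes
print(strong > 1e12) # True: the strong keyspace is trillions of years
```

This is consistent with our own proof of concept: the weak default passwords fell in about five minutes, while the compromised password’s keyspace is far beyond the few days available in the exercise.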
Finally, we investigated the possibility of the Citrix session being compromised directly, and thought this the most likely. It appeared in Wireshark that all the traffic for the session was arriving in plain text to the connected client over the compromised network; therefore, we think it likely that the team’s VM user account name and password were “sniffed” off the network using Ettercap or an equivalent. Essentially, we believe all keyboard input was being transmitted overtly on the network, and so judge this the most likely means of compromise. It is also possible that the Citrix session was accessed directly using specific session usernames and cracked password hashes, but we feel the less ethically questionable means most probable when coupled with its simplicity.
Results and Discussion
Simply put, the team (team three) was unsuccessful in its attempts to compromise team four’s virtual machine. On the defensive side, we were ultimately successful against team two’s intrusions, although team two did enter our system via remote login. The forensic analysis of this intrusion showed that the user name and password utilized were in all likelihood obtained by monitoring team three’s Citrix sessions.
This lab has verified what we have seen in previous labs: most system vulnerabilities occur at the application or human-interaction layer. Attempting to access a system with no network-facing applications and no user interaction occurring on it is analogous to laying siege to the walls of an empty city. Even lower-level reconnaissance such as packet sniffing and ARP spoofing requires some network traffic from a user or application to be effective. A properly configured operating system, left to itself, is essentially secure.
With relation to this exercise, the team must admit that, with the benefit of hindsight, we would change a number of things regarding methods and preparation. There can be no doubt that it was a mistake on this team’s part not to map the network which served as the “battlefield” well before the action began. Because we did not know what the “normal” network should look like, we found it difficult to ascertain when something was amiss: we could not trust IP addresses, DNS queries, or MAC addresses, much less network-based services. A bit of reconnaissance and research before the exercise began would have put us in a much better position defensively, including the option of using static ARP tables for key network servers. On the offensive side, the team probably should have begun ARP-poisoned traffic interception from the start of the exercise. However, given how unstable the network appeared even when this team was not ARP poisoning, that approach may have proved ultimately infeasible due to the increased network disruption.
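The baseline idea above can be sketched simply: record the trusted IP-to-MAC mappings for key servers before the exercise, then diff the live ARP cache against that record. The following is an illustrative Python sketch only; the addresses are invented placeholders, not the lab network’s actual hosts.

```python
# Trusted IP-to-MAC baseline, recorded before the exercise begins
# (addresses are invented placeholders).
baseline = {
    "192.168.1.1":  "00:11:22:33:44:55",  # gateway
    "192.168.1.10": "00:11:22:33:44:66",  # DNS server
}

def find_poisoned(current):
    """Return the IPs whose observed MAC no longer matches the baseline,
    a strong hint that the ARP cache has been poisoned."""
    return [ip for ip, trusted_mac in baseline.items()
            if ip in current and current[ip].lower() != trusted_mac.lower()]

# After parsing `arp -a`, the gateway suddenly resolves to a new MAC:
current = {
    "192.168.1.1":  "de:ad:be:ef:00:01",  # changed -- suspicious
    "192.168.1.10": "00:11:22:33:44:66",
}
print(find_poisoned(current))  # ['192.168.1.1']
```

With such a baseline in hand, a spoofed gateway or DNS server stands out immediately, rather than being indistinguishable from normal churn on an unfamiliar network.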
Further consideration exposes the risk of using a direct Citrix session to manage the defensive aspects of the team’s VM. Although it is not known whether it would technically violate the rules of the exercise, we would consider using a VPN connection to the lab network, with an additional tunneled login session to the actual VM for managing the defense. This would eliminate the Citrix session-snooping vulnerability while still allowing the machine to be managed effectively. We wonder whether a few other teams employed this method, as we saw no indication that any members of those teams logged into the Citrix server for the entire span of the exercise.
Finally, we think it important to emphasize that the “defensive” and “offensive” aspects of the exercise often proved complementary. As noted, research into offensive methods allowed us to harden our system against those same types of attack from the outset. Additionally, the forensic analysis used in reverse engineering an intrusion proved to be the springboard for new approaches on the offensive front. We do not believe this phenomenon to be unique to an exercise such as this: it is intrinsic to any force which, depending on circumstance, fills both the attacker’s and the defender’s role. It is important to note, however, that such a dual-role force must have effective communication between the actors in each role: the complementary nature of these activities becomes apparent only if the information gained is shared mutually.
Problems and Issues
We experienced intermittent network interruptions on our VMs, which we believe were due to ARP poisoning by the various teams. We also lost two days attempting to access team four’s remote login because of our reluctance to accuse them of not supplying one. As previously stated, one of the virtual machines crashed while loading exploit tools and would not boot afterwards. Additionally, one knowledgeable team member could not actively participate in the penetration testing because his administrative access to the Citrix system could be considered an unfair advantage. Finally, the issue of trust as applied to network resources was a continual and unresolved problem for the duration of the exercise.
In conclusion, team three was unsuccessful in penetrating team four’s virtual machine, though login interception and VMware file exploits were attempted. The team’s own machine was successfully defended, although team two’s ability to log in to the system gave some cause for concern. A forensic analysis of this attack points to Citrix session eavesdropping as the most likely source of the information leakage. Furthermore, the team found a synergy between the defensive and offensive techniques, and used it to strengthen both postures. Some problems were encountered during the exercise, most notably network connectivity issues and a missing team member. Finally, it was determined that a significant error was made in not mapping the lab network in its “normal” state, as no pre-existing baseline existed by which to judge trusted resources.
Charts, Tables, and Illustrations
Figure 1: Annotated screenshot submitted as proof of team two’s intrusion.
Figure 2: MAC address changes noted between command execution intervals.
References
Bailey, M. W., Coleman, C. L., & Davidson, J. W. (2008). Defense Against the Dark Arts. Proceedings of the 39th SIGCSE Technical Symposium on Computer Science Education, 315-319.
Casey, E. (2006). Investigating sophisticated security breaches. Communications of the ACM, 49(2), 48-54.
Davies, J. (2005, December 27). Chapter 13 – Internet Protocol Security and Packet Filtering. Retrieved July 24, 2009, from Microsoft Technet: http://technet.microsoft.com/en-us/library/bb727017.aspx
Gates, C. (2009). Tutorial: MS Terminal Server Cracking. Retrieved July 23, 2009, from The Ethical Hacker: http://www.ethicalhacker.net/content/view/106/24/
Gilman, C. (2004). LMCrack – cracked in 60 seconds. Hitchhiker’s World, 9, Article 4. Retrieved July 28, 2009, from http://www.infosecwriters.com/hhworld/hh9/lmcrack.htm
Hohl, F., & Rothermel, K. (1999). A Protocol Preventing Blackbox Tests of Mobile Agents. Tagungsband der ITG/VDE Fachtagung Kommunikation in Verteilten Systemen (KiVS’99). Springer-Verlag.
Holland-Minkley, A. M. (2006). Defense Cyberattacks: a Lab-Based Introduction to Computer Security. Proceedings of the 7th Conference on Information Technology Education, 39-45.
Jackson, J. (2008, February 28). VMware vulnerability allows users to escape virtual environment. Retrieved July 24, 2009, from Government Computer News: http://gcn.com/articles/2008/02/28/vmware
Microsoft Security Bulletin MS02-051. (2002, September 18). Retrieved July 24, 2009, from Microsoft: http://www.microsoft.com/technet/security/bulletin/MS02-051.mspx
Microsoft Security Bulletin MS05-041. (2005, August 9). Retrieved July 24, 2009, from Microsoft: http://www.microsoft.com/technet/security/Bulletin/MS05-041.mspx
Russel, C. (2004, March 26). Migrating UNIX Applications to Windows via Microsoft Services for UNIX. Retrieved July 28, 2009, from Microsoft Technet site: http://technet.microsoft.com/en-us/library/bb463202.aspx
Upton, S. C., Johnson, S. K., & McDonald, M. J. (2004). Breaking Blue: Automated Red Teaming Using Evolvable Simulations. GECCO.
VMware Products Multiple Vulnerabilities. (2009). Retrieved July 24, 2009, from Tenable Network Security: http://www.nessus.org/plugins/index.php?view=single&id=36117
VMware Workstation Shared Folders Feature Lets Users Read/Write Arbitrary Files. (2007, April 30). Retrieved July 24, 2009, from Security Tracker: http://www.securitytracker.com/alerts/2007/Apr/1017980.html