Tech 581W Computer Network Operations, Laboratory 1: Team 3

Abstract

The purpose of this exercise was to create an experimental environment in which penetration testing could be conducted in a controlled way. Four virtual machines, three Microsoft Windows variants and one Linux distribution, were selected and linked together on a virtual subnet in the VMware Workstation software suite. Various sources of penetration tools were found and evaluated, and a subset of these tools was selected for future use. The tool selection was categorized by a nine-layer extension of the Open Systems Interconnection (OSI) model, with a model layer chosen to classify each tool. Additionally, each tool was then assigned a specific coordinate of the standard McCumber three-dimensional information security model. This ‘laboratory’ setup will be the basis of future penetration testing experiments.

Introduction

Security of sensitive information has been a concern which knows no timeframe. From the earliest recorded accounts of human interactions, the concept of ‘data security’, that is, the protection of knowledge which would grant an adversary a substantial advantage, has been present, as have the consequences of failing to protect that knowledge. In the current era, these concerns have become increasingly common, even to the point of being relatively mundane. This is due partially to the sheer amount of information generated every day, which exceeds by orders of magnitude that of a similar timeframe even just fifty years past. Additionally, the vast array of contemporary information has brought with it a much wider range of consequences when the security of sensitive data is breached, many unknown or even impossible in the past. These consequences can be dire, including instantaneous poverty via the theft of electronic financial assets, the total collapse of national transportation infrastructures, and the immediate execution of huge population groups via modern weapons systems.

It is with the severe consequences of these security breaches in mind that many researchers have chosen to adopt a more proactive stance toward data security, namely in the concept of penetration testing. Penetration testing typically involves a group of professionals chosen to conduct outside attacks against the security of a network system which contains substantial sensitive data assets. This ‘red team’ uses various methods and exploits known to exist in ‘real’ adversaries’ repertoires to attempt a penetration of the ‘secure’ network’s defenses. Any breaches attained are described to the target network’s maintainers, with the intent that these defects can be rectified before a truly sinister entity exploits them for real harm. This concept of penetration testing is the motivation for this laboratory experiment, its fundamental concepts and procedures being simulated in the microcosm of a virtual networked environment. In this exercise, we wished to address three general research concerns, the first being the question: what are the steps and materials necessary to create a basic virtual penetration mockup? Second, we sought to examine the scope and relevance of the penetration tools available to the security community. Third, we attempted to classify the specific threat area of tools judged suitable for use in the test environment in relation to both network and security theoretical models.

Literature Review

Article: Red Teaming

One of the primary techniques used to test a system is Red Teaming. Red Teams help us to think like a computer hacker, using the same resources and knowledge. A red team will simulate a malicious attack while another team tries to defend against it. An example of Red Teaming in use occurs at Darmstadt University of Technology in Germany, which since 1999 has conducted the Hacker Contest, a lab course in which teams attack other systems and analyze the attacks on their own systems. Similarly, the US Military Academy at West Point conducts cyber defense exercises similar to capture the flag. Red team hacking should become more difficult over time, raising the bar on security.

Article: Components of Penetration Testing

There are various components of penetration testing. External network hacking involves attacks from outside the local network; the most common of these attacks target firewalls, routers, and web servers. Internal network hacking involves hacking from within the local area network, often conducted inside the company using its own equipment. Application testing probes custom software for security vulnerabilities. Wireless LAN assessment involves war driving, the act of looking for insecure wireless networks. Social engineering is also sometimes used; it involves making contact with people within the organization to try to gain sensitive information that would allow access to the network. Trashing simply involves looking through the garbage in an attempt to find sensitive information that has been discarded.

Article: Creating a Testbed for Red Teaming and Experimentation

Since March 2004, the University of Southern California has maintained a testbed known as DETER for testing network security. Because of the containment of their system, it has been particularly useful in testing malicious code and denial of service attacks. The level of containment can be scaled based on the level of threat. Experimenters are required to state the nature of their experiments prior to conducting them so that the appropriate level of containment can be set. The system can provide remote access for experimenters while not allowing the system to route packets outside the nodes of the testbed; this is accomplished by a firewall between the testbed and the internet. The system is designed to mimic a small version of the internet. The original disk images containing the operating system, configuration files, and input/output states of nodes can be restored to return the system to its original state. This allows for experiments to be repeated precisely or modified in a controlled manner. The results of experiments are archived for future use. The system includes tools to measure the various metrics of interest for comparison with other experiments. The system has been used for testing worm behavior and both open source and commercially available antivirus software.

A similar testbed system is located at Indiana University of Pennsylvania. Their lab, known as a cyberwar lab, is a standalone lab running Linux on all of the machines. The goal is for one team of students to try to gain root access while the other team tries to defend against it. The attacking team attempts to map the network using ping, nmap, nslookup, traceroute, and dig, and uses Nutcracker to attempt to find passwords. A third team works on forensics. Although Linux was chosen because it is free and open source, it would have been more realistic to build a network containing various operating systems. They also found that a Linux server that does not provide any services is extremely secure, however unrealistic that is in a business environment.
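The first-phase network mapping such teams perform with ping and nmap can be approximated in miniature by a TCP connect scan. The sketch below (standard library only; the function name is our own, and any target must be a host you are authorized to scan) probes a list of ports the way a basic `nmap -sT` run would:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Minimal TCP connect scan: try a full three-way handshake on each
    port and report the ports that accepted the connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success, an errno on failure,
            # so a refused or filtered port simply yields a nonzero code.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A connect scan is the noisiest of nmap's techniques, since every probe completes a full handshake that the target can log; stealthier SYN scans require raw sockets and privileges.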

The University of Arkansas at Little Rock has developed a testbed using virtual machine technology. The advantage is a cost-efficient system that can be isolated. It allows for the simulation of a large network with minimal hardware requirements, and for disposable operating systems that can be discarded and replaced once they are infected. Examples of virtual machine products are VMware, Xen, QEMU, and Microsoft Virtual PC.

A course named Cyberattacks is available at Washington and Jefferson College for non-IT majors. In addition to classroom activities, students conduct labs involving viruses and antivirus software, spyware and adware, and password cracking. Students use ‘script kiddie’ malware-creation tools to create custom viruses and worms, then use antivirus software to determine whether their custom malware can be detected.

Article: Threat Assessment using Modeling and Automated Attack

An article by Ray, Vemuri, and Kantubhukta suggests using object-oriented modeling techniques to assist red teams in understanding and planning for attacks. A threat model is a functional decomposition of an application using a dataflow diagram to demonstrate possible vectors of attack. The types of threats for each vector are then identified. An attack tree is constructed to model the steps in the attack: the root of the tree represents the compromised system, the child nodes are the possible steps involved in conducting the attack, and the leaves are the different approaches to achieving the goal of compromising the system. The automated attack model uses sequence and state-chart diagrams along with XML to represent the attack and defense methods. Pseudocode is generated to show the programming logic. Information from the diagrams is stored in a database in XML format.
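The attack-tree structure described above maps directly onto a small recursive data structure. The sketch below is ours, not the article's; the node labels are hypothetical examples of a goal, its attack steps, and the concrete approaches at the leaves:

```python
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    """One node of an attack tree: the root is the compromised-system goal,
    interior nodes are attack steps, leaves are concrete approaches."""
    label: str
    children: list["AttackNode"] = field(default_factory=list)

    def leaves(self):
        """Enumerate the concrete approaches reachable under this goal."""
        if not self.children:
            return [self.label]
        out = []
        for child in self.children:
            out.extend(child.leaves())
        return out

# Hypothetical tree: root goal, two attack steps, three leaf approaches.
root = AttackNode("Obtain root on file server", [
    AttackNode("Exploit network service", [
        AttackNode("Buffer overrun in FTP daemon"),
        AttackNode("Default SNMP community string"),
    ]),
    AttackNode("Steal credentials", [
        AttackNode("Crack password hashes offline"),
    ]),
])
```

Walking the leaves gives a red team its checklist of concrete attacks to attempt, while the interior nodes group them by strategy.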

Article: Common Omissions

In a paper entitled “Broadening the Scope of Penetration Testing Techniques”, Ron Gula identifies several areas that penetration testers typically miss. The 14 items that he lists are DNS spoofing, third-party trust, custom Trojan horses, database exploits, routing infrastructure, testing the intrusion detection system, web site server-side applications, TCP hijacking, testing the firewall, ISDN phone lines, network brute-force testing, testing non-IP networks, Ethernet switch spoofing, and exploiting chat tools. One reason some of these are overlooked is that testing them may cause instability in the production system. Another is that the tester may not want to involve unsuspecting third parties; an example would be a custom worm attached to an email which makes its way to an employee’s home computer. Another area that is not always explored is zero-day exploits, which are new attack vectors not previously discovered. Because the degree of difficulty is greater, these are most often created by experienced network penetration testers.

Article: Vulnerability Testing Using Fault Injection

In his paper “Vulnerability Testing of Software Using Fault Injection”, Aditya P. Mathur describes fault injection: changing the software environment to test the fault tolerance of software systems. Fault injection is “the deliberate insertion of faults into an operational system to determine its response”. The faults are designed to mimic faults that may occur during the intended use of the system. The Environment-Application Interaction (EAI) fault model emulates environment faults that are likely to cause security violations, mimicking what a real attacker would do. The environment fault becomes input into the application, thus causing a fault within the application. An environment fault may also be malicious code that is called by the application, thus causing a security fault directly in the environment.
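A minimal sketch of this environment-fault idea, assuming a toy 'application' that trusts an environment variable. Both the application and the injected faults below are our illustrative inventions, not Mathur's examples:

```python
import os

def parse_config_path():
    """Toy application under test: trusts the HOME environment variable."""
    home = os.environ["HOME"]       # raises KeyError if the variable is absent
    if len(home) > 4096:
        raise ValueError("path too long")
    return home + "/.apprc"

def inject_environment_fault(fault):
    """Run the application once in a deliberately perturbed environment.

    fault=None removes the variable entirely; any string replaces its value.
    Returns 'survived' or the name of the exception the fault provoked."""
    saved = os.environ.get("HOME")
    try:
        if fault is None:
            os.environ.pop("HOME", None)   # fault: environment input missing
        else:
            os.environ["HOME"] = fault     # fault: hostile environment value
        try:
            parse_config_path()
            return "survived"
        except Exception as exc:
            return type(exc).__name__       # observed failure mode
    finally:
        # Restore the original environment so faults do not accumulate.
        if saved is None:
            os.environ.pop("HOME", None)
        else:
            os.environ["HOME"] = saved
```

Each injected fault models a hostile environment rather than a hostile direct input, which is exactly the distinction the EAI model draws.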

Methodology

The first task was to set up a lab environment. In order to make the same systems available to the entire group, we used VMware Workstation over Citrix, with the pre-created virtual machines (VMs) provided by the CIT&G department. We used the virtual network settings tool to create an additional network with an address of 192.168.3.0/24, then disabled the virtual adapter for the host machine in order to free the .1 address normally taken by that adapter. Each of the Windows VMs was then given a static address as directed in the lab assignment using the Windows networking utility. We edited the /etc/network/interfaces file in the Debian VM to assign the appropriate address to it.
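The Debian edit amounted to a static stanza of the following form (the host portion of the address shown is illustrative; the actual assignment came from the lab handout):

```
# /etc/network/interfaces -- static address on the 192.168.3.0/24 lab subnet
auto eth0
iface eth0 inet static
    address 192.168.3.4
    netmask 255.255.255.0
    network 192.168.3.0
    broadcast 192.168.3.255
```

Running `ifdown eth0 && ifup eth0` (or rebooting the VM) applies the new address.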

The scope of the security tools available to the security community was examined in a tree-based search pattern. Initially, three individual root tree nodes were represented by three distinct sources: the first root node was the ‘BackTrack’ security tools compilation, the second was the FreeBSD ‘security’ and ‘net-tools’ ports collections, and the third was a tool list on the security website www.insecure.org. The sources and the tools presented in each were evaluated with regard to application in this exercise. Additionally, each tool examined was treated as a possible link to unknown tools by direct means, such as websites listed in tool documentation, or indirect means via the results of a web search engine query. Through this method, a tree-structured pattern of search emerged, with new sources emerging from those already discovered.

The classification of each tool with regard to security and network models was closely related to the above-mentioned search process. It must be confessed that the entire search-evaluation exercise was begun with the premise of achieving near-uniform distribution amongst the seven theoretical network layer classifications of the OSI model (taken as security ‘threat areas’). This led to a perceivable bias in the classification of some tools, as a number of layers exhibited a much greater variety of applicable tools than other layers. Some tools suffered from ‘repurposing’ simply due to the order in which they were discovered. In theory, ‘repurposing’ of a tool occurred if and only if its application was judged to be of equal worth at a different OSI layer than first apparent, but due to time constraints and incomplete knowledge of tool operation, some tools may not have been classified strictly by their strongest properties. Security model classification by the McCumber cube was done after the tools were classified into OSI layer ‘threat areas,’ though this classification often reflected the inherent difficulties of applying a theoretical framework to ‘messy’ real-world functionality.
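The classification scheme just described, one OSI ‘threat area’ plus one McCumber coordinate per tool, can be captured in a simple record. The sketch below (our own representation, with a few rows transcribed from Table 1.2) also tallies the uneven layer distribution noted above:

```python
from collections import Counter
from typing import NamedTuple

class ToolEntry(NamedTuple):
    name: str
    osi_layer: int      # OSI 'threat area', 0-8 in the extended model
    dimension: str      # McCumber axis 1: technology / policy-practice / human factors
    state: str          # McCumber axis 2: storage / processing / transmission
    goal: str           # McCumber axis 3: confidentiality / integrity / availability

# A few entries transcribed from Table 1.2.
catalog = [
    ToolEntry("Nmap", 3, "technology", "transmission", "confidentiality"),
    ToolEntry("Wireshark", 3, "technology", "transmission", "confidentiality"),
    ToolEntry("Yersinia", 2, "technology", "transmission", "integrity"),
    ToolEntry("Nikto", 7, "technology", "storage", "integrity"),
]

# Tool counts per OSI layer, mirroring the uneven spread across 'threat areas'.
by_layer = Counter(entry.osi_layer for entry in catalog)
```

Tallying `by_layer` over the full catalog makes the classification bias discussed above directly measurable rather than anecdotal.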

Results and Questions

The creation of the VMware Workstation based test environment proved straightforward. Virtual network configuration was easily accomplished through the ‘VMnetcfg’ utility, and appears to accurately reflect a ‘real world’ equivalent switched network topology. Furthermore, it appears trivial to add additional virtual machine images to the VM network, which will prove useful for the addition of specific guest machines configured to run standalone penetration testing distributions. The total number of images the current VMware Workstation host machine can run concurrently is unknown, but could be determined empirically if necessary.

The general OSI model classification criteria used are illustrated in Table 1.1. The detailed results of the tool evaluation procedure are presented in Table 1.2. As noted above, some OSI level ‘threat areas’ were addressed by many more tools than others, and some of this can be seen in the distribution of tools across the table categories. Additionally, the extended layers, layers zero and eight, represent ‘tools’ of an abstract or solely theoretical construction. It is assumed that these will lie outside the bounds of experimental scope, as actual application of many of these ‘tools’ would be unethical and illegal, issues of practicality aside.

It is notable that nearly all of the experimentally viable tools fall into the McCumber ‘technology’ category. This is to be expected, as ‘technology’ is a locally predictable force with definable limitations and strengths. Technology is also a subset of reality crafted by human minds; it is logical in its processes and controllable in the scope of its utilization. In many respects, technology resembles an ideal ‘virtual world’: it is designed to function according to the scientific method, it can logically be constructed and deconstructed at will, and it is nearly uniform in behavior among subclasses of entities due to standardization. In this respect, technological devices present the ideal ‘mass victim,’ in that one exploit is nearly guaranteed to work across the board for all same-class devices. This stands in contrast to the other McCumber classifications, as ‘policy/practices’ and ‘human factors’ exist in the ‘real world’ and exhibit neither standardization nor universal exploits, much less consistent functionality.

It is also true that many of the current security vulnerabilities exist only ‘because’ of the use of technology (i.e. electronic transaction vs. real goods bartering). Therefore it follows that these vulnerabilities can only be exploited by utilization or misuse of the technological construct—hence we use technology to defeat technology.  It is true that non-technical devices can ‘defeat’ technology, but often ‘defeat’ is not synonymous with ‘exploit’ (e.g. a hammer versus a circuit board).

There can be no doubt that the true effectiveness of penetration testing is compromised by a bias resulting from commonly accepted tools and known exploits. The standard penetration test uses the standard tools, and so finds the standard problems. To paraphrase a well-known saying: one never hears the fatal shot; so too, the truly effective and dangerous security exploit arrives suddenly and with no prior warning. While the use of existing penetration tools may be useful in preventing ‘copycat’ attacks, real proactive assessment requires innovation ‘beyond’ the standard accepted procedure. The self-selecting nature of current penetration testing techniques in reality provides the same class of protection as most signature-based anti-virus software: today’s vulnerabilities will only be detected by tomorrow’s update.

Problems

Of foremost concern, it was noted that a virtual test environment imposes limitations upon the extent to which ‘real’ penetration testing can be simulated. For example, many of the available tools and exploits addressed certain characteristics of hardware-based Cisco routers. At this time, it does not appear possible to simulate a Cisco router in the virtual environment; hence some of the more powerful network exploits will remain unexplored. This could be remedied by using real hardware in conjunction with VMware Workstation in ‘bridged’ network mode, but this begins to stretch the definition of a ‘virtual’ test environment, and hence violates one of the primary aims of the experiment. A somewhat acceptable solution may be to use the VMware Workstation ‘NAT’ network configuration, which must by its nature implement some routing functionality; or, a virtual machine could be configured to act as a router on the virtual network: but this still excludes the use of platform-specific tools.

Additionally, many of the tools examined within the scope of this exercise simply do not appear to fit within the scope of the virtual test environment. SIP eavesdropping, instant messaging snooping, email attacks: all of these assume a dynamic environment filled with exploitable real-time human-generated content. This seems impossible, or at the very least impractical, to simulate within the virtual test environment. Here too, many powerful tools may go untested due to the nature of the test environment.

Finally, as stated earlier, some problems were encountered in fitting the ‘attack area’ of tools into the general theoretical frameworks of the OSI model and the McCumber cube. A ‘closest fit’ was always chosen out of necessity, but many tools fit equally well in multiple categories, the difference lying only in the intent of the attacker. Theoretical models with higher levels of granularity would solve many of the classification problems, but much of the strength of the current models is in their relatively simplistic elegance. Yet again, one encounters the dissonance of elegant theory with ‘real world’ application.

Conclusions

In conclusion then, it was found that with the exception of specific hardware simulation and real dynamic human factors, VMware Workstation can be configured to provide a viable and ultimately useful virtual penetration testing environment. Additionally, a myriad of penetration tools were evaluated, selected, and classified by OSI layers and McCumber cube coordinates in preparation for penetration testing simulations. Even though some tool classification ambiguities were encountered, there is every expectation that each tool will be effective in regard to its targeted ‘threat area’ designation.

Charts, Tables, and Illustrations

Table 1.1 OSI Exploit Classifications and Examples


OSI Layer Technology Host/Media Layer Exploit Method
Layer 8 / People The protocol is cognition, politics, and process; the equipment is sneakers. N/A Social engineering, fraud, confidence games. Examples: phishing – phony e-mails soliciting information; shoulder surfing – watching someone’s screen as they enter information; Trojans – applications that appear innocuous but carry a malicious payload; flattery; intimidation; theatrics; impersonation; stealth; resentment; financial reward; idealism; deception; camaraderie; romance; extortion; confusion; trust; ambition; ego; discord; fear; chaos; overconfidence; misdirection; gossip; boredom; congeniality; anger; pride
Layer 7 / Application This is the FTP/HTTP/SMTP protocols. Host Layer Buffer overrun / execution stack exploits, rogue executable insertion; DNS/DHCP/RIP based attacks; search engine based reconnaissance. Examples: Google mail enum – uses Google to find e-mail addresses in a given domain; GHDB – uses Google to find sensitive information or vulnerabilities in websites; pirana – scans for vulnerabilities in e-mail content filters; relay scanner – looks for open SMTP relay servers; xspy – key logger; RainbowCrack – hash brute forcer; pwdump – outputs local account password hashes; John the Ripper – password cracker; Cain and Abel – password recovery tool; Dsniff – traffic monitor, extracts potentially useful data; Metasploit – exploit framework; whisker – CGI vulnerability scanner; Brutus – password cracker; inguma – vulnerability scanner aimed at Oracle; Amap – application scanner; Nikto – web server vulnerability scanner; Ophcrack – Windows password cracker; pantera – web server vulnerability scanner; paros – web server vulnerability scanner; Scanhill – MS Messenger scanner; Slurpie – distributed password cracker; uberkey – keylogger; VNCcrack – cracks VNC passwords; Braa – SNMP scanner; AIM Sniff – reads AIM traffic; crack – UNIX password cracking; gwee – web exploit utility; THC Hydra – password cracker
Layer 6 / Presentation Data translation and encryption. Host Layer Man-in-the-middle attacks, key cracking, data payload eavesdropping (applies at many layers, but arguably fits best here). Examples: Aircrack – captures wireless packets to decode WEP keys; Ettercap – performs man-in-the-middle attacks, can be used to leverage exploits at other layers; Ike-scan – VPN/IPsec scanner; PSK-Crack – IPsec cracker; SSLScan – queries ports for ciphers and certificates; l0phtcrack – password utility; ScanSSH – scans for the SSH protocol in use; PuTTY – terminal emulator
Layer 5 / Session Connections between machines. Host Layer NetBIOS / NetBEUI vulnerabilities, connection hijacking, credential impersonation. Examples: SMTP-vrfy – uses the verify command in SMTP to find user names; Nmap – network scanner, works at multiple layers to acquire information; Nbtscan – NetBIOS name scanner; KaHt2 – RPC Trojan; Unicornscan – network scanner, works on multiple layers; MBSA – security auditing tool, works across layers; Hackbot – vulnerability scanner; NBAT – NetBIOS auditing tool; KRIPP – displays clear text passwords
Layer 4 / Transport Reliability and message continuity. Host Layer Packet sequence exploits, denial of service attacks, open port / service discovery, host fingerprinting. Examples: Firewalk – scans gateways for open ports; Protos – protocol scanner, looks for protocols running on a given host; Amap – looks for protocols running on specific ports; Packit – can capture and spoof packets, also operates at layers 2 and 3; Superscan – port scanner; TCPdump – packet sniffer; Wireshark – packet sniffer; Hping2 – port scanner / packet modification tool with functionality similar to Nmap; Nessus – port/vulnerability scanner; Netcat – scanner, tunneler; Socat – data tunneler; SARA – scans network traffic, used across other layers, can find trust relationships; PBNJ – network monitor; Strobe – port scanner; ISN prober – reviews packet header information; p0f – OS fingerprinter; SinFP – OS fingerprinter; sbd – Netcat clone; telnet; socket APIs
Layer 3 / Network Logical addressing such as Internet Protocol, routers and gateways. Media Layer IP address spoofing, router table modification, ICMP based exploits such as denial of service attacks or network reconnaissance. Examples: ASS – protocol scanner; Deep Magic Information Gathering Tool (DMItry) – scans for host information; DNS-ptr – resolves DNS names; DNS-walk – attempts zone transfer to gain the DNS database (deprecated); DNSMapper – maps subdomains; DNSpredict – uses Google Sets to find subdomains; Dig – finds name servers for a given domain; DNS enum – finds name servers for a domain and attempts zone transfers; TCtrace – performs traceroute using only SYN packets; SING – used to modify ICMP packets; route injection; Ping of death (deprecated) – malformed ping causes DoS; Fragrouter – manipulates packets to avoid IDS; Angst – active packet sniffer; TCPshow – packet decoder; THC IPv6 kit – tools for hacking IPv6; Nemesis – packet injection tool; Netsed – packet modifier
Layer 2 / Data Link Physical addressing such as MAC addresses, NICs, and bridges. Media Layer MAC address spoofing, low-level denial of service attacks, spanning tree / network logical topology reconfiguration exploits. Examples: GNU MAC Changer – spoofs MAC addresses; Yersinia – can spoof several layer 2 and 3 protocols for various purposes; Kismet – passively collects packets to identify wireless networks.
Layer 1 / Physical The media layer such as radio or cables, dumb hubs and repeaters. Media Layer Wireless sniffers / jammers, cable taps, modified hardware, and premises infrastructure destruction / modification based on physical access. Examples: NetStumbler – detects wireless signals; schematics; specifications; case studies; blueprints; explosives; fire; liquid nitrogen; magnetic disruption; voltage spikes; ultrasonic disruption; radioactive isotopes; sledge hammer; conductive jumpers; wire cutters / razor knife; epoxies; disassembly; corrosion agents; hardware substitution; destructive resonance; multi-spectrum data recording / electromagnetic imaging (TEMPEST); ultraviolet light induced degradation; wedges; RFI; EMP; microwave; particle based disruption; theft
Layer 0 / Kinetic Cyber activity is diverted into kinetic energy by servos or devices. N/A Access to physical hardware, manual overrides or parallel controls, reconfiguration of physical or electrical properties. Examples: HMI; PLC; RTU

Table 1.2 Exploit Tool Listing

Layer 0 Human Machine Interface (HMI) Technology, Storage, Integrity
Layer 0 Remote Terminal Units (RTU) Technology, Transmission, Integrity/Confidentiality
Layer 0 Programmable Logic Controller (PLC) Technology, Transmission/Processing, Integrity
Layer 1 Schematics (know the system) Technology, Processing, Confidentiality
Layer 1 Specifications (fault tolerance) Technology, Processing, Confidentiality
Layer 1 Case studies (similar systems) Technology, Processing, Confidentiality
Layer 1 Blueprints (as-builts) Technology, Processing, Confidentiality
Layer 1 Explosives (unlimited physical disruption) Technology, Processing, Availability
Layer 1 Fire (thermal disruption) Technology, Processing, Integrity
Layer 1 Liquid Nitrogen (thermal disruption) Technology, Processing, Integrity
Layer 1 Magnetic disruption (magnetic fields measured in teslas) Technology, Processing, Integrity
Layer 1 Direct energy injection (over-voltage) Technology, Processing, Integrity
Layer 1 Ultrasonic disruption Technology, Processing, Integrity
Layer 1 Radioactive isotopes Technology, Processing, Integrity
Layer 1 Sledge Hammer (limited physical disruption) Technology, Processing, Integrity
Layer 1 Conductive jumpers (circuit alteration) Technology, Processing, Integrity
Layer 1 Wire cutters / razor knife (circuit alteration) Technology, Processing, Integrity
Layer 1 Epoxies / adhesives Technology, Processing, Integrity
Layer 1 Non-destructive disassembly (physical) Technology, Processing, Confidentiality
Layer 1 Corrosion agents Technology, Processing, Integrity
Layer 1 Hardware substitution (“Trojan” hardware) Technology, Processing, Integrity
Layer 1 Destructive resonance (see Tesla) Technology, Processing, Availability
Layer 1 Multi-spectrum data recorders (“black box” reverse engineering) TEMPEST Technology, Processing, Confidentiality
Layer 1 Electromagnetic imaging (“black box” reverse engineering) TEMPEST Technology, Processing, Confidentiality
Layer 1 Ultraviolet light induced degradation (plastics, unprotected EPROM based controllers) Technology, Processing, Integrity
Layer 1 Wedges (anything to jam gear meshes, solenoids, etc.) Technology, Processing, Availability
Layer 1 Direct signal injection (analog local control, e.g. 4-20 mA systems) Technology, Processing, Integrity
Layer 1 Radio Frequency Interference (RFI) Technology, Processing, Integrity
Layer 1 EMP based attacks (destructive) Technology, Processing, Availability
Layer 1 High power microwave disruption Technology, Processing, Availability
Layer 1 Particle based disruption / surgical destruction (e.g. linear accelerators) Technology, Processing, Integrity
Layer 1 Theft Policy-practice, Storage, Confidentiality
Layer 1 NetStumbler Technology, Transmission, Confidentiality
Layer 1 Aircrack Technology, Transmission, Confidentiality
Layer 2 Kismet Technology, Transmission, Confidentiality
Layer 2 ifconfig (MAC id) Technology, Transmission, Integrity
Layer 2 GNU MAC Changer Technology, Transmission, Integrity
Layer 2 Ettercap Technology, Transmission, Integrity
Layer 2 Yersinia Technology, Transmission, Integrity
Layer 3 Angst Technology, Transmission, Confidentiality
Layer 3 Sing Technology, Transmission, Integrity
Layer 3 Ass Technology, Transmission, Confidentiality
Layer 3 igrp (route injection) Technology, Transmission, Integrity
Layer 3 Packit Technology, Transmission, Integrity
Layer 3 ping (DoS) Technology, Transmission, Availability
Layer 3 Fragrouter Technology, Transmission, Availability
Layer 3 Superscan Technology, Transmission, Confidentiality
Layer 3 Tcpdump Technology, Transmission, Confidentiality
Layer 3 Wireshark Technology, Transmission, Confidentiality
Layer 3 Nmap Technology, Transmission, Confidentiality
Layer 3 Hping2 Technology, Transmission, Confidentiality
Layer 3 DMItry Technology, Transmission, Confidentiality
Layer 3 DNS-ptr Technology, Transmission, Confidentiality
Layer 3 DNS walk Technology, Transmission, Confidentiality
Layer 3 DNS mapper Technology, Transmission, Confidentiality
Layer 3 DNS predict Technology, Transmission, Confidentiality
Layer 3 Dig Technology, Transmission, Confidentiality
Layer 3 DNS enum Technology, Transmission, Confidentiality
Layer 3 TCtrace Technology, Transmission, Confidentiality
Layer 3/4/5/7 PBNJ Technology, Transmission, Confidentiality
Layer 3/4/5/7 Unicornscan Technology, Transmission, Confidentiality
Layer 3/4 Strobe Technology, Transmission, Confidentiality
Layer 3 tcpshow (tcpdump interpreter) Technology, Transmission, Confidentiality
Layer 3/4 THC IPv6 Attack Kit Technology, Transmission, Integrity
Layer 3 Nemesis Technology, Transmission, Availability
Layer 3 Netsed Technology, Transmission, Integrity
Layer 4 ISNprober Technology, Storage, Confidentiality
Layer 4 Nessus Technology, Transmission, Confidentiality
Layer 4 Netcat Technology, Transmission, Confidentiality
Layer 4 Firewalk Technology, Transmission, Confidentiality
Layer 4 Socat Technology, Transmission, Confidentiality
Layer 4 p0f (Passive OS finger printer) Technology, Transmission, Confidentiality
Layer 4 SinFP (Frame finger printer) Technology, Transmission, Confidentiality
Layer 4 sbd (Improved netcat) Technology, Transmission, Confidentiality
Layer 4 Telnet Technology, Storage, Confidentiality
Layer 4 C/C++ BSD socket API Technology, Storage, Confidentiality
Layer 4 Perl script with sockets Technology, Storage, Confidentiality
Layer 4 Python script with sockets Technology, Storage, Confidentiality
Layer 4 Protos Technology, Transmission, Confidentiality
Layer 4 Aman Technology, Transmission, Confidentiality
Layer 4 Packit Technology, Transmission, Confidentiality
Layer 4 Superscan Technology, Transmission, Confidentiality
Layer 4/5/6/7 MBSA (Microsoft Baseline Security Analyzer) Technology, Storage, Integrity
Layer 5 hackbot Technology, Storage, Integrity
Layer 5 SMTP verify Technology, Storage, Confidentiality
Layer 5 Nbtscan Technology, Storage, Confidentiality
Layer 5 kaHt2 (RPC exploit) Technology, Storage, Integrity
Layer 5 NBAT (NetBIOS Auditing Tool) Technology, Storage, Confidentiality
Layer 5 NMAP Technology, Storage/Transmission, Confidentiality
Layer 6 l0phtCrack Technology, Storage, Confidentiality
Layer 6 Ike-scan Technology, Transmission, Confidentiality
Layer 6 PSK-Crack Technology, Transmission, Confidentiality
Layer 6 SSLScan Technology, Transmission, Confidentiality
Layer 6 ScanSSH Technology, Transmission, Confidentiality
Layer 6 PuTTY Technology, Storage, Confidentiality
Layer 7 Nikto Technology, Storage, Integrity
Layer 7 Ophcrack Technology, Storage, Confidentiality
Layer 7 Pantera Technology, Storage, Integrity
Layer 7 Paros Technology, Storage, Integrity
Layer 7 Scanhill (MS Messenger Sniffer) Technology, Transmission, Confidentiality
Layer 7 Slurpie (Distributed pw cracker) Technology, Storage, Confidentiality
Layer 7 uberkey (keylogger) Technology, Storage, Confidentiality
Layer 7 VNCcrack Technology, Transmission, Confidentiality
Layer 7 Braa (SNMP scanner) Technology, Processing, Integrity
Layer 7 AIM Sniff Technology, Transmission, Confidentiality
Layer 7 crack (UNIX based) Technology, Storage, Confidentiality
Layer 7 gwee Technology, Storage, Integrity
Layer 7 THC-Hydra Technology, Storage, Confidentiality
Layer 7 KRIPP Technology, Transmission, Confidentiality
Layer 7 Amap Technology, Storage, Confidentiality
Layer 7 xspy Technology, Transmission, Confidentiality
Layer 7 RainbowCrack Technology, Storage, Confidentiality
Layer 7 Pwdump Technology, Storage, Confidentiality
Layer 7 John the Ripper Technology, Storage, Confidentiality
Layer 7 Cain & Abel Technology, Storage, Confidentiality
Layer 7 Dsniff Technology, Transmission, Confidentiality
Layer 7 Metasploit Framework Technology, Processing, Integrity
Layer 7 Whisker Technology, Processing, Integrity
Layer 7 SARA Technology, Processing, Integrity
Layer 7 Brutus Technology, Storage, Confidentiality
Layer 7 Inguma Technology, Storage, Confidentiality
Layer 7 Google Mail-enum Technology, Storage, Confidentiality
Layer 7 GHDB Technology, Storage, Confidentiality
Layer 7 Relay Scanner Technology, Storage, Confidentiality
Layer 8 Flattery Policy-practice, Processing, Confidentiality
Layer 8 Intimidation Policy-practice, Storage, Confidentiality
Layer 8 Theatrics Policy-practice, Storage, Confidentiality
Layer 8 Impersonation Policy-practice, Storage, Confidentiality
Layer 8 Stealth Policy-practice, Processing, Confidentiality
Layer 8 (Other’s) Resentment Policy-practice, Processing, Integrity
Layer 8 Financial Reward Policy-practice, Storage, Confidentiality
Layer 8 (Other’s) Idealism Policy-practice, Storage, Confidentiality
Layer 8 Deception Human Factors, Storage, Confidentiality
Layer 8 Camaraderie Policy-practice, Storage, Confidentiality
Layer 8 Romance Policy-practice, Storage, Confidentiality
Layer 8 Extortion Policy-practice, Storage, Confidentiality
Layer 8 (Other’s) Confusion Human Factors, Processing, Integrity
Layer 8 (Other’s) Trust Policy-practice, Storage, Confidentiality
Layer 8 (Other’s) Ambition Policy-practice, Processing, Integrity
Layer 8 Surveillance Policy-practice, Processing, Confidentiality
Layer 8 (Other’s) Ego Human Factors, Processing, Integrity
Layer 8 Discord Policy-practice, Processing, Integrity
Layer 8 (Other’s) Fear Policy-practice, Processing, Integrity
Layer 8 Chaos Policy-practice, Processing, Integrity
Layer 8 (Other’s) Over-confidence Human Factors, Storage, Confidentiality
Layer 8 Misdirection Human Factors, Processing, Availability
Layer 8 Gossip Policy-practice, Storage, Confidentiality
Layer 8 (Other’s) Boredom Policy-practice, Processing, Confidentiality
Layer 8 Congeniality Policy-practice, Storage, Confidentiality
Layer 8 (Other’s) Anger Policy-practice, Storage, Confidentiality
Layer 8 (Other’s) Pride Human Factors, Processing, Integrity
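
Several of the Layer 4 entries above (the C/C++ BSD socket API and the Perl and Python socket scripts) amount to hand-rolled port probes rather than packaged tools. As a minimal sketch of that technique, assuming nothing beyond the Python standard library (the function names `tcp_probe` and `scan` are illustrative, not part of any listed tool), a TCP connect probe might look like:

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a full TCP connection to host:port succeeds (port open)."""
    try:
        # create_connection performs the three-way handshake, so this is a
        # noisy "connect" scan, unlike the half-open SYN scans Nmap can do.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or unreachable: treat as closed.
        return False

def scan(host: str, ports) -> list:
    """Return the subset of ports that accepted a connection."""
    return [p for p in ports if tcp_probe(host, p)]
```

A scripted probe like this is easy to adapt (banner grabbing, custom timing) but is readily logged by the target, which is why the table's dedicated scanners offer stealthier modes.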

References

Arce, I., & McGraw, G. (2004, July/August). Why Attacking Systems is a Good Idea. IEEE Computer Society, pp. 17-19.

Benzel, T., Braden, R., Dongho, K., & Neuman, C. (2006, March). Experience With Deter: A Testbed For Security Research. IEEE, pp. 1-10.

Coffin, B. (2003, July). It Takes a Thief: Ethical Hackers Test Your Defenses. Risk Management Magazine.

Du, W., & Mathur, A. P. (1998, April 6). Vulnerability Testing of Software System Using Fault Injection. pp. 1-20.

Gula, R. (2001). Broadening the Scope of Penetration-Testing Techniques. Enterasys Networks White Paper, pp. 1-18.

Heien, C., Massengale, R., & Wu, N. (n.d.). Building a Network Testbed for Internet Security Research. Consortium for Computing Sciences in Colleges.

Holland-Minkley, A. M. (2006, October 21). Cyberattacks: A Lab-Based Introduction to Computer Security. SIGITE ’06, pp. 39-45.

Micco, M., & Rossman, H. (2002, March 3). Building a Cyberwar Lab: Lessons Learned. SIGCSE ’02, pp. 23-27.

Mink, M., & Freiling, F. C. (2006, September 22-23). Is Attack Better Than Defense? Teaching Information Security the Right Way. InfoSecCD Conference ’06.

Ray, H. T., Vemuri, R., & Kantubhukta, H. R. (2005, July). Toward an Automated Attack Model for Red Teams. IEEE Computer Society, pp. 18-25.
