Abstract
The purpose of this exercise was to examine the role, application, and concerns associated with the employment of passive reconnaissance tools. First, we develop a definition of ‘passive reconnaissance’ within the scope of network penetration attacks. Using this definition, we test and classify a significant sampling of security tools into this category, with reference to network and security models. Additionally, we examine the concept of a ‘meta exploit’ and evaluate its application, and we note patterns and biases found in common network reconnaissance tools. We then investigate the concept of ‘hostile’ security tools, through both case studies and preventative measures to counteract this threat. Finally, we evaluate the existing literature relevant to the effective execution of this exercise.
Introduction
If it is possible to classify an ‘active’ reconnaissance tool, then it follows that a ‘passive’ reconnaissance tool must be subject to a similar definition. Having previously classified ‘active’ tools as having three necessary characteristics, namely presence, risk, and limitation of scope, we move in a similar direction with the concept of the ‘passive’ tool. Specifically, we define a ‘passive’ tool to have three necessary properties: uncertainty, invariant risk, and limitation of scope.
The property of ‘uncertainty’ is closely related to the idea of ‘presence.’ With a ‘passive’ tool, the presence of the attacker need not be definable within the scope of the target’s resources. This does not mean that the attacker must not be present; it simply implies that the victim of the attack must not know with certainty that specific local resources have been compromised: hence the term ‘uncertainty.’ Moving further into our definition, we present a concept closely related to the first element, that of ‘invariant risk.’ Simply put, ‘invariant risk’ means that the actions of the attacker do not increase the danger of being discovered. The attacker may have already taken significant risks to establish a foothold on the target’s network; this risk remains and can be taken as a constant. We submit that, by our definition, a ‘passive’ action creates no new risk above any risk already existing.
Finally, we define ‘limitation of scope’ to be exactly the same attribute given in the ‘active’ tool description. Limitation of scope implies that the actions taken by the attacker are not the ultimate objective of the attack, but a means to an end. Thus, any consequences to the victim resulting from this ‘passive’ reconnaissance are secondary to the goal of obtaining information for planning and accomplishing the ultimate objective.
Having laid out our definition of a ‘passive’ tool, we propose three areas of research with respect to this concept. First, we seek to discover specific ‘passive’ reconnaissance tools commonly available and subject them to a theoretical system of classification. Second, we examine the application of these tools within a real network environment, in association with the idea of a ‘meta exploit.’ Third, we research the possibility of these tools being internally compromised, or made ‘hostile,’ and the means by which this threat can be defeated.
Literature Review
In Privacy Oracle: a System for Finding Application Leaks with Black Box Differential Testing (Jung, Sheth, Greenstein, & Wetherall, 2008), the authors described a system designed to detect private information being sent over the Internet by popular applications. Their concern was that the disclosure of such information may invade the privacy of the user (p. 279). A further concern was that many common applications transmit information in plain text, which may be intercepted by an unintended third party (p. 280). Privacy Oracle was created to detect not only what information was leaked, but when it was leaked and to whom (p. 279).
The methodology used in Privacy Oracle was differential black-box fuzz testing (p. 279). While the application ran in a virtual machine, its output was captured using Wireshark (p. 283). The changes in network output that occurred when the input changed were then analyzed. Because changes in output could also have been caused by variations in the testing environment, controlled virtual machines were used. This allowed the operating system to be returned to its original state before each test, minimizing the effects of environmental changes. Output changes caused by external servers remained outside the control of the experimenters (p. 280).
Privacy Oracle was described as a fully automated system that prepared input, and then captured and compared output (p. 280). Input automation was achieved through AutoIT, a third-party program that automates program input through the use of scripts (p. 280). The output was captured using Wireshark (p. 283). Extraneous network traffic was removed prior to further analysis (p. 281), and the output was then sorted by destination IP address and protocol (p. 282). The outputs of the different tests were then compared using NetDialign, an algorithm that detects statistically significant differences in byte sequences (p. 282). False positives were then removed manually (p. 286).
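To make the comparison step concrete, the following is a minimal sketch (our own illustration, not the authors' code) of differential output comparison: payloads from two test runs are grouped by destination and protocol, and flows whose contents differ between runs are flagged. The real NetDialign alignment is far more sophisticated than a simple set difference, and all names and data below are hypothetical.

```python
# Minimal sketch of differential output comparison in the spirit of
# Privacy Oracle (not the authors' code). Each "flow" is keyed by
# (destination IP, protocol); payloads are raw byte strings.
from collections import defaultdict

def group_flows(packets):
    """Group captured payloads by (dst_ip, protocol)."""
    flows = defaultdict(list)
    for dst_ip, proto, payload in packets:
        flows[(dst_ip, proto)].append(payload)
    return flows

def differing_flows(run_a, run_b):
    """Return flow keys whose payload sets differ between two test runs.

    A real system aligns byte sequences and scores statistically
    significant differences; set difference is only a crude stand-in.
    """
    flows_a, flows_b = group_flows(run_a), group_flows(run_b)
    diffs = {}
    for key in set(flows_a) | set(flows_b):
        a, b = set(flows_a.get(key, [])), set(flows_b.get(key, []))
        if a != b:
            diffs[key] = (a - b, b - a)
    return diffs

# Hypothetical example: the username typed into the application leaks
# in clear text to one destination when the input changes.
run_a = [("203.0.113.5", "HTTP", b"user=alice"), ("203.0.113.9", "DNS", b"example.com")]
run_b = [("203.0.113.5", "HTTP", b"user=bob"),   ("203.0.113.9", "DNS", b"example.com")]
print(differing_flows(run_a, run_b))
```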
Several concepts used in Privacy Oracle are applicable to our research. The use of virtual machines to create a controlled testing environment and the employment of virtual snapshots to return a system to a previous state are powerful tools with which to evaluate the effects of security tools. The use of Wireshark to observe network output from applications, similar in practice to that described in this paper, was beneficial in part one of this lab, and necessary in part two.
In Random Testing for Security: Blackbox vs. Whitebox Fuzzing (Godefroid, 2007), the author discusses the benefit of ‘whitebox fuzzing’ as opposed to ‘blackbox fuzzing.’ Blackbox fuzzing uses randomly mutated input to test program execution; its limitation is that execution may not follow all of the control paths within the program. The author suggests whitebox fuzzing as an alternative: the whitebox method first identifies conditional statements within the program and then forms inputs based on the conditional paths. This article describes two of the techniques that can be used for fuzzing applications, and we expect these techniques to be useful as we proceed with further security tool research in future labs, where we assume fuzz testing will be a component of our penetration testing work.
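A blackbox mutation fuzzer can be sketched in a few lines, which also makes its limitation visible: it knows nothing about the target's branches and simply mutates a seed at random. This is our own illustrative toy, not Godefroid's implementation; the target parser and its bug are invented.

```python
# Toy blackbox mutation fuzzer (illustrative only). It mutates a seed
# input at random and watches for failures; a whitebox fuzzer would
# instead derive inputs from the program's conditional statements.
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Randomly overwrite a few byte positions in the seed."""
    buf = bytearray(data)
    for _ in range(n_flips):
        i = random.randrange(len(buf))
        buf[i] = random.randrange(256)
    return bytes(buf)

def parse_record(data: bytes) -> int:
    """Hypothetical target: a toy parser with one buggy branch.

    A single-byte condition like this is usually reachable by random
    mutation; a multi-byte 'magic value' check often is not, which is
    exactly the gap whitebox fuzzing is meant to close.
    """
    if len(data) > 1 and data[0] == 0x7F:
        raise ValueError("unhandled record type")   # the flaw we hope to reach
    return sum(data)

seed = b"\x10HELLO-WORLD\x00"
for trial in range(10_000):
    sample = mutate(seed)
    try:
        parse_record(sample)
    except ValueError:
        print(f"failure-inducing input on trial {trial}: {sample!r}")
        break
else:
    print("bug never reached - the weakness of purely random input")
```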
Methodology and Procedure
Part 1
We began by analyzing the large list of tools generated in the first week’s exercise within the scope of our definition of ‘passive.’ We then broke the various tools down into broad classifications: packet sniffers, fuzzers, scanners, digital forensics tools, password crackers, and spoofers. This data was then assembled into a table reflecting our research on these tools and their role in ‘passive’ reconnaissance. Various tools from our ‘passive’ set were tested within a networked environment, some in conjunction with ‘active’ tools. Finally, we theorized a definition by which ‘active’ reconnaissance tools, having been reduced in execution speed, can be ‘reclassified’ as ‘passive.’
Part 2a
As in the prior week’s exercise, tests were run on a real wired network with physical host machines. For the duration of the tests, the ‘target’ machine ran Windows XP 32-bit Service Pack 3 with the latest updates installed. The ‘attacker’ machine ran Microsoft Windows Vista 32-bit Service Pack 2. The ‘observer’ was a machine running FreeBSD 7.1-RELEASE i386. All were on the same subnet and interconnected via a switched Ethernet local area network, with IP address assignment handled by DHCP. The ‘observer’ was part of a 100 Mb/s segment of the network uplinked to a 1 Gb/s switch, to which the ‘attacker’ and the ‘target’ were connected. All machines were verified to be up and ‘visible’ on the network before the tests were run. This particular setup was chosen for ease of use (the VMware Workstation setup over Citrix can become cumbersome) and for ‘realness’ of application: we were not certain how precisely VMware’s virtual network hardware duplicates the characteristics of switched Ethernet LANs.
The Windows Vista machine was chosen as the ‘attacker’ for convenience of use, with NESSUS and the NMAP/Zenmap GUI downloaded and installed on it. Wireshark was installed on the FreeBSD machine to facilitate third-party observation. First NESSUS, and then NMAP, was run against the ‘target’ machine, and the entire network was scanned for thoroughness. The ‘observer’ machine was then set up to capture all visible network traffic via Wireshark, and the tests were run against the ‘target’ machine and the entire network once again, each experiment being separately captured and saved as a file. The capture files, particularly the large full-network scan, were sorted ‘by conversation’ using Wireshark’s built-in functionality. Additionally, a separate experiment was conducted using NMAP against the entire network (running in a network-bridged nUbuntu virtual machine) and monitored by a real host running ‘lanmap’ from a Knoppix-STD bootable distribution. We believed Wireshark to be among the most appropriate tools for this experiment, as it fits naturally into the role of a ‘data recorder’: the entire experiment could be captured, and the data set sorted and examined afterward without loss of detail.
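For readers without Wireshark at hand, the ‘sort by conversation’ step can be approximated offline with a short script. This is a hedged sketch assuming the Python scapy package is available; the capture filename is a placeholder for one of our saved files.

```python
# Sketch: sort a saved capture 'by conversation', roughly what we did
# interactively in Wireshark. Assumes the scapy package is installed;
# the capture filename is hypothetical.
from collections import Counter
from scapy.all import rdpcap, IP

def conversations(pcap_path: str) -> Counter:
    """Count packets per unordered (host A, host B) IP pair."""
    counts = Counter()
    for pkt in rdpcap(pcap_path):
        if IP in pkt:
            pair = tuple(sorted((pkt[IP].src, pkt[IP].dst)))
            counts[pair] += 1
    return counts

# Print the ten busiest conversations in the capture.
for pair, n in conversations("full_network_scan.pcap").most_common(10):
    print(f"{pair[0]:15} <-> {pair[1]:15}  {n} packets")
```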
Part 2b
A web search was conducted in hopes of locating accounts of network security tools being exploited. When no direct evidence was discovered with this method, the search was expanded to well-known ‘security issue’ and ‘bug tracker’ databases. These databases were queried using the names of common security tools as search strings. A number of positive matches were found; this was repeated until it was determined that a significant sampling of tools had been examined. Additionally, the philosophy behind the ‘Open Source’ movement was examined via literature and web documents, and a comparison was made against the practices of proprietary software vendors. Finally, research was done on ‘code review’ techniques and effective methods for safe software deployment.
Results and Discussion
Part 1
A complete listing of the results of this section is contained in Table 1. Generally speaking, we determined that sniffers are passive by nature, simply listening to network traffic without participating; conversely, most scanners were found to send probes to a series of IP addresses and ports in order to elicit a response. A demonstration of this occurred when running NMAP (a scanner) and Wireshark (a sniffer) at the same time: NMAP noticeably produces a large amount of network traffic, scanning the specified series of IP addresses within a very short period of time. Netdiscover was also found to send a series of ARP pings across the network in a similar fashion. In theory, this activity could easily be detected by an Intrusion Detection System (IDS) or an administrator watching a packet-sniffing application.
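For illustration, the ARP sweep that a Netdiscover-style scanner performs can be reproduced in a few lines of scapy; every request is broadcast to the segment, which is precisely why an observer's sniffer or an IDS can spot it. This is a sketch only (it assumes scapy, root privileges, and a hypothetical subnet), not the actual implementation of any of the tools named above.

```python
# Sketch of the kind of ARP sweep that tools such as Netdiscover
# perform, i.e. the traffic an observer's sniffer would see.
# Requires root privileges and the scapy package; subnet is hypothetical.
from scapy.all import ARP, Ether, srp

def arp_sweep(cidr: str, timeout: int = 2):
    """Broadcast ARP who-has requests for every address in the range."""
    answered, _ = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=cidr),
        timeout=timeout,
        verbose=False,
    )
    return [(recv.psrc, recv.hwsrc) for _, recv in answered]

if __name__ == "__main__":
    for ip, mac in arp_sweep("192.168.1.0/24"):
        print(f"{ip:15}  {mac}")
```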
We classified password crackers into two categories: ‘online’ password guessing and ‘offline’ password cracking. ‘Online’ password crackers send repeated password guesses over the network to the target machine. When testing Brutus (an online password cracker) against a login and password prompt, Wireshark displayed considerable traffic across the network; these repeated attempts to gain access over a short time period could certainly draw attention. ‘Offline’ password crackers, by contrast, require certain files to be extracted from the target machine or sniffed off the network before being analyzed. Because of their lack of network activity, they are passive by nature. If physical access to the target machine is available, it is possible to breach a Windows system using a Linux distribution on a bootable medium. From there, disk imaging, file carving, or simply copying select files can be accomplished while leaving little trace of the event. The files necessary to obtain passwords may also be extracted for later use in this manner. Because of this, most digital forensics tools would be classified as passive. Fuzzers can operate similarly to online password crackers by sending random and unexpected data to network-facing applications. As with ‘online’ crackers, they too generate considerable network activity; therefore, we classify these network-based fuzzers as ‘active.’ Finally, much as with password crackers, ‘offline’ fuzzing applications exist, and these are categorically ‘passive’ in nature.
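The distinction is easy to see in code. A minimal sketch of the ‘offline’ case follows: once the hashes have been carried off the target, cracking generates no network traffic at all. The hash list and wordlist are invented examples, and real tools use far stronger attacks than this toy dictionary loop.

```python
# Minimal sketch of 'offline' password cracking: after the hashes have
# been copied from the target, no further network activity is needed.
# The hash list and wordlist here are hypothetical examples.
import hashlib

def crack(hashes: dict, wordlist: list) -> dict:
    """Try each candidate word against each captured MD5 hash."""
    recovered = {}
    for word in wordlist:
        digest = hashlib.md5(word.encode()).hexdigest()
        for user, stored in hashes.items():
            if digest == stored:
                recovered[user] = word
    return recovered

captured = {"alice": hashlib.md5(b"sunshine").hexdigest()}
print(crack(captured, ["password", "letmein", "sunshine"]))
```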
Spoofers are another general tool category that we would consider ‘active.’ Spoofing involves changing the source IP address of the packets being sent across the network. Though spoofers may hide the identity of the system that sent the packets, the network traffic can still be seen.
Some tools such as Cain and Abel perform multiple functions and therefore their classification will depend on the manner in which they are used. Similarly, wardriving can be broken down into both an active and passive reconnaissance classification. For example, NetStumbler sends probe packets to find wireless access points, thus participating in the network, whereas Wellenreiter listens for SSID beacons passively.
There is yet another category of tool: non-technical attacks. These include attacks that require physical access to the network hardware (layer 1) or to the people (layer 8) involved in the network, such as social engineering. Whether these actions are active or passive depends largely on how surreptitiously the action was completed. Arguably, from a physical-evidence standpoint, Locard’s Exchange Principle will still apply.
Means exist to hide active reconnaissance from Intrusion Detection Systems. For example, NMAP scans can be slowed so that the traffic they produce is lost in the vast amount of other network traffic. Since an IDS must examine all network packets within a limited RAM buffer and compare them to known attack signatures, spreading the scanning action out over a long period allows individual scan packets to ‘pass out’ of the comparison window. We propose that an ‘active’ tool may approach ‘passive’ classification when its presence can no longer be identified on the network by a scanning signature alone: essentially a shift from a ‘risk’ to an ‘invariant risk’ characteristic. This shift is solely dependent on the tool’s ability to be slowed below the threshold of detection on the target network. We conclude that the degree of a tool’s ‘passiveness’ increases as the speed of its scanning decreases.
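The ‘comparison window’ argument can be illustrated with a toy simulation (all thresholds below are assumptions, not parameters of any real IDS): probes arriving quickly pile up inside the window and cross the alert threshold, while the same number of probes spread over hours never do.

```python
# Toy simulation of why slowing a scan helps it evade a threshold-based
# IDS that can only correlate packets inside a fixed-size time window.
# All numbers are illustrative assumptions.
from collections import deque

def detected(probe_times, window_seconds=60, threshold=20):
    """Return True if 'threshold' probes ever fall inside one window."""
    window = deque()
    for t in probe_times:
        window.append(t)
        while window and t - window[0] > window_seconds:
            window.popleft()          # old probes fall out of the buffer
        if len(window) >= threshold:
            return True
    return False

fast_scan = [i * 0.5 for i in range(1000)]    # two probes per second
slow_scan = [i * 30.0 for i in range(1000)]   # one probe every 30 seconds
print("fast scan detected:", detected(fast_scan))   # True
print("slow scan detected:", detected(slow_scan))   # False
```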
Part 2a
The initial use of NMAP and NESSUS against the target machine was straightforward. Both programs correctly identified the target’s operating system as Microsoft Windows XP Service Pack 3. Both found the same variety of open ports and services, the main exception being that NESSUS found an NTP server on port 123, which NMAP did not. NESSUS also warned that unprotected SMB shares were being advertised, which NMAP made no note of. The scan of the entire network proceeded in a similar fashion, although some hardware, such as the Linksys router, was incorrectly identified by NMAP as a ‘Netgear’ device ‘with one hundred percent accuracy.’
An interesting side note: on an early ‘pre-test’ run of NESSUS against the Windows Vista ‘attack’ machine, a live web server and SMTP service were unexpectedly discovered. The web server was traced to an ‘uninstalled’ copy of VMware Server 2.0, which had failed to remove the JSP server administration client; it was still running on an open port. The SMTP service was also traced to a VMware-related program, specifically the ‘VMware Authorization Service’ process. This proved to be an eye-opening event: installed software, even from respectable vendors, should not be relied upon to behave in a security-minded fashion.
The attacker-target-observer setup, with regard to the ‘meta exploit’ test, proved rather disappointing. Almost no traffic was recorded by the ‘observer’ during the ‘target’-only scan. Much more traffic was recorded during the full network scan, but the majority of it was specifically addressed to the ‘observer’ host from the ‘attacking’ host. Essentially, due to the nature of the switched LAN, only broadcast and multicast packets were visible to the observer (besides those specifically addressed to the ‘observer’ host). This certainly limits the usefulness of the ‘meta exploit’ concept, as the huge bulk of network traffic generated by the ‘attack’ host was invisible to the ‘observer.’ It should also be noted that in the full network scan, generally only hosts running a NetBIOS service actually appeared in the ‘observer’s’ list of hosts; a second FreeBSD machine without NetBIOS capability (i.e., Samba not installed) did not appear to advertise its presence on the network via broadcast or multicast packets.
The ‘lanmap’ observer experiment produced similar results, although it was noted that some passive operating system fingerprinting was done by ‘lanmap’ (with less than satisfactory results). One additional host, namely the gateway router, was found and noted by ‘lanmap,’ but this is unremarkable, as the gateway router is known throughout the network via DHCP advertisement. We must conclude that a ‘meta exploit’ scenario using a third host to observe the attack is of limited usefulness on a switched network.
Conceivably, this type of ‘meta exploit’ setup would be of much greater utility on an older hub-based network, where all traffic is simply repeated out to each line of the star topology. It may also be of some use on a wireless network, as the nature of the transmission medium allows all traffic to be monitored. A theoretical wireless ‘meta exploit’ setup could involve an ‘active’ wireless host used in a disposable way: it would probe other hosts on the network until it was found and silenced. The information from this active attack could be passively monitored and recorded by a wireless sniffer, and the data used to dissect the network. On the other hand, a less complicated and equally effective setup would involve a disposable wireless attack host with a secondary means (such as a cellular modem card) to relay the information gained in the attack elsewhere in real time: here too, we wonder about the ultimate utility of the ‘meta exploit’ concept in general.
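One rough way to quantify this effect after the fact is to take a capture made on the attacking host itself and classify each frame by its Ethernet destination, estimating how much of it a third host on the same switched segment could plausibly have seen (broadcast, multicast, or frames addressed to the observer). The following is a sketch under those assumptions; it requires scapy, and the filename and MAC address are placeholders.

```python
# Sketch: estimate what fraction of an attacker-side capture would have
# been visible to a third-party 'observer' on a switched segment.
# Assumes scapy; the capture filename and observer MAC are placeholders.
from scapy.all import rdpcap, Ether

def visibility_summary(pcap_path: str, observer_mac: str) -> dict:
    counts = {"broadcast": 0, "multicast": 0, "to_observer": 0, "other_unicast": 0}
    for pkt in rdpcap(pcap_path):
        if Ether not in pkt:
            continue
        dst = pkt[Ether].dst.lower()
        if dst == "ff:ff:ff:ff:ff:ff":
            counts["broadcast"] += 1
        elif int(dst.split(":")[0], 16) & 0x01:      # group bit => multicast
            counts["multicast"] += 1
        elif dst == observer_mac.lower():
            counts["to_observer"] += 1
        else:
            counts["other_unicast"] += 1             # invisible on a switched port
    return counts

print(visibility_summary("attacker_full_scan.pcap", "00:11:22:33:44:55"))
```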
It was noted that an important part of narrowing down the ‘attack opportunities’ was fingerprinting the target’s operating system and determining the services running on it. As NESSUS contains a vast number of exploit plug-ins, generally grouped by operating system and service, a large number of vulnerability tests can simply be bypassed on these two criteria. For instance, if the target is determined to be running Microsoft Windows and is found to have no ports offering web services, only the operating system vulnerabilities affecting Microsoft Windows need be tested, and all general web server vulnerabilities are eliminated as potential opportunities as well. In fact, this eliminates a huge number of NESSUS tests, as observed by examining the NESSUS plug-ins listing (http://www.nessus.org/plugins/index.php?view=all).
In our research, we concluded that both NESSUS and NMAP show a substantial bias toward UNIX-like operating systems. This is evident in the sheer number of vulnerabilities NESSUS addresses for the various operating systems. For instance, AIX, IBM’s commercial UNIX product, has a list of nearly five thousand vulnerabilities which are checked; Microsoft Windows, on the other hand, is evaluated against a list of about fourteen hundred. Significantly, even FreeBSD is checked for more security issues (approximately eighteen hundred) than Microsoft Windows. This seems far out of proportion, as it is well understood that machines running Microsoft Windows are by far the worldwide majority. We believe two primary reasons exist for this: software origins and deployment philosophy. Both NMAP and NESSUS arose from within the UNIX community; it is therefore unsurprising that they remain largely UNIX-oriented tools with a ‘server’ rather than a ‘workstation’ emphasis. Secondly, due to the nature of UNIX-like distributions, the source code for the core system and ‘userland’ programs is often regarded as a single unified ‘distribution.’ This is not true for Microsoft Windows systems, where the vast majority of ‘userland’ applications are developed and maintained by third parties. Hence, one philosophy allows for near-total oversight of a system’s vulnerabilities, while in the other such an approach is unrealistic.
Finally, we submit that, when mapped to the OSI network model, the vast majority of the vulnerabilities examined by NESSUS must necessarily target the upper layers of the model. While it is true that some vulnerabilities exist within the network stack implementations of various operating systems (layers three and four), the overwhelming majority lie in the topmost layers (specifically, layer seven). A number of reasons exist for this, the first being compounded complexity. As information ‘rises’ in the OSI model, each layer introduces new functionality built into the operating system. As complexity increases, so too does the potential for error: thus we see more opportunities for exploitation of flaws at the higher levels of the model. Additionally, higher-level services are subject to a greater variety of implementations, with new applications introduced at frequent intervals; we term this ‘application mutability.’ Conversely, the same cannot be said of lower-level entities, such as the network socket API, which remain relatively static over the course of an operating system’s life cycle. Thus we see new opportunities for exploits born at the higher levels of the OSI model on a regular basis.
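The narrowing logic is simple enough to sketch. The plug-in records below are hypothetical stand-ins, not the NESSUS plug-in format; the point is only that two facts, operating system and listening services, prune most of the test list.

```python
# Sketch of the narrowing step: keep only vulnerability checks relevant
# to the fingerprinted OS and the services actually found listening.
# The plug-in records here are hypothetical, not the NESSUS format.
plugins = [
    {"id": 1, "os": "windows", "service": "smb"},
    {"id": 2, "os": "windows", "service": "http"},
    {"id": 3, "os": "linux",   "service": "http"},
    {"id": 4, "os": "any",     "service": "ntp"},
]

def relevant(plugins, target_os, open_services):
    """Filter the plug-in list down to checks worth running."""
    return [p for p in plugins
            if p["os"] in (target_os, "any") and p["service"] in open_services]

# Target fingerprinted as Windows with no web server listening:
print(relevant(plugins, "windows", {"smb", "ntp"}))   # plug-ins 1 and 4 only
```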
Part 2b
Flash games that quietly open back doors and ‘free computer scans’ that install more malicious software than they remove are well known. The possibility of applications from even the most renowned software vendors being exploited is a calculated risk. But what about the tools used by those assigned to protect the network? We know that the tools themselves have a dual-use nature, and that the same utilities are often used both for offense and defense (Parks & Duggan, 2001). Are these tools safe, or do they expose systems to unacceptable levels of risk?
Again, we accept that any application, due to the inherently flawed nature of software design, carries the risk of exploitation. Most penetration tools are open source, and therefore share characteristics common to all open source software. The security of open source penetration tools thus really comes down to the security of open source software in general.
The argument for closed source software appears to rest on the idea of ‘security through obscurity.’ Additionally, it is claimed that those who write and review proprietary code are ‘experts,’ as opposed to the millions of ‘amateur eyes’ in the open source community (Whitlock, 2001). Whitfield Diffie counters the argument for ‘security through obscurity’ by pointing out that in cryptography the algorithms considered most secure are open, and that secrets are not desirable (Diffie, 2003). David Wheeler states that many eyes observing the code, expert or not, may actually lower the possibility of malicious code being inserted into an application; he points to a backdoor slipped into a Borland database as proof of what can happen even in proprietary environments (Wheeler, 2003).
The experts appear to be fairly evenly divided on the issue, so perhaps the best approach is to seek out instances of exploited open source tools. A cursory search with Google turned up nothing blatant and generally led the researchers to ‘how-to’ guides for the various tools. The next step was to look for the tools in specific security databases. The researchers searched for the common tools NESSUS, Wireshark, Nmap, and the BackTrack suite in the SecurityFocus Incidents, Vulnerabilities, and Bugtraq databases, as well as the US-CERT Vulnerability Notes Database and the Secunia Advisory and Vulnerability Database. The following is a list of what the group found:
- An unconfirmed report of Nmap changing mstask.exe on scanned computers, from SecurityFocus (SecurityFocus, 2009).
- Cross-site scripting, ActiveX, and arbitrary code execution vulnerabilities in older versions of NESSUS, from SecurityFocus, US-CERT, and Secunia (Department of Homeland Security, 2009; Secunia, 2009; SecurityFocus, 2009).
- An infinite loop vulnerability in older versions of Wireshark that essentially blinds the tool, from SecurityFocus (SecurityFocus, 2009).
- Vulnerabilities in Wireshark that may cause the application to crash and/or execute arbitrary code, as well as buffer overflow vulnerabilities, from Secunia and US-CERT (Department of Homeland Security, 2009; Secunia, 2009).
Admittedly, this list of tools is small, and there are several reputable vulnerability databases that were not queried. BackTrack is a suite of tools, and the individual tools within it may contain vulnerabilities that were not uncovered here. Nevertheless, the widespread use of the tools reviewed and the good reputation of the databases in question provide an adequate sampling of data for the purposes of this work. While several vulnerabilities were reported for the tools, it is notable that there was only one vague and unconfirmed report of a tool actually being exploited. This research suggests there is no reason to suspect that open source security tools present a greater risk than any other application, open source or proprietary.
Even so, there are several steps security professionals can take to mitigate the risk that penetration tools, and applications in general, add to the network. It is vital that all software be subjected to some scrutiny, as anything released into a production system untested may have catastrophic consequences.
Be vigilant: Consult vulnerability and security databases like the ones above on a regular basis. Watch for potential problems with the tools you already use. Look for issues before testing a new tool.
Know the source: If at all possible, obtain tools only from reputable websites; SourceForge is a good example. While this will not prevent the tools from having vulnerabilities (nothing will), the professional will not have to worry about the site itself deliberately corrupting the tools. Whether the original author can be trusted, however, is a separate question.
Audit the code: The only way to be sure that an application does not contain malicious code is to examine it personally. This is easy if the item in question is a script of fifty or so lines, but the task becomes more daunting as the script grows, and requires nearly impossible effort as complexity increases further. This is where (hopefully) the many eyes of the open source community come in. If the code is reviewed, anything harmful should turn up and either be reported and removed, or the application abandoned entirely (Wheeler 2003, p. 12). Admittedly, it is impractical to properly audit every piece of code that enters a production system; an operating system alone would take thousands of man-hours. This is why the other mitigation strategies listed here become so vital.
Use a hashing utility: If two sources for the same code are available, hash both copies; any difference will change the hash. Utilities are even available that will highlight the differences in the code, making auditing easier (a minimal sketch follows this list).
Test in a sandbox environment: It is good practice with any software to deploy first in a test environment before releasing it into a production system. If there is a problem, it can be easily contained, and will not impact business.
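As promised above, the hashing step is small enough to show in full. A minimal sketch, assuming two independently downloaded copies of the same release sit on disk under placeholder names:

```python
# Sketch of the hashing step above: compare two independently obtained
# copies of the same release. Filenames are placeholders; in practice
# one side is often a checksum published by the project itself.
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file without loading it whole."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

a = sha256_of("tool-1.0.tar.gz")          # copy from mirror A
b = sha256_of("tool-1.0_mirror.tar.gz")   # copy from mirror B
print("identical" if a == b else "DIFFERENT - audit before use")
```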
Furthermore, it is recommended that any practitioner develop standard operating procedures that allow for a sensible combination of the above methods. Additionally, a rational policy for the introduction of software into any production environment should be created, and it should be enforced.
The team’s research shows that there is no reason to assume open source penetration tools require greater caution than any other software. Even so, a course of due diligence is strongly recommended when introducing any software to a system, and reasonable steps can and should be taken to mitigate the risk involved.
Problems and Issues
Part 1
No problems or issues were encountered in this phase of the exercise, with the exception of ambiguities in the classification of multi-role tools.
Part 2a
A few problems were encountered in the execution of the active scanning/observer test. It was discovered that the Knoppix-STD distribution (initially targeted as the ‘observer’ entity) had only a NESSUS client available, and installation of the NESSUS daemon package failed for unknown reasons. This was addressed by using a different host and operating system for the NESSUS installation. This actually proved to be a positive, as the Microsoft Windows NESSUS binary was considerably more polished in its graphical user interface. The same issue led directly to the parallel installation of the NMAP/Zenmap package for Windows, which was notably more usable than the console-based version. Additionally, a significant problem was encountered with the installation of ‘lanmap’ on FreeBSD: it simply would produce no output graph. This was remedied in an ad hoc way by employing the nUbuntu distribution in a virtual machine, which was known to have a working ‘lanmap’ install.
Part 2b
No major issues for this part of the lab were encountered, although it was at first disheartening to find no incidents directly involving penetration tools.
Conclusions
In summary, we have accomplished a number of significant things. We have defined ‘passive reconnaissance’ as having three dominant characteristics: uncertainty, invariant risk, and limitation of scope. We have classified a significant sampling of known security tools by this definition and arranged them according to both the extended OSI network layer construct and the McCumber security model. Additionally, we have determined that the speed at which a tool is executed in an attack can change its classification from ‘active’ to ‘passive.’ Furthermore, we have run a series of experiments using live hosts on a physical network, examined the concept of the ‘meta exploit,’ and found it to be of limited use on a switched network. Moreover, we have indicated that certain biases are found in both the number of operating system vulnerability tests and the targeted network layers of two common security tools, NESSUS and NMAP. We have proposed that the overt UNIX-like operating system bias is due to issues of software origin and deployment philosophy, and that the notable OSI ‘high layer’ target bias proceeds from the concepts of compounded complexity and application mutability. Also, we have demonstrated that the risk of compromised security tools is no greater than that found in any other application. Finally, we have proposed specific methods by which software integrity can be verified, namely hash checks, source code reviews, and sandbox testing: all of these, along with vigilance, form the basis of a solid institutional software deployment policy.
Charts, Tables, and Illustrations
Table 1: Passive Exploit Tools
Layer | Tool / Technique | McCumber Classification
Layer 0 | Vendor instruction manuals | Technology, Processing, Confidentiality
Layer 1 | Schematics (know the system) | Technology, Processing, Confidentiality
Layer 1 | Specifications (fault tolerance) | Technology, Processing, Confidentiality
Layer 1 | Case studies (similar systems) | Technology, Processing, Confidentiality
Layer 1 | Blueprints (as-builts) | Technology, Processing, Confidentiality
Layer 1 | Multi-spectrum data recorders (“black box” reverse engineering), TEMPEST | Technology, Processing, Confidentiality
Layer 1 | NetStumbler | Technology, Transmission, Confidentiality
Layer 1 | Aircrack | Technology, Transmission, Confidentiality
Layer 2 | Kismet | Technology, Transmission, Confidentiality
Layer 2 | Ettercap | Technology, Transmission, Integrity
Layer 3 | Fragrouter (debatable: used to defeat IDS systems, essentially making active tools passive) | Technology, Transmission, Availability
Layer 3 | Tcpdump | Technology, Transmission, Confidentiality
Layer 3 | Wireshark | Technology, Transmission, Confidentiality
Layer 3 | Lanmap | Technology, Transmission, Confidentiality
Layer 3 | DNS-ptr (layer classification unclear: a layer 7 protocol, but returns information usable at layer 3) | Technology, Transmission, Confidentiality
Layer 3 | DNS walk (see note above) | Technology, Transmission, Confidentiality
Layer 3 | DNS mapper (see note above) | Technology, Transmission, Confidentiality
Layer 3 | DNS predict (see note above) | Technology, Transmission, Confidentiality
Layer 3 | Dig (see note above) | Technology, Transmission, Confidentiality
Layer 3 | DNS enum (see note above) | Technology, Transmission, Confidentiality
Layer 3 | tcpshow (tcpdump interpreter) | Technology, Transmission, Confidentiality
Layer 3 | Netsed (multi-use, some active properties; used here primarily for inline packet examination and offline fuzzing) | Technology, Transmission, Integrity
Layer 4 | ISNprober | Technology, Storage, Confidentiality
Layer 4 | p0f (passive OS fingerprinter) | Technology, Transmission, Confidentiality
Layer 4 | C/C++ BSD socket API (promiscuous mode) | Technology, Storage, Confidentiality
Layer 4 | Perl script with sockets (promiscuous mode) | Technology, Storage, Confidentiality
Layer 4 | Python script with sockets (promiscuous mode) | Technology, Storage, Confidentiality
Layer 5 | SMTP verify (debatable, varies with application) | Technology, Storage, Confidentiality
Layer 6 | L0phtCrack | Technology, Storage, Confidentiality
Layer 6 | PSK-Crack | Technology, Transmission, Confidentiality
Layer 7 | Ophcrack | Technology, Storage, Confidentiality
Layer 7 | Pantera (web application tester; can be used in an offline testing role) | Technology, Storage, Integrity
Layer 7 | Paros | Technology, Storage, Integrity
Layer 7 | Scanhill (MS Messenger sniffer) | Technology, Transmission, Confidentiality
Layer 7 | Slurpie (distributed password cracker) | Technology, Storage, Confidentiality
Layer 7 | VNCcrack | Technology, Transmission, Confidentiality
Layer 7 | AIM Sniff | Technology, Transmission, Confidentiality
Layer 7 | Crack (UNIX based) | Technology, Storage, Confidentiality
Layer 7 | gwee (offline fuzzing) | Technology, Storage, Integrity
Layer 7 | THC-Hydra | Technology, Storage, Confidentiality
Layer 7 | KRIPP | Technology, Transmission, Confidentiality
Layer 7 | RainbowCrack | Technology, Storage, Confidentiality
Layer 7 | Pwdump | Technology, Storage, Confidentiality
Layer 7 | John the Ripper | Technology, Storage, Confidentiality
Layer 7 | Cain & Abel | Technology, Storage, Confidentiality
Layer 7 | Dsniff | Technology, Transmission, Confidentiality
Layer 7 | Brutus | Technology, Storage, Confidentiality
Layer 7 | Google Mail-enum | Technology, Storage, Confidentiality
Layer 7 | GHDB | Technology, Storage, Confidentiality
Layer 7 | Relay Scanner | Technology, Storage, Confidentiality
Layer 8 | Stealth | Policy-practice, Processing, Confidentiality
Layer 8 | Surveillance | Policy-practice, Processing, Confidentiality
References
Department of Homeland Security. (2009). “US-CERT Vulnerability Notes Database.” Retrieved June 25, 2009, from http://www.kb.cert.org/vuls
Diffie, W. (2003). “Risky Business: Keeping Security a Secret.” Retrieved June 27, 2009, from http://news.zdnet.com/2100-9595_22-127072.html
Godefroid, P. (2007, November 6). Random Testing for Security: Blackbox vs. Whitebox Fuzzing. p. 1.
Jung, J., Sheth, A., Greenstein, B., & Wetherall, D. (2008, October 31). Privacy Oracle: a System for Finding Application Leaks with Black Box Differential Testing. pp. 279-288.
Parks, R. C. and D. P. Duggan (2001). Principles of Cyber-warfare. IEEE Workshop on Information Assurance and Security. United States Military Academy, West Point, NY, IEEE: 4
Secunia. (2009). “Secunia Vulnerability and Advisory Database.” Retrieved June 25, 2009, from http://secunia.com/advisories/search/?
SecurityFocus. (2009). “SecurityFocus Database.” Retrieved June 25, 2009, from http://search.securityfocus.com/swsearch?
Skoudis, E., & Liston, T. (2006). Counter Hack Reloaded – Second Edition. Upper Saddle River: Prentice Hall.
Wheeler, D. A. (2003). Secure Programming for Linux and Unix HOWTO.
Whitlock, N. (2001). “The Security Implications of Open Source Software.” Retrieved 6/27/2009, 2009, from http://www.ibm.com/developerworks/linux/library/l-oss.html?open&I=252,t=gr,p=SeclmpOS.
This group did not follow the requirements for post submission. The tags were not included. In the abstract, the group mentions creating a definition of active reconnaissance. This should have done in the second lab report. The group should be looking at creating a definition of passive reconnaissance. APA 5th edition citations were not used in the literature review. You need to have author name, year, and page number. In the abstract, the group talks about ‘meta exploit’ I think they meant meta sploit. The literature review was not cohesive. The literature review read like a list. First the group discusses one article, goes through the list of questions that were required to answer. Then the group moves on to the next article and goes through the same list of questions. In the second article by Godefroid, the group used many lines from the article, and I see no citations for the direct lines from it. Unlike the first article the group talked to, the review of Godefroid’s article was not reviewed very thoroughly. None of the questions were answered that are required. The group should have gone into more detail about Godefroid’s article. I would like to have seen the group research into the conference that Godefroid presented this proposal at. This would be part of the literature review into the supporting data of the article.
It was interesting to see that this group decided to not use the Citrix environment and used their own ‘real’ equipment. I think that the group should have at least attempted to use the Citrix environment for this lab. From the other group’s reviews, they were all able to perform this part of the lab using the environment. I would liked to have seen more detail in the methodology section so that this lab report could be handed over to other people and have them be able to recreate the same process and perform the lab experiment. For some reason the group decided to have each part of the lab experiment to have a separate results section. For more cohesiveness, the group should put all of the issues or problems into one paragraph. I do not feel there is a need to have any issues separated by parts. Citations found in the results and discussion, like the literature review did not follow the APA 5th edition format. I found that conclusions were discussed in the results section of the lab report. As far as the table is concerned, I do not see how vendor instructions can be considered to be kinetic. I hope the group can further clarify this for me in a comment to this. I would also like to see how the group defined stealth as a layer 8 attack tool. I think stealth can be passive or active. I can sneak around and punch someone, as long as I was stealthy. One more thing I would like to have seen the screenshots from their procedures.
The introduction section for this lab report was excellently written and set out the principles that would be considered in the lab write up in terms of scope and definition. I was hoping this would lead into a literature review of passive reconnaissance terminology and literature. The literature review only addresses each of the articles given out with the lab exercises. While they are each evaluated extensively with analysis of the methods and application to the lab exercises of the course, they aren’t contrasted against each other. Missing from the literature review is any other discussion on other methods of passive reconnaissance.
The methodology for part one only discusses breaking down the list created in lab one and categorizing the tools that were found in that list. It would’ve been nice to see some other sources evaluated. The writeup for 2a was detailed but didn’t talk enough about how the traffic between the two segments was going to be configured so that the “observer” could see the entire “conversation” between the two hosts. The method of searching security lists and bug tracker databases was innovative, web searches didn’t turn up much and I hadn’t thought about those sources.
I disagree with the classification of password crackers as reconnaissance in the results. Grabbing the password off of the wire would be passive in nature and classify as reconnaissance but cracking an encrypted password grabbed off the wire falls out of the realm of reconnaissance into simply cracking. This section also mentions the use of digital forensic tools and that since they can be used in such a way they leave “little trace of the event” they can be considered passive. I believe the observer effect comes in to play when dealing with manipulating files on a machine you have physical access to. Simply loading the tool fundamentally alters the state of the machine such that the activity can be considered active.
The findings of part 2b opens like a literature
review. Would’ve been good to see it further up and contrasted against the other types of passive reconnaissance. The mention of the “many eyes” principle when talking about source code and cryptography shows a good depth of research that should be discussed in the literature review. The findings cover a wide range of issues regarding the issue of possibly malicious exploit tools, one that is missing is monitoring the output of the tool. Sandboxing is mentioned but with no specific details as to what would be monitored, if a tool is maliciously sending data to its author, this would be a channel that could be monitored and scrutinized.
In the findings for 2b, source code auditing is mentioned along with the many eyes principle. In the table created for part 1 of the lab, blueprints are mentioned as a layer one exploit tool. Would source code fit in this category as well? Certainly if you can review the code for errors or security flaws from a defensive perspective, it could also be done from an offensive perspective.
Team three presented an abstract that was complete and within the bounds of the syllabus, it did not meet the length requirements, but otherwise explained what was going to be performed in the lab, and did not read as a list of objectives. I agree with the introduction, and consider it to be the best introduction in the group of lab three assignments. Team three explains the definition of passive recon in terms of active recon and how, and how they differ. This makes for an approach that while requiring the reader to understand the previous lab does aid in understanding. I find it simpler to understand active recon over passive recon and this explanation can be rather beneficial. Team three then goes on to present their review of the literature of the week. Since there were only two articles to be read for this week, creating a cohesive literature review should be a simple task, however team three’s literature review is nothing more than a list of readings with APA five citations. The literature review does cover both articles, and relates them to the lab as well as answer the questions in the syllabus, however it cannot be ignored that there is no cohesion between the articles themselves leaving their analysis lacking. After the literature review team three follows the lab format provided in the syllabus and lists their methods. Their methods do show the strategy and technique that team three followed to complete their lab. However the methods themselves are broken into sections that correspond to the sections of lab design document. There is no unification in team three’s methods causing them to appear to lack academically. Team three presents their findings in the same manor as their methods, split by lab section. In part one they explain their reasoning for selecting the tools they selected as passive based on the definition presented in the introduction. While I agree with their definition, and see how they follow that definition to their results, but must disagree with some of those part one findings. They list password-cracking tools that perform offline cracking after gathering file data from a target system through either packet captures or booting a target system from a Linux live boot disk and gathering the file data in that manor. I question that approach as being passive, physical access to a system and performing a boot from live boot media would cause someone other than the attacker to notice thereby increasing risk and becoming an active recon tool. I agree with team three’s findings in sections two-A and two-B. However, as stated above, the findings, as well as issues sections are not unified. This is outside the format of the syllabus, and does not present a lab that is academic in nature. I agree with the conclusions drawn by team three, especially with the UNIX systems bias in NMAP and Nessus. This is in direct disagreement with team one, and leaves me questing that team’s findings. In team three’s passive recon table I see netstumbler and aircrack listed in layer one. I question this placement of the tools. 802.11 while explaining the use of radio as a transmission medium, is primarily a layer 2 technology. Finally, in layer 8, they list stealth as a passive recon tool. While I understand the placement, stealth seems to me to be a tool that aids passive recon not a tool that performs passive recon.
This group wrote a good abstract however did they mean meta sploit instead of meta exploit? I also liked their introduction. This group did not seem to follow the requirements for post submission. I noticed in their literature review that they only used page numbers in their citations. They need to make sure they include the author, year, and page number.APA 5th edition citations were not used in the literature review. The literature reviews were not very thorough and none of the required questions were answered. I did like that they separated parts 2A and 2B; however it was confusing how they listed their results after each part instead of into one section. They listed tools that have been exploited but I did not see any specific case studies.
I find it hard to believe that they couldn’t find any incidents (cases) relating to penetration tools being exploited.
This group’s abstract very briefly explains the purpose of this lab. Then the abstract goes into explaining the different parts of the lab. Parts of the abstract look like they were taken from the last lab and not changed. This abstract talks about the lab being about active reconnaissance when it is actually about passive reconnaissance. Next the group gives an introduction to the lab. In the introduction the group starts off nicely by tying this lab into the last. They introduce three properties of passive reconnaissance, just as they did with the last lab on active reconnaissance. The three properties that the group gives to passive reconnaissance are: uncertainty, invariant risk, and limit of scope. They then go into explaining each of the properties in detail comparing them to the properties of active tools given in the last lab. These properties were a nice addition to the lab paper. They add more to the explanation of both the active and passive tools used in these labs. The group then ties their explanation of passive reconnaissance tools into the lab and describes each step briefly. Next the group goes into their literature review of the articles given in the lab. The group starts off giving a good summery of Privacy Oracle: a System for Finding Application Leaks with Black Box Differential Testing (Jang, et al, 2008). They then give a good explanation of the methodology of the paper by explaining how they did tests using the Privacy Oracle program as the blackbox, Wireshark to capture the output, and AutoIT to generate inputs to Privacy Oracle. The group then does a decent job of tying the article into the lab. The group then talks about the paper Random Testing for Security: Blackbox vs. Whitebox Fuzzing (Godefoird, 2007). They do a nice job on explaining what the paper was about and how it will tie into this lab. The problem with this groups literature reviews is that they are missing a few things. In each review the group does not talk about the theme or topic of the article, the research question, research data, or any errors or omissions. As for Godefoird’s paper, it is missing a lot of things and the group did not touch on these. Next the group starts in on methods of the lab. They start by explaining that they created a table of passive tools, from the table created in the first lab, using the definition of a passive tool defined in the introduction of this paper. Then the group explains how they tested some of the tools and finally how slowing tools down can reclassify them as passive. The group then explains part 2a of the lab. In this explanation the group does a very nice job of explaining how the whole scenario was set up. The group used their own network setup instead of the provided Citrix network due to its cumbersome nature. The group then explained how the test was set up on three machines. The group explains how the test was carried out but does not give details on how each program was set up for the test. The explanation of how the test was put together was very good but lacked in details like what commands were used in Nessus and Nmap and how Wireshark’s filters were set up. The group then explained how research was done to locate information on compromised network security tools. They explain that the information was not found easily until they looked into security issues and bug tracking sites. They gathered a good sampling of tools for examination. 
They also did research on the Open Source movement and comparisons were made against the practices of proprietary software vendors. Last they did research on code review techniques and safe methods of software deployment. Next the group discussed their results of each part of the lab. They started with the generation of the passive tool table. They explain that they found that sniffers did not add to the network traffic and that scanners did add to the traffic and could be detected. The group did a nice job in giving examples in each of these. Then the group does a great job in explaining the differences in types of ways that information can be obtained on a network and classifies them either active reconnaissance or passive reconnaissance. They use the example of gaining passwords using the different types of information gathering. They also talk about other tools that gather information and classify them also. I think that the group did a nice job of further explaining the differences between active and passive tools, but did not need to go into as much details on the active tools. Then in the last section of part one the group talks about how slowing down an active tool could turn an active tool into a passive tool. The group does a nice job in explaining how this slowing of an active tool will move it from risk to invariant risk. I would still say that the tool is still classified as an active tool though, because each packet that is sent out to the target is still seen by the target and the series of packets could (even if it is most unlikely) even be seen as an aggressive scan of the network if looked into further. The target still can detect the attack even though it might not determine it to be a risk. On the other hand you do have a point that the active tool would be viewed as a passive tool in that situation, but I would still classify it as an active tool and not a passive one. In the description of the results of part 2a the group talks about the results of the Nessus and Nmap scan of the target machine. They noticed that Nessus provided more information than Nmap even though Nmap gave more precise information. The group even made a discovery while doing the scans. They found that VMware had left some vulnerabilities on a Windows Vista machine after being uninstalled. The group then talked about how their set up of the third party observation of the scans did not provide the information they were expecting. This lack of information was due to their setup of their network on a switched network. In our set up we were able to capture any scan of any machine on the subnet and observe all the traffic sent from one to the other. They had similar results when running lanmap against their network. They then talk about how the passive analysis could have been done by introducing it onto an older hub base network or a wireless network. They go into an example of how a wireless network could be analyzed in a passive manner. The group then commented on how the exploits used in Nessus were classified into operating systems and services. This allows for a user to eliminate a large number of exploits just based on these criteria. The group then talks about how the Nessus and Nmap both have a bias toward UNIX based systems. They claim this is due to both these tools being UNIX based programs and that because of the UNIX-like distributions. Last in this part of the lab the group states that a vast majority of the tools in the Nessus tool reside in the top most layers of the OSI model. 
This, according to the group, is due to the issue of compounded complexity and that higher level services are subject to a greater variety of implementations. In this part of the lab the group did a great job in covering all the sections of this part of the lab. I didn’t find a lot that this group did wrong or left out. In the last part of the lab the group starts off with a nice introduction to the findings of this part of the lab. I really like the opening sentence. The group then claims that most exploit tools are open source. They say that so the discussion of security of open source tools comes down to a discussion of open source software in general. The group then goes on to show that the fight between the security of open source and proprietary software is split right down the middle. The group then comes up with a few cases of vulnerabilities in some tools but only claims that there was only one vague case of an exploited tool that they found, but do not tell what it was or explain it in any way. They conclude this section by saying that there is no reason to suspect that open source security tools present a greater risk than any other application. I think that this section was not looked into as thoroughly as it should have been. In our lab we were able to come up with a few exploited security tools. The group then does a nice job in giving some ways to ensure that the programs that are being used are not infected. Last in this section they give warning to give extra caution to any open source penetration tools compared to other software. Next the group covered some issues they had with installation of some of the software needed for the assignments. Last for the conclusion, the group wraps up what they learned in each section. It would have been nice to see some type of overall conclusion to this lab showing what they learned from the whole lab in general.
Team three gave an overview of the lab within the abstract. However, I was somewhat confused when they stated “First, we develop a definition of ‘active reconnaissance’ within the scope of network penetration attacks”, since this lab dealt with passive reconnaissance tools. The introduction cleared up this confusion by stating “If it is possible to classify an ‘active’ reconnaissance tool, then to, it follows that a ‘passive’ reconnaissance tool must be subject to some similar definition.” Group three went on to say that passive tools have three characteristics: uncertainty, invariant risk, and limit of scope.
Team three’s methodology differed somewhat from that of the other teams. Team three installed the tools to be used in this lab on a real wired network, with actual physical host machines present. This method was chosen because, as team three stated, “This particular setup was chosen for ease of use (as the VMware Workstation setup over Citrix can become cumbersome to use) and ‘realness’ of application: we were not certain how precisely VMware’s virtual network hardware duplicates the characteristics of switched Ethernet LANs.”
Team three differed from the other groups in that they classified offline password crackers as a passive tool. The rationale behind this classification was that “‘Offline’ password crackers require certain files to be extracted from the target machine or sniffed off of the network before being analyzed. Because of their lack of network activity, they are passive by nature.” Team three went on to include live discs as a passive reconnaissance tool, stating, “If physical access to the target machine is available, it’s possible to breech a Windows system using a Linux distribution on a bootable medium. From there, disk imaging, file carving, or simply copying select files can be accomplished with leaving little trace of the event.” Team three also stated that “an ‘active’ tool may approach ‘passive’ classification when its presence can no longer be identified on the network by a scanning signature alone: essentially a shift from a ‘risk’ to an ‘invariant risk’ characteristic classification.”
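To illustrate why team three considered ‘offline’ crackers passive, a small dictionary-attack sketch (the hash value and wordlist path below are hypothetical placeholders; standard library only) shows that the cracking step itself generates no network traffic at all; whatever risk exists was incurred earlier, when the hash was captured.

# A minimal sketch of why 'offline' cracking is passive: the hash value and
# wordlist path are hypothetical placeholders, and no sockets are ever
# opened; all of the work (and therefore none of the network risk) happens
# locally, after the hash has already been obtained.
import hashlib

captured_hash = "5f4dcc3b5aa765d61d8327deb882cf99"   # md5("password"), for illustration
wordlist_path = "wordlist.txt"                       # placeholder wordlist file

def crack(target_hash, path):
    """Return the first wordlist entry whose MD5 matches, or None."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            candidate = line.strip()
            if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

match = crack(captured_hash, wordlist_path)
print("recovered:", match if match else "not found in wordlist")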
Team three came to the same conclusion as team four about the biases of the Nessus and Nmap tools. Most of the other teams thought the tools were biased against Windows, but teams three and four realized that the vast majority of plug-ins were for UNIX-based systems. Team three went on to say, “For instance, AIX, IBM’s commercial UNIX product, has a list of nearly five thousand vulnerabilities which are checked. Microsoft Windows, on the other hand, is evaluated against a list of about fourteen hundred.” Team three’s rationale for this phenomenon was “Both NMAP and NESSUS arose from within the UNIX-based community; therefore it is unsurprising that they remain largely UNIX-based tools with a ‘server’ rather than a ‘workstation’ emphasis.”
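A reader wishing to verify the plug-in ratios both teams report could tally plugin families directly. The rough sketch below rests on two assumptions: that a local directory of Nessus .nasl plugin scripts is available at the placeholder path shown, and that each script declares its family with a script_family("...") call.

# Rough sketch under stated assumptions: count Nessus plugin families from a
# local directory of .nasl scripts, each of which declares script_family("...").
# The plugin directory path is a placeholder.
import re
from collections import Counter
from pathlib import Path

plugin_dir = Path("/opt/nessus/lib/nessus/plugins")        # placeholder path
family_re = re.compile(r'script_family\s*\([^"]*"([^"]+)"')

counts = Counter()
for nasl in plugin_dir.glob("*.nasl"):
    match = family_re.search(nasl.read_text(errors="ignore"))
    if match:
        counts[match.group(1)] += 1

for family, total in counts.most_common(15):
    print(f"{total:6d}  {family}")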
Team three found vulnerabilities in the Wireshark, Nessus, and Nmap tools and came to the conclusion that open source tools were no more insecure than proprietary tools.
Team three did not address the risks that running untested penetration testing tools could pose to an enterprise network.
Upon reading this group’s lab I was pleased to see the detail taken within it. Let us start with the abstract: it was to the point and explained the overall concept of the lab and what was going to be done. Moving on to the introduction, after reading the section I wanted to know whether there is an in-between for passive and aggressive tools, or whether the distinction is black and white. Is there a point where a tool can switch back and forth? Could an argument be made that there are passive tools that can become active? Their definition states that a ‘passive’ action creates no ‘new’ risk above what already exists. Is there a better explanation for these two types of attacks? Many sites describe passive attacks as those that do no damage and only gather information (http://www.itglossary.net/passiveatt.html), while an aggressive attack is the opposite and is meant to cause harm.

Next, the literature review: the students did a good job of reviewing the literature, but there were still items that could have been added, such as comparing arguments between the papers and questioning the authors’ decisions. Part of a literature review includes contrasting arguments and asking questions to get the most out of the papers or articles. The methodology section was straightforward and described what they did in this portion of the lab.

In the results/findings section, one thing I wanted to know was the students’ thoughts on using these tools outside the lab environment, in a corporate setting. Which tools do they think would be useful and non-hostile to that environment, and which might be hostile and should not be used? One tool that probably would not be good, as described in the lab, would be Nessus; but why should it not be used, aside from the network traffic it generates? Could a user other than the red team run a packet analyzer and gain the same information that is being tested for? I know vulnerabilities were found in both the Windows environment and the UNIX environment, but what were the different vulnerabilities? Were the ones on Windows more serious than the UNIX vulnerabilities?

The issues section described their problems well, and the conclusion summarized what was done in the lab, their views on it, and how attacks should be handled. Overall they did a good job; adding more of the authors’ own input on what they found would help readers understand their point of view, and would lead to more discussion of both the topic and their views, making it a more interesting read.
I think that group 3’s write-up for lab 3 was very good in most sections and fairly poor in others. The abstract and introduction for this lab were very good. The literature review was somewhat poor: group 3 did not answer all of the required questions. They did not explain the research methodology, whether there were any errors or omissions in the readings, or whether or not they agreed with the readings. The group did, however, explain how the readings relate to the laboratory. Citations for the literature review were all present but not proper throughout the lab: page numbers were given, but the author and year of the reference should be included in addition to the page number.

For part 2A, the lab environment was not set up as per the syllabus. While I don’t believe that the setup used (Windows Vista/FreeBSD) really changed the results, the Citrix environment should be used. If the group does not wish to use Citrix, then the correct virtual machines should be used and IP’d properly (statically and not via DHCP) to prevent confusion. The group also indicates that they are unsure of the actual validity of VMware’s switching capabilities as opposed to physical hardware. This brings up two questions: did the group actually research this, and why couldn’t the group have used the physical adapter on the host machine (bridged mode)? Is that any different from using the adapter from the host machine?

In the results and discussions section for part one, the analysis was done very well; all of the questions for this section were answered, and answered well. In the results and discussions section for part 2A, the analysis starts out well. Shortly after, there is an “interesting side note” that doesn’t relate to the laboratory and makes me wonder why they felt the need to include it. The group goes on to explain that the results of the test were disappointing, when it seems that the actual test is the disappointing part. The group was not able to capture the attack packets and did not include any analysis of why this was the case. The group hints at the possibility of different results on different hardware, but does not give this theory any depth. Does this depend on the switch being used? Does this mean that a NIC in promiscuous mode does NOT capture all packets on the LAN? What about mixing this with OTHER security tools (I believe this is the point of the lab)? Since the beginning of this course we have been researching security tools; doesn’t a tool like Ettercap strike you as an important tool to try? If the packets are not being broadcast across the network, then what if the observer were in between the traffic (this is why it’s called a Man-In-The-Middle attack)? What traffic can you see when you ARP poison one way? How about both ways?

When discussing the biases in Nessus and Nmap, the group answered the question well, and their analysis of how these vulnerabilities fit into the grid was accurate as well. For the results of part 2B, the findings are discussed well and accurately answer all of the required questions. The issues and problems section was done well, with the exception of a part in 2A where the group indicates that the GUI version of Nmap is better, with no evidence of why. Is Nmap easier to script as a GUI application, or as a console application driven by a shell scripting language? The conclusion to this laboratory was also well done because it accurately sums up their procedures and findings.
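To make the one-way versus two-way ARP-poisoning question above concrete, a minimal Scapy sketch follows; all IP and MAC addresses are hypothetical placeholders, and it is intended only for a lab network you own. Poisoning only the victim redirects its outbound traffic through the observer, while poisoning the gateway as well captures both directions of the conversation.

# A minimal, lab-only sketch (all addresses are hypothetical placeholders;
# IP forwarding must be enabled on the observer for traffic to keep flowing).
import time

from scapy.all import ARP, send

VICTIM_IP, VICTIM_MAC = "192.0.2.20", "aa:bb:cc:dd:ee:01"     # placeholders
GATEWAY_IP, GATEWAY_MAC = "192.0.2.1", "aa:bb:cc:dd:ee:02"    # placeholders

def poison(both_ways=False):
    while True:
        # Tell the victim that the gateway's IP maps to *our* MAC address
        # (Scapy fills in hwsrc with the local MAC): one-way poisoning, so we
        # see only the victim's outbound traffic.
        send(ARP(op=2, psrc=GATEWAY_IP, pdst=VICTIM_IP, hwdst=VICTIM_MAC), verbose=0)
        if both_ways:
            # Also tell the gateway that the victim's IP maps to our MAC:
            # now both halves of the conversation pass through the observer.
            send(ARP(op=2, psrc=VICTIM_IP, pdst=GATEWAY_IP, hwdst=GATEWAY_MAC), verbose=0)
        time.sleep(10)   # re-send before the poisoned ARP cache entries expire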
The team started with a strong abstract indicating the key points of their laboratory, and covered the different tools the team was going to use for scanning packets. Their literature review was very in-depth.
Under the Methodology and Procedure for Part 2a the team reports using Windows XP 32-bit Service Pack 3, Microsoft Windows Vista 32-bit Service Pack 2, and FreeBSD 7.1 Release i386. Lab 1 required the build of two Windows XP machines, one Windows Server machine, and one Linux machine. I am simply wondering where the Windows Vista machine appeared and why: was there an advantage to using a Windows Vista machine rather than a Windows XP machine for this lab? The team then explains that they did not use the Citrix virtual machines because that setup was becoming cumbersome; however, this point would be better suited to the problems and issues section of the lab. Like other teams, they combined Nessus and Nmap onto one system. The team's findings were very in-depth, and they broke their findings up into the different sections.
The problems and issues section was broken up very nicely. The team's main problems were in part 2a. The team reports that they attempted to install the Nessus daemon on KnoppixSTD but that it failed. What was the error that was given? Did it simply return a basic failure message, such as the “this should never have happened” error that appears when booting BackTrack into RAM on a machine with less than two gigabytes of RAM?
Generally: regarding the comments about the ‘active’ definition in the abstract, this was an editing error and should have read ‘passive.’ Apologies for any confusion this caused; it will be watched for in the future. To those who questioned the use of ‘real’ hosts and networks: I see no instruction which forbids this, and would argue that a ‘real world’ test ALWAYS trumps a simulation in credibility of experimental results, so why not use the better opportunity when it is available?
@mvanbode, mafaulkn: The term ‘meta exploit’ is taken directly from the lab 3 instructions, it is in no way meant to refer to the ‘Metasploit’ framework.
@mvanbode: The ‘Vendor Instructions’ (for SCADA devices: perhaps this should have been stated more clearly) are included in layer zero for the same reason ‘Specifications’ are included in layer one as ‘passive’ reconnaissance means. Installation manuals for devices such as these contain a wealth of information about device operation and setup, for example: http://www.hitachi-ds.com/en/download/plcmanuals/ . I believe much of this disagreement may be based around the ‘layer zero’ definition, which is really a philosophical dispute.
@jverburg, nbakker, in regards to ‘cracking’ and forensics: As I cannot directly perceive the information in the binary signal going down ‘the wire,’ I rely on tools such as Wireshark to ‘decrypt’ this information for me. I think the same relationship holds for the cracking of sniffed passwords, and therefore ‘crackers’ fill the same role Wireshark serves in reconnaissance. The forensic and boot disk tools, I admit, are controversial. If used on the target’s premises, I, too, would agree that they are ‘active.’ However, the only time I have ever used these tools has been in ‘cracking’ used workstations which I had acquired through legitimate means: workstations which were never wiped of the prior user’s data. I think this is an important ‘passive’ or ‘offline’ use of these tools in reconnaissance.
@jverburg: Is source code similar to blueprints? Sure, in some ways; but I think UML documents are closer to blueprints than source code is. Source code is the ‘boards and nails’ of software construction. If ‘source code’ is included as a passive tool, in which OSI layer does it belong? To further complicate the matter, the mixing of hardware function and software ‘code’ often becomes so blurred that no practical difference exists: is microprocessor RTL a ‘source code’ or a hardware specification? Is human DNA a ‘program definition’ or a type of ‘biological hardware?’ An interesting question; I don’t know that I have an exact answer, however.
@shumpfer: I believe a question was raised about our definition of ‘passive,’ correct? Foremost, I must emphasize that we were defining ‘passive reconnaissance’ and not simply ‘passive’; this is key, in that ALL THREE criteria must be met in order for a tool to qualify in this category. Additionally, what exactly IS a ‘passive attack?’ It seems a strange union of antonymic terms. Finally, I found the definition referenced in the weblink to be so vague as to be unusable. What is the ‘system’ referred to in this definition? If it is the entire network and all hosts operating on it, consider that any host is changed simply by the act of running software. Would this then make Wireshark an ‘active attack,’ as it is being run on a host in the ‘system?’ If I were to be pedantic: by quantum theory, just the act of observing ‘changes the system,’ and therefore ‘passive reconnaissance’ would be impossible under this definition.
@prennick: I would judge ‘man in the middle’ or ARP poisoning attacks NOT to be passive reconnaissance methods, but active ones. Sure, we probably could have done a number of ‘active’ things to circumvent the switched-network limitation, but that was not the point of this exercise in ‘passive reconnaissance’ of a running ‘active’ tool. If you use an active tool to monitor a running active tool, what have you gained? A single active tool without any ‘observer’ would be just as effective, with half the risk. Additionally, do you really think it necessary to explain the hardware difference between switches and hubs, or what broadcast and multicast packets are? Many of the ‘captive audience’ are IT professionals: I thought it might be pointless, and even insulting, to rehash such basic concepts. I felt we related only the necessary details, with the rest being ‘well understood’ by the target audience.
@vanbode – Yep, we didn’t include the tags. Our bad. Did it cause you undue difficulty in finding the submission? There were never any instructions that said Citrix HAD to be used, but I concede that results will vary with a different setup.
@jverberg – Yes, analyzing output from the tools would be another method of audit. I had thought of it as part of the sandbox process, but it is specific enough to be mentioned on its own. I disagree with my teammate about a source code audit belonging at layer one. I think that’s a good extension, though a case might be made for it at both layer one and layer seven.
@nbakker – What is your obsession with abstract length? I can understand if you said that the lab is hard to read the way it is divided, but how is it not academic?
@mafaulkn – you didn’t appear to find any cases of exploited tools either.
@jeikenbe – Why do you think the Godefroid article is “missing” so many pieces? What exploited tools did you find? You discussed some vulnerabilities, but didn’t report any actual cases of exploitation; there is a difference between potential and actual. Perhaps you should go back and reread section 2B, as you appear to have misunderstood it.
@tnovosel – from our lab: “anything released into a production system untested may have catastrophic consequences.”
@chaveza – please have someone review your writing before you submit it.
I think it is interesting that the group in general jumped on the switched network and either suggested the attack could go forward with ARP poisoning, or could not go forward because the segment was not a broadcast domain, in the latter case dismissing the meta exploit (the exploit of an exploit) without considering several interesting features of real-world information technology infrastructures. In the first case, wireless is a broadcast domain, and unless traffic is fully encrypted it can be sniffed through wireless means. Further, since wireless access points are “exposed,” gaining access to the wireless backbone could provide the equivalent of a broadcast domain.

In general, though, there are some realities that must be accepted. In the access, distribution, and core hierarchy of information technology infrastructures, the higher you can position yourself, the more likely you are to see the audit device. Auditors are, primarily, lazy: instead of moving around on a switch and creating VLANs, they will often find a central point to plug into and run all scans from that single location. This choice of procedure makes them vulnerable to the passive scanner. In the end, even a passive attack will often carry some element of risk of detection. In some ways, passive capability trades on physical access, while active capability decreases the need for that physical access.
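The point about broadcast domains and centrally placed audit hosts can be illustrated with a purely receptive observer. The sketch below assumes Scapy and a placeholder interface name, and presumes a hub, SPAN port, or wireless interface in monitor mode; it never transmits a frame, yet it would flag the bursty SYN traffic of a lazily positioned scanner.

# A minimal sketch of purely passive observation on a broadcast domain (hub,
# SPAN port, or a wireless interface in monitor mode). The interface name is
# a placeholder assumption. Nothing is ever transmitted; the script only
# counts bare SYNs per source address to flag scanner-like behaviour, such
# as an auditor's centrally placed scanning host.
from collections import Counter

from scapy.all import IP, TCP, sniff

syn_counts = Counter()

def watch(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags == "S":
        src = pkt[IP].src
        syn_counts[src] += 1
        if syn_counts[src] in (50, 500, 5000):            # crude thresholds
            print(f"possible scanner: {src} ({syn_counts[src]} SYNs observed)")

sniff(iface="eth0", store=False, prn=watch)               # receive-only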