Abstract
The act of performing network reconnaissance can mean the difference between a successful penetration test and a failed one. Without taking the time to gather as much information as possible on a target network or system, the attacker is going in blind and might be caught in the process. Given that, the question is not whether to perform network recon, but whether to perform active or passive network recon. While laboratory two dealt with the topic of active network recon, laboratory three focuses on passive network recon, meaning gathering information on a target network with limited possibility of being discovered.
In lab three we will be looking at passive network recon. We will be discovering the tools and techniques that allow us to gather information without interaction with, or destruction of, the target network. This will be accomplished first through a current literature review on the topic of black box fuzzing; second, by creating a chart of passive network recon tools and aligning them to the McCumber cube; third, by running a simulated attack with penetration testing tools; and finally, through a collection of case studies examining penetration testing tools that were used against the tester, all while answering a series of specific questions on the topic.
Literature Review
The focus of the readings for week three was on fuzz testing, both black box and white box fuzz testing, as well as information leak detection through an extension of fuzz testing. According to Godefroid, fuzz testing involves taking well-formed input data and randomly mutating that data to create many different inputs that may affect the test application in various ways. This process can be effective at detecting security vulnerabilities (Godefroid, 2007). This type of fuzz testing, known more specifically as black box fuzz testing, seems to be the approach taken by a program called Privacy Oracle. The creators of Privacy Oracle, Jung, Sheth, Greenstein, Wetherall, Maganis, & Kohno, claim that it is a system that takes standard applications that pass network traffic and detects leaks of personally identifying information through network flow captures (Jung, Sheth, Greenstein, Wetherall, Maganis, & Kohno, 2008). The programs presented for testing in Privacy Oracle are the top 26 downloaded applications from download.com. These programs consist of instant messengers, media players, and various other utilities such as anti-virus software. In order to test whether these selected applications leak personally identifiable information, the black box style fuzz testing takes a well-formed set of names and other information, such as zip codes and genders, and creates random mutations of those values that could be considered anagrams. It then submits that information to the application one iteration at a time, captures the transmitted packets, and converts them into source-destination network flows using the NetDialign algorithm for analysis of possible leaked information (Jung, Sheth, Greenstein, Wetherall, Maganis, & Kohno, 2008). In order to maintain effective and controlled test conditions, the creators of Privacy Oracle ran the tests inside a VM that could be restored to the exact same pre-installation state for each of the 26 test applications. Running the tests in this manner ensured that the only variable was the application being tested, not fluke operating system conditions changing from iteration to iteration. Also, in order to perform the most complete tests, AutoIT was used to write simple BASIC-like code that controlled the installation of each of the 26 applications (Jung, Sheth, Greenstein, Wetherall, Maganis, & Kohno, 2008).
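To make the black box approach concrete, the following is a minimal sketch, in Python, of the kind of mutation-based fuzzing Godefroid describes: a well-formed seed input is randomly mutated and fed to a target, and any failures are recorded. The toy_parser target and the seed value are invented purely for illustration; this is not Privacy Oracle's actual harness.

```python
import random

def mutate(seed: bytes, n_flips: int = 4) -> bytes:
    """Randomly replace a few bytes of a well-formed seed input."""
    data = bytearray(seed)
    for _ in range(n_flips):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000):
    """Feed random mutations of the seed to the target and record failures."""
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            target(case)
        except Exception as exc:  # a real harness would monitor the process instead
            crashes.append((case, exc))
    return crashes

# Hypothetical target: a toy parser that chokes on unexpected bytes.
def toy_parser(data: bytes):
    if data[:4] != b"NAME":
        raise ValueError("bad header")
    return data[4:].decode("ascii")

if __name__ == "__main__":
    results = fuzz(toy_parser, b"NAME John Q. Public 47906")
    print(f"{len(results)} mutated inputs caused the parser to fail")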
Godefroid has also created an application known as Scalable, Automated, Guided Execution, or SAGE (Godefroid, 2007). SAGE makes use of white box fuzz testing to increase the number of possible automated application inputs past the known constraints of black box fuzz testing. SAGE accomplishes this by gathering test application constraints directly from the application and solving those constraints using a constraint solver, the goal being to generate a new, larger list of inputs that more effectively tests all of a subject application's execution paths (Godefroid, 2007).
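The toy sketch below illustrates the idea behind the white box approach, though not SAGE's actual implementation: rather than mutating inputs blindly, the branch conditions of the program under test are collected and "solved" to produce one input per execution path. A real system uses symbolic execution and a constraint solver; this example simply brute-forces a tiny integer domain.

```python
# Hypothetical program under test with two branch constraints.
def program(x: int) -> str:
    if x > 100:
        if x % 7 == 3:
            return "deep path"
        return "middle path"
    return "shallow path"

# Path constraints gathered (by hand here) from the branches above.
constraints = [
    ("shallow path", lambda x: not (x > 100)),
    ("middle path",  lambda x: x > 100 and x % 7 != 3),
    ("deep path",    lambda x: x > 100 and x % 7 == 3),
]

# "Solve" each path constraint by searching a small input domain,
# yielding one witness input per execution path.
for name, predicate in constraints:
    witness = next(x for x in range(0, 200) if predicate(x))
    assert program(witness) == name
    print(f"{name}: x = {witness}")
```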
SAGE and Privacy Oracle could be considered complementary to each other. The current limitation of Privacy Oracle seems to be its use of black box style fuzz testing based on pre-defined input lists, and it is subject to the current constraints of manual input lists. It seems to this researcher that the reason only 26 applications could be tested was because of these limitations. Making use of a VM-based deployment of Privacy Oracle and SAGE together, running on VMware ESX with VMware View auto-spawning operating systems for test applications, could greatly benefit all of the authors. This would allow a much greater range of applications to be tested for information leaks in a hands-off, automated way, more completely than Jung, Sheth, Greenstein, Wetherall, Maganis, & Kohno ever considered, all while starting with a much smaller manually created input list.
The two articles for lab three's reading had similar, but different, topics. The article by Godefroid, on black vs. white box fuzz testing, is concerned with improvements made to dynamic symbolic execution and test generation (Godefroid, 2007). These improvements make white box fuzz testing possible. The article on Privacy Oracle by Jung, Sheth, Greenstein, Wetherall, Maganis, & Kohno deals with the creation of software that, by using black box fuzz testing, checks for leaks of personally identifiable information (Jung, Sheth, Greenstein, Wetherall, Maganis, & Kohno, 2008). The underlying theme of both articles is testing applications for various security vulnerabilities, be they leaks of information or mutated application output. These topics and themes work into lab three by showing us one possible way to gather vulnerability information in a possibly remote and passive way.

The research question presented by the team behind Privacy Oracle is based around the belief that users would prefer to be aware of the information exposed by their applications so that they can assess whether or not it meets their needs (Jung, Sheth, Greenstein, Wetherall, Maganis, & Kohno, 2008). There is no research question presented by Godefroid, so it is not possible to compare the two. There is also no data presented by the Godefroid article, because the article is nothing more than an abstract introducing a topic discussed at the Second International Workshop on Random Testing. The Privacy Oracle researchers, on the other hand, present a myriad of supporting data. The team presents their goals and their parameters, and then proceeds to explain their analysis and implementation. That implementation consists of their inputs and execution. Once the reader is made aware of their chosen system, the study of the actual testing of Privacy Oracle is shown in detail. The selection of the applications is presented, along with which applications leaked information in clear text, which applications leaked information through inference, and finally which applications leaked to third parties. Their discussion consisted of efficiency, limitations, and finally related work (Jung, Sheth, Greenstein, Wetherall, Maganis, & Kohno, 2008).

There was no research methodology for black box vs. white box fuzz testing, for the same reason as above: it is just an abstract. The research method for proving Privacy Oracle was based on an empirical, test-driven observation of a controlled lab experiment as described by the article. This is the direct method of completing lab three: performing a test, based on the lab design, and recording the results presented by the test. The creators of Privacy Oracle were extremely complete in their experiment and article. This researcher did not find any apparent errors or omissions in their experiment or conclusions. There were, however, many problems with presenting just the Godefroid abstract. There is no research question, methods, or data to support the information that Godefroid suggests he will be speaking on. The long abstract explains what white box fuzz testing is and that it is better than black box, but does not present any data to support that claim. While both articles presented information that could be used in the completion of the lab, the article on Privacy Oracle will most likely be more useful.
Methodology
As in all previous labs, the first step in the completion of the process dealt with a study of existing literature on black and white box fuzz testing. This literature was presented by the professor, and was reviewed in the form of a literature review listed above. The articles were compared and contrasted to each other, and questions, as presented in the syllabus, were answered.
The second step in lab completion dealt with, again as before, the creation of a table listing passive reconnaissance tools. In order to complete the table, tools were found by drawing on existing research in network recon, namely the two tables created previously. The original source of these tools was the tools categorized as information gathering on the BackTrack wiki. Each BackTrack tool was researched and classified as either passive or active recon. Only passive tools were added to the table. Additional tools were also considered and added to the table based on independent research into passive network monitoring through the Google search term "layer x passive network recon tools," with layer x referring to layers one through seven of the OSI reference model. Along the same lines as creating the passive network recon table, tools that can passively recreate a packet stream were also contemplated; that consideration did not require any additional research. Thought was also given to how to manipulate the time it takes for an attack to run, whether slowing down or speeding up an attack would make it harder or simpler to detect, and whether doing so is passive or not. This question required additional research into how intrusion detection systems work.
For the third part of the lab we picked the Windows XP SP0 machine on which to install Nessus and Nmap. Nessus is available for download at http://www.nessus.org/download/index.php. From there, accept the agreement, fill out the form, select the Windows MSI for the Nessus client, and register for an activation code. The activation code is sent to the email address provided in the form. Install the Windows Nessus client, which will also install the server. Start the Nessus server, enter the activation code, and click Register, as shown in figure 1-1. After the activation is complete, the Nessus plug-ins will download and install. After that is complete, click on Start Nessus Server; this is necessary when running the Nessus client. Next, start the Nessus client. Click the plus (+) button on the lower left-hand side to start; this brings up the target menu for the target machine(s), an example of which is shown in figure 1-2. Select Single Host and enter the IP address of the target machine, such as 192.168.2.1. Next, click Save and return to the Nessus client menu. Below the first plus (+) button on the left is a button labeled Connect. At the pop-up menu, select localhost as the Nessus server; the Nessus client needs to connect to the Nessus server. For first-time log-ins a certificate must be accepted, as shown in figure 1-3. Then click on the plus (+) button in the middle, under Select a scan policy. Under options, place check marks in all the boxes as shown in figure 1-4. Under the policy tab, change the policy name to something that fits the purpose of the policy, such as Lab 3, then click Save. Next, click on the scan policy and then click Scan Now at the bottom. The tab will change from Scan to Report while Nessus scans the single host, as shown in figure 1-5. When Nessus has completed the scan, click on Export to view the report as an HTML document. Next, install Nmap for Windows, which can be obtained from http://nmap.org/download.html. The installation required the host machine to have the Microsoft Visual C++ 2008 Redistributable Package, which can be obtained from http://www.microsoft.com/downloads/thankyou.aspx?familyId=9b2da534-3e03-4391-8a4d-074b9f2bc1bf&displayLang=en
When installing Nmap, leave all the default settings; for example, when selecting the components to install, the defaults were left as they were, as shown in figure 1-6. When the installation is complete, put the IP address of the target machine in the Target field. The Command field will change automatically with the addition of the IP address. When that is done, click the Scan button. When the scan is complete, Nmap will display all the open ports of the target computer, as shown in figure 1-7. Under the host details, Nmap shows additional information about the target host, as shown in figures 1-8 and 1-9. According to Carlos Sarraute and Javier Burroni, the "Nmap database contains 1684 signatures, which means some 1684 different operating systems versions" (Sarraute & Burroni, 2008). On a different machine, Wireshark was used to capture the packets that were sent by Nmap and Nessus. Wireshark was started and the capture interface dialog opened, as shown in figure 1-10. Select the interface that is connected to the network from which packets are to be captured. Nessus and Nmap were executed while Wireshark was listening, and packets were captured as shown in figure 1-11. The traffic captured from the Nmap and Nessus runs consisted of ARP packets sent toward the target hosts; the information Wireshark returned showed each ARP packet asking "who has" a particular IP address. The software sends these packets and waits for the return information from the target machine. From this view, it shows how these programs use a small number of packets to gather large amounts of information, such as open ports and vulnerabilities, which can help attackers gain entrance to the target machine.
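For readers who prefer a scripted equivalent of the GUI steps above, the sketch below shows one way the same single-host scan might be driven from Python by calling the nmap command line and parsing its XML output. It assumes nmap is installed and on the PATH, that the script is run with administrator rights (required for the -O OS-detection option), and it reuses the 192.168.2.1 target address from this lab; it is an illustration, not the procedure actually followed here.

```python
import subprocess
import xml.etree.ElementTree as ET

TARGET = "192.168.2.1"  # the single-host target used in this lab write-up

# Service/version detection plus OS fingerprinting, with XML written to stdout.
result = subprocess.run(
    ["nmap", "-sV", "-O", "-oX", "-", TARGET],
    capture_output=True, text=True, check=True,
)

root = ET.fromstring(result.stdout)

# List open ports and the service nmap guessed for each.
for port in root.iter("port"):
    if port.find("state").attrib["state"] == "open":
        service = port.find("service")
        name = service.attrib.get("name", "unknown") if service is not None else "unknown"
        print(f"{port.attrib['portid']}/{port.attrib['protocol']} open  {name}")

# Print nmap's operating system guesses and their confidence.
for osmatch in root.iter("osmatch"):
    print("OS guess:", osmatch.attrib["name"], f"({osmatch.attrib['accuracy']}%)")
```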
The target machine is a Windows XP SP3 machine with little user activity. This makes breaking into it more difficult than if it had additional software installed and a user surfing the web and exposing the machine to harmful sites. Nmap showed that the target machine had three open ports: 135, 139, and 445. Port 135 is for Microsoft Windows RPC, as indicated by Nmap. Port 139 is for NetBIOS, and port 445 is for a Microsoft service. Nessus has a section for port 445, identifying SMB detection, and displays the risk factor as None, as shown in figure 1-12. Nessus also picked up the other ports that Nmap reported as open, and these ports were also given a risk factor of None. Nessus and Nmap both give information about the possible operating system the target machine was running; however, Nessus narrowed the search down to Windows XP SP2 or SP3, while Nmap claimed the target machine could have been Windows XP SP2, SP3, or Windows Server 2003.
Nessus does scan for several types of vulnerabilities and is able to put them in a format that is broken down into sections. This gives the reader easier and quicker access to the data the user requires. If Nessus exploits were put into a grid there would be several similarities.
The fourth and final part of the lab dealt with network penetration testing tools that had been exploited themselves in order to turn the attacker into the attacked. We created two case studies dealing with common passive network recon tools that had, at one point, been the subject of exploits themselves, based on research into Trojan horses and backdoors as provided by the Symantec security center. These case studies dealt with TCP wrappers and TCPdump. The two case studies were compared to each other based on how the tools were exploited and what the exploits did. This comparison generated thinking about how we know that the tools we have chosen to use for this course are not themselves the victims of hostile intent. This led us to source code auditing as one possible method to ensure safe execution, and to whether such a process is a possibility in the enterprise. As the final section of part four, consideration was given to the effects of using untested and possibly exploited penetration testing tools in the enterprise.
Findings
For the second part of the lab, the primary findings are listed in the passive network recon table in the Tables & Figures section. This table lists tools used in passive network recon and where they align with the OSI model and the McCumber cube. Tools that can recreate the packet stream passively are primarily the tools that run at layer three of the OSI model. These tools include Wireshark, Ethereal, EtherPeek, and other protocol analyzers not covered here. The most effective way to make use of such tools is either through a wiretap, if you do not have access to a network switch, or through the use of a switch port analyzer (SPAN) port on any layer 2 or above switch. The speed at which a particular attack runs can play a huge role in whether the attack can be detected. According to Professor Liles, if an attack is run very slowly over a long period of time, in the range of hours or days, most automated discovery tools such as intrusion detection systems, especially ones that act on rules and signatures, will fail to detect the attack because it is lost in the "noise" of the standard volume of network packets that traverse a network in that time frame. If an attack is sped up, that attack will be much simpler to detect, because the entire attack will run its course in a matter of milliseconds, making it easy to find based on rules and signatures, especially with the use of tools like Wireshark and a human reviewer. The act of reviewing packets over the course of a network attack is passive; the act of speeding up or slowing down a network attack is not passive. As such, if detection alone is the key, passive analysis might be the ultimate goal. Re-running the captured flows (read: packets) of network traffic at a higher speed would better show the source and destination of an attack, and what was compromised, making tools that passively re-create a packet stream of high value.
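As a rough illustration of re-running a captured flow at a different speed, the sketch below replays a Wireshark capture into an isolated lab segment with the inter-packet gaps scaled by a chosen factor. It assumes the Scapy library and root privileges; the capture file name and interface name are placeholders, not values from this lab.

```python
import time
from scapy.all import rdpcap, sendp  # assumes Scapy is installed

PCAP_FILE = "lab3_capture.pcap"   # placeholder name for a saved capture
IFACE = "eth0"                    # placeholder interface on an isolated lab segment
SPEEDUP = 10.0                    # >1 replays faster, <1 replays slower

packets = rdpcap(PCAP_FILE)
previous = packets[0].time
for pkt in packets:
    # Keep the packets in their original order but scale the gaps between them.
    time.sleep(max(float(pkt.time - previous), 0.0) / SPEEDUP)
    sendp(pkt, iface=IFACE, verbose=False)
    previous = pkt.time
```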
For the fourth and final part of lab three, we created two case studies. These case studies were of passive network monitoring tools that were themselves the victims of exploits. The purpose here was to take tools that are normally used in network penetration testing and turn them into springboards for counter-attack by the attacked. While this is a possible concept, and quite likely a good idea, it would seem that the actual intent of the exploits was less noble and more nefarious. The tools we chose were TCP wrappers and TCPdump. Both of these tools are commonly used to intercept information, in a passive manner, off of a network. According to Wietse Venema, TCP wrappers is a simple UNIX program that intercepts TCP traffic destined for an application or service. It records the remote host that is attempting to make the connection to the service and then allows the connection to continue. The way Venema puts it: move the vendor-provided network server programs to another place, and install a trivial program in the original place of the network server programs. Whenever a connection is made, the trivial program just records the name of the remote host and then runs the original network server program (Venema, 1990). The program was first used to monitor a specific cracker gaining network access on a regular basis to university systems where Venema was a student. Then, in 1999, the source code was modified and a backdoor Trojan was installed into the original TCP wrappers program on the primary distribution servers. This Trojan allowed root access to systems running the infected version of TCP wrappers. The problem was, however, identified within hours and resolved by moving the primary download servers (CERT, 1999).
The second case study presented for lab three is on TCPdump. According to Symantec, on November 13, 2002 it was discovered that the then current version of TCPdump contained a Trojan horse. The Trojan would, upon installation of TCPdump, attempt to connect to host 212.146.0.34 on port 1963. Once connected it would attempt to load a remote shell for the attacker to use (Symantec, 2002). According to the MAN page for TCPdump, TCPdump prints out the headers of packets on a network interface that match a Boolean expression (Adams, Solnik, & Stout, 2002). TCPdump is a staple application for network packet capturing on UNIX based systems.
Both of these case studies contain common issues and patterns. The first of these issues is that the exploits themselves are old, and are based on changes to the C source code that need to be included during compilation and installation from source. The second issue is that these exploits are fairly simple and rudimentary, and could possibly have generated much worse outcomes with more ingenuity on the part of the exploiter. The common pattern in both is that once the exploit has been successfully started, it creates a vector for root access to a system. The problem with the way both of these exploits go about offering root access is that they are not very well hidden. The TCP wrappers root access is accomplished by opening port 421 on the compromised system (CERT, 1999). TCPdump attempts to make a connection every 3600 seconds to a static IP address offering root access (Adams, Solnik, & Stout, 2002). This is very simple in terms of attack vector and easily traceable.
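Because both backdoors announce themselves on the wire, either on a fixed port or through a periodic callback to a fixed address, a passive capture review is enough to spot them. The sketch below, assuming the Scapy library and a placeholder capture file name, checks a pcap for the two indicators named in the case studies.

```python
from scapy.all import rdpcap, IP, TCP  # assumes Scapy is installed

# Indicators taken from the two case studies: the trojaned tcpdump phoned
# home to 212.146.0.34:1963, and the trojaned TCP wrappers listened on 421.
CALLBACK_HOST, CALLBACK_PORT = "212.146.0.34", 1963
BACKDOOR_PORT = 421

packets = rdpcap("host_traffic.pcap")   # placeholder capture file
for pkt in packets:
    if IP in pkt and TCP in pkt:
        if pkt[IP].dst == CALLBACK_HOST and pkt[TCP].dport == CALLBACK_PORT:
            print("possible tcpdump trojan callback:", pkt.summary())
        if BACKDOOR_PORT in (pkt[TCP].sport, pkt[TCP].dport):
            print("traffic on TCP wrappers backdoor port 421:", pkt.summary())
```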
In order to ensure that the tools we use in our professional careers as penetration testers are not in fact infected with backdoors or Trojans, there are steps that should be taken before using a particular release of the tools of our trade. Three methods primarily come to mind to ensure the tools used for penetration testing are safe. The first method is running the tools on isolated virtual machines containing the fuzz testing software listed in the literature review. The second method is running the tools on isolated virtual machines with packet capturing software, and the third method is source code auditing. The first method involves running Privacy Oracle and SAGE together to test the output of the selected tool, to ensure that said output is not representative of nefarious behavior. The second method involves running tools like Wireshark while running the exploit tools against a target, much like part 2A of this lab; based on analysis of the Wireshark packet captures, it should be possible to determine whether the penetration testing tool contains a possible exploit. The third option is source code auditing. Source code auditing is a comprehensive analysis of source code in a programming project with the intent of discovering bugs, security breaches, or violations of programming conventions (Ounce Labs). Source code auditing is an attempt to find and correct bugs in the source code of penetration testing software before the tools are used on a live network. According to Ounce Labs there are three types of source code audit. Manual code auditing is having an independent set of eyes look at the code to determine whether the code is bug free. Fuzz testing, like in the literature review, is the second type. The third type of source code auditing uses automated tools that perform the audit based on predefined rules and signatures (Ounce Labs). Source code auditing is viable in an enterprise, and larger enterprises perform source code auditing to ensure that their code is relatively bug free. Gartner estimates that if 50% of bugs are resolved before code is released to the public, 75% of patch management and incident response will be averted (Ounce Labs).
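As a rough illustration of the third, automated type of audit, the sketch below scans a C source tree for a couple of predefined "signatures," hard-coded IP addresses and shell-execution calls, which are exactly the kinds of artifacts the TCPdump and TCP wrappers Trojans introduced. The rule set and the source path are placeholders; real audit tools use far richer rules than this.

```python
import re
from pathlib import Path

# Two illustrative "signatures" of the kind an automated audit tool matches.
RULES = {
    "hard-coded IP address": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
    "shell execution call":  re.compile(r"\b(system|popen|exec[lv]p?)\s*\("),
}

def audit(source_root: str):
    """Walk a C source tree and flag lines matching any predefined rule."""
    findings = []
    for path in Path(source_root).rglob("*.c"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for rule, pattern in RULES.items():
                if pattern.search(line):
                    findings.append((path, lineno, rule, line.strip()))
    return findings

if __name__ == "__main__":
    for path, lineno, rule, line in audit("./tcpdump-src"):   # placeholder path
        print(f"{path}:{lineno}: {rule}: {line}")
```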
If an organization hires an independent firm to perform a penetration test, and that firm fails to ensure that its selected tools are exploit free, the firm could be liable for many negative outcomes. Such outcomes include the exposure of sensitive information to anonymous third parties, as well as the tools leaving tested systems with rootkits or other Trojan horse infections. This would allow the systems of the tested company to be used as part of a botnet by the original infectors, giving an attacker a non-traceable path from which to launch many other attacks.
Issues & Problems
There were two major issues with this lab. First, it is apparent that the list of passive network recon tools is much smaller and harder to account for than the list of active network recon tools. Second, the VMware lab system was unavailable for the majority of the weekend due to Citrix acting unexpectedly.
Conclusions
Lab three presented many useful insights for penetration testing. While locating passive network recon tools might be more complicated than locating active network recon tools, the passive tools could be much more useful. By not giving up their location while gathering information, attackers do not need to concern themselves with hiding and can observe a target for a much longer period of time. The second part of the lab, during the process, revealed an idea that was not immediately clear at the onset of the lab: by making use of passive recon tools, a network defender can actually turn the tools of an attack against the attacker, or against the consulting firm hired to audit a network. By gathering packets and information quietly, a security administrator could use that information to determine whether the tools of an attack are themselves an attack vector, and then exploit that vector to determine the end game, location, and other personally identifiable information of the attacker, and possibly even shut them down from performing future attacks. All in all, lab three, while slightly more difficult to complete due to the limited amount of good information available on passive network recon, was a real eye opener and a very important building block for the rest of the semester.
Tables & Figures
Passive network recon table
OSI Layer | Passive Recon Exploit | McCumber Coordinate
8 |  | Confidentiality
8 | Lexus Nexus | Confidentiality
7 | Stealth Watch+ | Confidentiality
7 | lanmap | Confidentiality
7 | Network Miner | Confidentiality
6 | Ettercap | Confidentiality
6 | snort | Confidentiality
5 | Telchemy | Confidentiality
5 | StableNET | Confidentiality
4 | TCPdump | Confidentiality
4 | Locality Buffering | Confidentiality
3 | Wireshark | Confidentiality
3 | Etherape | Confidentiality
2 | NetStumbler | Confidentiality
2 | NetDiscover | Confidentiality
2 | Kismet | Confidentiality
2 | KisMAC | Confidentiality
1 | WireTap | Confidentiality
0 | Binoculars | Confidentiality
0 | Telescope | Confidentiality
0 | Police Scanner | Confidentiality
Screen shots from hands-on lab activity
Figure 1-1 Entering the Activation Code in Nessus 4 for Windows XP
Figure 1-2 Nessus Target menus, Single Host, IP Range, Subnet, and Hosts in a file can be selected.
Figure 1-3 Accepting a new certificate for first time connections to Nessus server.
Figure 1-4 Editing the policy list to include certain options not enabled by default.
Figure 1-5 Nessus scans single host for vulnerabilities
Figure 1-6 Leaving the default settings for installing nmap for Windows
Figure 1-7 Shows three open ports for target machine. Port 135, 139, and 445
Figure 1-8 Shows Host Details part 1
Figure 1-9 Host Details Part 2
Figure 1-10 Selecting the interface from which to capture packets
Figure 1-11 Packets that were captured during the Nmap and Nessus scan
Figure 1-12 Nessus SMB detection on port 445 showing a risk factor of None
Works Cited
Adams, R., Solnik, M., & Stout, S. (2002, November 13). Latest libpcap & tcpdump sources from tcpdump.org contain a trojan. Retrieved June 27, 2009, from http://www.hlug.org/trojan/
CERT. (1999, January 21). CERT Advisory CA-1999-01 Trojan horse version of TCP Wrappers. Retrieved June 27, 2009, from CERT: http://www.cert.org/advisories/CA-1999-01.html
Godefroid, P. (2007). Random Testing for Security: Blackbox vs. Whitebox Fuzzing. Proceedings of the Second International Workshop on Random Testing (p. 1). Atlanta: ACM.
Jung, J., Sheth, A., Greenstein, B., Wetherall, D., Maganis, G., & Kohno, T. (2008). Privacy Oracle: a system for finding application leaks with black box differential testing. ACM, 249-288.
Ounce Labs. (n.d.). FAQ: Code Audit. Retrieved June 27, 2009, from Ounce Labs: http://www.ouncelabs.com/resources/155-faq_code_audit.
Sarraute, C., & Burroni, J. (2008). Using neural networks to improve classical operating system fingerprinting techniques. Electronic Journal of SADIO. Retrieved from http://www.sadio.org.ar/SADIO-Files/4_sarraute.pdf
Symantec. (2002, November 13). TCPDump / LIBPCap Trojan Horse Vulnerability. Retrieved June 29, 2009, from Symantec: http://www.symantec.com/security_response/vulnerability.jsp?bid=6171
Venema, W. (1990). TCPWRAPPER Network monitoring, access control, and booby traps. The Netherlands: Eindhoven University of Technology.
The first item I noticed right away with this lab's submission was that it was missing the proper tags that each lab report is supposed to have per the requirements. I found one particular sentence in the first paragraph hard to interpret in terms of what the group was trying to say. This sentence was "While laboratory two dealt with the topic of active network recon, laboratory three focuses on the topic of passive network recon, meaning performing information gathering on a target network with limited possibility of being discovered". The group needs to thoroughly read their sentences to make sure that the wording sounds correct before submission, and look for things that Word might not find. The formatting of the citations in the literature review does not follow APA 5th edition formatting; page numbers are to be included when citing a source. I do not agree with what the group stated about the Godefroid article. In this article Godefroid presents the downsides of black box fuzz testing and instead proposes to use white box fuzz testing because of the downsides stated. The group keeps saying "personally identifiable information" and I think they mean personal identifiable information. Upon reading the literature review once more, I found that numerous words were missing from the sentences, making them harder to read.
Instead of listing each author of the Privacy Oracle paper every time, the group, after citing the authors for the first time, could have put "Jung et al." to save some space and avoid repeating all of the authors' names in each sentence. The last part of this group's literature review looked like it was just a list completing the required answers as per the syllabus. For the next literature review the group should work these items into their discussion of the articles rather than throwing everything together at the end; make the literature review more cohesive. I would have liked to see the group research more into Godefroid's article, looking into the references that the author used and maybe even the conference at which the article's argument was presented. This would be the supporting data for this article. Do MORE research for literature reviews.
The group had a very thorough methods section, but the screenshots would have been better in the methods section instead of at the end of the lab report. According to what the group said about getting their tools for the table from the first two labs, they should have been able to subtract the second lab's table from the first, and that should have given them the third lab's table. I agree with the group's point of view on whether it is harder to detect an attack if it takes longer. The group hits on the major point that recreating the packet stream is difficult because of the size of the attack and because it can hide in all the "noise." The screenshots the group included were very hard to read; it looks like they were shrunk down from a different size.
The literature review is very narrowly focused. Instead of evaluating the topic of passive reconnaissance in light of the literature provided, along with other literature found about the topic, the concepts of blackbox and whitebox fuzzing are expounded in excruciating detail using only two sources. Following a summary of each of the articles, the reader is shown a brief comparison between the two testing methods but never any ties to the lab exercises for this particular lab. Also, one minor quibble about the citations of the Privacy Oracle's authors: with that many authors, it should be cited as (Jung et al., 2008).
The methodology section for part 2a shows promise with screen shots. The Nessus section lacks detail: what host was it installed on? What is ultimately disappointing, given the level of detail provided, is the lack of actual data captured by the host running Wireshark. The screen shot provided as figure 1-11 shows broadcast traffic that was sent out through the virtual switch. None of the core traffic of the scan was captured by this host because of the switched network they were connected to. Capturing only ARP traffic and SMB browser announcements hardly accounts for capturing attack traffic. How would this be differentiated in any way from normal network traffic? Is any broadcast traffic then to be classified as possibly malicious? One of the unspoken requirements of this section of the lab was overcoming this limitation of our virtual environment, and the statements made in this section show a lack of basic networking knowledge. The targeting of the XP SP3 VM is incorrectly detailed. The authors state that the machine has "little use," making it more difficult to break in to. What the authors don't say they considered is the fact that the XP SP3 VM has the firewall turned on by default, making reconnaissance against it much more difficult than against the RTM one.
The findings of part 2b concern the exploited security tools. The authors state that the purpose of the exploited tools is to "…turn them into spring boards for counter attack by the attacked." While this is certainly possible, and is detailed in this article on SecurityFocus, http://www.securityfocus.com/infocus/1857, the primary goal of the programs studied was to load a Trojan horse and surreptitiously notify the tool's author of the infected host so they could use that machine. Counter-attack was never an objective, nor was it seen in either of the case studies. The finding that both of these exploits were made possible due to "changes in the C based source code that need to be included during compilation…" doesn't make sense. This would be the method by which the exploit was done but hardly indicates a pattern between the two.
The methods suggested for auditing security tools are sound and look at all of the aspects of the problem, from the network traffic the tools generate during an attack to the auditing of their source code. One thing that is not addressed is source code availability: what if the tool is commercial and closed source?
Team two did an excellent job with this week's lab. Most of the faults I can find are minor. In the abstract, the team says that reconnaissance is necessary so an attacker won't get caught, but couldn't an attacker get caught in the reconnaissance process? The team says passive means "performing information gathering on a target network with limited possibility of being discovered." I took passive in this case to mean something that is acted on rather than acting. If your definition were used, you would need to include several tools normally considered to be active. You also state that you will be gathering information without interaction with the target network. Is that really possible? Do all of the tasks in this week's lab relate directly to passive reconnaissance, or are there multiple learning objectives?
The group’s literature review is excellent, but you leave me hanging. How can both articles be used? Why is the Privacy Oracle article more useful? Is it maybe because what they did amounts to a very complex form of passive reconnaissance? Would this actually be useful, or are there easier ways to do this? What gaps does it fill?
In the methods section, the team states that you considered every tool in BackTrack. Are they all reconnaissance tools? Wouldn't it have been better to just use those in the relevant groups? When running the scanning experiment, I like that you describe how to install and use the tools in question, except Wireshark. Are you certain that Nmap and Nessus are both based solely on ARP packets? Would you have gotten better results if your target were not essentially idle? Did the packets captured by Wireshark mean anything? When discussing the safeguards for using open source tools on an enterprise network, your group mentions fuzzing. Isn't fuzzing really just a form of code audit on the quick? Also, you suggest using Privacy Oracle. Where are you going to get Privacy Oracle, and how are you going to determine that it will detect efficiently across your systems since it is itself experimental? The discussion of your three (actually two) safety measures is fairly flimsy. Tell me more about each. What are the pros and cons? All are technology based; are there other methods that could also be applied? Is code auditing practical in the enterprise? You never answer this question, which was required as part of the lab.
In your issues section you state that problems with Citrix limited your ability to get results in your lab. When you had problems with Citrix did you contact anybody? The technology not functioning should never be an excuse from technology students, especially graduate students.
I found this lab write-up to be presented in a visually pleasing manner and also significantly informative in content. The abstract was particularly well worded (however, I would say this section read more like an ‘introduction’ than an ‘abstract’ proper: but this cannot be faulted by the ‘canonical’ lab structure). The literature review was in depth and quite lengthy. The procedure section was detailed, and the results discussed and clearly illustrated by screenshots. Also, I found the ‘passive’ tools selection in the results table to be informed, if brief. Furthermore, I thought the discussion of ‘compromised’ tools to be interesting and well documented. Finally, the section addressing the ‘safety of penetration tools’ discussion was concise and well laid out.
Some significant omissions and oversights did exist with this write-up, however. Of first concern, the literature review appeared disorganized both in structure and conceptual placement. The overly long first and last paragraphs were distracting, and the individually definable sections of the review, while reasonably good by themselves, were diminished by the patchwork nature of the body taken in its entirety. In sum, the review lacked internal consistency and organization: something which can be easily remedied in future write-ups, however.
I found the 'Methodology' and 'Findings' sections to be intermixed, with a discussion of results proper being done in the 'Methodology' section. Here too, better organization would benefit the content presentation in this write-up. I would also submit that the overly detailed description of the Nessus installation (which for all purposes appeared to be a slightly modified copy-and-paste from an instruction sheet) was relatively spurious in the scope of the exercise, and could have been greatly condensed. Also, the question of patterns in Nessus exploits is given a hand wave, but amounts to only a long-worded 'yes,' with no description of what these patterns might be.
The discussion of the findings (in the methodology) raises some serious questions about the conclusions drawn. The assertion is made that: "[Nessus and Nmap]… use small amount of packets to gather large amounts of information such as open ports and vulnerabilities…" I would ask, as witnessed by the screenshot provided (1-11), how can 'connection-based' services be engaged by the packets shown, which are obviously 'connectionless' in nature? I am certain that the correct data was gathered, but the conclusion drawn is wrong. Could it be that the actual 'connection-based' traffic is invisible to the third observer (and therefore unrecorded), rather than some 'incredibly efficient' capacity inherent in the two scanners coming into play? I think it becomes fairly obvious that only broadcast and multicast packets are being recorded by the third party: the bulk of the traffic remains 'unseen' by the remote observer. This, of course, makes any conclusions drawn from this 'meta exploit' experiment of a dubious nature. To be fair, this team indirectly addressed this 'switch hiding' issue by referring to "the SPAN port," or port mirroring setups, later in the lab: but the reason for using these types of ports was never explained, and certainly never applied back to the experimental setup.
Furthermore, I wonder at the inclusion of 'Locality Buffering' in a listing of 'passive' penetration tools. According to a quick Google search, 'locality buffering' is simply a buffering algorithm for increasing the 'speed' of packet analysis: it does not really appear to be a passive reconnaissance 'tool' in and of itself. Additionally, and tangentially related, the compromised 'TCP Wrappers' program mentioned later is essentially a firewall application: is this really an exploited 'network penetration tool?' Finally, I would note that the topic of the 'operating system bias' of the two penetration tools used in the lab is never raised.
I thought their abstract was well written and documented what they were planning to do in Lab 3. I did find some grammatical mistakes in their writing, which made it very hard to follow. Be sure to check for these mistakes before posting your paper to the blog. The formatting of the citations did not follow APA 5th edition formatting; they neglected to include page numbers in their citations.
This group's Methods section was very thorough and included screen shots that were helpful in understanding what they were writing.
The group did not organize their paper by separating out parts 2A and 2B. This made it hard to find and follow which tools they found that were originally intended for ethical use but eventually used for unethical purposes. Once I found this information, they did a good job of describing the tools that were exploited and how they were exploited, and they provided two good case studies.
They did a good job of explaining how tools can be tested for exploits before using them for penetration testing. I agree with their conclusions and would say overall that their paper was well written.
Team 2 begins their lab report by defining passive reconnaissance as "performing information gathering on a target network with limited possibility of being discovered". They then describe what they are going to do in this lab. I did notice the word "where" where I believe they intended the word "were" in the second paragraph of this report. They also used the word "though" where I believe they meant to use the word "through". There were a few other typographical and grammatical errors dispersed throughout the document, but I wanted to specifically point these out as examples.
Team 2 proceeded with a literature review on the two articles that were our assigned reading this week. They described the Privacy Oracle system, which tests common applications to determine if they are sending private information over the network. One issue here is that they state that AutoIT "controlled the installation of each of the 26 applications" when the article they refer to states that AutoIT automated the data input for the applications (Jung et al., 2008, p. 280). It didn't control the installation. The article on blackbox vs. whitebox fuzzing was also discussed. They describe the use of SAGE to gather constraints from the application so that all of the application paths can be tested. I disagree with the statement that SAGE and Privacy Oracle could be considered complementary to each other. They are two completely different testing environments with two completely different goals. The goal of Privacy Oracle is to test for private information that is deliberately sent over the network by the application, without the user's knowledge or consent. SAGE, however, is a system to analyze program code to determine all possible execution paths. This is done so that fuzzing will follow all of the possible paths of execution. Although these two techniques are not necessarily mutually exclusive, their goals are vastly different. The sentence "These topics and themes work into lab three by showing us one possible way to gather vulnerability information in a possibly remote and passive way" needs explanation. What possible way are we talking about? This literature review never actually tells us how these articles relate to our laboratory assignments, only that they both have to do with conducting an experiment in an enclosed environment.
The methodology section reads like a very detailed how-to for installing and running nmap and Nessus. In my opinion this group should have put less effort into describing how to install and run the tools, and more effort into describing the information that was obtained when the tools were run.
Their final section discussed their research into network penetration tools that had been exploited. The two tools that they mentioned are TCP wrappers and TCPdump. Although TCPdump is best described as a packet sniffing program, TCP wrappers does not fit into the classification of a penetration tool. TCP wrappers is better described as a network security program for Linux. It provides control over network services and which hosts are allowed to access them (see ITSO: TCP Wrappers – http://itso.iu.edu/TCP_Wrappers ).
Team 2 lists three methods for verifying the safety of their tools: fuzz testing, measuring output with a packet sniffer, and code auditing. The second two methods are fairly straightforward concerning what they intend to accomplish. In their first method, however, they suggest using Privacy Oracle and SAGE together to test the tool. Privacy Oracle is a method for determining if a program is sending personal information over the network to a third party. It does this by repeatedly running the program in a virtual machine, inputting different data into the program each time, and capturing the output. The outputs are then compared to each other to discover changes that are caused by varying the input. SAGE uses a system of whitebox fuzzing to ensure that it follows all of the execution paths and tests the entire program. Most of the scanning tools that we've used so far simply require a target IP address, or range of IP addresses, and perhaps some configuration parameters. What do we expect to accomplish by fuzzing the IP address or configuration parameters, and how does this ensure that the program is safe? If we remove the input procedure from the first method (Privacy Oracle and SAGE), we are essentially left with the second method (virtual machine and packet sniffer). I just feel that this needs some further explanation.
I believe that Team 2 needed to spend a bit more time performing research to verify the statements they made in this lab report. Some of their conclusions needed further explanation. They would also have done well to proofread the report to eliminate the grammatical and spelling errors. In my opinion, they spent far too much time and effort creating a how-to for the installation and running of nmap and Nessus, and not enough documenting what knowledge was gained by conducting this experiment.
Group two did a nice job in creating an abstract for the third lab. They start off the abstract with an introduction to why we would use passive reconnaissance and what passive reconnaissance is. They transition from the second lab into this lab nicely. Also in the abstract they do a good job of briefly explaining what is going to be done in this lab.

Next the group goes into the literature review. This literature review was well put together. The group started off by explaining each of the papers and comparing them to each other. The group talks about the methods used, or not used, in each of the articles. They describe the programs used in each test in a detailed manner. The group also gives their opinion on how the writers could have used a virtual environment to help in the testing. The group then goes into a more detailed comparison of the two papers and ties them into the lab. They also do a great job of discussing the methodology, research data, and research question of each paper. The group stated that they found no errors and omissions in the paper on the Privacy Oracle program, but they found a lot missing from Godefroid's paper.

Next the group started their discussion of the methodology of their lab. They quickly covered the literature review first. Then they talked about the second part of the lab. In this section they discuss how they put together the table that was required by the lab. They did a good job of explaining how they did the search using Google. They also explained in a detailed manner what was going to be included in the table. They then quickly discussed the other questions of slowing down the programs and how that would affect the detection of the attack. They could have expanded on this by explaining what they researched and how they did the research.

Next the group goes into a detailed explanation of how they installed and ran Nessus. They gave step-by-step installation instructions complete with screenshots that were included at the end of the lab paper. Then the group goes into an explanation of how Nmap was installed and run. Because of the simplicity of Nmap this explanation did not take as long. Next the group very briefly talks about how there would be similarities in Nessus exploits if put into a grid and how Nmap would have a low bias on operating systems. This part of the lab was lacking. The group seemed to just skim over these questions. They did not explain what patterns would occur if the exploits in Nessus were put in a grid; they just simply said that there would be patterns. Also, the group gave only a brief explanation of how Nmap has a low bias toward operating systems, but they did not mention Nessus at all. The group could have done a much better job on these questions.

After the questions above were addressed, the group goes into a decent explanation of how they used Wireshark to capture the packets sent and retrieved by Nessus and Nmap. They also gave a brief explanation of the types of packets sent and the information that was retrieved by these packets. Next the group discussed the results of using Nessus and Nmap on their Windows XP SP0 machines. They mention what ports were exposed and what operating systems were found. Then they explain that Nessus broke down its findings to make it easier for the user to read. They also state that if Nessus exploits were put in a grid they would create a pattern, but they do not go into any type of detail on the pattern that would be discovered.
Last in the methodology, the group discusses the final part of the lab. The group did a nice job of explaining what this part of the lab was about. They also gave an explanation of what case studies they were going to use and why. Then they explain that they were going to go into how source code auditing could be used to detect whether a particular program was being used by an attacker in a malicious manner toward its user.

Next the group goes into their findings. The first part of their findings talks about what is included in the table of passive reconnaissance tools. They mention that tools that recreate the packet stream passively are mostly found in layer two or above of the OSI model. This explanation seemed very vague; they could have expanded on this part some more. Then the group goes into explaining what happens when a script or tool is slowed down. They give a good explanation of why slowing down an attack can prevent detection. Also, they give a good explanation of the difference between an active attack and a passive analysis. They explain how tools that passively re-create a packet stream are of high value.

Last, the group talks about the final part of the lab. In this section the group talks about choosing two tools that were used to do counter attacks toward the user of those tools. The group chose TCP wrappers and TCPdump for their case studies. They started with the explanation of TCP wrappers. They gave a good explanation of what a TCP wrapper was. Then, using a case study that involved a program developed at a university, they demonstrated how TCP wrappers was altered to allow root access to any system using the TCP wrappers program. Then they use a second case study to explain how TCPdump was infected to create a backdoor in a system and allow a remote shell to be installed on the compromised system. The group then talks about how each of these compromises shows common issues and patterns in the exploits. This section was a nice way of explaining what to look for in a compromised program.

Next the group explains three ways to verify that a tool is not infected with code malicious toward the user. These three detection methods are using a virtual environment with fuzz testing software on it, using an isolated virtual network with passive tools on it, and using source code auditing. The group does a nice job of explaining the first two methods. They tied all the previous parts of the lab into this section, which nicely showed how the rest of the lab could be applied. Then the group gives a detailed explanation of what source code analysis is. They give three ways of doing source code analysis. Then the group explains how source code auditing is viable in an enterprise environment, and backs it up with some data. Then they explain how a contractor that is assigned to penetration testing is liable for any negative effects of their software. The group next explains that they had a couple of issues with finding passive tools and with access to the Citrix network.

In the conclusion of this paper the group explains that even though passive tools are not as prevalent as active tools, the passive tools are more powerful. Also, the lab revealed to the group that attack tools could be turned against an attacker or vice versa. They also showed how passive tools are valuable in gathering information on what is running on a network. They also explained how this lab was valuable to the rest of the semester. At the end of the paper was the table that was created.
The table was fairly short as the group said. The table did reveal that most of their tools were located in the data link layer of the OSI model and that all of them attacked confidentiality, transmission, and technology. The table even included some nice tools in the kinetic and people layers of the extended OSI model.
Team two gave a good overview of passive reconnaissance in the abstract section of their laboratory report.
Group two did an effective job summarizing the articles in the literature section and addressing the faults in both articles. Group two pointed out what other groups did not in regards to the Godefroid article. Group two stated, "The long abstract explains what white box fuzz testing is, that is better than black box, but does not present any data to support that claim."
In the methods section, I was surprised to find that Group 2 had to go through many other steps besides installation to get Nessus to work. My group also downloaded Nessus onto Windows XP, but the version we used did not require all of these additional steps. I have to disagree with the statement "Nmap database contains 1684 signatures, which means some 1684 different operating systems versions" because Nmap located numerous open ports in the same operating system, so there are a certain number of signatures per operating system version. When the group stated "On a different machine Wireshark was used to capture packets that were sent by Nmap and Nessus", which virtual machine did the group place Wireshark onto? I had to partially disagree with the statement "This gives the reader easier and quicker access to the data the user requires" because the more plug-ins that are installed, the longer the program would take to gather the information about the vulnerabilities. In the statement "If Nessus exploits were put into a grid there would be several similarities.", to what similarities was the group referring?
In the findings section, group 2 came to a similar conclusion to group 1 when they stated that "The act of reviewing packets of the course of a network attack is passive; the act of speeding up or slowing down a network attack is not passive." I had to partially disagree with the statement "The speed at which a particular attack runs can play a huge role in the ability to detect an attack or not" because, while slowing the speed down may make an attack harder to detect, some attack tools produce certain types of network traffic that could be distinguished from legitimate traffic regardless of how fast or slow the attack tool is running.
Group two was able to identify two types of penetration tools that were compromised: TCP wrappers and TCPdump. According to group 2, "Both of these tools are commonly used to intercept information, in a passive manor, off of a network." The tools that were identified were both plagued by Trojan horses.
Group two did not address what risks there would be to a production network if untested penetration tools were executed upon it.
In the Issues section, when the group stated "First it is apparent that the list of passive network recon tools is much smaller and harder account for than active network recon tools", I did not see this as an issue, as some tool types are simply more plentiful than others.
Team 2 did an overall good job, and this lab made me think about the items within the lab rather than just the grammar and editing. They started off with their abstract, and it explains what was going to be done and what they would be looking for. They then go into their literature review and do a good job of comparing and contrasting the papers, as well as explaining what happened within the documents. One thing that could have made any arguments stronger: in the future, if there are only two articles, do not be afraid to research additional papers or articles to prove or disprove the authors; this will make the review even stronger. Having just the two papers makes it hard to argue any other point because it almost makes it one side or the other. Do you think that Godefroid could have done something, or waited to release more information within the abstract, to support his argument?

The authors then go into the methodology, explain what was going to be done, and explain what resources were going to be used. After this section they go into their findings and results from the lab. There were a couple of things that were left out, such as whether there is a bias toward either of the systems. Secondly, would any of these tools be useful in a corporate environment, or would they all be prohibited from use? One thing that stuck out and gave me a question was the discussion of slow attacks. Would there be a way to prevent these types of attacks? Are there any tools currently available that will help monitor this type of traffic coming into a network? This made me research further, and a Google search gave some results, bringing up a few different papers and tools that would assist against these types of attacks. A company called EiQ puts out a tool that helps defend against them (http://www.eiqnetworks.com/solutions/Low_and_Slow_Attack_Detection.shtml). Their tool works by monitoring the system over a period of days and correlating the information. Would this be a useful program, or would it take too long to detect a "slow" attack?

Then the authors go into the issues section, and it is understandable that the passive tools are difficult to account for. Is it because, when attacking, many people think the best way is just to go all out at a system? Or could many of these tools be self-made by the "elite hackers" and not distributed among the many script kiddies? This then leads to another question: would a "home brewed" tool be more effective when attacking a system than one that is already made? This section really sparked thought. They then went on to their conclusion and described what was done. Is it harder to find passive recon information because it has not been explored as much? This team is not the only one to have trouble finding additional information, and it is agreed that this is a big step toward the class and what is to be learned in the future labs. This team's lab was good and pushes the reader to expand their mind and want to know more.
I think that group 2's write-up for lab 3 was good but lacking adequate assessment in some sections. The abstract for this lab was good and accurately described the laboratory. The literature review was good. Some of the diction in the literature review was repetitive at times (black-box, white-box, fuzzing), but it didn't really make the section difficult to read. Group 2 answered all of the required questions for each reading. All of the citing for the literature review was done correctly.

For part 2A of the lab, the initial description of installing and using Nmap and Nessus is very accurate and easy to follow. Information about the vulnerabilities Nessus scans for was shown and cited correctly. However, once the group started discussing packets that were captured in Wireshark from Nessus and Nmap, everything changes. Grammatically, it would seem as if that portion of the lab was outsourced to South Korea and then translated back to English using http://babelfish.yahoo.com/ (example: "The method that Nmap and Nessus were that they send ARP packets to the target hosts and information that wireshark return about the ARP packet was asking how is "). Grammar aside, there is another obvious problem with the methodology used to analyze the packets captured during the test. ARP packets are NOT the packets used by these tools. For instance, Nmap uses packets such as TCP SYN/ACK, UDP, and ICMP. ARP packets are automatically sent between hosts to determine the MAC addresses of other hosts to match their IP addresses. I'm not sure how this was overlooked, because even looking at figure 1-11 (also note, when changing the size of a screenshot, the pixels-per-inch should also be increased so it's not so fuzzy when you shrink the image) the ARP packets are being generated from hosts other than the ones being tested.

Analyzing the actual results of Nmap was done well. When discussing Nessus, the group asserted that the vulnerabilities would fit into our grid, but gave no indication how or why. The assessment of the likelihood of attacks being discovered based on their timing was done well and accurately answers the question. The last part of the section was done well and covered many vulnerabilities in security tools that pertain to the lab. Finally, the conclusion was written well and accurately sums up the laboratory.