Abstract
Unlike the last lab, in which we analyzed different means of actively retrieving information from a network, in this lab we look into passively analyzing a network to gain knowledge of the system in question. Passive reconnaissance is the gathering of knowledge from a target without the target being aware that it is under observation.
This lab will look at tools that can be used to analyze traffic flowing across a network in a passive manner. It will also explore how passive reconnaissance can be used to study how active reconnaissance tools perform their tasks, by running passive analysis tools on a network while active tools scan a target. This knowledge can be used to learn how an active reconnaissance tool operates and how the target of that tool reacts to its various requests. Lastly, a set of case studies will be analyzed that show how even reconnaissance tools can be compromised, either to gain control of a target or to cause that target to become unavailable. Ways that network analysis tools can be examined for malicious activity will also be considered, and this analysis will show why auditing these tools is not viable in an enterprise environment.
Literature Review
The articles that were included in the lab assignment shared a common theme of vulnerability discovery in applications. The Godefroid article explained the significance of whitebox fuzzing and how it was used to find vulnerabilities in Windows applications, while the Jung et al. article focused on blackbox fuzz testing to find personal information leaks by popular application programs.
Fuzz testing is an effective technique for finding security vulnerabilities in software; it is a form of blackbox random testing which randomly mutates well-formed inputs and tests the program on the resulting data (Godefroid, 2007, p.1). Limitations of blackbox testing include the low code coverage that random testing usually provides (Godefroid, 2007, p.1). An alternative approach is whitebox fuzz testing, which builds upon recent advances in dynamic symbolic execution and test generation (Godefroid, 2007, p.1). Whitebox fuzz testing symbolically executes the program dynamically and gathers constraints on inputs from conditional statements encountered along the way (Godefroid, 2007, p.1). The collected constraints are then systematically negated and solved with a constraint solver, producing new inputs that exercise different execution paths in the program (Godefroid, 2007, p.1). This process is repeated using a novel search algorithm with a coverage-maximizing heuristic designed to find defects as fast as possible in large search spaces (Godefroid, 2007, p.1). SAGE (Scalable, Automated, Guided Execution), a tool based on x86 instruction-level tracing and emulation for whitebox fuzzing of file-reading Windows applications, has discovered more than 30 new bugs in large shipped Windows applications, including image processors, media players, and file decoders (Godefroid, 2007, p.1). Several of these bugs are potentially exploitable memory access violations (Godefroid, 2007, p.1). The article relates to the lab in that it presents whitebox fuzz testing as a means of performing penetration testing in Windows environments. The article did not appear to have any errors or omissions.
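To make the constraint-negation idea concrete, the following is a minimal toy sketch in Python, not SAGE itself: it assumes a contrived parser whose path constraints are simple byte comparisons, so "negating" a constraint amounts to flipping one input byte rather than calling a real constraint solver.

```python
# Toy illustration of the generational search used in whitebox fuzzing.
# The "parser" branches on individual input bytes; every branch condition it
# evaluates is recorded as a (position, expected_byte) constraint. Because
# the constraints here are simple byte equalities, "negating" one just means
# forcing that byte the other way -- a stand-in for a real constraint solver.

def toy_parser(data: bytes):
    """Hits a defect only on the input b'BUG!'."""
    constraints = []                       # path constraints gathered along the way
    for pos, expected in enumerate(b"BUG!"):
        constraints.append((pos, expected))
        if pos >= len(data) or data[pos] != expected:
            return constraints             # branch not taken; no crash
    raise RuntimeError("defect reached")   # every check passed

def generational_search(seed: bytes, max_iters: int = 200):
    frontier, seen = [seed], {seed}
    for _ in range(max_iters):
        if not frontier:
            break
        parent = frontier.pop(0)
        try:
            constraints = toy_parser(parent)
        except RuntimeError:
            return parent                  # crashing input found
        for i, (pos, expected) in enumerate(constraints):
            child = bytearray(parent.ljust(len(constraints), b"\x00"))
            for p, e in constraints[:i]:   # keep the earlier branches as taken
                child[p] = e
            # Negate constraint i: flip the byte to whichever side the parent
            # did NOT take on this branch.
            child[pos] = (expected + 1) % 256 if child[pos] == expected else expected
            child = bytes(child)
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return None

print(generational_search(b"AAAA"))        # expected output: b'BUG!'
```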
Privacy Oracle is a system that reports on application leaks of user information via the network traffic that applications send (Jung, Sheth, Greenstein, Wetherall, Maganis, & Kohno, 2008, p.279). Privacy Oracle treats each application as a black box, without access to either its internal structure or communication protocols, which means that it can be used over a broad range of applications and information leaks (Jung et al., 2008, p.279). Privacy Oracle is designed to detect accidental information exposure arising from standard development practices (Jung et al., 2008, p.280). The three broad categories of test parameters used by Privacy Oracle to detect information exposure are personal data, application usage data, and system configuration information (Jung et al., 2008, p.280). The key to Privacy Oracle's success is the NetDialign flow alignment tool, which proficiently isolates differences among network messages (Jung et al., 2008, p.288).
The methodology used to test Privacy Oracle was differential black-box fuzz testing, which is embodied in the Privacy Oracle system. Applications are treated as black boxes to gain broad applicability by remaining agnostic to their internal structure and communication protocols (Jung et al., 2008, p.279). The applications are tested with different inputs, mapping input perturbations to output perturbations to infer likely leaks of personal information (Jung et al., 2008, pp.279-280). To make the problem tractable, the authors focused on information that the device exposes explicitly, whether over a radio or wired network connection (Jung et al., 2008, p.280). Information that a device might accidentally expose via side channels, such as timing variations or packet sizes, was ignored (Jung et al., 2008, p.280). For analysis of applications running on Windows, Privacy Oracle used many existing tools: each execution of the target application was performed in a virtual machine that was check-pointed and rolled back to an initial state prior to each run, interactions with specified test inputs were automated using a Windows test automation tool called AutoIT, and application-generated network traffic was collected using Wireshark (Jung et al., 2008, p.283). In addition to capturing network traffic, HTTPS traffic was also captured before SSL encryption using HTTP Analyzer, which enabled detecting the exposure of sensitive information by applications that use SSL (Jung et al., 2008, p.283). To detect whether an application exposes user-specific information about usage, the authors focused on applications that provide a search interface. The authors discussed their findings in terms of three classes of exposure: user-entered contact information sent in plaintext, system configuration information actively gathered by applications, and information sent to third parties such as advertisement servers and marketing research firms (Jung et al., 2008, p.283). The authors then evaluated twenty applications from download.com that were listed as the most downloaded applications (three million in total) (Jung et al., 2008, p.284). These applications fall roughly into six categories: anti-virus software, peer-to-peer clients, utility software (for files, diagnostics, updates, and browsing), media players, communicators (for instant messaging and chat), and media tools (for manipulating videos and viewing images) (Jung et al., 2008, p.284). The article relates to the lab in that passive reconnaissance tools such as Wireshark could also be used to analyze the same application programs to determine whether valuable information is being leaked.
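To make the differential idea concrete, the following is a minimal sketch in Python, not NetDialign or Privacy Oracle: the run_app function, the field names, and the emitted messages are hypothetical stand-ins for an instrumented application run and its captured network traffic.

```python
import difflib

def run_app(profile: dict) -> list[str]:
    """Hypothetical stand-in: returns the network messages an application
    would emit when driven with the given user profile."""
    return [
        "GET /update?version=3.1",
        f"POST /register name={profile['name']}&zip={profile['zip']}",
        "GET /ads?session=8f2c",            # unrelated traffic (noise)
    ]

def differential_test(base: dict, perturbed: dict) -> list[str]:
    """Flag output lines that change when exactly one input field changes."""
    out_a, out_b = run_app(base), run_app(perturbed)
    leaks = []
    for line_a, line_b in zip(out_a, out_b):
        if line_a != line_b:                # the output tracked the input change
            leaks.append("\n".join(difflib.ndiff([line_a], [line_b])))
    return leaks

base      = {"name": "Alice", "zip": "47907"}
perturbed = {"name": "Alice", "zip": "90210"}   # perturb one field only
for leak in differential_test(base, perturbed):
    print("Possible leak of the perturbed field:\n", leak)
```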
There were a few issues that were listed in the article itself. One issue was that output changes can be caused by many factors besides input, such as the environment or remote parties that interact with the application (Jung et al., 2008, p.280). Another issue was how to match the changes in output in ways that are most likely to reflect semantic changes in input (Jung et al., 2008, p.280).
Methodology
The first part of the lab required the team to identify and tabulate passive reconnaissance tools in relation to the extended OSI model and McCumber’s cube. The group also addressed the concept of slowing down scripts or tools.
The second part of the laboratory assignment involved setting up a test environment for Nessus, NMAP and Wireshark. The tools could then be analyzed to determine the relationship between the number of vulnerabilities analyzed and the sieve rate, vulnerability patterns within Nessus, biases in the tools, and what was learned from performing the tests. The second part of the lab also required the team to find case studies about tools that were exploited and develop patterns between the exploits. The group also developed ways to ensure the tools are not exploited, analyzed source code auditing as a method for reducing the threat of exploited tools, and identified risks that an exploited or untested tool could pose to an enterprise network.
OSI Layer | Exploit Method | McCumber's Cube
Layer 8/People | Follow company employees on social networking sites, screen watching | Confidentiality, Processing, Human Factors
Layer 7/Application | Xspy | Confidentiality, Processing, Technology
Layer 7/Application | MetaGooFil, SEAT, DIRE | Confidentiality, Storage, Technology
Layer 6/Presentation | Dsniff, SmbRelay3, Aircrack-ng, Airsnarf, AIM Sniff | Confidentiality, Transmission, Technology
Layer 6/Presentation | Amap, httprint, HTTSquash, ike-scan, psk-crack | Confidentiality, Processing, Technology
Layer 5/Session | Nbtscan, rstatd vulnerability, showmount request, session fixation | Confidentiality, Processing, Technology
Layer 5/Session | Ettercap, Hunt, Juggernaut, T-Sight | Confidentiality, Transmission, Technology
Layer 4/Transport | p0f, procecia, Unicornscan, Xprobe2 | Confidentiality, Processing, Technology
Layer 3/Network | Tcpick, Wireshark, IRPAS, lanmap, IPTraf, ntop | Confidentiality, Transmission, Technology
Layer 3/Network | Dnsmap, dnsmap-bulk | Confidentiality, Storage, Technology
Layer 3/Network | protos | Confidentiality, Processing, Technology
Layer 2/Data link | MAC duplication attack | Integrity, Transmission, Technology
Layer 2/Data link | CIScan, scanning Telnet messages, Gobbler, VLAN hopping, ARP spoofing, EtherApe with a promiscuous-mode card and driver | Confidentiality, Transmission, Technology
Layer 2/Data link | Spoofed IP 5.1 | Confidentiality, Processing, Technology
Layer 2/Data link | ARP spoofing | Confidentiality, Transmission, Technology
Layer 2/Data link | MAC poisoning attack | Integrity, Transmission, Technology
Layer 2/Data link | MacChanger | Integrity, Storage, Technology
Layer 1/Physical | Logging keystrokes | Confidentiality, Transmission, Technology
Slowing down a process can provide a way to obscure active reconnaissance against a target. When an active tool scans a target, multiple packets are sent to probe that target for weaknesses. If a passive reconnaissance tool is used on that network, the person running it will see the multiple packets being sent to the target as a string of requests sent out consecutively. If the requests from the active reconnaissance tool were slowed down, allowing other network traffic to fall in between each request, this would obscure the intended scanning of the targeted computer's vulnerabilities. One way to slow down a program is to introduce delays: if a delay is inserted into an active reconnaissance tool after each request and reply, other network traffic can flow while the delay is in effect. Even if this obscures the attack on the target, this type of reconnaissance is still considered active, because a request is still being sent to the target, informing the target that you are there.
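As an illustration of this idea, the hedged sketch below uses Scapy to send one TCP SYN probe at a time with a long delay between probes. The target address, port list, and delay are arbitrary examples, and probes like these should only ever be sent to hosts you are authorized to test.

```python
# Sketch of a "slowed" active probe: the same SYN probes a scanner would
# send, but spaced out so that ordinary traffic falls between them.
# Requires Scapy and raw-socket privileges; target/ports/delay are examples.
import time
from scapy.all import IP, TCP, sr1, conf

conf.verb = 0                      # keep Scapy quiet

TARGET = "192.168.1.10"            # hypothetical lab VM, not a real target
PORTS = [21, 22, 80, 139, 445]
DELAY_SECONDS = 30                 # long gaps let other traffic interleave

for port in PORTS:
    probe = IP(dst=TARGET) / TCP(dport=port, flags="S")
    reply = sr1(probe, timeout=2)  # still an *active* request to the target
    if reply is not None and reply.haslayer(TCP) and reply[TCP].flags.S and reply[TCP].flags.A:
        print(f"port {port} appears open (SYN/ACK received)")
    else:
        print(f"port {port} closed or filtered")
    time.sleep(DELAY_SECONDS)      # slowing the scan obscures, but does not hide, it
```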
In the second part of the lab, the team set up a test environment for Nessus, NMAP and Wireshark. The tools could then be analyzed to determine the relationship between the number of vulnerabilities analyzed and the sieve rate. Testing the tools also allowed vulnerability patterns within Nessus to be developed, biases in the tools to be identified, and the group to summarize what was accomplished in performing the tests.
To set up the test, three of the four virtual machines were started and configured to communicate with each other. The virtual environments used were the Backtrack VM, the Windows XP SP3 VM, and the Windows XP SP0 VM. Nessus was installed on the Windows XP SP3 VM, Nmap was installed on the Backtrack VM, and Wireshark was installed on the Windows XP SP0 VM. Nessus was examined first and was configured to run an active scan of the Backtrack VM. Wireshark was then configured with capture filters to reduce the number of captured packets down to the packets we wanted to analyze. Wireshark was started to begin analyzing the network, and then the Nessus scan was launched. When the Nessus scan completed, Wireshark was stopped and the output was analyzed. After this, Nmap was configured to scan the Windows XP SP3 VM and the same process was repeated for that scan.
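For readers who prefer a scripted equivalent of the capture side of this setup, the sketch below (not the team's exact configuration) uses Scapy with a BPF filter so that only traffic between the scanning host and its target is kept; the addresses and interface name are hypothetical.

```python
# Sketch of the capture side of the test: listen only for traffic between
# the scanning host and its target so the capture stays manageable.
# Addresses and interface are hypothetical; this is not the team's exact setup.
from scapy.all import sniff, wrpcap

SCANNER = "192.168.1.20"     # e.g. the Backtrack VM running Nmap
TARGET  = "192.168.1.30"     # e.g. the Windows XP SP3 VM being scanned

# BPF filter equivalent to a Wireshark capture filter: only packets
# exchanged between the two hosts of interest are kept.
bpf = f"host {SCANNER} and host {TARGET}"

packets = sniff(filter=bpf, iface="eth0", timeout=300)   # capture for 5 minutes
wrpcap("nmap_scan_capture.pcap", packets)                # save for later analysis
print(f"captured {len(packets)} packets between scanner and target")
```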
When a program produces a great amount of output, gathering useful bits of information from that mass of data can be quite intimidating. Having the ability to scan a network for all possible vulnerabilities can seem more productive, until you have to sieve through that information for what you want. There are scripts and filters that can help with sorting the information, but there still tends to be a lot of data to work through to find what you need. Also, the more probe types a scan uses, the longer the scan will take. For example, a port scan that uses only ICMP and TCP packets will not take as long as one that adds a SYN scan and a UDP scan on top of them. Some programs alleviate this problem by running scans in parallel to get the job done faster.
One of the first patterns discovered was that Nessus performed security checks on various operating systems, checking for patches. In relation to the OSI model, most of the vulnerability checks fell within the Application and Presentation layers. In relation to McCumber's cube, most of the vulnerability checks fell within Integrity.
Upon examination of the vulnerability checks list, Nessus seemed diverse in its vulnerability checks for multiple operating systems; however, it did tend to focus extensively on UNIX-based operating systems. NMAP also seemed diverse in its vulnerability checks for multiple operating systems. Due to the setup of its command environment, NMAP appeared to be designed for those with experience in UNIX/Linux environments.
By conducting these tests, the group was able to observe that Nessus used port 4482 to request data about the scanned system. Wireshark showed what types of request packets the active reconnaissance tools sent to the destination host and how the destination host replied. This process showed how one could gain knowledge about how active tools operate against a target host on the network. Some of the information gained from passively analyzing traffic between active reconnaissance tools and their targeted host includes the port numbers used by the active reconnaissance tools, the types of packets sent, how many packets are sent, and the types of responses given back by the target host.
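A short sketch of how such a capture might be summarized follows; the capture file name and scanner address are hypothetical, and the counts it prints correspond to the kinds of information listed above (ports probed, packet counts, and target responses).

```python
# Sketch of summarizing a saved capture: which ports the active tool probed,
# how many packets it sent, and how the target answered. The file name and
# the scanner address are hypothetical placeholders.
from collections import Counter
from scapy.all import rdpcap, IP, TCP

SCANNER = "192.168.1.20"
packets = rdpcap("nmap_scan_capture.pcap")

probed_ports = Counter()
reply_flags  = Counter()

for pkt in packets:
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        continue
    if pkt[IP].src == SCANNER:                 # request sent by the active tool
        probed_ports[pkt[TCP].dport] += 1
    else:                                      # response from the target host
        reply_flags[str(pkt[TCP].flags)] += 1

print("most-probed destination ports:", probed_ports.most_common(10))
print("target reply flag combinations:", reply_flags.most_common())
```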
In the next section, the team found case studies about tools that were exploited and developed patterns between the exploits, developed ways to ensure the tools are not exploited, analyzed source code auditing as a method for reducing the threat of exploited tools, and identified risks that an exploited or untested tool could pose to an enterprise network.
Back in 2008, Wireshark 1.0.4 was discovered to have a flaw in the function that processes the SMTP protocol, which enabled an attacker to perform a DoS attack by sending an SMTP request with large content to port 25 (Insecure.org, 2008, p.1). The application then entered a large loop and could not do anything else (Insecure.org, 2008, p.1).
Nmap is allegedly prone to a potential insecure file creation vulnerability (Symantec, 2004, p.1). A local user may exploit this vulnerability to cause files to be overwritten with the privileges of the user running Nmap (Symantec, 2004, p.1).
Snort has been exploited via DoS attacks. The algorithm used to backtrack during Snort rule processing can be exploited to cause a denial of service and potentially allow an attacker to evade detection (Public Safety Canada, 2007, p.1). Also, an integer underflow error in Snort can be remotely exploited to cause a denial of service (Public Safety Canada, 2007, p.1).
Ettercap is susceptible to a remote format string vulnerability, which is due to the application's failure to properly sanitize user-supplied input before using it as a format specifier in a formatted printing function (Juniper, 2009, p.1). The vulnerable function is called whenever a protocol dissector attempts to log a message to the user of the application (Juniper, 2009, p.1). One particular case in which the attacker directly controls the logged data is when a protocol dissector logs usernames and passwords that have been sniffed from the network (Juniper, 2009, p.1). To exploit this vulnerability, an attacker would craft network data that results in one of the protocol dissectors logging usernames and passwords (Juniper, 2009, p.1). This vulnerability allows remote attackers to modify arbitrary memory locations, resulting in control of program execution and the ability to execute arbitrary machine code in the context of the affected application (Juniper, 2009, p.1).
Dsniff, fragroute, and fragrouter, if downloaded on or after 17-May-2002, could have contained a backdoor that was installed as part of the configure file (IBM, 2009, p.1). The Web site hosting these tools (monkey.org) was compromised on 17-May-2002 and the configure file that these tools use was replaced with a Trojan (IBM, 2009, p.1). This vulnerability could result in a complete system compromise for the affected users (IBM, 2009, p.1).
There appeared to be a pattern of tools being subject to denial-of-service (DoS) attacks that render the tool useless. Another way these programs were compromised is that they gave an attacker a way to use the tool itself to gain control of, or inject arbitrary code into, the local computer.
Several methods could be used to determine whether a tool is hostile. The tool could be tested in a virtual or live test environment to see how it interacts with the system. Vulnerability scanners could be run against the tool in question to determine whether it contains any malicious content. Passive scanners such as Wireshark could be used to capture packets that would not be transmitted if the tool were legitimate. Active vulnerability scanners could identify which ports are being used by the tool. The source code, provided that it is accessible, could be reviewed and analyzed for malicious arbitrary code.
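One concrete check along these lines, suggested by the trojaned-download case above, is to compare the hash of a downloaded tool against the checksum published by the project before installing it. The sketch below is illustrative only; the file name and expected digest are placeholders.

```python
# Sketch of one basic integrity check before trusting a downloaded tool:
# compare its SHA-256 hash to the checksum published by the project.
# The file name and expected digest below are placeholders, not real values.
import hashlib

TOOL_ARCHIVE   = "dsniff-2.4.tar.gz"       # hypothetical downloaded file
PUBLISHED_HASH = "0123abcd..."             # value taken from the vendor's site

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(TOOL_ARCHIVE)
if actual == PUBLISHED_HASH:
    print("checksum matches the published value")
else:
    print(f"MISMATCH: got {actual}; the download may have been tampered with")
```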
Source code auditing could be a great help in identifying common mistakes or in evaluating the security of software (Heffley & Meunier, 2004, p.1). Source code auditing software can be static or dynamic, and most auditing software uses static analysis (Heffley et al., 2004, p.3). Static analysis "aims at determining properties of programs by inspecting their code, without executing them" (Heffley et al., 2004, p.3). However, for the purpose of evaluating software or for trying to do quick security audits by third parties, vulnerabilities are generally undetectable because they are buried in false positives by the current auditing applications (Heffley et al., 2004, p.7). It is also difficult to determine whether the inputs for all calls to a function have been validated in previous code or have been generated in a safe manner (Heffley et al., 2004, p.6).
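As a rough illustration of why static auditing produces so many false positives, the sketch below is a naive static check (not a real auditing tool) that simply flags calls to functions commonly involved in buffer-overflow and format-string defects in a tree of C source files; because it only pattern-matches call sites, it cannot tell whether the arguments were validated earlier, which is exactly the limitation described above.

```python
# Naive static-analysis sketch: flag calls to functions commonly involved in
# buffer-overflow and format-string defects in a tree of C source files.
# It only pattern-matches call sites, which is why real audits drown in
# false positives -- it cannot tell whether the arguments were validated.
import re
from pathlib import Path

RISKY_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets|printf|syslog)\s*\(")

def audit(source_root: str) -> None:
    for path in Path(source_root).rglob("*.c"):
        for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
            match = RISKY_CALLS.search(line)
            if match:
                print(f"{path}:{lineno}: possible unsafe call to {match.group(1)}()")

audit("./tool-source")   # hypothetical directory containing the tool's source code
```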
Source code auditing would not be feasible in an enterprise environment, because testing should never be done directly on the production network; a test environment would be the ideal location for performing source code auditing. It also requires personnel who are familiar with the code, which may not be available in certain organizations. The code must be carefully inspected to see whether values could be manipulated in such a way as to produce malicious effects, which means the process would be extremely time consuming (Heffley et al., 2004, p.1). A testing environment is needed to do this analysis of the software, which involves setting up an isolated network that can be virtual or consist of physical hardware. Companies are most likely not going to dedicate the time, money, and personnel to this task; they will most likely contract this testing out.
There are several types of risks that could affect the enterprise when using untested or exploited tools in penetration testing. As pointed out in the case study section, certain distributions could install backdoors when the tools are installed. The effectiveness of an untested tool is largely unknown, and it could give a false sense of security if it does not reveal any vulnerabilities. An untested tool could also affect the stability of the system it is run on.
Issues
In this lab, the team came across a few issues. The virtual machines behaved erratically when accessed off campus. We had trouble with the installation of Wireshark and Nessus on the virtual machines. There were some issues with getting the right commands for Nessus and Nmap to run a proper scan. With Wireshark, we had problems with it overflowing memory and locking up the VM; to solve the problem we reduced the Nessus scan and filtered out unwanted packets. We originally had Wireshark loaded on Windows Server 2003, but found that it did not have an adequate amount of hard drive space. We also had difficulty locating information on exploits in attack tools.
Conclusion
In conclusion, this lab has shown how to identify what a passive reconnaissance tool is. After performing a taxonomy of the passive reconnaissance tools, using the tools we compiled in the first lab, it was discovered that the passive tools were mostly located in the upper layers of the OSI seven-layer model. The passive tools also mostly attacked the confidentiality of information being transmitted on the network. It was shown that even if an active reconnaissance tool is slowed down to obscure its actual intent, it is still considered an active tool and not a passive tool.
This lab demonstrated how a passive reconnaissance tool could be used not only to gather information from transmissions on the network, but also to analyze an active reconnaissance tool, gathering information on how that tool operates and how the target responds to its requests.
Lastly, a set of case studies showed how penetration tools are themselves subject to exploitation. It was also shown how this malicious activity could be detected by introducing the tool into a testing environment and examining it closely to spot any unintended activity under different circumstances. It was also shown how testing every tool that will be used in an enterprise is not viable because of constraints on personnel and resources.
References
Godefroid, P. (2007). Random testing for security: Blackbox vs. whitebox fuzzing. ACM.
IBM. (2009). Fragroute-host-download-backdoor (9272). Retrieved June 23, 2009, from http://xforce.iss.net/xforce/xfdb/9272
Insecure.org. (2008). [SVRT-04-08] Vulnerability in Wireshark 1.0.4 for DoS attack. Retrieved June 24, 2009, from http://seclists.org/bugtraq/2008/Nov/0164.html
Jung, J., Sheth, A., Greenstein, B., Wetherall, D., Maganis, G., & Kohno, T. (2008). Privacy Oracle: A system for finding application leaks with black box differential testing. ACM.
Juniper. (2009). Ettercap remote format string vulnerability. Retrieved June 23, 2009, from http://www.juniper.net/security/auto/vulnerabilities/vuln13820.html
Public Safety Canada. (2007). Denial-of-service vulnerabilities in Snort. Retrieved June 23, 2009, from http://www.publicsafety.gc.ca/prg/em/ccirc/2007/av07-006-eng.aspx
Symantec. (2004). Nmap potential insecure file creation vulnerability. Retrieved June 23, 2009, from http://archives.neohapsis.com/archives/fulldisclosure/2004-08/att-0563/nmap.pdf
The first item that glared at me was the format of this lab report. In numerous places question marks appeared in the middle of words, for example "e?ective" in the first sentence of the second paragraph of the lit review. What happened to these words? Was there an issue posting to the blog, and did this not show up when the group previewed their post before submitting? In other places the question mark was followed by "le". These errors made reading the first part of this group's lab report very difficult. For the future, please make sure that everything looks good before submitting the post. The literature review reads like a list: the group talks about one article and then talks about the next. There needs to be more cohesiveness within the literature review. Do not make the literature review sound like you are answering the list of items required to be put into it; you can answer the items through the review itself instead of saying "The article related to the lab" and then stating it. Integrate everything together. The group also quoted the articles a lot; the only statements about the articles that were in their own words were the required items they had to answer. Read the articles and understand the process. Jung et al. reviewed 26 applications, not 20. I don't know whether, because of the formatting issue, the group actually wrote twenty-six or just twenty.
The group did not separate the methods section from the results or discussion section. It appears that all of their information was part of the methods section and that there were no results. The first two paragraphs of the methods section read more like an abstract stating what was going to happen in the lab rather than the actual steps taken to get the results. I liked that the unneeded technology column is finally gone from the table of tools and how they relate to the McCumber cube. In the methods section, while the tools are being discussed, the group once again has basically everything taken from a source; the information seems to be just sentences from other places, and the group needs to use their own words more. The group does not discuss how to slow down the tools; this question was completely missed. I would like to have seen more detail in their results. This is one of the first labs in which this group actually ran into issues; problems are a way to learn from our mistakes. Did this group ever think about performing this lab on actual equipment like team 3? Team 3 thought that there might be problems using the Citrix environment, so they used their own equipment. For future labs, please make sure that the formatting is correct before submission and make the literature review more cohesive.
The literature review still treats each of the articles separately and does not relate any of the content to the activities of the lab (except for one mention of Wireshark) nor to the topic of passive reconnaissance. The beginning of the literature review contains a direct quote from one of the assigned readings for the lab. Instead of copying the text with quotes surrounding it and a citation, it appears that the text was copied via OCR software, turning the "ff" in "effective" into a "?". This shows a lack of proofreading; surely that wouldn't have passed a built-in spell checker. The same problem is seen later in the paragraph for more text that was directly copied from the source material. Overcitation is a major issue, especially in the paragraph about the methodology used in Jung et al. (2008). If the source material is being quoted that often, this is more of a literature summary than a review.
The methodology for part one of the lab doesn’t have enough detail. Where are the tools coming from? How did you find them? How are you classifying them? Links to the tools in the table would have been helpful too. The answer to the question in the lab of the effect of time on the process seems out of place and doesn’t really relate with any of the surrounding materials. The methodologies of part 2a of the lab don’t say how you overcame the limitations imposed by utilizing a virtualized switched network. The traffic wouldn’t have been visible to the other hosts on the virtual switch.
The findings section seems to be included under the methodologies heading. The findings for part 2a of the lab don't discuss the vulnerabilities found in the machines that were scanned. It would've been helpful to see the vulnerabilities found in each scanned machine compared and contrasted with each other, particularly the two different service pack levels of Windows XP. There weren't really any methodologies present for part 2b, but there was an extensive list of tools with security flaws; only a few of those listed actually contained exploits from malicious tool authors intended to compromise a host. The discussion on methods of verifying tools mentions source code auditing and says that it would be difficult in an enterprise, but the authors fail to identify the issue with availability of source code. Some commercial tools won't have source code available; if you've made the decision to audit the code of all tools, how will this be dealt with?
Team four begins their lab with an abstract that meets all of the requirements of the syllabus: it is the required length and explains the topics that will be covered in the rest of the lab. The literature review presented by team four, however, does not represent a scholarly or academic literature review. While both of the required readings are analyzed and the questions presented in the syllabus are answered, the literature review lacks any kind of cohesion and is nothing more than a listing of the articles to be reviewed, with the reviewer's comments and APA style citations. Team four needs to do a better job in this area. We have completed three of seven labs this semester, and thus far all three literature reviews have been completed in this manner. There were also a large number of in-text citations, which made the literature review distracting and slightly difficult to read. I must question the statement that there were no errors or omissions in the Godefroid article. Since that article was nothing more than a reference to what Godefroid was going to speak about, there is no data given to support its conclusions; this should in and of itself constitute an omission. Team four lists a unified methods section that makes the lab simpler to read. However, there is no findings section, and all of the data that should have been listed in a findings section is scattered without explanation in the methods section. While the methods presented are lengthy, they do not list the strategy or technique used to complete the lab, but rather just what was performed; this does not represent a scholarly or academic methods section. The table that is presented should also not be in the methods section but should be listed at the end in a figures section, with a reference given to the table in the methods section. The table itself is shorter than the active recon table presented by team four, which on the surface suggests an evaluation of the entire toolset, but upon further examination it becomes obvious that team four just removed the tools they considered to be active without much consideration of actual use or the OSI layer at which each tool works. The tools listed in layer 6 relating to HTTP are questionable, since HTTP is actually a layer 7 protocol in the OSI model. I also question the layer 5 tool nbtscan, which is not passive in nature and would be discovered by most intrusion detection tools, as well as an IP spoofer at layer 2; last time I looked, IP was a layer 3 protocol, not a layer 2 protocol. The lack of a findings section makes the lab difficult to read and understand and also calls into question the scholarship of the lab and the team. Team four did, however, agree with team three and disagree with team one on the bias shown by NMAP and Nessus: team four has stated that there is a UNIX system bias for these tools, a conclusion with which I also agree.
Team four's abstract was unclear. The muddled writing style makes it difficult to sift out the content. Where did your definition of passive reconnaissance come from? The first sentence of the second paragraph is unclear: is the tool passive, or is the traffic flowing passively? I don't understand the meaning of the phrase "cause the target to become available." Are you saying that vulnerabilities in tools can be exploited to gain control of the system? The last two sentences of the abstract are also unclear. You mention analyzing and an analysis, but what for?
The literature review, though verbose, offered very little. I could have (and did) read the articles. A shorter synopsis would have sufficed. You don’t relate the Godefroid article to the lab at all, and give a really thin explanation of how the Oracle Privacy article is related. You don’t evaluate either of the articles. What’s good about them? What is bad? Are they useful? How? If I didn’t know better I would think you are trying to cover a lack of understanding of the article with extremely long retellings in the hope that no one would notice the missing evaluative content.
The group’s methods section is weak. The first paragraph is vague and in no way gives a repeatable methodology. Several of the tools listed in the table are suspect. Do they all have a reconnaissance function or are some used for other things? You have an IP tool in layer 2, why? You discuss slowing down an attack. How did you get your information? You mention the sieve rate when discussing part 2A. What is it? Where did you get this term?
Several issues come to mind in regard to part 2A. Do you think running NESSUS against Backtrack had anything to do with the fact that the vulnerabilities were focused on UNIX based operating systems? Why does it matter that NMAP is designed for "those with experience in UNIX/Linux environments"? It should be a common skill set for graduate students in an information technology program. You didn't really need to conduct the tests to find out what port NESSUS uses; it's in the documentation. Would the information gained from passively scanning a network while an active tool is running be at all useful?
In section 2B, how did the team find its case studies? You say that the cases you detail show a pattern of DoS vulnerability, but only one tool is listed as being vulnerable. How is that a pattern? You offer several vulnerabilities but are they all “incidents”? Is there a difference? You state that you are trying to determine whether or not a tool is hostile. Can a tool be hostile? Hostility implies intent. How would a vulnerability scanner detect malicious content? Vulnerabilities, yes, but would it show an altered tool? Why all the quotes about source code auditing? Was there a point to that paragraph? Are there alternatives to outsourcing source code auditing that might be more viable for an enterprise? It seems like auditing every piece of code would be expensive, outsourced or not. How do you verify that the contractor was diligent if you do outsource?
In your issues section you state that problems with Citrix limited your ability to get results in your lab. When you had problems with Citrix did you contact anybody? The technology not functioning should never be an excuse from technology students, especially graduate students. What trouble did you have installing the tools? What problems did you have with the commands? If you add detail here, it helps if someone tries to repeat your work.
Your conclusion says that the tools are mostly in the upper layers of the OSI model, but your table does not reflect this. You assert that the tools mostly attack confidentiality. Doesn’t reconnaissance by definition attack confidentiality? The last paragraph makes no sense. I get that you are trying to recap section 2B of the lab, but it is unclear what you are trying to say.
Group 4 begins with an abstract that compares the lab 2 assignment to the lab 3 assignment and describes passive reconnaissance. They then explain each part of the lab assignments. They list passive recon tool selection, how passive recon tools can be used to test active recon tools, and case studies of recon tools that had been exploited. They further describe what knowledge is to be gained from each one of these activities.
Next, Group 4 included their literature review. They begin their lit review by briefly explaining the articles that were read this week and the common theme between the two. The first article they reviewed was "Random Testing for Security: Blackbox vs. Whitebox Fuzzing" (Godefroid, 2007), although they didn't actually list the title in the review. Although this review covered the article well, in my opinion too much of the wording was copied directly from the article rather than written in their own words. Next, they reviewed "Privacy Oracle: a System for Finding Application Leaks with Black Box Differential Testing" (Jung, Sheth, Greenstein, Wetherall, Maganis, & Kohno, 2008). Although this review was very thorough as well, again much of the wording appeared to be copied directly from the original document. They then relate the article to our labs by simply stating that "Wireshark could also be used to analyze the same application programs to determine if valuable information is being leaked". Although I believe that this is part of how it applies to our labs, I believe there are other inferences in the article that apply as well. One example is the use of virtual machines as a testing environment to isolate the tests from outside influences. Another example is the use of snapshots, so that the system can be returned to its previous state before another test is performed. A third example would be how seemingly innocuous applications can send private information over the network. Therefore, passively listening on the network may reveal this private information.
Group 4 then described the methodology of the lab. A list of passive recon tools was created. They gave a valid explanation of how slowing a tool can make it more passive; they still, however, consider it an active recon tool because it sends information over the network. Group 4 gives a detailed explanation of the test environment that was used in this lab and further describes the processes they used to accomplish the lab. They continue with a detailed discussion of issues that they encountered, such as the slowness of the scanning tools and the large amount of data that has to be sorted in order to arrive at a conclusion. They discuss what was discovered from this portion of the lab assignment; for example, Nessus uses port 4482 when requesting information gathered from the network. They also discovered the types of packets sent over the network when the scan was being performed.
Group 4 was able to find five recon applications that had been exploited, Wireshark, nmap, Snort, Ettercap, and Dsniff. They further discussed procedures that could be used to test for these vulnerabilities. They concluded with a discussion of what knowledge was gained in this lab.
There were some issues with this lab as I’ve previously stated. One issue that I didn’t mention is the number of question marks that seem to be randomly placed throughout their document. This, admittedly, may be an issue with WordPress. In this report they alluded to a dislike for the command line interface of nmap. I recommend using Zenmap, which is a graphical front end for nmap. This will help to avoid using the messy command line interface.
I would like to comment that I thought this lab was exceptionally well worded and organized. The literature review was an excellent summary of the articles under review. The discussion of results was 'extensive', with content that was generally descriptive and informative. I thought the section on 'exploited' security tools very well done: the case studies and examples were well chosen. I judged the conclusions drawn to be substantially perceptive, especially with regard to 'tool' operating system bias and exploit layer patterns on the OSI model. I believe this group to have addressed directly every area which was required in the lab research: well done.
Upon examination, some issues and questions can be found with this write-up, however. There appeared to be some character encoding problems which made certain passages hard to decipher: more care in using WordPress should eliminate this flaw. Additionally, while the literature review presented an excellent summary of the articles, this is really mostly ‘all’ that it did. A significant comparison of concepts common to both articles, and contrasts if found; further, more than a trivial reference to application in the lab exercise: these inclusions would improve this section. Also, it appears that the ‘Results’ heading is missing, possibly due to WordPress submission issues.
I searched for a clear definition of what this team defined 'passive reconnaissance' to be, and found a vague description in the abstract section set out as "the target being [un]aware" of the reconnaissance. This definition seems to be further refined in the discussion of 'slow' active tools, where the team asserts that 'active' implies that "[data is] being sent to the target" and that this characteristic is not changed by the rate at which it is performed. This leads to the converse property: under no circumstances may 'passive' tools send data to the target. I raise this because I am uncertain as to the rationale for the inclusion of some tools in the 'passive' tools table, since they do not fit the definition being evolved within this write-up. For instance, why are 'Nbtscan,' 'Unicornscan,' 'Spoofed IP 5.1,' and 'ARP spoofing' included in this list? These all violate the implied 'no data sent to the target' definition, and therefore would fall under an 'active reconnaissance' definition. This is by no means the only problem seen with the tool list; there are a fair number more.
In a further discussion of the results, I would agree with the assertion that ‘Nmap’ and ‘Nessus’ are primarily biased toward UNIX-like operating systems; however, the reason given for ‘Nmap,’ specifically that the “setup of the command environment” implied that it was UNIX biased is unsatisfactory. In my experience, no consistent patterns really emerge in regard to command-line executable parameters and operating systems (especially in UNIX-like systems), if this is what is meant. If it is being implied that the ‘use’ of the command line is ‘UNIX’ indicative, Microsoft’s ‘Powershell’ relies on the “setup of the command environment” exclusively for its operation (and is in itself a powerful security testing implement). Does this imply that it is also biased toward UNIX like systems? I do not see a significant correlation between these two independent attributes.
Finally, I would comment that the description of the 'meta exploit' tests was rather vague. From the description of the setup, it appears the test was done correctly, and it is obvious from the write-up that an entire network scan was done. This certainly would generate a large amount of captured data; what is missing is a description of the hosts and connections found in the traffic encountered. I believe that the data which was analyzed was most likely mainly the product of the single 'monitoring' host being scanned by the 'attacking' host. I am certain that if the traffic were filtered with the 'conversation' filter in Wireshark, significantly different interpretations would be made. I will not recount the issue of 'switched networks' once again in this review, but if curious, it is addressed in (our) team three's write-up. I would submit that since the bulk of the information gained in the 'observer' and 'attacker' scenario is from the single event of the 'observer' host being remotely scanned, and as the attacker already 'owns' the 'observer' if Wireshark is being run from it, no real information of use is gained in this situation. This is similarly true if a system network administrator is 'observing' from this machine: the only thing learned is that a certain machine is scanning the network, but by this time the IDS would be far into alarm mode already.
I thought their abstract was well written and explained what they were planning to do in Lab 3. Reading through the post I saw a lot of formatting errors; there were question marks in words and grammar mistakes. I think this group should make sure their paper is spell checked and grammatically correct before posting it to the blog; it is difficult to read when there are such errors. I liked the way they organized their lit review. I thought it was easy to read, and they tied the articles together nicely with the lab exercise.
The group talked about the methods used to test Privacy Oracle, but then they had a separate Methods section. Shouldn't this have been in the Methods section? They did not separate the methods section from the results or discussion section. I would have liked them to separate part 2a from 2b. Again, they discussed issues in the literature review but then had a separate issues section, which was confusing to me. Their discussion of tools that have been exploited and the cases they found were good. Also, their discussion of source code auditing was well documented. Just a reminder: in the future, be sure to spell check and grammar check before posting to the blog.
The first thing that cannot be ignored before I go into the abstract is what happened with the question marks in the literature review. They stuck out like a sore thumb and should have been caught before being posted. When reading the abstract, I found that near the end the writers put something that looks like it belongs in a conclusion rather than the abstract: "This analysis will show how analyzing these tools is not viable in an enterprise environment." The purpose of an abstract is to set up the lab; this sentence would make more sense in the conclusion, because at this point the abstract is supposed to be just the setup for what is going to happen. Sorry if I am wrong, but it just seemed out of place. Now back to the literature review. Again, this section was hard to read because of the issue with the question marks. It was just a review of the two pieces of literature and could have had more depth. Try finding more information to put within the literature review to help support or argue with the opinions of the authors; when writing the literature review at the required size using just the two pieces of literature, it becomes extremely hard and the same point gets repeated and beaten to death. The literature review also needs to be more cohesive: it is still broken up, and the pieces need to compare and contrast with each other, with what is going on within the lab, and with any conflicts that may arise. Next we go on to the methodology section. Here the methodology and findings sections are merged together. Part of the findings is what the students found out while doing the lab, not just "we did this and then this"; that makes it sound robotic, as if nothing is being gained from hands-on experience with penetration testing. The findings section could also have stated what view the students take, so that the viable-or-not-viable argument for the enterprise environment is understood. After this the lab goes into the conclusion section. I found that the conclusion contained some information that could have been used in the findings. It also seems to drag on; using "In conclusion" and then "Lastly" in this section made the reader believe that there might be yet another point. Also, describe why the testing tools are not viable in an enterprise setting. Are there any tools that could be used in corporate environments? In my opinion there are tools that could be useful when working on a system. I would not likely choose to run anything like Nessus across my network, due to the large amount of attacks it sends at the network, but there are other tools that can help find issues, which the engineers can then resolve. Why would Wireshark not be a viable tool if packets crossing the network need to be analyzed? In what ways do these tools leave the environment exposed? What makes a tool hostile or non-hostile? Next time, include not just how the tool may work but also questions about whether or not it could be used outside a test environment.
I think that group 4's write-up for lab 3 was good overall. The abstract for this lab was adequate in terms of length and consistency. The literature review was good overall; group 4 answered almost all of the required questions. The group did discuss how the readings related to the lab, but did not discuss whether or not they agreed with each reading. All of the citing for the literature review was done well, and the page numbers for the references were also included. Once again, I feel that the literature review was cited too much and seemed more like cliff notes than a comprehensive analysis. The thing that strikes me most when reading this lab is the formatting; the group did not check this over before submitting it. The spacing is doubled in some places and missing in others, and random question marks and other characters are also common. While the content is good, the formatting makes it difficult to read. The group seems to have an accurate analysis for part one; it seems a bit short, but accurate nonetheless. For the methodology for part 2A, the group covers the process well and performs a good analysis. Group 4 decided to use filters in Wireshark to capture only the correct packets (good idea!) and determined that a longer attack is less likely to be detected based on the large amount of packets on the network. The group also assessed different types of scans and the relationship between speed and information obtained. When comparing the vulnerabilities to the grid, the analysis had the correct direction but came up short. It seems that for section 2A all of the correct testing was performed and the analysis seemed accurate; however, the group did not go very far in depth with their findings and could have had a very good analysis if they had elaborated more. For part 2B, the articles chosen were very good and pertain to the lab. The issues and problems section accurately described the issues the group faced, and the conclusion summed up the group's findings well. Also, it seems that only the literature review suffered from the strange formatting.
The team started with a strong abstract indicating what they were going to talk about in their laboratory. Their literature review was in depth. They talk about what passive reconnaissance is and how they will use it, and they covered the different tools that the team was going to use for their reconnaissance and monitoring. In the methods for the second part of the lab, the team indicated that they installed Nmap on Backtrack, Nessus on Windows XP SP3, and Wireshark on Windows XP SP0. Per the instructions, one machine was to scan a target machine while another listened, and Nessus and Nmap were to be installed on one machine. Backtrack already has Nmap installed; how was Nmap installed on this VM, or was Nmap updated to a newer version? How was Nessus installed? Other groups showed how the installs of Nessus and Nmap were done. Wireshark seemed to be the most popular scanning tool across the groups, and was mostly run in Backtrack. The team then talks about Nmap being prone to potential insecurities. They get this information from Symantec and only mention that the problem exists. Does this problem exist on the local host if running on a Windows machine? Does the file creation vulnerability happen in Backtrack, where the local hard drive remains unmounted? The group then goes on to talk about Snort, Ettercap, Dsniff, fragroute, and fragrouter; they mention that Dsniff, fragroute, and fragrouter contained a backdoor if downloaded on or after May 17, 2002. Source code auditing can be a great way to review mistakes that the coder has made in the software. In the team's issues they mention that Wireshark was originally installed on Windows Server 2003 but ran out of hard drive space, so they switched to a different VM and then had later problems with overflowing the memory and locking up. Would it perhaps have been better to use a different VM such as Backtrack? Could this have solved your memory problems, since Wireshark is pre-installed on Backtrack?