Abstract
This lab builds on the research and findings from the previous labs and goes into more detail about some of the exploits researched previously. The team will first work with Nmap. Given the option of choosing either Nmap or Nessus, the team chose Nmap because of its familiarity with the tool. Next, the team will find stand-alone tools that work with Nmap, and will then verify that those tools work by testing them against the system set up by the team in laboratory 1.
The next part of the lab was to research exploit codes and find any repositories in which these codes can be found. One specific rule was not to use any anti-virus vendors, which made finding some sites more difficult. Once the exploits were found, the level of expertise needed to use them was examined. The final part of the lab was to look for any patterns among the exploits when compared to the OSI seven layer model.
Literature Review
This lab builds on the previous laboratory experiments. In the past labs, the team made charts and researched exploits. The readings for this week's lab dealt with red teaming and penetration testing, as well as with setting up test beds and planning a penetration test. For this lab report, the team continued its research into different exploits and set up a method for testing some of the exploits found previously. In order to properly run a penetration test, one must have a plan for what to exploit and where the exploit will attack. All of these items are necessary to run a successful red teaming experiment.
The first part of this lab was to research exploits found in the team's Nmap application. Once done, the team had to find stand-alone exploit tools that work with Nmap, and then test and verify those tools. This is very similar to the approach in the article Red-Team Application Security Testing. In this article, Herbert Thompson and Scott Chase set up a plan for red teaming, just as the team did for this lab. Thompson and Chase state that "red teaming lets testers attack a system the same way an intruder would be able to" (Thompson & Chase, 2003, pp. 18-20). The authors realized that firewalls and the other equipment and software used for protection are not enough; penetration testing is a way to further protect a system, or in other cases software. This lab follows the method set up by the authors in this article: research, make a list of tools to use, test the tools, and write down any findings. One interesting detail is that the article has no references, only some code given to the authors by somebody else. Another article that follows a similar method of penetration testing is Network Penetration Testing by Liwen He and Nikolai Bode. These authors also believe that merely having protective equipment and software is not enough: "The current IP networks are not sufficiently secure, hackers are able to exploit existing vulnerabilities in the network architecture and obtain unauthorized access to sensitive information and subsequently disrupt network services" (He & Bode, 2005, p. 3). The basic steps the authors used were to research, make a list of vulnerabilities and exploits, test the exploits found, and finally document the results. The second part of the team's lab was to further research some exploits and see how they fit into the OSI seven layer model, which looks very similar to the second half of the authors' article: a brief synopsis of the tools found. He and Bode's research consisted of finding some exploit tools, stating what they are used for, and listing the vulnerabilities of the IP backbone that they used. The authors did clearly state their goal: to research the vulnerabilities of their IP backbone and document them so that, in future work, they can fix them and run the penetration test again.
The next part of the lab was to use the research that was performed to set up the rest of the penetration testing. In 2005, James Davidson wrote the article Vendor System Vulnerability Testing Test Plan. The author performed penetration testing on the Idaho National Laboratory's (INL) SCADA test bed, a test environment used by the INL to test all changes to its SCADA environment before deploying them to the live environment. Before the actual testing could be done, the author had to figure out who the customer is, what profiles are being tested, and how long the testing process will take (Davidson, 2005, pp. 1-3). This is different from what the team is doing for its lab experiments. While the team performed research on exploits and where they fit into the OSI seven layer model, there is no customer to identify. But like the author, the team determined from its research what it was going to attack and what the exploits would do once performed. The team realizes that a schedule would have made the penetration testing go more smoothly, but also that there was not enough time to set up all of the same items as Davidson did in his article. This article is a great test bed and plan for penetration testing, but it seems to be less a scholarly article than a document to be used by a penetration tester, and it could be used by any organization that runs a SCADA system. The lab reports made by the teams will be compiled into one document. This document should allow anyone to pick it up and perform a successful penetration test by following the methods used, as well as look up the articles researched and form their own steps for the process.
Another part of this lab was to create a list of targets based on the research performed and the exploits found. With the list of exploits, the exploit testing began, and the results were documented and then discussed. The article Creating the Secure Software Testing Target List talks about the importance of creating a list of exploits, and what they attack, before the penetration testing begins. The authors state, "The key prerequisite for enabling and evaluating these capabilities is the development of standardized knowledge to describe the types of security weaknesses to test for in software and the types of attacks to simulate in order to test attack resistance" (Martin & Barnum, 2008, p. 1). This means that in order to properly test for security weaknesses, one must research the weaknesses and see what exploits can be used against them; this helps the team understand them at a higher level. Without researching different exploits and comparing them to the OSI seven layer model, as well as other models, the team would not fully understand how an exploit works, which in turn could mean that the team would not be able to protect against the attack properly, due to that lack of understanding. This article seemed to be just an introduction to the research performed by the authors. All of the required readings for this lab experiment related directly to the steps of the process and showed the team what needs to be done before the actual penetration testing can begin.
Methods
Part 1
For this section, Nmap was chosen for the application analysis. The exploits used in Nmap were researched and recorded along with their relationship to the OSI seven layer model. The effectiveness of these exploits was tested in the Citrix environment by using a virtual machine running Ubuntu Linux to run an Nmap scan against a Windows XP SP3 virtual machine. All of the traffic was captured on another virtual machine running BackTrack 3 using Wireshark. The operating systems that could be affected by these features were then researched, as were stand-alone security tools that might work with Nmap. The stand-alone tools found were then tested against working systems to verify their success.
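The following is a minimal sketch of how such a run could be carried out from the command line; the interface name, capture path, and target address are assumptions for illustration, not the team's exact values.

# On the BackTrack 3 VM: capture the subnet traffic with Wireshark's
# command-line capture tool (interface and output path assumed)
tshark -i eth0 -w /tmp/lab4_capture.pcap
# On the Ubuntu VM: SYN scan with OS detection against the Windows XP
# SP3 target (address assumed)
nmap -sS -O 192.168.1.1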
Part 2
The second part of this lab was to research exploit codes and any repositories of this information. Once the team finished researching the exploit codes and the databases that contain them, it evaluated the level of expertise needed to use them. Next, the team compared the exploits found to the OSI seven layer model, looked for any patterns, and discussed the conclusions drawn from those patterns.
Findings
Part 1
After researching Nmap's exploits, it became apparent that Nmap, while it performs many functions, does not actually perform many "exploits". According to http://insecure.org, Nmap does not run exploit code on a system to determine vulnerabilities (insecure.org, 2009). The reasoning is that the tool is not intended to compromise a system but rather to gain information by testing with legitimate packets (for the most part). A computer exploit can be defined as "a known method to use a vulnerability in an operating system or software that would allow an unauthorized user the ability to have greater access than they normally would" (Bleeping Computer LLC., 2009). This suggests that the majority of the functions offered by a simple scanning tool, such as Nmap, cannot be classified as an "exploit". This is true because the main function of Nmap is simply to send common packets, such as ICMP and SYN packets, to a host to determine its services.
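As a minimal sketch (the target addresses are placeholders, not the team's exact values), the following commands illustrate this legitimate-packet behavior: a ping sweep that relies on ICMP echo requests for host discovery, and a TCP SYN "half-open" scan that enumerates listening services.

# Ping scan: host discovery only, no port scan
nmap -sP 192.168.1.0/24
# TCP SYN ("half-open") scan of the low ports on a single host
nmap -sS -p 1-1024 192.168.1.1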
This type of active reconnaissance attack is about on par with an operation such as connecting to a web site via a web browser to determine if port 80 is open. Consider the following example. Someone attempts to break into another individual's home, first by knocking on the door to see if anyone is home. Then, if the individual is gone, he or she climbs in through a window that was left open. The act of climbing into the home resembles an exploit: the open window is a vulnerability that allows someone to bypass all other security measures (i.e., locked doors) and have full access to the home. This kind of "exploit" could then attack the confidentiality, integrity, or availability of the homeowner or home depending on what the attacker chooses to do (this could be considered the payload of the exploit). The act of knocking on the door resembles an active reconnaissance attack: a legitimate operation that does not rely on a vulnerability, but simply allows an attacker to gain information about which exploit to use. Therefore, for the most part, Nmap does not actually perform exploits against a target machine but rather uses legitimate network traffic to determine the availability of hosts and services. When considering the McCumber Cube, Nmap does not attack confidentiality, integrity, or availability but rather confirms availability (for the most part).
There are, however, some features of the Nmap scanning tool that can be classified as exploits. For instance, in order for Nmap to determine the operating system of a host, it must modify standard TCP packets. Modification of packets could classify this as an exploit, but in most instances it does not, because the data being sent is legitimate. Nmap will often gain an operating system fingerprint by "setting flags in the header that different operating systems and versions respond to differently" (Long, et al., 2006). Setting flags in the packet header is part of normal network transmission and does not really qualify as an exploit. However, in some instances operating system fingerprinting can be considered an exploit, due to the fact that "some older operating systems AIX prior to 4.1 and older SunOS version have been known to die when presented with a malformed packet" (Long, et al., 2006). If a malformed packet is used and results in failure of the operating system, then the process can be classified as an exploit that attacks the availability of a system. Of course, this is most often a side effect of reconnaissance and can greatly increase the likelihood of detection when scanning a system for vulnerabilities.
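For reference, a minimal sketch of invoking this fingerprinting feature (the target address is a placeholder):

# -O enables OS detection; --osscan-guess asks Nmap to report its best
# guess even when the fingerprint match is not exact
nmap -O --osscan-guess 192.168.1.1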
Nmap also has two other features that can be classified as exploits: the ability to spoof MAC addresses and IP addresses. This can be considered an exploit because Nmap can forge packets to indicate a source IP address and/or MAC address defined by the attacker, rather than the actual addresses of the attacker's machine. When considering the McCumber Cube, this exploit can be classified as an attack against integrity. This is true because clients on a network assume the validity of the information in packet headers. When this information is spoofed, the client may accept the uninvited data as a valid security test from the owner of the network rather than an attack.
When considering the systems that are vulnerable to these "exploits", the list is very large: it would appear that all systems using the TCP/IP protocol suite are vulnerable. Therefore, operating systems including Linux, Microsoft Windows, FreeBSD, OpenBSD, Solaris, IRIX, Mac OS X, HP-UX, NetBSD, SunOS, Amiga, AIX, BeOS, and more are all vulnerable. While the effectiveness of port scans can be minimized or avoided completely with proper firewall rules, detecting IP and MAC spoofing is almost impossible. Since it is just the packets that are manipulated and not the system itself, detection requires specific environmental circumstances. For instance, if a port scan is detected on the network, the node can be located on the subnet by tracing the traffic back to a physical port on the switch. If that port is registered with a MAC address and IP address different from the addresses specified in the captured packets, it can only mean that it is either the wrong node or that IP/MAC spoofing has occurred. It cannot be determined whether the packet itself has been spoofed; however, security professionals should always assume the possibility of IP/MAC address spoofing when attacks are detected.
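As a minimal detection sketch (the IP and MAC values are the hypothetical ones used later in this report, not known-good registrations), a capture filter can flag frames that claim a host's IP address while carrying an unexpected source MAC address:

# Print link-layer headers (-e) and match traffic claiming to come from
# 192.168.1.2 while using some other source MAC address
tcpdump -e -n -i eth0 'src host 192.168.1.2 and not ether src 00:0c:29:76:2e:9a'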
When searching for stand-alone security tools that could be used with the Nmap exploits, the list covers almost all tools that operate over TCP/IP. Because Nmap is a network-scanning tool, the information it gathers can be used in any security tool that requires information such as the IP address or port that a target node possesses or uses. Nessus contains integration for Nmap to be used as the actual port scanner while Nessus itself scans for vulnerabilities on those ports. For command-line tools, Nmap can be integrated by using shell scripting. For instance, the tool Ettercap allows for ARP poisoning of hosts to perform man-in-the-middle attacks. While the tool offers its own ping-sweeping feature for discovering hosts to attack, it is very limited: the attacker has no control over the type of scan, the timing, or the spoofing of source IP/MAC addresses. To show how these two tools can be integrated, the following shell script was created:
#!/bin/bash
# Script written by Nick Prendergast
#####################################
# Check to see if the user is ROOT
if [ "$USER" != "root" ]
then
    echo "You have to be root to use this script!"
    exit 127
fi
# Begin Nmap scan
echo -n "Do you want to scan the subnet for hosts? (y/n): "
read nmap
if [ "$nmap" = "y" ]
then
    echo -n "What is the subnet or host? (subnet ex. x.x.x.0/24 host ex. x.x.x.x): "
    read subnet
    echo -n "Enter a spoofed IP address of your choice (ex. x.x.x.x): "
    read ip
    echo -n "Enter a spoofed MAC address of your choice (ex. 00:DE:AD:C0:DE:00): "
    read mac
    echo -n "Enter the ethernet adapter you wish to use (ex. eth0): "
    read eth
    echo -n "Press enter to start the scan: "
    read junk
    clear
    # Run the desired Nmap command: SYN scan with OS detection, using the
    # spoofed source IP (-S) and MAC (--spoof-mac) on the chosen adapter (-e)
    nmap -sS -O -S "$ip" --spoof-mac "$mac" "$subnet" -e "$eth" > /tmp/nmap_output.txt
    # Check for errors
    if [ "$?" != "0" ]
    then
        echo "Oops! Something went wrong..."
        exit 127
    fi
fi
# Begin Ettercap ARP poisoning
echo -n "Enter the victim IP: "
read victim
echo -n "Enter the gateway IP (enter nothing to poison all hosts): "
read gateway
echo -n "Enter the interface: "
read iface
echo -n "Messages will be saved to /tmp/messages.log. Press enter to start the attack!"
read junk2
# Run the desired Ettercap command: text interface (-T), quiet (-q),
# message logging (-m), remote ARP-poisoning man-in-the-middle (-M arp:remote)
ettercap -T -q -i "$iface" -m /tmp/messages.log -M arp:remote /"$victim"/ /"$gateway"/
# Check for errors
if [ "$?" != "0" ]
then
    echo "Oops! Something went wrong..."
    exit 127
fi
exit 0
The above script allows an attacker to run an Nmap scan, which offers better results and firewall evasion, instead of using Ettercap's built-in host discovery. When running the script, the user is prompted for the subnet and the desired spoofed IP and MAC addresses. Once the scan completes, its results are written to a file so the attacker can analyze them, copy the IP address of the host he or she wishes to attack into the script, and let Ettercap continue the attack against that host. The script was run in the test environment and tested to be working. For instance, the following packets were captured during the Nmap scan, which indicate the use of IP and MAC address spoofing:
Frame 5 (60 bytes on wire, 60 bytes captured)
Ethernet II, Src: 00:de:ad:c0:de:00 (00:de:ad:c0:de:00), Dst: Vmware_76:2e:9a (00:0c:29:76:2e:9a)
Internet Protocol, Src: 192.168.1.250 (192.168.1.250), Dst: 192.168.1.1 (192.168.1.1)
Transmission Control Protocol, Src Port: 63066 (63066), Dst Port: domain (53), Seq: 0, Len: 0
The packet above originated from an Ubuntu Linux virtual machine and was captured on a virtual machine running BackTrack 3 using Wireshark. The IP address of the Ubuntu Linux machine is 192.168.1.2; however, the captured packet indicates that it was correctly spoofed to the attacker-defined address of 192.168.1.250. The MAC address was also spoofed to read "00:DE:AD:C0:DE:00", which clearly does not contain the VMware MAC address prefix of "00:0c:29" the way the destination MAC address does.
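The same verification can be done from the command line against a saved capture; a minimal sketch, assuming the hypothetical capture file name used earlier:

# Read the saved capture, print link-layer headers (-e), and show only
# frames whose source MAC is the spoofed 00:de:ad:c0:de:00 address
tcpdump -r /tmp/lab4_capture.pcap -e -n 'ether src 00:de:ad:c0:de:00'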
From the previous research, there is much to be understood about these exploits. First, most of the features of the tool do not constitute actual exploits but rather attacks used to gain information about a target host without compromising it. After researching the tool, the types of exploits that Nmap can perform, and how they work, became apparent: IP/MAC address spoofing is an attack against network transmission integrity, and OS fingerprinting can be an attack against data storage, processing, and transmission availability. The success of these attacks was tested using the Citrix environment to confirm system vulnerability. The strategy used to gain this knowledge consisted of researching the tools used, the technologies they exploit, and the types of attacks these exploits cover, as well as testing the exploits in a virtual environment. These exploits appear to be effective for several reasons. First, the problems associated with the TCP/IP protocol suite allow Nmap to spoof the source IP and MAC addresses while remaining undetected. Since the operating system detection feature can cause the remote shutdown of older AIX and SunOS systems, it has the ability to attack the availability of those systems (though it is hardly a usable attack, as during the fingerprinting process the system may die before the attacker is even aware of the host's existence). This can, of course, lead to easier detection of the attack. Finally, the Nmap exploits are evidently effective based on the results of capturing the scan in Wireshark. The packet capture shows that the packets were sent with an IP address and MAC address that differ from the host's actual addresses, which proves the success of the data transmission integrity exploit.
OSI Layer              | Exploit              | McCumber Cube Relation
Layer 8 / People       |                      |
Layer 7 / Application  |                      |
Layer 6 / Presentation |                      |
Layer 5 / Session      |                      |
Layer 4 / Transport    | Fingerprinting       | Confidentiality
Layer 3 / Network      | IP Address Spoofing  | Integrity
Layer 2 / Data Link    | MAC Address Spoofing | Integrity
Layer 1 / Physical     |                      |
Layer 0 / Kinetic      |                      |
Part 2
Our research has produced the following sources that provide published exploits.
Public Advisories List/iDefense (http://labs.idefense.com/intelligence/vulnerabilities)
The level of expertise needed to use the exploits published by this source includes:
In-depth knowledge of networking protocols in general, and TCP/IP in particular.
In-depth knowledge of the inner workings of operating systems in general, and UNIX and Microsoft in particular.
In-depth knowledge of the inner workings of SQL in general, and Oracle, DB2, and MySQL in particular.
In-depth knowledge of advanced programming concepts in data structures and algorithms.
In-depth knowledge of advanced programming languages such as, but not limited to, C, Perl, JavaScript, PHP, and Python.
In-depth knowledge of the inner workings of web servers in general, and Apache and IIS in particular.
Microsoft TechNet (http://www.microsoft.com/technet/security/current.aspx)
The exploits detailed on this source's site pertain to Microsoft products. An attacker with the skills to successfully exploit any of these vulnerabilities could execute arbitrary code and take complete control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.
The level of expertise needed would be in the realm of Microsoft's operating systems, applications, and servers.
Secunia (http://secunia.com/)
The level of expertise needed to use the exploits published by this source requires that the user be extremely experienced and highly skilled in the art of reverse engineering and "speak" assembly, C, and C++ like a second language. The user also needs to understand compiler specifics, operating system concepts, security models, and the causes of most vulnerabilities and how to exploit them.
Preferably, users should be able to code and disassemble software on both Windows and Linux or other *BSD and Unix variants.
Security Focus (Bugtraq) (http://www.securityfocus.com/)
Users of the exploits listed on Bugtraq must possess a thorough working knowledge of common commercial and/or open source vulnerability assessment tools and techniques used for evaluating operating systems, networking devices, databases, and web applications.
Users should also be familiar with certification and accreditation processes in general; experience with the NIST 800 series of documents would be advantageous.
A thorough working knowledge of web applications and services, with web development experience or Java programming, is needed. Database (Oracle and SQL) skills are also needed.
Argeniss Information Security (http://www.argeniss.com/research)
The level of expertise needed to use the exploits published by this source lies in understanding SQL injection, uploading binary statements, and writing Visual Basic scripts and JavaScript. With these exploits, an attacker could use a SQL injection vulnerability as a rudimentary IP port scanner of an entire network or the Internet.
Application Security Inc. (http://www.appsecinc.com/resources/alerts/oracle/)
The level of expertise needed to work with the exploits published on this site includes expertise in HP, IBM, McAfee, Microsoft, Oracle, Sybase, and other products. In addition, knowledge of buffer overflows and SQL injection is needed.
Red Database Security (http://www.red-database-security.com/exploits/oracle_exploits.html)
Red-Database-Security GmbH specializes in Oracle security only. This specialization helps the company develop deep knowledge and provide better services and solutions for Oracle customers. Using the exploits published on its site requires knowledge of the variety of Oracle technologies, as a developer or DBA.
SecuriTeam (http://www.securiteam.com/exploits/)
SecuriTeam is a group within Beyond Security. The level of expertise needed to use the exploits published on its site includes experience and knowledge of networking and of protocols such as H.323, Modbus, DNP3, SSH, and LDAP. Experience working with sniffers, writing scripts in Perl and Python, and working with C++, MFC, Win32, STL, sockets, DLLs, and multi-threaded environments is also needed.
SecurityDocs.com (http://www.securitydocs.com/Exploits)
SecurityDocs is the largest repository of information security resources anywhere on the web. It primarily publishes exploits related to denial of service (DoS), social engineering, and SQL injection.
Denial of Service: One should be schooled in the use of the different types of DoS attacks (ICMP, UDP, TCP, and TCP SYN).
Social Engineering: The level of expertise needed for social engineering begins with the building of trust. If hackers can build up some trust with users, or befriend them, they will find it easier to manipulate those users into doing what they want. Social engineering is a skill, and like any skill it must be practiced to be perfected. In some cases the hacker spends weeks or even months building trust with the users he wants information from; sometimes a target can be persuaded within the space of an hour while the attacker appears to be a genuine friend. It all depends on the target's awareness.
SQL Injection: The level of expertise needed for SQL injection begins with expertise in SQL manipulation, code injection, function call injection, and buffer overflows.
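As a purely illustrative sketch of the simplest form of SQL manipulation (the URL and parameter are hypothetical, not drawn from any of the sites above), a tautology-based probe appends an always-true condition to a vulnerable query parameter:

# The injected clause (' OR '1'='1) makes the WHERE condition always
# true; %27 and %20 are the URL encodings of the quote and space
curl "http://target.example/items.php?id=1%27%20OR%20%271%27=%271"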
After researching the various exploits, the team felt that the common thread between them all is that they primarily affect layers 6 and 7 of the OSI seven layer model. Many of the exploits dealt with SQL injection, buffer overflows, cross-site scripting, denial of service, and application exploits. From a networking standpoint, if one can unplug a device from a network, or otherwise physically alter it, communication stops. If there are errors at the physical layer, the layers above typically cannot recover and must either retransmit or fail. If a hacker can physically access a device, it is nearly impossible to prevent data loss or disclosure. All of the upper layers depend upon the integrity of the physical layer.
Suppose the team were to apply good security through the underlying layers but remained deficient in application layer security (layer 7, and often layers 6 and 5), running unpatched server software and poorly written applications. In a pure seven layer model, the team would be hard pressed to defend against this at the lower levels, as the controls at the lower levels can only govern their respective layers of the protocol stack.
Issues
For this lab, the team did not have many issues. One issue was finding the exploits for part 2: finding the correct phrase to put into Google took a lot of time, and it was difficult to find non-vendor-specific repositories. Another issue the team had was time. The team would have liked to have had the time to perform part 3 of the lab, and will try to finish it sometime before the end of lab 7.
Conclusions
As stated in the team's findings section, there was a common trend among the exploits found in the research: most were at layer 7, with some at layer 6. This is because most of the exploits found were SQL injections, and this has been the trend in the past few labs. While performing the steps in part 1 of the lab, the team found that, working with Nessus and Nmap, many of the "exploits" were man-in-the-middle attacks. This means that these exploits do not really attack the system, but rather serve as a means of information gathering. While the first three labs involved researching exploits and placing them into the OSI seven layer model, this lab was a more in-depth investigation of some of the exploits found previously. The team has noticed that each lab builds on the one before it.
Bibliography
http://www.appsecinc.com/resources/alerts/oracle/
http://www.argeniss.com/research
Bleeping Computer LLC. (2009). Retrieved July 3, 2009, from http://www.bleepingcomputer.com/glossary/definition253.html
Davidson, J. (2005). Vendor System Vulnerability Testing Test Plan.
He, L., & Bode, N. (2005). Network Penetration Testing.
insecure.org. (2009). Fingerprinting Methods Avoided by Nmap. Retrieved July 3, 2009, from insecure.org: http://nmap.org/book/osdetect-other-methods.html
http://labs.idefense.com/intelligence/vulnerabilities
Long, J., Bayles, A. W., Foster, J. C., Hurley, C., Petruzzi, M., Rathaus, N., et al. (2006). Penetration Tester’s Open Source Toolkit. Rockland, MA: Syngress.
Martin, R., & Barnum, S. (2008). Creating the Secure Software Testing Target List.
http://www.microsoft.com/technet/security/current.aspx
http://www.red-database-security.com/exploits/oracle_exploits.html
http://www.securitydocs.com/Exploits
http://www.securiteam.com/exploits/
Thompson, H., & Chase, S. (2003). Red-Team Application Security Testing.
Team one begins their lab with an abstract that meets the requirements of the syllabus, both in explaining what is going to be done and in length. The abstract listed all the steps of the lab to be completed while not making it sound like it came directly from the lab design document. Upon completion of the abstract the team goes directly into the literature reviews. The introduction to the literature review does not seem to flow with the cohesion of the rest of the section. It explains how the literature review was used in relation to the lab and nothing more, explaining how Nmap was used over Nessus, which should be in the methods or findings rather than the literature review. In the literature review team one does tie the lab to the individual articles, but there is little cohesion between the three (or four?) articles reviewed for this lab. They seem to use "et al" for any source that has more than one name associated with it. I'm not sure on this, but if there are two authors, "et al" might be overkill, unneeded, and against APA styling rules. Please check the latest APA rules online and confirm this. They close out the literature review by saying that all articles relate directly to the steps of the process in the lab design. I disagree with these findings. The topic of the lab was researching exploits, and while the Thompson and Chase article as well as the Davidson article explain a process of documenting a path to performing a penetration test, the third article primarily lists exploit tools. While researching exploits should have led the team to the tools, they really do not directly relate to the results of the lab. The methods section of the lab is short almost to the point of nonexistence. This is not a scholarly or academic approach to listing the strategy and technique used in completing the lab; the methods are also broken into sections, making cohesion in this section impossible. In the findings section I agree with their opening statement on how Nmap is a recon tool and not an exploit tool. I question, however, that it took team one until now to realize that, making them switch tools to Ettercap mid-lab. Based on the previous labs they should have already known this and not even used Nmap as an exploit tool. They list strategy in their findings section related to running an actual test attack, which should be in the methods section. Their first table has no label, making it hard to ascertain its actual use, and it is mostly empty, making me question their completion of part 1 of the lab. They list a number of supposed vulnerability databases in their part two research, but said research is not formatted well, nor is there a table supporting any conclusions based on sampling possible exploits. They fail to list the OSVDB in their list; since it was in the literature, I question the actual research into selecting vulnerability databases. Their conclusions are complete, and I agree with them, except for the final part about noticing that each lab builds on the previous one, which should have been apparent from the beginning.
Team 1 begins with an abstract of the lab that describes the tasks in the lab assignment. They list their objectives for this lab in two parts. Part one is to find stand-alone tools that work with Nmap. Part two is to find repositories where exploit codes can be found, compare them to the seven layer OSI model, and find any patterns that may exist. They discuss their decision to use Nmap, namely because the team members are familiar with its operation.
The next section is a literature review. It begins by giving a general description of the readings as dealing with red teaming and penetration testing. They begin with the article Red-Team Application Security Testing (Thompson & Chase, 2003). They state that the authors "set up a plan for red teaming, just as the team did for this lab". I would have to disagree with this statement, since this article discusses decomposing applications into functional components and testing each individually. That is something we are not going to be able to do in this lab. In fact, the article states "Using firewalls and testing at the network layer is not the answer." This statement goes against our method of conducting testing by performing network scans to identify vulnerabilities. Perhaps a better assessment might be that it establishes the need for testing at the application level, since we've found in this lab that this is where most of the vulnerabilities originate.
It continues by reviewing Network Penetration Testing (He & Bode, 2005). They relate this to part two of our lab by stating that the article lists making a list of vulnerabilities and exploits as one of the basic steps in red teaming. Part two of this lab was to research exploits. I believe there are several other ways in which this article can assist us with this lab. The most obvious is the list of published-vulnerability web sites that it provides (He & Bode, 2005, p. 4). This article also includes a listing of penetration testing tools and known exploits that may assist us in future labs.
The next article they reviewed is Vendor System Vulnerability Testing Test Plan (Davidson, 2005). I agree that this article differs from our current laboratory assignment in that it applies to SCADA/EMS systems. This article does serve as a good example for organizing, scheduling, and documenting penetration tests. These concepts will assist us in our labs and in 'real world' testing.
In the findings section, they discuss the conclusion that Nmap does not actually perform any exploits; it just sends ICMP and SYN packets to a host to determine what services are available on that host. They compare Nmap to "knocking on a door" to determine vulnerabilities, which I feel is a good analogy for port scanning. They include a brief explanation of how operating system fingerprinting works within Nmap, and how this process can on rare occasions be considered an exploit (a denial of service attack) because certain systems will fail if they receive a malformed packet. They also include a discussion of how Nmap can spoof IP and MAC addresses, and proceed to include a script that uses Nmap and Ettercap to perform MAC address spoofing. Although they provided an interesting discussion concerning whether or not Nmap can be considered an exploit tool, I believe they missed the point of the laboratory assignment. My understanding of the assignment was to use Nmap or Nessus to scan the network and determine what vulnerabilities are present, then to locate and test "stand-alone" tools that exploit those vulnerabilities. Although they state their objective at the beginning of this lab report is to "find stand-alone tools that work with Nmap", they didn't find any exploit tools for the exploits that Nmap discovered.
They then discuss seven websites that contain information about security exploits. For each, they give a brief discussion of the skills needed to perform the exploits listed on the web site. They do not list any of the specific tools found on the different web sites. They do state that most of the exploits are in layers 6 and 7 of the OSI model; however, they do not provide any statistics to support this claim.
This group starts off with an abstract that ties into the rest of the previous labs in class and briefly describes what is going to be done in this lab. The group decided on using Nmap for the examination of the types of exploits used in Nessus or Nmap. They chose Nmap because the group was already familiar with it. I believe that the group could have spent less time describing the lab and more explaining why this lab is important and how it pertains to the whole class. Next the group goes into their literature review. At the beginning of the literature review the group does a nice job explaining how this lab ties into the previous labs and what this lab is about. They also tell how all the articles tie into each other by explaining that the articles are about setting up a penetration test. Throughout the paper I keep seeing grammatical errors, but not too many. The group steps through the lab and ties the papers into how they apply to each part of the lab. The group first describes how the article Red-Team Application Security Testing (Thompson & Chase, 2003) is similar to how the group is to test and verify the tools that they find in this lab. The group's literature review continues on about each of the articles given in the lab. The group does a great job in explaining how each article pertains to the lab. They also briefly explain the research that the writers did in each article, and give a description of the theme of each article along with the explanation of how it ties into the lab. The group did leave out whether any of the articles had a research question, the methodology of each paper, and any errors or omissions. Next the group gives their methodology for this lab. They explain that in part one, the exploits used by Nmap were researched and categorized in accordance with the seven layer OSI model. The group does a good job in describing how they set up the machines they were going to use to research the exploits used in Nmap. They also did a decent job in explaining how they found stand-alone tools and tested them against their systems, but they did not go into any detail about how they performed that test. Then the group talks about how they went about accomplishing the second half of the lab. They missed some details on how they did this. The group could have described how they determined whether a site was worth using, and could have explained how they determined the level of expertise of the site. The group could have done a much better job in explaining the methodology of the second part of lab four. Next the group gives their findings. The group explains that Nmap does not actually use exploits, but rather gains information by sending ICMP and SYN packets and studying the responses sent back. The group then justifies that Nmap is not using exploits because it uses legitimate means to gather the information it needs to determine a way to use an exploit to gain access to a system. The group does a nice job in giving an analogy, comparing gaining access to a system using Nmap with gaining access to a house by first knocking on the door to see if anyone is home. The group then explains how Nmap does use a limited number of exploits in determining what operating system is on a system and in spoofing MAC addresses and IP addresses to gain information.
The group explains that in order for Nmap to determine the operating system, Nmap modifies a packet with certain flags that different operating systems respond to in different ways, thus exposing the operating system. The group then explains how almost all operating systems would be vulnerable to Nmap's exploits, because they target TCP/IP. The group also does a good job in explaining ways to avoid giving away information when Nmap scans a system and how to detect if Nmap is scanning a system. Next the group does a very good job in showing how Nmap can be incorporated into another program as a command-line command in a script that does a specific job. The group shows this by creating a script that combines the use of Nmap and Ettercap. Nmap is used in the script to spoof an IP address and a MAC address as the source addresses that Ettercap uses to perform an attack. The group then summarizes part one at the end of its findings: Nmap is not a tool that uses exploits, but rather one that performs attacks that accumulate information without compromising the target system. The exploit that Nmap does use is IP/MAC address spoofing. It was also shown that the exploits were effective, for several reasons the group gives in the paper. Because of the small number of exploits the group found in Nmap, the table that was created did not contain very much. Next the group gave their findings on the second part of the lab. In this part the group gives the sites that provide lists of current exploits. For each site the group gives the level of expertise the user needs in order to use these exploits. This part of the lab could have been interpreted in a couple of ways. One way is how this group interpreted it: the level of expertise of the user of the exploit. The other way would be the level of expertise of the company or individuals that created the list of exploits and how they described those exploits. The group then finishes the results by explaining that the exploits found affect layers six and seven of the OSI model. The group then points out that if you can affect the lower layers, like the physical layer, you can have more control of the system than if you were to affect the upper layers. On the other hand, if the upper layers are not secured right, the lower layers will fail. The group had almost no issues; the only one was a problem finding non-vendor-specific repositories for part two of the lab. Last, the group gave a conclusion on the lab. The first line of the conclusion seemed as though it was not completed. They claimed that most of the exploits existed around the sixth and seventh layers of the OSI model. They also claim that most of the exploits used in Nessus and Nmap are "man in the middle" attacks and are not necessarily attacks but a way to gather information. They also showed where this lab fits into the class compared to the rest of the labs.
The literature review is a good improvement over previous labs but still has some issues. In the first section of the literature review the authors mention an article on red teaming and say it relates to the activities in the lab exercises, but they don't go into enough detail about how researching exploits equates to red teaming. The relationship of the Network Penetration Testing paper to the lab activities was handled much better, matching it to the specific part of the lab where it was useful. I think saying the Davidson article wasn't related isn't true: the Davidson paper, while analyzing a SCADA system, still provides a good framework for a test plan that relates to the plans presented in the previous two articles. The review of Creating the Secure Software Testing Target List just misses the point of the lab exercises. It was even set up with the quote given about "standardized knowledge" regarding vulnerabilities; that's precisely what a vulnerability database is. This section of the lit review could have been tied into the other papers and the lab exercises a little better.
The methodologies section was very brief. While there was a short mention of the activities that were going to be performed, there wasn't enough detail to make this reproducible. For part one, the use of Wireshark from the BackTrack VM wasn't necessary for these lab exercises; if it was going to be used for a specific part of the vulnerability testing, it should have been mentioned. The second part was even more brief and missed the heart of the lab exercises: tying the information from part one to part two. A minor word-choice error when discussing the vulnerability databases: "repository" would be a better choice than "depository", though that could be because two of you are bankers.
The findings section had much more detail than the methodologies section. Quite a bit of the information found in this section could have gone in the methodologies section instead, particularly the script used for Nmap. The use of Nmap and the output described in the findings don't really appear to fall into the scope of the lab. How can this be related to a specific vulnerability in a public vulnerability database? I agree with the statement that this isn't actually an exploit but, rather, an information gathering tool. But how is spoofing an IP an information gathering technique? If you're spoofing the packet with another IP and MAC address, how are you going to get the reconnaissance data back? The list of vulnerability databases was pretty extensive, and many of them mentioned high levels of expertise in many areas. What could be concluded from this? The conclusions section is very brief and doesn't tie a lot of the lab data together.
I found a number of positive aspects of this team's lab write-up worthy of mention. I admire the bold move to analyze 'Nmap' as an 'exploit tool'; certainly this is a departure of note as compared to all of the other teams performing the exercise. The definition of 'exploit' and the rationalization of the term with respect to 'Nmap' showed a nice amount of creativity. I also thought the literature review was reasonably well written, with the tie-in to the laboratory exercise noticeable for each article. Finally, I found the inclusion of an 'attack script' an interesting detail: it was informative, perhaps in more ways than first apparent.
A number of problems were apparent in this report, though. I believe that, as the team found little to address as far as 'exploits' within this exercise, the bulk of the write-up essentially became an exercise in 'self-justification' as to how this examination of 'Nmap' was significant with regard to the initial instructions given in the lab. I question the assertion that 'setting the flags in a packet header' could be considered an exploit 'in some cases' due to issues with 'malformed packets.' I think this is a stretch: though a danger might exist for certain systems, the primary goal of 'Nmap's' fingerprinting is not to take down the host; that is more or less an accidental occurrence. It becomes even more obvious that this is 'reaching' if one notes that the reason 'Nmap' is being run on a network is that the number and configuration of hosts on the network is at that point 'unknown.' If someone should connect to a busy server, and by doing so cause an overloaded service to crash, would this chance occurrence also qualify as an 'exploit?' I believe that for an exploit 'to occur,' there must be intent associated with a known opportunity: else such things as cosmic rays corrupting memory might be considered an 'exploit.'
The experimental setup I judge at the very least to be ‘contrived.’ The ‘Nmap’ documentation from http://nmap.org/book/man-bypass-firewalls-ids.html indicates that with a spoofed IP address “… you usually won’t receive reply packets back (they will be addressed to the IP you are spoofing), so Nmap won’t produce useful reports.” How then, is this script “tested to be working” as you describe? Does it work by using ‘Nmap’ to detect broadcast packets triggered by the port scan? If not, and the packets never return to the ‘Nmap’ tool host, how is it that a list of hosts is generated by which to target ‘Ettercap?’ While I respect the proficiency indicated by this script, I confess doubt of its real usefulness. For a better effect, one in theory could use another host which has the actual spoofed IP address to redirect the packets back to the ‘Nmap’ tool host. In fact, the use of the machine running ‘Wireshark’ might prove an ideal host to use in this ‘spoof and redirect’ scheme. As it is, with the limitations experienced in the last lab due to switched networks; I wonder how this team’s test configuration for this lab was actually implemented: this was also unexplained.
I also wonder at the assertion made in the last paragraph of the ‘Findings: Part 2’ section. The implication of the statement made is that ‘upper layer flaws’ of the OSI layer would be hard to “defend against …at lower levels.” Is this not the function of firewalls, where ports can be monitored and even completely ‘sealed’ from outside usage at the transport level? I don’t quite understand this section (perhaps due to the poor grammar); certainly, vulnerable ‘high layer’ services are a problem, but these problems ‘can’ be addressed from a ‘lower level,’ even if it means simply concealing the running service from exterior utilization.
In the abstract section of the laboratory report, the statements that the lab "builds on the research and findings from the previous labs" and "goes into more detail about some of the exploits researched previously" seemed too vague.
In the literature review section, team one did a good job analyzing the article Red-Team Application Security Testing. The team was able to point out an important omission: the authors did not have any references. The team was able to relate all of the articles to the laboratory exercise. In regards to the article Vendor System Vulnerability Testing Test Plan, I could not figure out why the team stated that it "seems to be less a scholarly article than a document to be used by a penetration tester" that could be used by any organization that runs a SCADA system. The document had a methodology to it, but did not actually include the results of the experiments.
In the method section, I was not sure why group one chose to capture all of the traffic on another virtual machine running BackTrack 3 using Wireshark. This was a requirement for a previous lab, but it was not required for this lab exercise.
In the part 1 findings section, I had to disagree with the statement "When considering the McCumber Cube, Nmap does not attack confidentiality, integrity, or availability but rather confirms availability (for the most part)." A reconnaissance tool such as Nmap, which does not attack a system but reports information about it, would still reveal what ports are open on a particular machine, and thus reveal information that should be confidential. However, the group is right that it does not directly attack any of the three pillars of security. The findings section seemed contradictory at times. In the first part of the findings section, the group stated "After researching Nmap's exploits, it became apparent that Nmap, while it performs many functions, does not actually perform many 'exploits'". However, in the third paragraph, team one stated that "there are, however, some features of the Nmap scanning tool that can be classified as exploits." The script that was created by team one was impressive: it combined Nmap with Ettercap. The group stated "The above script allows an attacker to run an Nmap scan, which offers better results and firewall evasion, instead of using Ettercap's built-in host discovery."
In part 2 of the findings section, there was no table containing some of the exploits discovered on the numerous sites researched by the group. The group interpreted expertise as the skill level required to use the exploits. Other groups interpreted expertise as the quality of the description of the exploit, for some sites gave mediocre descriptions of newly discovered exploits. The sites chosen included Public Advisories List/iDefense, Microsoft TechNet, Secunia, Security Focus (Bugtraq), Argeniss Information Security, Application Security Inc., Red Database Security, SecuriTeam, and SecurityDocs.com.
The team starts off with their abstract explaining what is to be done and the tools they will be using. They decided to use Nmap as their main tool for researching vulnerabilities within systems. The team then goes into their literature review and describes how the lab and the pieces of literature combine. At the end of the literature review they state that the articles were cohesive in how the lab was to be performed. They then go on to the methods and explain what tools were being used and against what system. What I noticed here is that later in the lab they have bash scripting for the root user, but Linux is not described within the first part of their methodologies. They then go on to describe what the second part of the lab will be: comparing the exploits to the OSI seven layer model. Going into the findings section for part one, the testing that was stated in the first part of the methodologies changed from Windows XP SP3 to Ubuntu and more of a Unix/Linux environment. There was a brief description of exploits across different operating systems, but what they said they were going to try to exploit just disappeared. Part of the findings almost felt like a review of Nmap instead of serving the purpose of this section: finding the exploits and using tools against the vulnerabilities. The bash scripting they added was a nice touch, but questions came from that. Are there any other tools out there that can do the same thing within BackTrack's list of 300 tools? The section just seemed to drop off after showing the results from the script that was run. Is there any way this issue can be resolved, perhaps by denying the script from running against the targeted system? Next they go into their table of exploits. One thing I noticed was missing was application layer exploits. Many exploits found in the databases they list are application exploits. Yes, it was hard to fill the entire table with different types of exploits, but attacking applications to gain access to other parts of the system is something that happens many times. There are many incidents of Internet Explorer being exploited and users being able to get to the core of the Microsoft Windows directory. They then end with their conclusions and their thoughts on the attacks against targeted systems. Why would exploiting a system not "really" attack the system? The malicious user is knowingly going after these vulnerabilities; would this not be considered an attack on a system? For example, consider a thief on the street going up to a person and asking how much money they have in their wallet before mugging them: that would be part of the thief's attack strategy. Overall the team did a good job and did what they could for lab 4.
Team one submitted an awkwardly written lab that covers the basic points but suffers from inconsistencies and subdued but noticeable changes in writing style. The abstract adequately explains to the reader what will be presented. The team discusses the decision to choose Nmap over Nessus here. This is oddly placed; it should be in the methods section.
The literature review appears to be an awkward combination of review and method that is hard to follow and inappropriately placed. I understand the group is trying to relate the articles to the lab, but some of the connections are dubious. Are Thompson and Chase talking about red teaming in the same sense as we use it, or are they considering it to be part of application development? You say that they follow the same steps as those in the lab. Are these steps analogous to best practice? You state that He and Bode "just" listed some tools, but do you think maybe there was something important about their methodology? Davidson's "article" is a test plan; that's why it doesn't seem to be much of an article. The team made an attempt to add evaluation into the review this time, which is good to see, but it needs to be more clearly stated and expanded upon.
The methods section is vague and unrepeatable. Perhaps this is because the methodology is strewn about the rest of the paper.
The findings for part one of the lab are very well explained. I especially like the fact that "exploit" was clearly and correctly defined, and that you back your findings with outside examples. Some of this information belongs in the methods section, however: I can separate out steps in your results that I could follow in order to duplicate your findings. For part two, the list of sites is fairly extensive. The group makes judgment calls about the level of expertise needed to use the "exploits" listed on each site. What are these based on? The sites are listed but not really discussed or evaluated in terms of usefulness. The use of the term "exploit" in this section somewhat contradicts the definition given for part one. What does the term mean? I'm unclear as to how you got these results. Again, a more detailed methods section would serve you well.
The group's conclusion does a poor job of summarizing the research. What did you learn, other than the fact that the labs are building on each other, which should have been self-evident? You mention here that the majority of attacks are at layer 6 or 7 and deal with SQL injection. This should have been in the findings section, but I saw nothing there to support the idea.
The team started with a strong abstract indicating the key points of their laboratory. They identified which tool the team was going to use, Nmap or Nessus. The team decided on Nmap over Nessus; however, I wondered why the team decided to go with Nmap over Nessus. The team also talks about researching exploit code, but not from an anti-virus vendor. Their literature review started off talking more about the lab methods. The team does cover the reading about verifying tools in the article Red-Team Application Security Testing. They also talk about how firewalls and other services that help protect a system are not the only protection, and how penetration testing offers more of an idea of how an attacker would attempt to gain access. Knowing this information can better protect an organization's infrastructure. Penetration testing includes running exploits, which the team does in part 1 of the laboratory.
In part 1 the team discovers that Nmap actually does not perform exploits. They back up their finding by providing a link to insecure.org, which discusses Nmap at length. The team goes on to talk about how Nmap works and how there is not a way to perform exploits with it. I do ask this question: is it possible to send a customized packet using Nmap to exploit a system? Nmap does send ICMP and SYN packets, which the team describes as common packets. The team then goes on to talk about stand-alone tools, and a script was created to use Nmap and Ettercap to perform an ARP poisoning attack, which becomes a man-in-the-middle attack. This script is well written, provides comments, and does checks. The script checks to see if the user is root and, if not, tells the user to rerun the script in super-user mode. The team also provides proof of this script functioning by showing a portion of what was captured in Wireshark on a different virtual machine. The team then goes on to talk about the findings for the research done on exploit code. The team claimed to have found several pieces of SQL injection code. With this finding, would the team believe that SQL is vulnerable and unsecured enough to be exploited?