Abstract
In lab four, the group researches vulnerabilities and exploits. The group learns how to use tools to discover what vulnerabilities are present on a target machine, relying on two tools in particular: Nessus and Nmap. Discovering vulnerabilities is the first step in performing a penetration test on a target machine, and it leads into determining what tools will be needed to perform the penetration tests. There are many sites that provide up-to-date lists of exploits, and these sites can be a great asset to a penetration tester because they supply a current list of exploits to work from.
Literature Review
In the article Red-Team Application Security Testing: Testing techniques designed to expose security bugs (Thompson & Chase, 2003), the writers argue that instead of securing the network around a piece of software to make that software more secure, testers should make sure the software in question is itself secure. The writers say that the people doing the penetration testing often just run security software that only checks for well-known vulnerabilities; to find vulnerabilities in software, penetration testers need to test like detectives. The writers propose a methodology that helps organize application-penetration testing through decomposition of an application, ranking of features for potential vulnerabilities, and allocation of resources (Thompson & Chase, 2003). They state that there are aspects of software with side effects that attackers can take advantage of. These side effects, which can lead to a security breach in the software, are not picked up when the normal tests are run on the software. This is where penetration testing and red teaming come in handy: both look for loopholes in applications and find ways to take advantage of the software's normal activities. In this lab we are learning to research exploits, and this article talks about finding ways to exploit applications through their normal activities; when we research how to exploit a piece of software, we need to look at any insecure activities that software performs. The writers propose a method to set up a short, focused security assessment of an application to identify what areas of that application are insecure. They first break an application down by function and score it on insecurity. With this information the team then assigns members roles to investigate components, execute tests, and find tools. The application's features are partitioned into testing areas based on two questions: can the feature be handled by an individual, or does it need a team to evaluate it; and is the functionality contained within the feature, or does it interact with the rest of the application? Once the features are scored, they are handed off to testers who evaluate the components. The testers have two primary responsibilities: determining what tools to use for the testing and the technique that will be used for the test. The testers are then able to develop a report on the bugs and vulnerabilities of that application. The writers point out that these vulnerability reports are a great asset to organizations in that they provide a guide to help develop less vulnerable software earlier in the development stages.
In the article, Vendor system vulnerability testing test plan, the Supervisory Control and Data Acquisition (SCADA) Test Bed established at the Idaho National Laboratory (INL) was used in a penetration test. Prior to conducting the penetration test, the author recommended creating a baseline to establish a reference point for all subsequent testing (Davidson, 2005, p. 1). The series of baseline tests documented factory-delivered defaults, system configuration, and potential configuration changes to assist in the development of a security plan for in-depth testing (Davidson, 2005, p. 1). Prior to vulnerability testing, the system was also configured and checked for proper operation, thus simulating a real SCADA setup in the field (Davidson, 2005, p. 3). A basic information technology (IT) assessment of the Vendor's system was the first step needed in gathering the required data to perform all subsequent tests, and it included port scanning, vulnerability scanning, network mapping, password cracking, and network sniffing (Davidson, 2005, p. 5). Some of the tests that were done included gaining unauthorized access and escalating privileges, gaining control of the operator's workstation, accessing the central database, changing alarms and commands, changing the state of the RTU, accessing the developer's workstation (which would allow direct access to system resources), and controlling the RTU from the communication processor (which would allow the attacker to control a portion of the SCADA/EMS system) (Davidson, 2005, pp. 6-22). The attacker's goals were to impact specific portions of the transmission system by taking control of critical components and assets, for by taking control of these breakers the attacker could isolate the assets downstream from power generation upstream (Davidson, 2005, p. 2). This control could be obtained by direct manipulation of remote terminal units (RTU), a penetration of the system, or by causing the operator to control these breakers (Davidson, 2005, p. 2).
In the penetration test that was conducted, the attacker had knowledge of the Vendor's system (Davidson, 2005, p. 2). Other assumptions included that all cyber testing would be performed from the same network segment as the Vendor's SCADA/EMS, that the attack team would not have physical access to the servers, that the attack team would perform some testing directly on the operator and developer consoles to demonstrate insider capabilities, and that the operator and developer consoles would have no removable storage (Davidson, 2005, p. 17).
The methodology used in the article was an experimental approach, although the paper was an overview of what was going to be done. The article related to the lab in that the students already have an understanding of the systems that are being tested. The article may have had a stronger correlation with lab two, in which SCADA protocols were researched. Some of the assumptions or conditions seemed to be in error, for the article did not explain whether it was standard for operator and developer consoles to have removable storage capabilities. Why would these limitations be set if the real-world environment did not follow such stipulations? The point of accessing the operator and developer consoles directly to demonstrate insider capabilities seemed moot; since these consoles control the SCADA equipment, it is implied that whoever has access to the consoles could affect the SCADA equipment.
In the article Network Penetration Testing (He & Bode, n.d.), the writers describe different ways that organizations can use penetration testing to discover vulnerabilities in their networks before any malicious attackers find them. The writers state that there are two types of penetration tests: announced and unannounced. With announced testing, the testers attempt to break into the organization's network with the organization's full knowledge and cooperation; this type of testing pinpoints specific parts of the network for vulnerabilities. Unannounced testing is done on an organization's network with only upper management's knowledge of the attack, and it tests the organization's procedures and personnel. The writers also introduce blackbox and whitebox testing. Blackbox testing gives the testers no information about the target network, while whitebox testing gives the testers information on the target network. The writers then give a list of different types of vulnerabilities that can appear on a network, followed by a list of tools that can be used to penetration test against the vulnerabilities in the first list. They also give a list of tools that scan for vulnerabilities and services running on a network, and a diversified list of exploitation tools. All of these lists could be a great benefit in this lab: they break down the tools and exploits into categories, much like what we have been doing in the lab (even though we are breaking them down in a more useful manner). Next, the writers give four trends in penetration testing: semi-automatic testing, IP backbone network testing, wireless network testing, and integration of application security. Semi-automatic testing provides consistency and reduction of costs without losing the creativity and flexibility needed in the tests. IP backbone network testing concentrates on testing the IP infrastructure outside of private networks, which can include Cisco networks, public databases, and routing vulnerabilities. Wireless network testing examines the security of wireless infrastructures, including gaining access to the wireless network, determining the services running on the wireless servers, and exploiting well-known vulnerabilities. Last, with integration of application security, the writers talk about exploiting vulnerabilities in applications, which was also discussed in another article given in this lab, Red-Team Application Security Testing: Testing techniques designed to expose security bugs (Thompson & Chase, 2003). In the conclusion the writers state that, although penetration testing is a great way to point out vulnerabilities, it is by no means the final step in securing a network.
Methodology
In this lab the team looks at two ways to discover the vulnerabilities on a target machine. The first part of the lab uses tools to discover the vulnerabilities on the target machine. The second part shows how to discover the latest vulnerabilities using online sites.
The first part of this lab starts off with researching which exploits available in the Nessus tool will work on the target machine. This was done by making sure that Nessus had the most recent plug-ins installed and then simply running Nessus against the target machine. Next, the results were tabulated into a table that sorted them by OSI model layer and McCumber's cube dimension; the table is shown in the results section of this paper. The group then tested two tools that utilized the Nessus and Nmap exploits. The tool used for the Nessus exploits was Nessus 3.0.5 Build W313, which was updated with the most current plug-ins and run against a Windows XP SP0 virtual machine. The group then used the Zenmap tool to help create an Nmap command to use against the same Windows XP SP0 virtual machine. The outcomes of the two tests are shown in the results.
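For reference, a command-line sketch of this workflow is shown below. The group performed these steps through the Nessus Windows GUI, so the update command is an assumption based on the Unix builds of Nessus 3, and the target address is a placeholder:

    # Update the Nessus plug-in feed (Unix builds; the Windows GUI exposes the same action)
    nessus-update-plugins
    # A comparable Nmap scan: SYN scan, service/version detection, OS detection
    nmap -sS -sV -O 192.168.1.10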
The second part consisted mostly of researching which sites publish current vulnerabilities in today's networks. These sites were located by using online search engines. Each site was then evaluated on the level of expertise involved by examining the descriptions of each of the exploits. If the site did not give good descriptions of the exploits or repeatedly referred the user to a third-party tool, a low level of expertise was assigned; if the site gave accurate and complete descriptions of the exploits, a high level of expertise was assigned. The exploits given by the sites were then analyzed using the OSI model and McCumber's cube, and any interesting conclusions were noted in the results section below.
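To illustrate the rubric, the heuristic below is a minimal sketch of the judgment being applied; it is not a script the group actually ran, and the function name, word-count threshold, and referral rule are all hypothetical:

    # Hypothetical sketch of the expertise rubric described above.
    def rate_site(descriptions, third_party_referrals):
        """Rate a vulnerability site 'high' or 'poor' from its exploit descriptions."""
        # Average description length, in words; guards against an empty list.
        avg_len = sum(len(d.split()) for d in descriptions) / max(len(descriptions), 1)
        # Short write-ups or frequent hand-offs to a third-party tool score poorly.
        if avg_len < 15 or third_party_referrals > len(descriptions) // 2:
            return "poor"
        return "high"

    print(rate_site(["A one-sentence description."], third_party_referrals=0))  # poor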
Findings and Results
In the first part of the lab the team found that the best way to discover which vulnerabilities could be exploited on a target machine using Nessus was simply to run Nessus against the target machine and analyze the results. The group found that when the Nessus scan was run against the Windows XP SP0 machine, most of the vulnerabilities were exploits against the application and session layers. This was due to the exploits targeting specific application vulnerabilities and Server Message Block (SMB) vulnerabilities. All of the exploits targeted the integrity of the machine, and most also targeted the processing and technology dimensions of McCumber's cube.
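Because so many of the findings involve SMB, they can be cross-checked with a targeted scan of the SMB ports. The command below is an illustration rather than the group's actual procedure; it assumes Nmap's smb-os-discovery NSE script is available, and the address is a placeholder:

    nmap -p 139,445 --script smb-os-discovery 192.168.1.10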
Exploits Found on a Windows XP SP0 Machine Using Nessus
OSI Layer | Exploit | McCumber's Cube
8 | (none found) | N/A
7 | SMB Detection File/Print sharing; Microsoft Hotfix KB828741 (network check); MS Task Scheduler vulnerability; Microsoft Hotfix for KB835732 (SMB check); Microsoft RPC Interface Buffer Overrun (KB824146) (network check); Vulnerability in Printer Spooler Service Could Allow Remote Code Execution (896423) (network check); Vulnerability in SMB Could Allow Remote Code Execution (896422) (network check); Vulnerability in Plug and Play Service Could Allow Remote Code Execution (899588) (network check); Microsoft Windows Server Service Crafted RPC Request Handling Unspecified Remote Code Execution (958644) (network check); Microsoft Windows SMB Vulnerabilities Remote Code Execution (958687) (network check); Vulnerability in Server Service Could Allow Remote Code Execution (917159) (network check); Vulnerability in Server Service Could Allow Remote Code Execution (921883) (network check); Microsoft RPC Interface Buffer Overrun (823980); Vulnerability in Windows Could Allow Information Disclosure (888302) (network check); TCP timestamps; VMware Guest; OS Identification | Integrity, Processing, Technology
7 | Microsoft Windows SMB Shares Access | Integrity, Storage, Technology
6 | Microsoft Windows Server Message Block (SMB) Protocol SMB_COM_TRANSACTION Packet Remote Overflow DoS; SMB guest account for all users; Vulnerability in Web Client Service Could Allow Remote Code Execution (911927) (network check); SMB Native Lan Man; SMB log in; SMB accessible registry; SMB fully accessible registry; SMB NULL session; SMB shares enumeration; SMB get host SID; SMB use host SID to enumerate local users; Buffer Overrun in Messenger Service (real test); Network Time Protocol (NTP) Server Information Disclosure | Integrity, Processing, Technology
5 | SYN scan port opened; DCE Services Enumeration; Using NetBIOS to retrieve information from a Windows host | Integrity, Processing, Technology
4 | Scan for UPnP/TCP hosts | Integrity, Processing, Technology
3 | ICMP Timestamp Request Remote Date Disclosure | Integrity, Processing, Technology
2 | Ethernet card brand | Integrity, Processing, Technology
1 | (none found) | N/A
0 | (none found) | N/A
The group then ran Nessus and Nmap against the Windows XP SP0 machine and discovered that the two scans picked up many of the same vulnerabilities, although Nessus picked up more vulnerabilities than Nmap. Each of the tools used to run the Nessus and Nmap exploits worked as expected. The Nessus tool is a GUI-based program that can connect to the Nessus web site and load the latest plug-ins. The group used Zenmap because of its ease of use in building an Nmap command to run against the target. Below are a couple of screenshots showing the results produced by the Nessus and Nmap scans of the Windows XP SP0 machine.
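For readers who want to reproduce the Nmap side of the comparison, Zenmap's default "Intense scan" profile generates a command along the following lines; the exact profile the group used was not recorded, and the target address is a placeholder:

    # -T4: faster timing; -A: OS detection, version detection, script scanning, traceroute; -v: verbose
    nmap -T4 -A -v 192.168.1.10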
In the second part of the lab, websites that identify newly discovered vulnerabilities were located and analyzed. The websites were analyzed based on their expertise, and the vulnerabilities they listed were tabulated in relation to the OSI model and McCumber's cube.
The level of expertise varied between the different vulnerability publishers. Obcomputerrepair.com listed some new Trojans and gave a thorough description of the malicious programs; however, the site is highly biased in that it points to Spyware Doctor as the end-all, be-all solution for malicious programs. SECURINFOS contained a list of current vulnerabilities in applications and operating systems from across the board, sorted by the date each vulnerability was released. When a particular vulnerability was clicked on from the list, a detail page appeared, which rated the severity of the attack, listed the affected platform, and gave a description of the vulnerability. However, the descriptions of the vulnerabilities were lacking, for the site only gave a one-sentence description of each. Securitytracker listed vulnerabilities that were published on the day one accesses the website. When a vulnerability was clicked on, a description page appeared that gave an overview of the vulnerability; however, the descriptions were very brief and lacked detail. Insecure.org contained its own version of Bugtraq, an extensive list of vulnerabilities classified by month of discovery. Its Bugtraq list was comprised of vulnerabilities discovered by a community of authors, so the descriptions of the vulnerabilities varied. Securityfocus's Bugtraq was very similar to that of Insecure.org and in fact contained some of the same authors, who pointed out the same vulnerabilities in each site, but both contained entries that were unique to that particular site.
Vulnerabilities discovered within the month of June
OSI Layer | Technology | Exploit Method | McCumber
Layer 7 / Application | Linksys WAG54G2 web management console | Local arbitrary shell command injection | Integrity, Process, Technology
Layer 7 / Application | Firefox | Firefox URL space character spoof | Integrity, Process, Technology
Layer 7 / Application | Apache HTTP Server, proxy_ftp.c in the mod_proxy_ftp module | Cross-site scripting (XSS): injection of arbitrary web script or HTML via wildcards in a pathname in an FTP URI (CVE-2008-2939) | Integrity, Process, Technology
Layer 7 / Application | Apache HTTP Server 2.2.11 and earlier 2.2 versions (does not properly handle Options=IncludesNOEXEC in the AllowOverride directive) | Local users gain privileges by configuring (1) Options Includes, (2) Options +Includes, or (3) Options +IncludesNOEXEC in a .htaccess file, and then inserting an exec element in a .shtml file | Integrity, Process, Technology
Layer 7 / Application | Open Computer and Software (OCS) Inventory NG version 1.02 (Unix) | SQL injection | Integrity, Process, Technology
Layer 7 / Application | Zemana AntiLogger | DoS attack | Availability, Process, Technology
Layer 7 / Application | Ubuntu 6.06 LTS, 8.04 LTS, 8.10, and 9.04 (Kubuntu, Edubuntu, Xubuntu) | cron did not properly check the return code of the setgid() and initgroups() system calls; a local attacker could use this to escalate group privileges | Integrity, Process, Technology
Layer 7 / Application | Apache DAV | Remote denial of service | Availability, Process, Technology
Layer 7 / Application | Apple QuickTime version 7.6 | Heap-based buffer overflow via a specially crafted AVI file; successful exploitation may allow execution of arbitrary code | Integrity, Process, Technology
Layer 7 / Application | Apple iTunes | Multiple protocol handler buffer overflow allows remote attackers to execute arbitrary code on vulnerable installations of Apple iTunes | Integrity, Process, Technology
Layer 7 / Application | Ubuntu 8.04 LTS (pidgin 1:2.4.1-1ubuntu2.4), 8.10, and 9.04 | Pidgin did not properly handle certain malformed messages when sending a file using the XMPP protocol handler; if a user were tricked into sending a file, a remote attacker could send a specially crafted response and cause Pidgin to crash, or possibly execute arbitrary code with user privileges | Integrity, Process, Technology
Layer 7 / Application | Ubuntu 6.06 LTS, Kubuntu, Edubuntu, and Xubuntu running Gaim | Gaim did not properly handle certain malformed messages when sending a file using the XMPP protocol handler; if a user were tricked into sending a file, a remote attacker could send a specially crafted response and cause Gaim to crash, or possibly execute arbitrary code with user privileges | Integrity, Process, Technology
Layer 7 / Application | OCS Inventory NG 1.02 (Unix) | Unauthenticated users can extract arbitrary files from the hosting system due to inadequate file handling in cvs.php | Confidentiality, Process, Technology
Layer 7 / Application | Apple Safari | Memory corruption vulnerability that allows remote attackers to execute arbitrary code on vulnerable installations of Apple Safari | Integrity, Process, Technology
Layer 7 / Application | Apple WebKit | dir attribute freeing dangling object pointer; attackers could execute arbitrary code on vulnerable software utilizing the Apple WebKit library | Integrity, Process, Technology
Layer 7 / Application | Microsoft Office Excel 2000 | Malformed records stack buffer overflow | Integrity, Process, Technology
Layer 7 / Application | FreeBSD | The ntpd(8) daemon is prone to a stack-based buffer overflow when it is configured to use the 'autokey' security model, which could be exploited to execute arbitrary code in the context of the service daemon, or to crash the daemon, causing denial-of-service conditions | Integrity, Process, Technology
Layer 7 / Application | Microsoft Internet Explorer | Concurrent Ajax request memory corruption | Integrity, Process, Technology
Layer 7 / Application | Microsoft Office Word | Word document stack-based buffer overflow | Integrity, Process, Technology
Layer 7 / Application | Active Directory | Remote exploitation of an invalid free vulnerability in Microsoft Active Directory Server allows attackers to exhaust all virtual memory | Availability, Process, Technology
Layer 7 / Application | Adobe Reader/Acrobat | Memory corruption when processing PDF documents and handling TrueType fonts; could allow an attacker to execute arbitrary code with the privileges of the current user | Integrity, Process, Technology
Layer 7 / Application | Microsoft Windows 2000 print spooler | Remote stack buffer overflow vulnerability | Integrity, Process, Technology
Layer 7 / Application | Firefox 3.0.7, 3.0.8, and 3.0.9 for Windows with JRE 6 Update 13 | Java applet loading vulnerability | Integrity, Process, Technology
Layer 7 / Application | (not specified) | Trojan Agent ASMU | Confidentiality, Process, Technology
Layer 7 / Application | Cisco Adaptive Security Appliance (ASA) 8.x | Input passed within web pages is not properly sanitized before being used in a call to eval() in the context of the VPN web portal; this can be exploited to execute arbitrary HTML and script code in the user's browser session in the context of the WebVPN | Integrity, Process, Technology
Layer 7 / Application | Solaris NFSv4 server kernel module "nfs_portmon" tunable | Error in the "nfs_portmon" tunable may grant an attacker unauthorized read/write access to shared resources | Confidentiality, Process, Technology
Layer 6 / Presentation | OpenSSL (zlib_stateful_init function in crypto/comp/c_zlib.c) | Memory leak allows remote attackers to cause a denial of service | Availability, Storage, Technology
Layer 6 / Presentation | Sun Solaris | auditconfig privilege escalation | Integrity, Process, Technology
Layer 6 / Presentation | Red Hat Enterprise Linux version 3 | A specially crafted GETBULK request triggers a divide-by-zero error and causes the target snmpd service to crash | Availability, Process, Technology
Layer 6 / Presentation | Mozilla Firefox | A remote user can cause arbitrary scripting code to be executed on the target user's system within the context of an SSL-protected domain | Integrity, Process, Technology
Layer 5 / Session | Motorola Timbuktu PlughNTCommand named pipe | An attacker could send specially crafted data via the PlughNTCommand named pipe to trigger a stack overflow and execute arbitrary code on the target system; the code runs with SYSTEM privileges | Availability, Process, Technology
Layer 4 / Transport | Cisco Physical Access Gateway | TCP packets sent to port 443 | Availability, Transmission, Technology
Layer 4 / Transport | Sun Solaris 10 with patch 138888-03 or 139555-08 applied and Solaris Trusted Extensions installed and running | Patch regression affecting UDP traffic, which could cause a DoS condition | Availability, Process, Technology
Layer 3 / Network | Ubuntu 6.06 LTS, Edubuntu, and Xubuntu (ipsec-tools) | ipsec-tools did not properly handle certain fragmented packets, allowing a remote attacker to cause a denial of service | Availability, Process, Technology
Layer 3 / Network | Sun | Memory leak in IP multicast reception, which can be exploited to exhaust kernel memory | Availability, Process, Technology
Layer 3 / Network | Solaris 10 with patch 141414-01 or later; OpenSolaris builds snv_118 or later | An attacker could send specially crafted jumbo frames to trigger a flaw in the Cassini Gigabit-Ethernet device driver (ce(7D)) and cause a system panic | Availability, Process, Technology
Layer 2 / Data Link | Safenet SoftRemote IKE service | Remote stack overflow allows remote attackers to execute arbitrary code on vulnerable installations of the Safenet SoftRemote IKE VPN service; authentication is not required to exploit this vulnerability | Integrity, Process, Technology
Layer 1 / Physical | N/A | N/A | N/A
After analyzing and tabulating the vulnerabilities that were identified by the websites, it became apparent that the majority of the vulnerabilities existed within the application layer of the OSI model. In regards to the McCumber cube, most of the vulnerabilities affected integrity, processing, and, naturally, technology.
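To make one of the tabulated entries concrete, the Apache .htaccess row above corresponds to CVE-2009-1195, where a local user who can write a .htaccess file re-enables server-side includes and then executes commands. A minimal sketch of the two files involved follows; the file name and the id command are illustrative only:

    # .htaccess placed by the local user (assumes the server honors AllowOverride for this directory)
    Options +Includes

    <!-- page.shtml: the SSI exec element then runs an arbitrary shell command -->
    <!--#exec cmd="id" -->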
Issues
In this lab, the team had a few problems. The group struggled to sort through the thousands of exploits in Nessus to find the ones that would work on the targeted machine. The team also had a problem interpreting what was meant by stand-alone tools that used Nessus exploits. The group did look into tools that could perform tasks that exist in Nessus, but most of those tools worked only on the local machine, thus deviating from the way Nessus uses the exploits; it was then determined that this was not what the question was asking.
Conclusion
In conclusion, the group learned from this lab how to research the vulnerabilities of a target machine using both tools and online databases. To gather information on what vulnerabilities existed on a target machine, tools like Nessus and Nmap were used to scan the target machine. It was found that the two tools produced overlapping information, but Nessus provided more information than Nmap. The exploits found to affect the Windows XP SP0 machine were put into a table and analyzed; this analysis showed that most of the exploits affected the machine at the application and session layers. The exploits targeted the integrity of the machine because of the nature of the test, which was an active reconnaissance test that used port scans to determine the weaknesses of the target machine. The group also explored the use of sites that provide the most up-to-date exploits. A table was created that collected the newest exploits and organized them by the OSI model and McCumber's cube. The table showed very clearly that the exploits existed mostly in the application layer of the OSI model, due to most of the exploits targeting specific applications.
References
Davidson, J. (2005). Vendor system vulnerability testing test plan. Idaho National Laboratory.
He, L., & Bode, N. (n.d.). Network penetration testing.
Insecure.org. (2009). Bugtraq: By thread. Retrieved June 30, 2009.
Ob Computer Repair. (2009). Security software. Retrieved June 30, 2009, from http://www.obcomputerrepair.com/SecuritySoftware/
SecurityFocus. (2009). News. Retrieved June 30, 2009.
Securinfos. (2009). Security advisories, alerts and news. Retrieved June 30, 2009, from https://www.securinfos.info/english/index.php
SecurityTracker. (2009). Retrieved June 30, 2009, from http://www.securitytracker.com/startup/index.html
Thompson, H., & Chase, S. (2003). Red-team application security testing. Dr. Dobb's Journal.
Team four presents a lab that does fit all of the requirements of the lab design document, but is lacking in a number of areas that need to be improved upon for future lab reports. The abstract, while explaining what will be performed in the lab, is not nearly long enough as per the syllabus. They seem to imply that Nmap and Nessus are the tools used as a first step in penetration testing; I question that, as those are not the only two tools available.

The literature review is lacking. While it does explain each article that was part of lab four, that is all it does: each of the articles is laid out and individually explained. There is a total lack of cohesion among the literature reviewed, and it does not give a good explanation of the state of the literature on the topic. It also does not in any way tie into lab four itself; since all the articles did in one way or another relate to the steps of the lab, I question the overall completeness of the lab on the part of team four. The literature review is nothing more than a list with APA citations, and needs to be improved upon in future labs. If assistance is required, the Purdue Online Writing Lab is a good resource.

The methods section as provided by team four is lacking. Three short paragraphs do not denote a scholarly or academic discussion of the strategy and technique used in the completion of the lab requirements. Team four claims that the second part of the lab was mostly researching vulnerability databases; I would argue that it was actually all of part two. Anything that was not research was reporting on the findings of that research.

In their findings and results section, team four begins by stating that they discovered the best way to learn about system vulnerabilities through Nessus was to run Nessus against the system in question. This equates to discovering that getting results out of the tool requires running the tool, which is beyond obvious for graduate work and needs to be explained better. In team four's first table, they list a number of server message block (SMB) exploits at layer six, the presentation layer. Unless the ISO has made a change I am unaware of, SMB is a layer five, or session layer, protocol. This calls into question the level of research that was performed by team four, as this should be obvious. For layer two they list Ethernet card brand as an exploit; I fail to understand how the brand of Ethernet card by itself could be an exploit. In their second table, team four lists Sun Solaris as the technology for a presentation layer exploit. I was not aware that Solaris was a layer six protocol or tool; I was under the impression Solaris was an operating system. Finally, the tables should have been listed in a figures and tables section after the conclusion, and referenced in the lab discussion, not placed in the lab itself.
Team 4 begins their lab report with an abstract stating their objectives; they will be using two tools, Nessus and Nmap, to discover vulnerabilities that are present in the target machine. This will lead them to determining what tools are needed in performing the penetration tests.
They begin their literature review with Red-Team Application Security Testing: Testing techniques designed to expose security bugs (Thompson, Chase, 2003). They summarize the article as an argument for securing software applications rather than trying to secure the network around them. I believe this is a pretty fair assessment of the article. They continue on to describe how the article discusses breaking down the components of an application by function and testing them separately. They relate this to our current lab by stating that we are learning to research exploits, and that we need to look for ways to exploit applications using their normal activities. I believe this article also is a hint for a portion of the second part of the lab. Since this article states that applications cause the most vulnerability, it stands to reason that we may find that most vulnerabilities lie within the application layer of the OSI model.
They proceed to discuss Vendor system vulnerability testing test plan (Davidson, 2005). They listed the procedures and methods outlined in the testing plan. They related this to our current laboratory assignment only in that the testers have a strong understanding of the systems being tested prior to the test. They state that the document is in error because it did not specify whether or not the operator or developer consoles had removable storage. I disagree that this is an error. In the abstract it is described as a "generic test plan to provide clients (vendors, end users, program sponsors, etc.) with a sense of the scope and depth of vulnerability testing performed at the INL's Supervisory Control and Data Acquisition (SCADA) Test Bed and to serve as an example of such a plan". As a generic plan, it leaves room for modifications to fit the testing of a particular system. I also believe that it can serve as an example to help us model and document our own penetration testing.
The next article that they review is Network Penetration Testing (He & Bode, n.d.). They discuss the explanations contained within the article, such as announced and unannounced testing, and the difference between blackbox and whitebox testing. They mention that the article contains a list of exploits and tools that can be used to perform penetration testing against those exploits. They relate the lists of exploits and penetration testing tools to our current lab assignment as a benefit.
They continue with their methodology section. They made sure that Nessus had the most recent plug-ins and then ran it against the target machine. They tabulated the results and sorted them by OSI layer. Then, they took two of the exploits that they discovered and ran them against the target Windows XP SP0 virtual machine. They discovered that Nessus was the better way to discover vulnerabilities. They also discovered that the vulnerabilities it found were in the upper layers of the OSI model, namely the application and session layers. They also found that Nessus and Nmap found many of the same vulnerabilities; however, Nessus found more.
For part 2 of the laboratory assignment they identified Obcomputerrepair.com, SECURINFOS, Securitytracker, and Insecure.org as sites that contain databases of known vulnerabilities. They tabulated the vulnerabilities that were discovered during the month of June and sorted them by OSI layer. They concluded, as did our team, that the majority of the vulnerabilities lie in the application layer.
The literature review only treated each of the assigned readings individually. Instead of a cohesive write-up tying the literature to the lab tasks, each reading is simply summarized in paragraph form. Stating that the literature would be a benefit in the lab isn't enough. The only article that was given any thought beyond a summary was Vendor System Vulnerability Testing Test Plan. The critique of the article's assumptions is hard to agree or disagree with because little explanation or logic is given for why this particular section was selected. Removable storage capabilities could certainly pose a risk to these particular systems: valid users could inadvertently attach a removable drive containing a virus or worm, which could affect the availability of the SCADA network. Concerns like this are also the job of the person evaluating the risks in the system; insider threats pose a significantly higher risk factor than outside entities, as we've seen in previous labs and literature.
The methodologies section is too brief to be reproducible. For the first part, why was only one VM tested? If you're looking to have more data to support your findings for section two, running Nessus and Nmap against at least two different operating systems would be a good start. The method of evaluating a vulnerability database's expertise isn't sufficient to make a good determination. What were the authors' methods for determining whether the descriptions given were "good"? Is linking to another database a bad thing? What if that database was the original source? Wouldn't it be better to simply catalog the basic data and refer users to the source?
The table in the findings contained quite a bit of data for layer seven, which fits with the findings the group discussed. The data in the lower levels doesn't really constitute exploits or vulnerabilities, particularly the items in layers four and two. The sentence immediately following the table, "The group then ran Nessus and Nmap against the Windows XP SP0 machine and discovered that the two scans picked up the same vulnerabilities, except Nessus picked up more vulnerabilities than Nmap," should be removed; the two statements conflict with each other. One of the vulnerability databases listed, obcomputerrepair.com, can hardly be considered a vulnerability database; the critique of it is valid, but based on that alone it should have been replaced with something more reputable. The extensive table showing vulnerabilities would have been better with information on the sources used to compile it and possibly links to the entries in those databases. One item totally missing from the findings was the handling of the tools used to test the exploits that were found. Even if the tools were found to be local only, as listed in the issues section, it would have been good to list them and their corresponding vulnerabilities along with a discussion of why you didn't believe these were valid in the context of the lab exercises.
The conclusion is a good summary of the activities of the lab along with results but doesn’t tie the topic, the literature, and the results from the lab exercises together.
Once again, I must admit that I found this team’s lab write-up to possess substantial depth in investigation and analysis. I found the literature review informative, with a few good questions raised. Additionally, the research of vulnerabilities for the first and second part of the exercise appeared the product of substantial effort. Finally, I found the ‘Methodology’ section to be reasonably detailed, and the ‘Results’ section to be informative, if brief in discussion.
That is not to say that some problems cannot be found with this report, however. The literature review, while being of generally respectable literary quality, lacked anything more than a trivial reference to application within the scope of the lab exercise. Additionally, the ‘very’ long paragraphs used, while cohesive in subject matter, should have most likely been broken down into smaller excerpts. It also appears that the reviewer ran out of creative drive toward the end of the review, as we see a progression of sentences which read: “Next the writers…, Then the writer, Next the writer…, Then the writers…” Not a strong finish from such a promising start.
Also of note was the missing discussion of attack tool testing. One of the requirements of the lab exercise was to ‘test’ tools found which match vulnerabilities discovered in the team’s experimental system: this was totally absent. I would submit that the heading on the result table for part one should read ‘Vulnerability’ rather than ‘Exploit,’ as an area of vulnerability is not in itself an exploit. An ‘exploit’ proper would need to include a means by which to utilize this security flaw: i.e. an attack tool. In fact, I could not locate ‘any’ attack tools listed in the entire lab write-up. I admit the research done on vulnerabilities appeared well done, but without matching tools (where available) to take advantage of these opportunities, it is hard to classify any of this work as truly describing ‘exploits.’ I also thought the table for the first part of the exercise to be somewhat poorly formatted. It is obviously mostly a copy-and-paste from a ‘Nessus’ report: a little more care in organization and presentation would be appropriate.
Of minor note, I noticed that 'screen shots' were mentioned, but did not appear in the report anywhere. It appears the only way to include images of any kind on this blog is to use an image hosting service, such as http://imageshack.us/, and then link to the uploaded image via an image frame or HTML object in WordPress. It is unfortunate that this team was ultimately unable to share their results because of posting issues: this significantly affects the ability of a reviewer to evaluate the research this team has done.
Finally, although I found the database exploit table interesting, I thought that confining the listing to “the month of June” an odd choice. I must confess I don’t believe one month to be a ‘significant’ representation of the ‘known vulnerabilities’ in these databases. I would say that, due to the huge number of ‘prior’ vulnerabilities existing in these databases, a listing such as this is heavily biased toward the OSI application layer. Consider this: operating systems generally decrease in the number of security flaws as they age, therefore ‘recent’ vulnerability snapshots will likely not show the ‘substantial’ number of ‘OSI lower layer’ vulnerabilities accumulated over time by aging operating systems. In fairness, it appears this team used appropriate methods on the information obtained: I do not question the procedure. I simply suggest that the data set chosen is a heavily biased sample, and therefore questionable in the scope of making accurate statistical measurements.
This group did not follow the tags that are required for submission of lab reports. Right off the bat I need to ask the question: are these lists really up-to-date? Who is in charge of updating these lists, and wouldn't a list only have the known exploits? If that is the case, then we really can't call the lists up-to-date; rather, they are lists of currently known exploits. The team went from having too many citations to having basically only one citation (not in APA 5 format). If you use text from the articles you MUST cite them; don't go overboard, but cite when you must. This literature review reads like a list. It needs to be more cohesive; combine the articles when you write. It is obvious that different people wrote different sections of the literature review. Before submission, the literature review needs to sound like one voice. Make the changes, or, if needed, have one person write the literature review. The SCADA article got a lot more attention than the other articles. All of the questions were answered for this article, but not for the other two. I think this is because of the different team members writing the literature review.
I am wondering why this team was the only team to use both Nessus and Nmap; I thought the lab stated to choose one of them. The methodology seems like a rehash of the objectives of the lab experiment, which belongs in the abstract. The group keeps mentioning the utilization of Nmap, but there is little detail on what was done with it; Nessus seemed to be the primary tool that this team used. How is it possible that the two scans picked up the same vulnerabilities, but Nessus picked up more than Nmap? If they picked up the same vulnerabilities, then the results should not be different. I question the validity of Obcomputerrepair.com; this does not seem like a vulnerability database at all. I think the second table was not necessary, nor was it required. The team did not give any reason as to why the patterns they found occurred. There needs to be more detail in the results section: more explanation as to why the group believes their findings are correct. What did the group do to get around the issues they had? I don't think we can really count the websites that were just using the Bugtraq list; the lab stated that the groups needed four different databases, not ones that are basically mirror images of each other. The team should have researched the He & Bode article to see when it was published; all they needed to do was Google it and they would have found the year it was published. I don't find n.d. to be an acceptable date for the article.
I think team 4's abstract was well written and explained in detail what they were going to accomplish in lab 4. Their literature review contains discussions of the sources and is organized by publication as opposed to combining all the publications into one massive summary.
Their methodologies section is too brief in comparison to how lengthy their literature review was; perhaps more time could have been spent on this section. The table in the findings section contained a lot of data for layer seven, which is representative of the group's findings. I don't agree that the data in the lower levels accurately depicts exploits or vulnerabilities. The table displaying vulnerabilities was lacking information about the sources used to find the data; perhaps a link to the sources would have been beneficial. In addition, I did not see any discussion of the specific levels of expertise needed to run the exploits listed in the table. The conclusion is a good summary of the activities of the lab along with the results, but falls short of tying the whole lab together. I must say, though, that the writing skills in this lab were much improved over their past labs.
I think that group 4's write-up for lab 4 was poor. The abstract for this lab was adequate and provided a short overview of the lab. The literature review was good and adequately reviews the material; the group answered all of the required questions for each reading, and all of the citing for the literature review was done well, with all of the pages included. For part 1, there seem to be many problems. At first glance, the section appears to be very short; when reading, I got to part 2 and didn't realize that part 1 was over already. Part 1 consisted of only three paragraphs. The findings for part 1 only answer about two of the seven questions required. No research was done on the exploits; they were only listed as output of Nessus. Also, the group included both Nessus and Nmap when the directions only ask to choose one. The directions asked to list the vulnerabilities by system, and the group did not. Absolutely no stand-alone tools were listed for part one, and since they were not, they were clearly not tested either. No conclusions were made for this section, let alone explained. No strategy was included for how this knowledge was gained, only a list of the vulnerabilities found and a short description of those vulnerabilities. This section was very weak and seemed as if the group did not even read the lab questions before writing it. Part 2 also seems very weak. The group included BugTraq, which was not supposed to be listed, and only discussed the level of expertise involved for one site. The group did add many vulnerabilities into the grid, all of which were related to the McCumber Cube. However, the links included are for vulnerability repositories and not exploits; a vulnerability is not an exploit, and the lab required research on exploit code repositories. Also, it appears that the group had trouble finding sources when they include "Kansas City's most celebrated 'no wipe-out' computer repair specialist," which looks like a GeoCities page. The conclusion was adequate and summarizes what was covered. Overall, few of the required questions were answered and the lab could have been much better.
The team starts off by describing what they are going to do within the lab and the steps that will be taken to accomplish the task. They then go on to the literature review, and again this week it is not cohesive. They just describe what happens within each article, and there is little argument within the literature review. They do relate the articles to the lab and to some things that can be implemented, but this does not make for a well-rounded literature review. It is really hard to review this section when there is little argument between the articles or opinions for or against the different points expressed within them. Do not be afraid to back up your team's opinion; it will not always be correct, but it makes for interesting discussion and creates learning experiences. If there are questions left from the article, try to answer them and research different possibilities to gain more knowledge on the topics. They then go on to the methodology section and describe what is to be done within each part. In the first part they define the tools that were going to be used within the testing environment, and for the second part they describe how they were going to find databases of exploits. Their results and findings section was a little disorganized, and the findings were not described in detail, just that they had found exploits within Windows XP SP0; the other operating systems on their virtual network are not even mentioned. Yes, they did provide a list of exploits that were found within the month of June, but what they provided within the section does not back up their findings. Within the second part of the lab they listed some of the databases but did not describe them, and they seem to have found only the minimum required. There were many databases available; why wasn't a government-standard vulnerability database, such as the National Vulnerability Database, discussed? This would allow a standard to be discussed, along with how each of the other databases has been affected by the standard, or a comparison and contrast among them. The team goes on to describe their issues, and states that their biggest problem was sifting through the exploits within Nessus. After this they go on to their conclusion and discuss what they learned. They included some information within the conclusion that could have been better placed in the findings; the conclusion could have used some revising to better summarize the whole of the lab.
Team four presents a report that is much improved over previous attempts, but still has a long way to go. The writing flows much better, and the information is for the most part coherent. An obtuse methods section and unclear results mar the effort. The group does not differentiate between vulnerability and exploit and uses the terms interchangeably throughout the lab, to their detriment.
In the abstract, this team states that the lab is about performing scans to detect vulnerabilities. In actuality, this is what the last two labs discussed. This portion is really a precursor to actual learning objectives of the lab. The remainder of the abstract simply states the objective of the exercises, without going into great detail.
The literature review is much improved over previous weeks. The writing style is coherent, and the articles are well summarized. The team attempts to relate Thompson and Chase's article to the assignment, but could use more detail. What are the similarities between researching exploits, as the lab objectives require, and what Thompson and Chase propose for application-focused red teaming? How is it different? The team evaluates Davidson, pointing out flaws that they perceived in the test plan. Are these actually flaws? Might there have been some requirement that made seemingly superfluous tests necessary? The group only weakly relates the document back to the labs. What is it that is different about what Davidson is doing? The group summarizes He and Bode, but neither evaluates nor relates to the writing more than superficially. Is there something special about their methodology?
The methods section is vague, unrepeatable, and flawed. In part one, is it Nessus that actually performs the exploits? I'm not really certain what you did other than run Nessus and Nmap. In part two, what were the terms you used to find sites? You use vulnerability and exploit interchangeably; are they the same thing? Did you really look at every entry on all the sites? How did you fit that into a week? Is a reference to another source indicative of a lack of expertise, or an attempt to pass along accurate information?
I'm not sure how the table supplied for results in part one is any different than what was done in the previous lab. The group discusses running the tools against XP Service Pack 0, but I'm not sure what the results were. This piece really belongs in the methods section, but what were your results? For part two you list among your sources Obcomputerrepair.com, which is the site for a PC repair shop in Kansas. How is this in any way authoritative? You criticize Securinfos for having a lack of detail. Is this perhaps because the original site is in a foreign language? Perhaps something is lost in translation? Why do you suppose that Insecure.org and SecurityFocus share so much information? The group's methods state that you will analyze the expertise of the various sites, but your findings don't reflect this.
I'm not quite sure what the group is trying to say in the issues section. I think you were attempting to use the hard way alluded to in the assignment. What, if anything, did you do to overcome your issues? The conclusion does a decent job of recapping the information contained in the lab.
In the team's abstract they indicate some key points of their laboratory exercise and state that they were going to utilize two tools, Nmap and Nessus. The abstract was short and only mentions discovering vulnerabilities, tools, and exploits. The team does cover the reading about verifying tools in the article Red-Team Application Security Testing. The team says the article is mainly about securing the software itself rather than relying on securing the network around it, which is similar to what the other teams say. More than software is needed to protect a network; penetration testing can help show where areas of a network need more attention and can also show where an attacker would attack.
It seems unclear which tool the team used, or whether the team perhaps used both tools. In the findings and results section they have a chart titled Exploits Found on a Windows XP SP0 Machine Using Nessus. The entry that stands out the most to me is the exploit at layer 2, Ethernet card brand. The question I am wondering about is: did Nessus provide this information as an exploit, or was it just extra information that was returned during an exploit? The team also talks about researching exploit code, but not from an anti-virus vendor. They provide a chart of vulnerabilities that were discovered within the month of June. Where did this come from?