Abstract
The purpose of this exercise is to compromise three virtual hosts, using primarily passive reconnaissance and research for the first two, and active reconnaissance in addition to these methods for the third. First, the team examines the concept of passive reconnaissance and its relation to this exercise. Then, literature related to the area under question is evaluated. Continuing, a plan is devised by which exploitation of two virtual hosts is attempted, using only vulnerabilities discovered by passive means. Next, active reconnaissance is employed against a third host, with discovered vulnerabilities targeted for exploitation. Finally, we examine the results of these experiments and attempt to qualify their implications with regard to penetration testing concepts in general.
Introduction
In previous lab exercises, this team has discussed the topics of both passive and active reconnaissance, and attempted to create definitions for each respective area. In this exercise, we again address the concept of passive reconnaissance, and qualify it with respect to this laboratory exercise. We have presented that passive reconnaissance includes the three concepts of uncertainty, invariant risk, and limitation of scope: the challenges of the current exercise have created the need to examine the ideas of uncertainty and invariant risk in greater detail.
In prior discussion we have speculated that such an action as querying a name server presents no real risk to an attacker, as conservative application of this query can in no way be singled out as different from acceptable usage. To expand on this, we feel the same can apply to many other common services, such as user authentication, NetBIOS and SMB utilization, and even port interrogation. The key is that an attacker should ‘behave’ in a non-threatening or conservative manner which approximates a ‘normal’ user. It is expected that workstations will attempt logon to a server in a local network environment: it only becomes suspicious when many logon requests are initiated in rapid succession. Likewise, it is ‘normal’ for clients to interrogate a server for offered file shares and services; even port queries are not out of order if the pattern of a ‘port scan’ does not emerge. These ideas are closely tied to the concept of ‘slow scanning,’ which was also discussed in a prior lab: here too, we classified this as passive.
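The ‘slow scanning’ idea above can be sketched in code. The following is a minimal illustration (not a tool used in this lab): it only builds a randomized probe schedule, shuffling port order and spacing probes at irregular, human-scale intervals so that no burst pattern emerges; the actual connection attempts are left out.

```python
import random

def slow_scan_schedule(ports, min_delay=30.0, max_delay=300.0, seed=None):
    """Build a low-and-slow probe schedule: shuffled port order with
    randomized delays between probes, so no rapid sequential-sweep
    pattern emerges for an observer to flag."""
    rng = random.Random(seed)
    order = list(ports)
    rng.shuffle(order)  # avoid the signature of a sequential port sweep
    return [(port, rng.uniform(min_delay, max_delay)) for port in order]
```

Each (port, delay) pair would then be consumed by a prober that sleeps for the delay before making a single, ordinary-looking connection attempt.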
Therefore, in light of these considerations, we suggest that the emulation of normal usage patterns with respect to network client behavior necessarily fulfills the requirements of uncertainty and invariant risk, with the limitation of scope to information gathering being assumed. If an attacker does not step outside the bounds of acceptable behavior to gain information, neither is his presence betrayed nor is any real risk taken. We feel this falls solidly within the realm of what could be termed passive reconnaissance.
With this discussion of passive reconnaissance methods in mind, we define two areas of research undertaken within this exercise. First, we propose to use passive reconnaissance coupled with exploit research to attempt the compromise of two virtual machines: namely a Windows XP Service Pack 0 host and a Windows Server 2003 Service Pack 2 host. Then, we use all methods available, including active reconnaissance, to attempt the compromise of a third virtual host: that being a Windows XP Service Pack 3 machine.
Literature Review
As we proceed from exploits without tools to exploits with tools, we receive a new set of articles to read and review. This week’s readings contain a diverse set of articles within the area of penetration testing. The general topics cover the tools, techniques, and procedures for testing a system. Many of the articles cover network issues encountered when conducting penetration tests. Password recovery, black-box testing, and rootkits are also covered in this week’s readings.
A networked computer system is a double-edged sword. On the one hand, it helps us to be more productive, more efficient, and more connected. On the other hand, being connected may also mean being connected to those who wish to exploit our system. Chen, Yang, and Lan (2007) discuss using tools such as Nessus and SATAN to test the vulnerability of a networked system. They state, however, that the output of these tools must be analyzed to ensure that it does not contain false positives. They present a new tool they developed using a distributed system with two components: a controller and an agent. The agents are placed at different points throughout the network and gather information concerning the security of the system. Once an agent has conducted a vulnerability scan, it conducts a penetration test to ensure that the scan did not produce a false positive. The controller then receives the results from the agents for analysis, so that it can formulate a report on the security of the system (pp. 1-4).
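As a rough sketch of the controller/agent division of labor Chen, Yang, and Lan describe (the data shapes here are our own invention, not the paper’s):

```python
def aggregate_reports(agent_reports):
    """Controller-side merge of agent findings. Each agent re-tests its own
    scan hits by attempting the exploit, then reports (host, vuln, confirmed)
    triples; the controller keeps only confirmed findings, discarding
    scanner false positives."""
    confirmed = {}
    for report in agent_reports:
        for host, vuln, verified in report:
            if verified:
                confirmed.setdefault(host, set()).add(vuln)
    return confirmed
```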
Often a firewall is placed between the local area network and the internet to protect the local area network from malicious systems on the internet. Firewalls also need testing, to ensure that they are working properly and to locate flaws that would leave the local area network they protect vulnerable to attack. Haeni (1997) recommends performing manual testing in addition to automated scanning to ensure that the systems behind the firewall are kept safe. He describes a penetration test in which a denial-of-service attack is performed against a trusted host behind a firewall so that its IP address can be spoofed. By spoofing the IP address of the trusted host, the attacker can gain access to the target (pp. 1-25). One issue the team had with this article is its claim that a demilitarized zone is necessary to operate a web server through a firewall. Many routers now have the ability to forward a single port through a firewall to a web server or other service, eliminating the need for a demilitarized zone.
Systems without a firewall, or with a firewall that is not properly configured, may be compromised and become part of a distributed denial-of-service attack. In this type of attack, several computers on a network are compromised and used to send network traffic to the target computer. The target computer becomes so occupied with processing the bogus requests that it can no longer process legitimate ones. Mirkovic and Reiher (2004) discuss how such an attack can be performed. Packets of data are transported from their source to their destination through a series of networks, and the intermediate networks have little responsibility but to forward the packets. This allows for attacks such as the TCP SYN attack, in which multiple TCP SYN requests are sent to the target, filling its connection queue so that legitimate requests cannot be completed (pp. 39-54).
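The queue-filling mechanics of the TCP SYN attack can be illustrated with a toy model (the backlog size and request counts below are arbitrary, for illustration only):

```python
def simulate_syn_flood(backlog_size, attack_syns, legit_attempts):
    """Toy model of a TCP listen backlog under a SYN flood. Spoofed SYNs
    leave half-open (SYN_RCVD) entries that occupy backlog slots; once the
    backlog is full, later connection attempts are simply dropped."""
    backlog = []
    for _ in range(attack_syns):
        if len(backlog) < backlog_size:
            backlog.append("half-open")   # no ACK ever arrives to complete it
    dropped = 0
    for _ in range(legit_attempts):
        if len(backlog) < backlog_size:
            backlog.append("legitimate")
        else:
            dropped += 1                  # queue exhausted: service denied
    return dropped
```

Real stacks age half-open entries out on a timer, which the attacker defeats simply by sending SYNs faster than the timeout reclaims slots.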
Often, rootkits are installed on compromised systems so that those systems can be used to conduct the denial-of-service attack against the target system. Kuhnhauser (2003) discusses how an attacker begins by finding a weak point at which to compromise the system; often these weak points arise from programming errors in conventional, sequentially programmed components. Once the rootkit is installed, it hides itself and then places several backdoors so that the attacker retains access (pp. 12-23).
Another way to gain access to a computer in order to compromise it is to crack a user password on the system. The article Ethical Hacking and Password Cracking: A Pattern for Individualized Security Exercises (Snyder, 2006, pp. 13-18) discusses exercises designed to teach students how to access and recover passwords, such as a FrontPage user password. The article discusses various types of hashes used to protect passwords stored on a system, including SHA1, MD4, and MD5. Password recovery tools such as John the Ripper repeatedly guess the password until successful.
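The guess-and-hash loop underlying wordlist cracking can be shown in a few lines. This is a simplified sketch of the approach, not John the Ripper’s actual implementation; real password stores also involve salts and format-specific encodings.

```python
import hashlib

def dictionary_attack(target_hash, wordlist, algorithm="md5"):
    """Hash each candidate and compare digests: since MD5/SHA1 are one-way,
    recovery means guessing candidates until one hashes to the stored value."""
    for word in wordlist:
        if hashlib.new(algorithm, word.encode()).hexdigest() == target_hash:
            return word
    return None
```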
Modeling is sometimes used to assess the vulnerability of networked systems. One such model is Attack Net (McDermott, 2001, pp. 15-21), which uses the Petri net paradigm. With this approach, places represent the states of the system and transitions represent events that act on the system to change its state. A token represents the current progress of an attacker (p. 17): when a token is located at a place, the attacker has gained control of that place, and tokens move from place to place along directed arcs via transitions toward the target (p. 17). Another method of modeling network vulnerabilities is discussed in Modeling TCP/IP Networks Topology for Network Vulnerability Analysis (Zakeri, Shahriari, Jalili, & Sadoddin, 2004, pp. 1-6). This approach uses text-based modeling in which definitions of hosts, links, and interfaces are combined to make up the model of the network. Likewise, the operating system associated with a host is defined by brand, component, and configuration. Services are broken down into service information and access policies. Vulnerabilities are then defined by name, precondition, and postcondition. Finally, the attack is defined as a sequence of vulnerabilities which may be exploited to reach the desired goal.
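A minimal rendering of the Attack Net idea, with places, transitions, and a single token (the place and event names below are illustrative only, not taken from the paper):

```python
class AttackNet:
    """Tiny Petri-net-flavored attack model: places are system states,
    transitions are attacker events, and one token marks the attacker's
    current progress toward the goal place."""

    def __init__(self, transitions):
        # transitions: {event: (input_place, output_place)}
        self.transitions = transitions

    def run(self, start_place, events):
        token = start_place
        for event in events:
            src, dst = self.transitions[event]
            if src != token:
                raise ValueError(f"'{event}' cannot fire: token not at '{src}'")
            token = dst  # the transition fires, moving the token along an arc
        return token
```

A full Petri net would allow multiple tokens and concurrent transitions; one token suffices to show the progress-marking idea.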
Our research has shown that many vulnerabilities are caused by programming errors within the application layer. One way to locate errors within an application is through black-box testing. Bo, Xiang, and Xiaopeng (2007) present MobileTest, a utility to perform black-box testing on smart mobile devices. MobileTest uses a layered design to adapt to the differences among various devices. The system allows multiple simultaneous states and tasks to be tested to determine their effect on each other (pp. 1-7).
With the assistance of these tools and techniques we are better prepared to conduct penetration testing. We have seen how systems can be compromised using rootkits, password crackers, and denial-of-service attacks. We have also seen how they can be made more secure by testing firewalls and software for vulnerabilities.
Methodology and Procedure
The team began by dividing the lab into two separate testing domains. The first test setup was created to facilitate passive reconnaissance and the second for active reconnaissance. Specifically, the first testing environment was created in VMware Workstation with four hosts running: Windows XP SP 0 (192.168.3.3) as a target, Windows Server 2003 SP 2 (192.168.3.4) as a target also, Windows XP SP 3 (192.168.3.100) as a tool host, and nUbuntu (192.168.3.101) as an observer. It should be noted that the Windows 2003 Server machine had no firewall active, and automatic updates had been disabled. While the nUbuntu machine was not strictly necessary, it offered us the ability to test the passive reconnaissance capabilities of the ‘lanmap’ program, which is not readily available in binary form for Microsoft operating systems.
Foremost, the team eliminated the use of account cracking for the duration of all the penetration tests, as we could determine no way to simulate a realistic environment with hypothetical user accounts: any method involved biases inherent in members of the team choosing the account parameters, even if blind methods were employed. Further, any exploits which required user interaction were deemed unusable for all tests, for reasons similar to the account cracking issue. Additionally, although we were already aware of each machine’s configuration and IP address, the team thought it important to prove that this information could be discovered using only passive reconnaissance means, as this would be a requirement in a ‘real’ penetration exercise. We utilized two parallel methods to do this: the ‘lanmap’ tool was run on the nUbuntu machine until a reasonable network map was generated, and we also used the ‘nbtstat’ tool built into the Windows XP tool host in conjunction with ‘Nete’ (www.cultdeadcow.com/tools/nete.html). The ‘lanmap’ executable (Figure 1) was able to passively identify the Windows XP SP0 host directly by NetBIOS name and IP address; however, the Windows Server 2003 host was only fingerprinted as ‘Win,’ and so other methods were required to identify it more precisely.
Using the command “nbtstat -RR” and then “nbtstat -r” generated a list of NetBIOS names on the network. The names were ‘XPSP0VM’ and ‘2K3VMWARE,’ which might have been considered sufficient for identification, but we pushed further. Running “nbtstat -a 2K3VMWARE” followed by “nbtstat -c” displayed this host’s NetBIOS name associated with an IP address. It should be noted that NetBIOS names need only be associated with a MAC address; we ran the first command to force an IP address resolution. Finally, to determine the machine type, we ran “nete /M 2K3VMWARE” which reported that the kernel was NT version 5.2, easily found by web search to belong to the Windows 2003 server family: our task was complete.
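The name extraction we performed by eye can be automated; the following loose sketch parses ‘nbtstat -r’ style output (the exact column layout varies by Windows version, so the regex here is an assumption):

```python
import re

def parse_nbtstat_names(output):
    """Pull NetBIOS machine names from nbtstat-style output by grabbing the
    leading token on lines carrying a hex suffix such as <00> or <20>."""
    names = set()
    for line in output.splitlines():
        match = re.match(r"\s*(\S+)\s*<[0-9A-Fa-f]{2}>", line)
        if match:
            names.add(match.group(1))
    return sorted(names)
```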
The team used web searches to find the most likely points of vulnerability by which to compromise both target hosts. The Windows XP SP0 required very little research as we already had prior knowledge from a previous exercise as to working exploits (ironically, this does fit in with the web search method, as prior lab exercise reports are now part of internet history). We ran the Metasploit ‘ms04_007_killbill’ plug-in from our tool host, and succeeded in compromising the XP SP0 machine in one attempt (Figure 2).
Research of the vulnerabilities for the Server 2003 machine was initially conducted via the Milw0rm archive (http://www.milw0rm.com). We found a working exploit for the MS08-067 vulnerability, and thought it might have a chance of success due to it being corrected only recently (October 23, 2008). In truth, the team was aware that the Server 2003 host was current with updates through April 15 of this year, so we really did not expect success: an admitted bias. We found the Metasploit plug-in for this vulnerability, and attempted to compromise the Server 2003 machine with various configurations for the plug-in; we ran this approximately a dozen times unsuccessfully before we gave up. It should be mentioned that we obtained additional SMB pipe names using the Metasploit pipe_auditor plug-in for use with the primary exploit.
As this exploit did not work, the team decided to create a table of Microsoft security updates from the previous attempted exploit’s date (October 23, 2008) to the current date (Table 1). This was felt to be appropriate, as in an actual blind testing scenario, it would be reasonable to assume that the failed exploit of a vulnerable area would indicate a target at least current in updates up to the security patch addressing that vulnerability. In addition to Microsoft Security bulletins, we searched the web for any other possible exploits, including the National Vulnerability Database (http://nvd.nist.gov) but found nothing significant in addition to this bulletin based list. The goal of this list creation was to reverse engineer vulnerabilities out of corrective measures, much like the previous lab exercise.
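The triage we applied by hand to build Table 1 can be expressed as a simple filter (the bulletin record fields here are our own schema, not Microsoft’s):

```python
def usable_bulletins(bulletins, services_detected):
    """Keep only bulletins that need no user interaction and whose required
    service (if any) was actually observed on the target."""
    usable = []
    for b in bulletins:
        if b["user_interaction"]:
            continue  # ruled out for all tests, per our ground rules
        needed = b["requires_service"]
        if needed is not None and needed not in services_detected:
            continue  # vulnerable component not present on the target
        usable.append(b["id"])
    return usable
```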
After this, a passive evaluation of the Server 2003 host was done using standard Windows utilities and third-party tools. We used basic methods, such as attempting to connect to shares and services via the file explorer, using the ftp, tftp, and telnet clients, and also remote registry, ‘net use,’ and the remote desktop application. We were unable to successfully connect to the remote host, except for an extremely limited null session via ‘net use \\2K3VMWARE\IPC$ "" /USER:"",’ which proved essentially worthless. Additionally, we used Microsoft’s ‘PortQry’ utility to test individual standard ports, with the most interesting result found by the command “PortQry.exe -n 2K3VMWARE -e 135 -p tcp”, which returned a list of RPC endpoints being offered (epmap service). Despite this effort, the team did not discover any usable services or responding ports, and so was unable to use any of the possible vulnerabilities indicated by our table. At this point, the team considered the testing methods exhausted, as we had eliminated account cracking for practicality reasons.
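The essence of a PortQry-style check is a timed TCP connect; a minimal sketch follows (PortQry itself also speaks service-specific protocols, such as the RPC endpoint mapper query, which this does not):

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Report whether a plain TCP connect to host:port succeeds within
    the timeout -- the most basic form of a per-port reachability test."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False
```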
The third part of the testing called for a full-on assault of the third operating system, using active brute-force tactics and no consideration for stealth. The team decided that, in order to make it interesting, we would run this attack against a fully patched Windows XP Service Pack 3 host. We ran a fully updated Nessus from another XP machine, with all plug-ins enabled, including the dangerous plug-ins. All of this was performed in a second VMware Workstation environment similar to the first setup, with the number of hosts reduced to two.
The outcome was rather disappointing. Nessus was unable to uncover a single exploitable vulnerability in the system. This led to some discussion among the team as to how to proceed, as well as why this oft-maligned operating system appeared to be so well secured. In view of the passive testing results and the research involving the vulnerability table for Server 2003 (which is identical to the reported vulnerabilities for Windows XP SP3, save the entry involving MS SQL Server), we considered this test ended also, and so did not attempt any exploit, as none were known to work against this host configuration.
Results and Discussion
It is somewhat remarkable that this team was unable to compromise any machine other than the notoriously insecure configuration found in the Windows XP SP0 host. We examined the test procedures and results, and formulated an explanation for the disappointing outcome. Specifically, we feel that the answer lies in the test environment configuration itself.
Foremost, the environment is too clean. It lacks users and usage, so it is not a good model of a real network system. There are no services running, no traffic moving in or out. There is a complete lack of intermediate devices that could be used as preliminary targets with the ultimate goal of compromising the system.
To further explain this point, we look back to the discussion of the team’s report from last week. It was noted that while Windows firewall filters most traffic, it creates a barrier to the user. Each time the user wishes to run an application that requires web services, they are prompted to either continue blocking the application, or unblock it, thus opening a port or ports, and incidentally opening a hole in the defenses. It has been the team’s experience that a user with sufficient privileges may become annoyed enough with the constant prompts that they simply turn the firewall off entirely.
Each application added to the system also comes with its own set of bugs and adds vulnerabilities to the mix. As discussed in earlier lab exercises, there are methods that can be used to help mitigate this risk, but it can never be removed entirely. A user with sufficient privileges may circumvent any policy in place and install untested software. This is especially dangerous in an environment where the user feels the policies are too restrictive or interfere with their productivity. Add into the mix web browsing, the possibility of client-side script execution, and the surreptitious installation of malicious code, and the once impenetrable fortress now looks like an open-air market.
This should not be taken to mean that the team feels draconian policies and restrictive rights assignment are necessary to secure a system. There is a constant and often noted trade-off between security and usability. What we would suggest instead is that security policies should be flexible in order to provide the services necessary to keep users working efficiently without opening the system to undue risk. This method is difficult, and requires constant review and dialog with the stakeholders. There is not a one-size-fits-all solution, and the only truly secure system is completely useless.
As demonstrated by our results, the base operating system with few services running or applications installed is quite secure. As one adds elements to this secure system, vulnerability increases. We feel this concept is also reflected in the overwhelming number of exploits and tools available which target applications, and therefore the seventh OSI layer. Consider also that many application-layer vulnerabilities really arise from how the application is used, e.g. tricking the user into installing infected add-ons or browsing to a malicious web page. There is a perceivable bias toward this OSI layer, but it is one which simply reflects reality: many more vulnerabilities are present in applications coupled with user interaction than in non-interactive lower-level functions. This becomes an important factor in penetration testing, as testers may have to rely on the unreliable and unpredictable actions of users to effectively penetrate the target system. It may raise issues of ethics, and even questions of the legitimacy of the test itself, as some might view this as simple trickery or opportunistic treachery. It would seem that exploits found at the lower OSI layers, those which most often do not involve human interaction, are much simpler to ‘sell’ to customers, as these are relatively easy to correct from a security standpoint: application users cannot be so easily ‘fixed.’ Finding exploits which are rooted in user behavior lays bare the idea that no installation can ever be fully secured as long as humans are employed in the course of normal operation: this is an uncomfortable truth.
Further demonstrated by our results, we believe that in many cases active scanning will not reveal any more information than effective passive scanning and careful research will on their own. Conclusively, we found nothing with the Nessus scan which we did not already know from the previous passive test against a machine of similar properties. If the situation allows, much can be gained from an active scan, reducing information gathering and research times to a small fraction of the passive approach. However, if time is not an issue or the risk of detection is high, passive means provide a thoroughly effective technique. Finally, we can see that biases necessarily exist in such active tools as Nessus, but this is mostly due to the limitations of project development. In theory, the Nessus project could be capable of detecting all known vulnerabilities up to the current date of use, but this may not be realistic, as it would require a tremendous amount of vigilance and effort on the Nessus team’s part. Hence, we would expect that active tools such as Nessus will always be deficient with respect to the very latest vulnerabilities revealed to the security community, simply because of the time lag caused by development and deployment concerns. Therefore, a penetration tester should always check current vulnerability listings when employing an active scanner, as the very latest exploits are likely not checked for.
In examining how the exploit of one OSI layer affects other layers, we must propose that upper layers are necessarily compromised by lower-level exploits. As one area addressed by the McCumber cube model is ‘availability,’ we are certain that, at the very least, a lower-level exploit can deny the layers above it access to information. Whether the compromise is of network hardware or host-based, this holds true. We do recognize that the issue is not always this simple, however. As the OSI model incorporates both ‘host’ and ‘network proper’ components (usually divided below the fourth layer), there are situations involving the host layers where an exploit will compromise all layers of the host, even those which lie ‘below’ the exploited layer: essentially an attack on the ‘processing’ McCumber subset of coordinates. A good example is a layer-seven web browser exploit: this often renders a host entirely compromised, with such things as layer-four proxies inserted into the network stack. In this situation we see a reversal of the previous concept: higher-level exploits necessarily imply lower-level compromise.
Furthermore, we believe an exception may exist with relation to encryption technologies found at layer six of the model. Consider that though an attacker may ‘possess’ lower levels such as those involving network hardware, a properly devised encrypted communication scheme will betray no usable information to the attacker. While the transmission of information may ultimately be denied, the ‘confidentiality’ aspect of the McCumber model is not affected. In addition, the attacker may only disrupt this traffic at the risk of betraying his own presence: hence, in theory it appears that this traffic will likely be left to function normally, all the while retaining its aspect of confidentiality.
Problems and Issues
We encountered very few real problems with this laboratory exercise, although we found having two network interfaces running for a virtual machine created dual connections between hosts on different subnets. This initially proved confusing, as NetBIOS names resolved to IP addresses which were unexpected. The team corrected the issue by disabling the second subnet interface for all hosts running in the test environment.
Conclusions
In conclusion, the team has attempted the compromise of three Microsoft Windows based operating systems. The first, Windows XP SP0, fell easily to basic passive reconnaissance and research coupled with the Metasploit ‘killbill’ plug-in. The second machine, a Windows Server 2003 installation, remained unexploited despite extensive passive reconnaissance and the reverse engineering of Microsoft security bulletins. The team attempted approximately a dozen configurations of the Metasploit MS08-067 plug-in against this host and remained unsuccessful, as no additional vulnerabilities were found. Similarly, the Windows XP SP3 machine betrayed no vulnerabilities even though active measures were employed: no exploit was attempted, as research regarding the second machine indicated none was possible. The team speculated that these machines were too clean with regard to their use environment, and questioned the practicality of such configurations, as they do not reflect normal usage. Additionally, the team presented a discussion of the penetration tools and exploits encountered in this exercise, and noted that most target layer seven of the OSI model. Furthermore, the team found that the use of active tools does not contribute substantially to the effectiveness of penetration testing if careful research and information gathering are done in preparation. Finally, an examination of the OSI model reveals that the exploit of lower layers implies compromise of higher layers, with exceptions.
Charts, Tables, and Illustrations
Figure 1: Lanmap used to passively fingerprint Windows XP SP0.
Figure 2: Compromise of Windows XP SP0 with Metasploit ‘ms04_007_killbill’ plug-in.
Table 1: Microsoft Server 2003 SP2 (NT v5.2) security patches since October 23, 2008 (MS08-067).
Report Date (Microsoft Security Bulletin) | Patch title | Evaluation: possible usage |
July 14, 2009 (MS09-032) | Cumulative Security Update of ActiveX Kill Bits | Internet Explorer based, user interaction required: none |
July 14, 2009 (MS09-029) | Vulnerability in the Embedded OpenType Font Engine Could Allow Remote Code Execution | User interaction required: none |
July 14, 2009 (MS09-028) | Vulnerabilities in Microsoft DirectShow Could Allow Remote Code Execution | User interaction required: none |
July 9, 2009 (MS09-026) | Vulnerability in RPC Could Allow Elevation of Privilege | Affects third party RPC clients; no third party clients detected on target: none |
July 9, 2009 (MS09-025) | Vulnerabilities in Windows Kernel Could Allow Elevation of Privilege | Valid logon credentials required, essentially local exploit: none |
June 9, 2009 (MS09-023) | Vulnerability in Windows Search Could Allow Information Disclosure | User interaction required: none |
June 9, 2009 (MS09-022) | Vulnerabilities in Windows Print Spooler Could Allow Remote Code Execution | Windows Printing Service was not detected on target: none |
June 9, 2009 (MS09-020) | Vulnerabilities in Internet Information Services (IIS) Could Allow Elevation of Privilege | IIS was not detected on target: none |
June 9, 2009 (MS09-019) | Cumulative Security Update for Internet Explorer | Internet Explorer based, user interaction required: none |
June 9, 2009 (MS09-018) | Vulnerabilities in Active Directory Could Allow Remote Code Execution | Active Directory service was not detected on target: none |
April 14, 2009 (MS09-015) | Blended Threat Vulnerability in SearchPath Could Allow Elevation of Privilege | User interaction required: none |
April 14, 2009 (MS09-014) | Cumulative Security Update for Internet Explorer | Internet Explorer based, user interaction required: none |
April 14, 2009 (MS09-013) | Vulnerabilities in Windows HTTP Services Could Allow Remote Code Execution | User interaction required: none |
April 14, 2009 (MS09-012) | Vulnerabilities in Windows Could Allow Elevation of Privilege | Valid logon credentials required, essentially local exploit: none |
April 14, 2009 (MS09-011) | Vulnerability in Microsoft DirectShow Could Allow Remote Code Execution | User interaction required: none |
April 14, 2009 (MS09-010) | Vulnerabilities in WordPad and Office Text Converters Could Allow Remote Code Execution | User interaction required: none |
March 10, 2009 (MS09-007) | Vulnerability in SChannel Could Allow Spoofing | Appears to be a Windows Domain issue; Microsoft Active Directory/Domain Controller usage was not detected on target: none |
March 10, 2009 (MS09-006) | Vulnerabilities in Windows Kernel Could Allow Remote Code Execution | User interaction required: none |
February 10, 2009 (MS09-004) | Vulnerability in Microsoft SQL Server Could Allow Remote Code Execution | MS SQL Server was not detected to be running on target: none |
February 10, 2009 (MS09-002) | Cumulative Security Update for Internet Explorer | Internet Explorer based, user interaction required: none |
January 13, 2009 (MS09-001) | Vulnerabilities in SMB Could Allow Remote Code Execution | Marked as ‘theoretical’ code execution possible; no useful exploit found except a Metasploit DoS-only plug-in, which we did not find listed in our Metasploit installation. Regardless, as this is a simple DoS attack, we see no real use for it in our test: none |
December 17, 2008 (MS08-078) | Security Update for Internet Explorer | Internet Explorer based, user interaction required: none |
December 9, 2008 (MS08-076) | Vulnerabilities in Windows Media Components Could Allow Remote Code Execution | User interaction required: none |
December 9, 2008 (MS08-073) | Cumulative Security Update for Internet Explorer | Internet Explorer based, user interaction required: none |
December 9, 2008 (MS08-071) | Vulnerabilities in GDI Could Allow Remote Code Execution | User interaction required: none |
November 11, 2008 (MS08-069) | Vulnerabilities in Microsoft XML Core Services Could Allow Remote Code Execution | User interaction required: none |
November 11, 2008 (MS08-068) | Vulnerability in SMB Could Allow Remote Code Execution | Infamous reflection attack via Metasploit smb_relay, etc.; user interaction required: none |
References
Bo, J., Xiang, L., & Xiaopeng, G. (2007). MobileTest: A Tool Supporting Automatic Black Box Test for Software on Smart Mobile Devices. pp. 1-7.
Chen, S.-J., Yang, C.-H., & Lan, S.-W. (2007). A Distributed Network Security Assessment Tool with Vulnerability Scan and Penetration Test. pp. 1-4.
Haeni, R. E. (1997, January). Firewall Penetration Testing. pp. 1-25.
Kuhnhauser, W. E. (2003). Root Kits: An Operating Systems Viewpoint. pp. 12-23.
McDermott, J. P. (2001). Attack Net Penetration Testing. pp. 15-21.
Mirkovic, J., & Reiher, P. (2004, April). A Taxonomy of DDoS Attack and DDoS Defense Mechanisms. pp. 39-54.
Snyder, R. (2006). Ethical Hacking And Password Cracking: A Pattern For Individualized Security Exercises. pp. 13-18.
Zakeri, R., Shahriari, R., Jalili, R., & Sadoddin, R. (2004). Modeling TCP/IP Networks Topology for Network Vulnerability Analysis.
This group starts off with an abstract that lays out the purpose of this lab and gives a brief explanation of the steps involved. The group’s abstract seemed redundant in that it explains twice how each of the systems is going to be tested. The importance of this lab and how it relates to the other labs is given in an introduction. In this introduction the group explains that this lab relies on past labs that explain how passive and active reconnaissance work. The group did an excellent job of explaining how the use of queries to gain information is not regarded as an attack unless a great number of queries are done in succession. The group then uses this concept to argue that if a set of queries is spread out rather than done in succession, it can constitute a passive scan of a system. The group then explains that they will use this idea to do passive reconnaissance on three systems to gain information which they will use to exploit each of the systems. The group will be using Windows XP SP0 for the first test, Windows Server 2003 SP2 for the second test, and Windows XP SP3 for the third test. The group then gives a good introduction to the literature reviews for this lab. They tie this lab to the last lab and give an overall description of the articles presented. Then the group goes into an explanation of each of the articles. The explanations are given in a way that shows how each article could be used in securing a network. The explanations do give a summary of each article, but they do not tie the articles to one another, nor do they relate the articles to the current lab directly. Finally, the explanations of the articles do not cover the method or research used in each article, or even note that they did not cover either one.
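The "spread-out queries" argument above can be made concrete with a small sketch. The following is a minimal, hypothetical Python illustration (the function name, gap values, and jitter scheme are invented for illustration, not taken from the team's report): it builds a pacing schedule that separates probes by at least a minimum gap plus random jitter, so the probes never cluster into the rapid-succession pattern the group identifies as the trigger for suspicion.

```python
import random

def slow_scan_schedule(ports, min_gap_s=60.0, jitter_s=30.0, seed=None):
    """Build a pacing schedule for probing ports one at a time.

    Each probe is separated from the previous one by at least min_gap_s
    seconds plus random jitter, and the port order is shuffled, so the
    probes approximate intermittent 'normal' traffic rather than a scan.
    Returns a list of (offset_seconds, port) pairs.
    """
    rng = random.Random(seed)
    shuffled = sorted(ports, key=lambda _: rng.random())  # randomize probe order
    schedule = []
    t = 0.0
    for port in shuffled:
        schedule.append((t, port))
        t += min_gap_s + rng.uniform(0, jitter_s)
    return schedule

# Five common ports probed over several minutes rather than seconds.
plan = slow_scan_schedule([21, 22, 80, 139, 445], min_gap_s=60, jitter_s=30, seed=1)
```

The point of the sketch is only the timing discipline: whether such pacing actually evades a given IDS depends on its detection window, which the reviews do not specify.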
The group’s methodology section started off with a very good explanation of how they split the lab into two parts and how they set up the first part. They are thorough in explaining the systems they are going to use and any information that will be used in the tests. They also do a great job of explaining the scope of their penetration tests, noting which tests will be conducted and which will not. The group gave a very detailed explanation of how the tests were to be carried out. They gave rules the group would follow to keep the test as realistic as possible, such as obtaining the IP addresses even though they already knew them. Some results were given in the methodology, but only enough to explain other actions that needed to be taken to complete the tests. The group gave the specific commands and parameters used in each step of gathering information to determine the operating system and IP address of each machine. They then explain how they compromised the Windows XP SP0 machine with little effort using a Metasploit exploit. The group next goes into a very thorough explanation of how they tried to penetrate the Windows Server 2003 system. The group did very well in keeping track of which exploits were tried, but I did not see an actual plan, rather a series of attempts at penetrating the system using anything they could, as long as it remained within the scope of the rules discussed earlier. The group explained that even though they mounted a full assault against the Windows XP SP3 system, they could not find any exploitable vulnerabilities. Given this, the group gave up on the third system and concluded the methodology. Even though less research was put into exploiting the third system, the group concluded that the Windows XP SP3 vulnerabilities were identical to those of Windows Server 2003, so they did not need to continue.
The group gives a very good explanation of why they failed to penetrate two of the three systems. The group explains that the systems were too clean: there was no user interaction with the computers, and no normal usage occurred on them either. This lack of real-world usage leads to a lack of any service through which to exploit the system. The group goes on to explain different means that can introduce vulnerabilities into systems, such as flaws in applications, and restrictions from policies and firewalls that cause users to bypass security to increase usability. The group gives an excellent explanation of why there is a bias toward the application layer of the OSI model when it comes to vulnerabilities and exploits. The group also gives a very good explanation of why automated active reconnaissance tools like Nessus or Nmap will not provide any more information than passive reconnaissance of a system. They explain that Nessus could be made to detect all current vulnerabilities only if the Nessus team applied a vast amount of vigilance and effort to keep the tool up to date on the most current vulnerabilities and to make it less biased. Next the group explains very nicely how attacks at different layers of the OSI model can affect either lower or upper layers. They also explain that, through encryption at lower layers, the confidentiality of information will remain intact even after a denial-of-service attack. The only issue the group had with this lab was confusion between network cards in the virtual environment, and they even gave the solution they used while doing the lab. The conclusion for this group just gave an overview of what was accomplished in the lab. The only improvement for this conclusion would be an explanation of what the group learned overall in doing the lab.
Team three as usual presents a lab report that is complete and mostly within the bounds of the syllabus. Team three has always presented an abstract that gives an overview of the lab and the steps of the process. With that in mind, team three has still never written an abstract that falls within the bounds of the syllabus. The abstract is not the required length of two paragraphs. According to the syllabus, an abstract of anything less than two paragraphs will be judged as poor scholarship. The introduction that team three presents is a very good introduction to the lab exercise and does a very good job of putting the reader in the mindset of the literature review and the process of the lab. The literature review that team three presents explains the articles that were required reading for lab six. While they do not contain headings that break up the articles reviewed, they still do not show the level of cohesion required. Like team one, they break the articles down in terms of categories of tools and how they can be used to break into a target system. This amounts to a list of analyzed articles explaining tools rather than a review of the state of the literature on the overall topic. Team three also does not relate any of the articles reviewed to the steps of the lab; this was also one of the requirements of the syllabus. Team three’s literature review does not encompass an academic or scholarly review of the state of the literature on the topic. Hopefully team four will show a level of cohesion in the literature review for lab seven that is respectable for high-level graduate students. The methods section provided by team three is rather complete and does a very good job of explaining the how and what of the process they will be performing. I do like how team three included a fourth machine in their penetration testing exercise for comparison. I did the same for the team two lab, as this shows a level of dedication to the outcome of the lab.
However, team three does not explain the when and why of the methods. They fail to explain when they expect to receive results on their tests, as well as why they are looking for the results they are attempting to achieve. Like team one, they include some information in their methods section that would better fit in their results section. The methods section is meant to explain their process; the results section is meant to explain the outcome of their methods. Like the other teams, team three was unable to exploit any machine other than the Windows XP SP0 machine. This adds to the credence and believability of their results. I agree with their point that a Nessus scan is generally no more informative than a passive network scan and some research. This also makes for a much more “secure” attacker, as they don’t need to give away any information to gain target information. I did fail to find any discussion of the number of exploits attempted until success or failure. Their discussion of OSI layers is also a point I agree upon; exploiting a lower layer implicitly exploits the upper layers. In their issues section they list multiple NICs on the VMs as a confusing issue. This should never be a problem for technologists. They did fail to mention the inability to attack two of their three systems successfully; this seems to be an oversight, as any issues should be recorded. I agree with team three’s conclusions.
The introductory paragraph is a good synthesis of all of the previous lab activities and ties them in to this week’s lab. The literature review lacks cohesion and synthesis between the assigned readings and the lab activities. In spite of this, each of the papers is dealt with insightfully, and the group gives their opinion on whether or not they agree with some of the opinions stated in the readings. It seemed that because of the higher number of assigned readings, each paper wasn’t handled in much detail. Because of this, some of the paper write-ups are very brief and the group doesn’t have the chance to tie them all together.
The methodologies are very detailed and cover a wide range of possible exploit scenarios. I like how the group identified which areas of the system they intended to test, as well as methods they didn’t consider utilizing because of biases introduced by team members setting up the environment. The identification of the XP SP0 host by NetBIOS name should have been discussed a little more. There is bias from knowing the details of the machines already, and the default NetBIOS name was “XPSP0VM”; this wouldn’t likely be encountered in a real-world scenario. The compromise of the XP SP0 machine lacked detail surrounding what payload was used and what was done once the host was compromised. The analysis of the failed attempt on the Server 2003 virtual machine was a good application of previous labs. Even though the exploit didn’t work, it showed that a good level of depth was put into the vulnerability assessment.
The findings section contains a good assessment of the issues with the test environment, primarily the lack of users and therefore applications. Quite a few of the vulnerability exploits contained within Metasploit target programs installed on the operating systems. The assessment of the Windows firewall, however, isn’t necessarily true in my experience. The user unblocking applications in the Windows firewall only allows those programs outbound. The only time ports are opened inbound is when the user adds exceptions to the firewall. As for the conclusion that Nessus found nothing that wasn’t already known, the one benefit of Nessus is the ability to run the scans remotely. If you’re targeting a remote system, monitoring its network traffic is going to be pretty much impossible. If you’re on a LAN this is a completely different scenario, but distance from the target is an advantage to the attacker. Using the distance factor often makes attribution impossible.
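The inbound/outbound distinction drawn above can be illustrated with a toy model. This is a hypothetical Python sketch (the class and method names are invented for illustration and do not correspond to any real Windows Firewall API): unblocking a program grants it outbound passage only, while inbound ports stay closed until an explicit exception is added.

```python
class ToyFirewall:
    """Toy model of the reviewer's point: program unblocks and inbound
    port exceptions are separate rule sets, not one switch."""

    def __init__(self):
        self.unblocked_programs = set()   # programs allowed outbound
        self.inbound_exceptions = set()   # ports explicitly opened inbound

    def unblock_program(self, name):
        # 'Unblocking' an application only touches the outbound set.
        self.unblocked_programs.add(name)

    def add_inbound_exception(self, port):
        # Inbound access requires a deliberate, separate exception.
        self.inbound_exceptions.add(port)

    def allows_outbound(self, program):
        return program in self.unblocked_programs

    def allows_inbound(self, port):
        return port in self.inbound_exceptions

fw = ToyFirewall()
fw.unblock_program("browser.exe")
assert fw.allows_outbound("browser.exe")
assert not fw.allows_inbound(445)   # unblocking opened nothing inbound
fw.add_inbound_exception(445)
assert fw.allows_inbound(445)
```

The model deliberately ignores stateful return traffic and rule precedence; it captures only the directionality argument made in the review.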
Had the team broken up their abstract paragraph, they would have met the required length for the abstract. Break up your ideas into separate paragraphs; use some of your introduction in the abstract to get the required length. As always, the team has a great introduction. Why are they the only team that does this? Is it because it is not required and this team does more work? It looked like the team tried to make a cohesive literature review, but it still read like a list: one article was discussed and cited, and then the next one. Combine the articles. If they have similar focal points, compare them to each other as well as contrast them. Most of the topics required in the literature review were not addressed. The team talks about how the articles relate to the lab experiment, but that is about the only requirement they meet. It did not seem like the team needed to cite much in their literature review. Should I assume that the sentence before the citation is what is from the article?
It was awfully nice that the team gave everyone their IP addresses. We will be needing those for the next lab experiment, when the teams attack each other. This team seemed to do the most work, or at least write the most, in their methods section. Just like the other teams, this team was unable to compromise any machine other than their Windows XP SP0 host. Failure is an option. There is no need for disappointment; be happy that you tested your system and could not compromise it. This might look better for lab experiment number 7, when other people are trying to compromise your system. The main issue this team found was that the systems were not “real” enough. Get some user activity and try again. Make sure that we will not be able to compromise your system. This team is one of the few teams that finds issues and also finds ways around them. Good job on including that in the problems and issues section. The conclusion seems on the long side; cut it down some for the next lab report. I liked that this team included pictures from the lab experiment. That is a good way for them to prove that they actually performed the lab and did not just make it up. Once again, the tags were not included.
Team 3’s abstract is well written and gives a good overview of what they will be attempting in lab 6. As always, team 3’s introduction is good, and is a summary of all of the previous lab activities and how they tie into lab 6. The literature review is not as cohesive as perhaps it should be; however, each article is summarized in detail, and the group compares and contrasts the articles and gives their opinions on the information presented in them.
Team 3’s methods section is very detailed. I like the approach team 3 took in dividing the lab into two separate testing domains. This helped to identify the areas they intended to test as well as methods they didn’t intend to use. I also like the fact that they eliminated the use of account cracking during the penetration tests, because as a team they couldn’t determine a way to simulate a realistic environment with hypothetical user accounts. Any methods they used involved biases inherent to members of the team choosing the account parameters, even if blind methods were to be employed.
Their results and discussion section describes the issues with the test environment, primarily the fact that it lacks users and usage, so in their opinion it really is not a good model of a real network system. Many of the vulnerability exploits contained within Metasploit target programs installed on the operating systems.
Their conclusions section was quite detailed, and their screenshots, charts, and tables were well done.
Team three starts with their abstract and explains what is going to happen within the lab. They also explain that there will be literature related to the lab, results of the experiments, and thoughts on them. They then go on to their introduction section and discuss the previous labs and the concepts addressed in each. They then discuss a previous conversation related to the presence of any real risk to an attacker. Then the team goes into what passive attacks are, and their point of view on when a passive attack becomes an aggressive attack. The ideas of passive and aggressive attacks were well laid out within the section. After this section the team goes into the literature review, starting with an overview of this week’s articles. They did a cohesive literature review that went over the various points of the literature and compared them with each other. They also included the way the articles relate to this week’s lab exercises. Next the group goes into the methodology section, within which they describe what occurred during the hands-on section of the lab. There were parts of the methodology section that could have gone into the results section. Next the team discusses their results and findings. In this section they did a good job explaining some of the reasons why their attacks were not as successful as they hoped. Their findings were more of an overview of what occurred during the section; they did not break down the results for each system well. It made me want to know how long the attacks took, how many attempts there were, and, when one tool failed, whether there was another tool that might pick up the slack where the first failed. The team then discussed their issues, of which they had few. They concluded their lab by giving an overview of the main exploit they chose for each system. They also included why, in their lab, the aggressive attack did not work as well as expected.
An unexpected addition was a table giving a nice layout of the exploits they attempted, what the patch for each exploit is, and what the usage of that exploit is. Overall the team did a good job and provided a good amount of detail. Where the team could improve is in discussing the tools used and whether they could be used in other ways.
In the abstract section of the laboratory report, team three gave a brief overview of what was to be accomplished in the lab six assignment.
In the introduction section, group three reiterated their definition of passive reconnaissance, in that it contains the three concepts of uncertainty, invariant risk, and limitation of scope; passive reconnaissance was used to determine what operating systems were being used by the targeted virtual machines. The group also explained the attacker’s behavior while attacking a system: if his/her behavior differs from that of a legitimate user, he/she will be more likely to be detected.
In the literature review section of the laboratory report, team three had an issue with the firewall penetration testing article when they stated “One issue that I had with his article is that it states that a demilitarized zone is necessary to operate a web server through a firewall. Many routers now have the ability to forward a single port through a firewall to a web server or other service, eliminating the need to have a demilitarized zone.” When the group discussed the password article, I had to disagree with the statement “Password recovery tools such as John the Ripper will continually guess the password until successful.”, because these cracking tools are only as good as the password lists they use. If the password is not in the password list, then the tool will not break the password. In general, it seemed that team three needed to correlate the articles to the laboratory assignment.
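The wordlist limitation argued above can be demonstrated in a few lines. This is a minimal, hypothetical Python sketch (the function name, hash choice, and passwords are invented for illustration; real tools like John the Ripper also offer rule-based mangling and incremental brute-force modes beyond pure dictionary attacks): a dictionary attack recovers a password only if it appears in the wordlist.

```python
import hashlib

def crack_with_wordlist(target_hash, wordlist):
    """Dictionary attack: hash each candidate and compare against the
    target. Returns the recovered password, or None if the password
    is simply not in the wordlist."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None  # the tool is only as good as its wordlist

wordlist = ["password", "letmein", "123456"]
easy = hashlib.sha256(b"letmein").hexdigest()            # in the list
hard = hashlib.sha256(b"c0rrect-h0rse-battery").hexdigest()  # not in the list

assert crack_with_wordlist(easy, wordlist) == "letmein"
assert crack_with_wordlist(hard, wordlist) is None
```

The sketch uses SHA-256 for brevity; the principle is the same regardless of the underlying hash the cracked system uses.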
In the methodology section of the laboratory report, team three used the ‘lanmap’ tool, which was run on the nUbuntu machine, and the ‘nbtstat’ tool built into the Windows XP tool host in conjunction with ‘Nete’ to do passive reconnaissance on the targeted virtual machines to determine what operating systems were being used. Team three listed which machines were to be targeted when they stated “Windows XP SP 0 (192.168.3.3) as a target, Windows Server 2003 SP 2 (192.168.3.4) as a target also, Windows XP SP 3 (192.168.3.100) as a tool host, and nUbuntu (192.168.3.101) as an observer.” However, I was not sure what system they were going to delegate as their third target. Team three was able to exploit Windows XP Service Pack 0 with the Metasploit tool set, just as many of the other teams did. The group also seemed to have trouble exploiting Windows Server 2003. Team three, like most of the other groups, was not able to exploit Windows XP Service Pack 3.
In the results section, team three concluded that their group, like most of the teams, had a very low success rate in exploiting the operating systems, with the exception of Windows XP Service Pack 0. Team three attributed the low success rate to the way the test environment was configured. Team three stated “Foremost, the environment is too clean. It lacks users and usage, so it really is not a good model of a real network system. There are no services running, no traffic moving in and out. There is a complete lack of intermittent devices that could be used as preliminary targets with the ultimate goal of compromising the system.” I have to agree with their observation, for in actuality the machines are sitting there doing nothing.
In the issues section, team three described an interface problem when they stated that they found having two network interfaces running for a virtual machine created dual connections between hosts on different subnets.
In the conclusion section, team three restated their results of the laboratory assignment and concluded “that the use of active tools does not contribute substantially to the effectiveness of penetration testing if careful research and information gathering is done in preparation.”
I think that group 3’s write-up for lab 6 was good. The abstract and introduction for this lab were very good, and the literature review was good as well. The group answered all of the required questions for the literature review. Citations were present throughout the literature review, though not always properly formatted elsewhere in the lab; the author and year of each reference were included, and all of the page numbers were present. For this lab, the group answered all of the desired questions. The group used many interesting methods for finding the OS of the target machines, though some of the attempts to discover the OS were not passive: many scans were port scans rather than analyses of local traffic. However, the information about the exploits used was very detailed. Finally, the conclusion to this laboratory was also well done because it accurately sums up their procedures and findings.
This team starts off with a well-written abstract and introduction. They indicated what is going to happen and elaborate on the lab portion. This team, like teams 2 and 4, chose Windows XP SP0, SP3, and Server 2003. This team, like others, used a combination of tools to identify the specific operating system. They used lanmap and nbtstat to retrieve the NetBIOS name and IP. Team 1 said that relying on the NetBIOS name to identify the host is not a guarantee: a Windows server system could be named XP-home. Like other teams, their scan against the Windows Server 2003 system found it difficult to footprint the operating system. The team used the tool lanmap, which is different from any other team’s footprinting tool. This was run until a “reasonable” network map was generated. What is considered reasonable, and why is the tool mapping the network? Like other groups, the team was able to easily penetrate the Windows XP SP0 machine. They did indicate that the environment was “too clean,” which is true. This is often not the case for production equipment: users log in, browse the web, modify settings, etc., and with that kind of activity there is often an available exploit. Even with the aid of Metasploit, the team was unable to exploit Windows Server 2003 since the patches were up to date. If the patches had not been current, the Metasploit exploit discovered by this team would have exploited the system.