Abstract
The purpose of Lab 6 is to build upon the previous labs. This week, using the matrix of attacks we created earlier, we will choose a tool or exploit method and attempt to exploit three systems, ideally on the first try. As in Lab 5, we will choose our exploits carefully, create a plan for exploiting each system, and report how many attempts each exploit required. After each attempt we will refine our methods as needed, continue exploiting, and report our findings.
For the third system we will use Nessus to evaluate the system before attempting to exploit it. Each system’s exploit attempts are documented in our findings section. Literature related to the topic is then evaluated and applied within the scope of the exercise, and a methods section is developed to organize this information. All of this is done to prepare the teams for the next lab.
Literature Review
This week’s review of published literature builds upon the premise, presented in the Lab 5 articles, that vulnerability assessments are a necessary process in protecting network security and that using automated tools in combination with penetration testing makes that process far more efficient. In the Lab 5 exercise we analyzed and tested methods of actively probing the network to discover vulnerabilities without using tools; by understanding the components of our networks through analysis of configuration and security documentation, we developed an exploit table. All of the articles discussed in this literature review relate directly to what we have been learning in the previous five labs; each lab has built upon the previous labs, and the articles have done the same. The common thread tying the Lab 6 articles together, with the exception of Robin Snyder’s article, is the importance of vulnerability assessment and the observation that combining it with penetration testing makes the process even more efficient. In this lab we will use tools to discover vulnerabilities in a system and identify potential areas of exploitation.
The first type of tool presented to us deals with testing mobile devices. This is different from what we have discussed in the past; however, these devices raise the same concerns about vulnerabilities as networks do, and the same use of automation, care, and concern applies. In their article “Mobile Test: A Tool Supporting Automatic Black Box Test for Software on Smart Mobile Devices,” Jiang Bo, Long Xiang, and Gao Xiaopeng introduce MobileTest, a tool supporting automatic black box testing of software on smart mobile devices (Bo, Xiang, & Xiaopeng, 2007). MobileTest targets smart mobile devices such as PDAs and smartphones with embedded operating systems such as Windows Mobile, Symbian, Linux, and Palm. It can build sophisticated, maintainable, and reusable test case libraries for testing system-level and application-level software on a variety of smart mobile devices (Bo, Xiang, & Xiaopeng, 2007). MobileTest was compared against other automatic testing tools, such as TestQuest Pro and Digia AppTest, which are also used to test mobile applications but have difficulty testing at a high level of automation. Test results showed that automated test tools were more efficient than manual tests and that MobileTest was superior to TestQuest Pro and Digia AppTest.
Vulnerability scanners are another tool used to look for known weaknesses. However, there are issues with using these types of vulnerability assessment tools, in that they often provide false positive scores and/or may not completely detect certain types of problems. Shi-Jen Chen, Chung-Huang Yang, and Shao-Wei Lan, in their article “A Distributed Network Security Assessment Tool with Vulnerability Scan and Penetration Test,” tell us that vulnerability scanners are automated tools designed to scan for known vulnerabilities and weaknesses. They look for misconfigured file permissions, open services, and other operating system problems. Using information from a scanner in combination with other testing data, network administrators can gain valuable information about their networks and systems (Chen, Yang, & Lan, 2007). Penetration tests used in combination with a vulnerability scan are an effective means of uncovering hidden vulnerabilities, but the limitations of scanning tools restrict their use in a penetration testing environment: they will often return either too many results for the tester to sort through or, worse yet, identify false positives that waste precious testing time. By combining vulnerability scanning tools with penetration tests and analyzing the reports produced, a tester can get an idea of how serious the security defects in their network are.
Firewall penetration testing is an extremely important aspect of network security. According to Haeni (1997), firewalls are often regarded as the only line of defense in securing information systems. In his article “Firewall Penetration Testing,” Reto E. Haeni states that security scanners can help in firewall testing but cannot replace manual tests. This seems to fly in the face of what the authors of the other articles say about using automated testing for efficiency. Perhaps, since this article was written in January 1997 and vulnerability assessment tools have become much more sophisticated since then, Haeni would think differently today. He does, however, convey, as the authors of last week’s articles did, that one of the most important steps in testing is to make sure a plan is in place, and that the testing is done by an independent third party with integrity, writing skills, and technical skills.
The sophisticated computer networks we all work on and the various applications we use in our daily work make network security extremely important and also extremely vulnerable to attack. The complexity of analyzing network vulnerabilities increases as the number of hosts and services increases. As mentioned in our previous articles, an automated approach to vulnerability analysis is necessary. Reza Zakeri, Hamid Reza Shahriari, Rasool Jalili, and Reza Sadoddin, in their article “Modeling TCP/IP Networks Topology for Network Vulnerability Analysis,” say that several tools exist which analyze host vulnerabilities in isolation, but that to protect the network against attacks we need to consider the overall network vulnerabilities and the dependencies between the services that hosts provide. Analyzing network vulnerabilities in isolation is inefficient. The authors of the articles we researched in Lab 5 and those presented in Lab 6 agree that vulnerability analysis assists network administrators in finding vulnerabilities and weaknesses that may lead to security violations. Zakeri, Shahriari, Jalili, and Sadoddin use a man-in-the-middle attack case study to demonstrate how a network can be vulnerable to attack.
In his article “Root Kits-An Operating Systems Viewpoint,” Winfried E. Kuhnhauser introduces root kits as tool boxes containing a collection of sophisticated tools for attacking computer systems. The security threat posed by root kits is serious: the attack is quick, fully automatic, has lasting effects, has a high success probability, and requires very little knowledge (Kuhnhauser, 2003). The attack happens so fast that it is virtually impossible to detect. In our previous lab exercises we analyzed and tested methods of actively probing the target network to discover vulnerabilities, and by understanding the components of our target networks through analysis of configuration and security documentation we developed an exploit table. In lab 5 we didn’t use any technical tools to discover vulnerabilities in a system, but instead used the vendor’s security documentation to identify potential areas of exploitation. In lab 6 we will attempt network penetration testing, using various methods and tools of identifying the components in a target network.
Robin Snyder’s paper “Ethical Hacking and Password Cracking: A Pattern for Individualized Security Exercises” switches gears from our previously discussed articles about vulnerability assessments and exploit tools. The paper discusses the design and implementation of web-based training exercises (Snyder, 2006). A key point of the article is that students must be made aware that password cracking and recovery techniques are for educational purposes and should not be used unethically to attack systems; that goes without saying, even in the course we are taking. The author discusses SecureS, a software package developed to make it easy for students to do security exercises. This relates to what we have been doing overall in this class: we have been given a hands-on learning environment in which to learn about exploit tools and methods, and we now have the opportunity to actually use them. By understanding the process and components of exploit methods and tools in a training environment, we will be able to develop more effective countermeasures for defending against such attacks in the networks we either manage today or will manage in the future.
In his article “Attack Net Penetration Testing,” J. P. McDermott brings us back to penetration testing. Penetration testing is a critical step in the development of any secure product or system and a fundamental activity of information system security engineering (McDermott, 2001). According to McDermott, penetration testing follows one of two approaches: flaw hypothesis or attack trees. Flaw hypothesis is the most commonly used approach to penetration testing. The attack tree approach was developed by Sparta and is intended for penetration testing where there is less background information about the system (McDermott, 2001). McDermott’s approach organizes penetration testing using an attack net; it differs from both the flaw hypothesis and attack tree approaches but retains the benefits of each.
Distributed denial-of-service (DDoS) attacks are a rapidly growing concern; they pose an immense threat to the Internet, and many defenses have been proposed against them (Mirkovic & Reiher, 2004). In their paper “A Taxonomy of DDoS Attack and DDoS Defense Mechanisms,” Jelena Mirkovic and Peter Reiher introduce a taxonomy of DDoS attacks and a taxonomy of DDoS defenses. The goal of these taxonomies is to help the information security community think about the threats it faces and the possible defenses. One benefit of such taxonomies is easier cooperation among researchers, since attackers already cooperate to exchange attack code and information about vulnerable machines (Mirkovic & Reiher, 2004).
Methods
First, three virtual machines were selected for security testing. The operating systems chosen were Windows XP Professional (Service Pack 0), Windows XP Professional Service Pack 3, and Debian 4.0 Etch. Next, a plan to discover each operating system and version was created. Using Ettercap, the group decided to attempt to discover the operating system and version by analyzing web browsing. First, the Windows XP Professional machine was started and some casual web browsing was performed. While the machine was browsing web pages, the attack machine, running BackTrack 3, passively sniffed the traffic using Ettercap.
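For illustration, the short Python sketch below shows the general principle behind this kind of passive fingerprinting: watching the TTL and TCP window size of traffic the target generates on its own. This is not Ettercap’s code and was not part of the group’s procedure; it assumes the scapy library is installed and that the capture interface is named eth0.

# Minimal sketch of passive OS fingerprinting (illustrative only, not Ettercap).
# Assumes scapy is installed and the capture interface is "eth0".
from scapy.all import sniff, IP, TCP

def classify(pkt):
    """Print rough OS hints from the TTL and TCP window size of SYN packets."""
    if IP in pkt and TCP in pkt and pkt[TCP].flags & 0x02:  # SYN flag set
        ttl = pkt[IP].ttl
        win = pkt[TCP].window
        # Very coarse heuristic: Windows hosts commonly start with TTL 128,
        # Linux 2.6 hosts with TTL 64; real tools match full signature databases.
        guess = "Windows-like" if ttl > 64 else "Linux/Unix-like"
        print(f"{pkt[IP].src}: ttl={ttl} window={win} -> {guess}")

# Watch the wire passively; the target only needs to browse the web.
sniff(iface="eth0", filter="tcp", prn=classify, store=False)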
After viewing only a single web page, Ettercap was able to identify the client as Windows XP. Since the service pack level was not indicated, the group proceeded with the attack under the assumption that the service pack level was 0. Next, on the BackTrack machine, the Metasploit Framework was opened and an exploit was selected: the Microsoft RPC exploit. It was chosen because its “info” section stated that it reliably exploits Windows XP SP0 machines. Once the exploit was chosen, the win32_reverse payload was selected for its ability to provide a remote shell. Next, all of the parameters were set, including the local host address and the target’s IP address and open port, which had been obtained via Ettercap’s passive scan. Finally, the exploit was launched and successfully compromised the system, giving the BackTrack machine a remote Windows shell.
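As a simple illustration of the kind of pre-exploit check described above, the following Python sketch verifies that the MSRPC port is actually reachable on the target before an exploit is launched. It is not taken from the group’s procedure; the target address and port shown are placeholder values chosen for the example.

# Minimal sketch: confirm the MSRPC port is open before launching an exploit.
# The target address below is a placeholder, not the lab's real value.
import socket

TARGET = "192.168.1.10"   # hypothetical target address
PORT = 135                # MSRPC endpoint mapper used by the DCOM RPC exploit

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if port_open(TARGET, PORT):
        print(f"{TARGET}:{PORT} is open; an RPC exploit attempt is plausible")
    else:
        print(f"{TARGET}:{PORT} is closed or filtered; rethink the attack")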
For the next machine, Windows XP Professional Service Pack 3 was used, and the same fingerprinting method was applied. While the target machine was browsing the web, Ettercap reported the operating system as Windows 2000 SP4. This was not the correct operating system, but when an SMB connection was made the hostname was revealed as “XPSP3VM”. A hostname cannot be relied on to indicate the service pack level, however, so the group attempted the RPC attack, which should also work on Windows 2000, and it failed. This failure indicated that further reconnaissance should be performed before attempting more exploits. Since no exploits currently included in the Metasploit Framework work against Windows XP Service Pack 3, the test was concluded.
Finally, Nessus was run against the Debian 4.0 Etch virtual machine. First, the machine’s operating system was fingerprinted using Ettercap’s passive detection, which reported a Linux 2.6.x kernel; this was correct. Nessus was then run from an Ubuntu 9.04 virtual machine with all of the latest plugins, but the scan completed with no vulnerabilities found.
Findings
For the first test, everything went smoothly. Ettercap’s passive fingerprinting worked well enough to give a good idea of the operating system without performing any active reconnaissance. The exploit worked flawlessly on the first try, in part because Ettercap had already identified an open port where a successful exploit was possible. This method would be very beneficial to an attacker, because the exploit itself was the only active part of the attack.
For the next test, the attack did not go smoothly. Ettercap’s initial passive fingerprint resulted in an incorrect operating system and version detection. Although the discovered NetBIOS name indicated the correct service pack level, this is often not the case. To test the fingerprint, an exploit known to work on Windows 2000 SP4 was used; its failure indicated that the OS fingerprint was incorrect. This showed that other methods should be used to discover the correct operating system and version. In a real-world scenario, this would push an attacker toward an active form of reconnaissance to gain the highest level of certainty about the operating system before attempting more exploits. Since Metasploit is a framework for applying one’s own exploit code, with the included exploits serving as proofs of concept, none of the included exploits were of use against Windows XP Service Pack 3. While the payloads could still be used, a new exploit would have to be created. This, of course, does not mean that XP SP3 is not vulnerable to attack: although no common exploits would work, other attacks could be used. For instance, ARP poisoning, DNS spoofing, and DoS attacks could all work, but they are beyond the scope of this lab and were not used because they are not, strictly speaking, exploits. The distinctions between scans, attacks, vulnerabilities, exploits, and payloads seem to be overlooked, and the term “exploit” tends to absorb the qualities of all of them.
For the final test, the attack went as expected. Ettercap was able to correctly fingerprint the operating system. When dealing with Linux-based operating systems, the closest fingerprint that most tools can obtain is not a distribution name but a kernel version, because Linux itself is only a kernel. The fingerprint was therefore correct, but Nessus did not return any known vulnerabilities. This is because of the Linux distribution chosen and the packages installed: the version used is a minimal install with no common services running and, thus, no open ports. An exploit may therefore not be the best method of compromising the system. Of course, this all depends on what the ultimate goal of an attack would be. If the attacker wanted data off of the system, then common exploits would be of no use; however, a man-in-the-middle attack or a DoS attack could be used to attack the integrity and availability of the system.
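One quick way to confirm the “no open ports” explanation above is a simple sweep of common TCP service ports. The Python sketch below is only an illustration of that check, not a tool the group used; the target address and the port list are assumed values.

# Minimal sketch: sweep common TCP ports to confirm that a minimal Debian
# install exposes no services for Nessus to flag. Address and ports are
# placeholders for illustration.
import socket

TARGET = "192.168.1.20"
COMMON_PORTS = [21, 22, 23, 25, 53, 80, 111, 139, 143, 443, 445, 3306]

def sweep(host, ports, timeout=1.0):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
        finally:
            sock.close()
    return open_ports

if __name__ == "__main__":
    found = sweep(TARGET, COMMON_PORTS)
    if found:
        print(f"Open ports on {TARGET}: {found}")
    else:
        print(f"No common ports open on {TARGET}; a remote service exploit is unlikely to apply.")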
In the team’s second attack, on the Windows XP Service Pack 3 machine, a false operating system was returned. A planned attack will almost always work better than an unplanned attack: research into the system should be performed to accurately determine what is running before the attack begins. The second attack illustrates this. Had the team attacked the system based solely on the fingerprinting information, which reported Windows 2000 Service Pack 4 instead of Windows XP Service Pack 3, the attack or red teaming effort would have been unsuccessful. The fact that there were no known Metasploit exploits for Windows XP SP3 does not mean it is not vulnerable; other attacks could have been performed, as previously mentioned, but they are not classified as exploits. Likewise, if someone chose Nessus as their exploitation tool and simply attacked a system at random, there is a good chance they would not be successful without proper planning.
In the previous labs, the teams found that many tools and exploits target the upper layers more than the lower layers. What this means for red teaming and penetration testing is that the tools and exploits should be used for the upper layers, while different means should be used for testing the lower layers of the seven-layer OSI model. As for protecting the upper layers, penetration testers need to make sure that they are secured properly because of the bias that exists in the tools and exploits. This is not to say that the lower layers should be ignored, but the upper layers could be more at risk because of that bias.
It can be argued that when a layer is exploited, the layer above it is also exploited. This follows from how systems “talk” to each other: on one side, traffic starts at the application layer and travels down the stack until it reaches the physical layer, crosses the physical medium to the other system, and works its way back up. This is not always the case, since communication can take place through other layers and skip the lower layers, but if a layer is exploited, the layer above it can also be exploited. Another reason this holds is that, in order to exploit a layer, the lower layers must be exploited as well. The other side of the argument is that just because a layer is exploited does not mean the layer above it is vulnerable. From the research the teams have performed over the past few labs, it would seem that every layer is vulnerable to exploits, but the tools used in this lab show a bias toward exploiting the upper layers.
Issues
The team did not have many issues with performing this lab experiment. The team did not expect that the exploits would work on every system on the first try. Planned attacks seem to go much smoother than unplanned attacks. This lab was different from the rest that the teams have performed so far, but it will help with laboratory experiment 7. The team will use the issues encountered in this lab experiment to plan how to attack another team.
Conclusions
The first system that was attacked was exploited on the first try. The second attack returned the wrong operating system for the target; if that information had been used to attack the system, the attack would not have been successful. A planned attack has a better chance of being successful than an unplanned attack. This lab experiment will be very useful in lab 7: in this lab the team tested its own systems, while in the next lab experiment the team will be testing other teams’ systems, and the lessons learned here will be applied there. The team now knows that more research and planning needs to go into an attack or test before beginning.
Works Cited
Bo, J., Xiang, L., & Xiaopeng, G. (2007). Mobile Test: A Tool Supporting Automatic Black Box Test for Software on Smart Mobile Devices.
Chen, S., Yang, C., & Lan, S. (2007). A Distributed Network Security Assessment Tool with Vulnerability Scan and Penetration Test.
Haeni, R. E. (1997). Firewall Penetration Testing.
Kuhnhauser, W. E. (2003). Root Kits-An Operating Systems Viewpoint.
McDermott, J. (2001). Attack Net Penetration Testing.
Mirkovic, J., & Reiher, P. (2004). A Taxonomy of DDoS Attack and DDoS Defense Mechanisms.
Snyder, R. (2006). Ethical Hacking And Password Cracking: A Pattern For Individualized Security Exercises.
Zakeri, R., Shahriari, H., Jalili, R., & Sadoddin, R. (2005). Modeling TCP/IP Networks Topology for Network Vulnerability Analysis.
The group starts off with a good abstract. The abstract explains the purpose of the lab by describing how it builds on the previous labs and uses the past labs to choose tools and exploits for attacking three systems. The group gives a quick summary of the lab and explains each part of the paper. The abstract could have gone a little more into the purpose of this lab and given a description of why this lab is important.
The group starts the literature review by tying the previous labs into this one and, at the same time, tying in the general concept of the readings given in this lab. This is a good start because it is an introduction to what to look for while reading the articles. The literature review this group gave discusses each individual reading separately from the rest. For each reading the group starts off by giving a brief summary of the article. Later they begin to relate the article to the other labs and the current lab, but still do not give any real review of the article. Near the end of the literature review the group gives some opinions of the articles and points out some errors and omissions, but still does not show any information pertaining to each paper’s methodology or research, or the lack thereof. For most of the articles they do a good job of relating them to past labs and to the other articles in this lab. The group could have done a better job of describing the articles and giving a commentary-style review of each one.
The methodology the group put together seemed to lack some details on how certain attacks were done. Also, a lot of results were given in the methodology that could have been put in the findings section. In the methodology the group explains that they selected three systems to run penetration tests on: Windows XP SP0, Windows XP SP3, and Debian 4.0 Etch. They then describe how they gathered information on each system and used an exploit to try to penetrate it. The methodology should have included only the steps that were taken to perform each part of the lab, not the results. The group also seemed not to have put much effort into trying to exploit some of the systems; it looks like they tried once and then gave up. The group should have tried a few different tools and exploits to see if anything would work. The methodology could have been expanded to include the plan the group was going to use on each system, the commands used to run each tool, and the configurations used in the exploits. Examining just the methodology, it looks like very little went into each machine.
In the findings section the group gives a brief description of what happened with the Windows XP SP0 system, even though they do not state which system they were penetrating. They do explain that, in the attack on the first system, the only active part was the exploit itself, which would be beneficial to an attacker trying not to leave any trail behind. For the next test the group discovered that the fingerprinting used in the previous test did not work properly. The group attempted an exploit for the wrong operating system to show that it would not work. The group did a nice job of showing what would need to be done to actually exploit a Windows XP SP3 system; they explain that these attacks are outside the scope of this lab and leave it at that.
I would have been interested to see whether the group tried these attacks to see if they would have worked, but I understand the time constraints on getting this lab done. For the last test the group used Ettercap again to determine the operating system, and it revealed the correct version of Linux. Next the group used Nessus to test for vulnerabilities and came up with none. The group explained that the operating system did not give up any vulnerabilities because there were no extra services running on the system. The group then gave a good explanation of what would have been needed to actually penetrate the Linux system, such as a Man-In-The-Middle attack. One thing the group could have done to get more results out of a penetration test using Nessus was to use the Windows Server 2003 SP2 system created on their network; they probably could have gotten more results out of it than out of the Debian 4.0 Etch system. The group then revisits the second test to explain how, without a proper plan and proper research into the system being attacked, the exploit will fail even with a tool such as Nessus. The group also does a nice job of explaining that, because of the bias of the tools and exploits toward the upper layers of the OSI model, more care needs to be taken in securing the upper layers than the lower, while being careful to note that this does not mean ignoring the lower layers and concentrating solely on the upper ones. Last in the findings, the group does an excellent job of explaining how exploiting one layer can affect the layers around it, giving a couple of good reasons why this is true and explaining them briefly.
In the issues section the group claims that there were not many issues in doing this lab. The group states that they tried the attack once on each system and that was all, while the lab states that attempts should be tried several times. The group could have put more effort into attempting different ways to penetrate the systems rather than trying once and then quitting. In the conclusion the group does a nice job of explaining what was done in the lab and, more importantly, what they learned from it. They also do a good job of tying this lab into the next lab and showing how what they learned here will help them in the next lab.
Team one presents a good lab report for lab six. The abstract explains that the purpose of lab six was to build on the previous labs, especially lab five. While the abstract explains the steps that will be completed in the lab, the statement that this lab builds on the previous ones is rather obvious; according to the syllabus, the entire course is additive in nature. The abstract meets the requirements of the syllabus.
The literature review presented by team one is an improvement upon last week’s literature review. It contains a level of cohesion that team one does not usually show. However, it still reads like a list of tool categories, each with an explanation by the authors who wrote about that category. While this is effective for the lab report and course, it does not provide an academic or scholarly level of review as it applies to the state of the literature on the topic. They did do a good job of relating the articles, as they were discussed, to the lab six exercises. While this literature review was an improvement over their previous one, hopefully they will reach a very high level of cohesion in their lab seven literature review.
The methods section presented by team one gives the steps that the team used to complete the exploits of the three systems. The information presented was rather specific as to the steps used to exploit each system. This information belongs in the findings section rather than the methods section, as it details the success and failure of the exploits. With this information in the methods section, it is not an academic or scholarly methodology: it explains the how of what is going to be done, but it does not explain the who, what, where, and when, which is required to complete an academic methodology.
The findings presented by team one show more of the specifics that they began in the methods section. They explain that, using Ettercap, they were able to determine the operating system on all of the machines that were their targets. Using Ettercap was a good idea, and agrees with the steps that team four used for OS fingerprinting. Team one’s result of being able to exploit only the Windows XP SP0 machine also agrees with the other teams’ results. In my opinion this means that their results are very reasonable and plausible. I did not, however, see any discussion of the number of exploits tried before success or failure.
In the issues section, team one explains in very limited detail that, while they did not expect to exploit all three systems on the first attempt, they did expect to be able to do so successfully. This seems somewhat presumptuous on the part of team one; assuming they are better than the collective security team at Microsoft is just somewhat arrogant. They also state that one of their issues was that this lab was different than the others, but if the lab were the same as the others, why would we have even done it? The conclusions presented by team one are acceptable.
The literature review has a good introduction that appears to identify the common idea between all of the articles from this week’s readings. The rest of the literature review, however, abandons this common-thread idea and treats each article separately, some without any ties to this week’s lab exercises. The “Mobile Test” article is only given a brief summary; it isn’t tied to the lab exercise, nor does the group state whether or not they disagree with the findings in the paper. In the discussion of the “Distributed Network Security Assessment Tool” paper, it is hard to tell whether the statement that “there are issues with using these types of vulnerability assessment tools, in that they often provide false positive scores and/or may not completely detect certain types of problems” is a summary of the paper’s opinion or the group’s. The group makes references to performing penetration tests and vulnerability assessments but doesn’t discuss the differences or similarities between the two; the articles would have been a good source of information on this and would have helped the depth of the literature review. In the review of Haeni’s paper, “Firewall Penetration Testing,” the group comes very close to a discussion of the core of this week’s lab, the difference between automated and “manual” testing. Mention is made of the previous papers’ topic of automated testing, but it appears from the assessment of Haeni’s article that the group thinks automated testing is better. Wasn’t the idea behind lab six to do “manual,” sniper-like penetration testing using knowledge and cunning rather than brute force?
The methodology is pretty good, but there are a few holes in it that leave some questions. In using Metasploit for the first attack, the version of Metasploit isn’t mentioned. One might assume the most recent version, but Metasploit 3 doesn’t have a payload called “win32_reverse.” Also, no mention is made of what was done to the machine once the remote shell was obtained. While it could be assumed that you have complete control of the system and could do whatever you wanted, it would have been useful to discuss what exactly was done and why. The testing of the SP3 machine was extremely weak. There are, in fact, post-release SP3 vulnerabilities within Metasploit 3. Some research into the release date of Service Pack 3, along with the release dates of vulnerability fixes, would have yielded some fairly critical security holes in a machine with just Service Pack 3 installed. In your testing you would have found that none of those worked, because the machine is fairly recently patched, but it shows a lack of depth to the testing and a failure to apply what we’ve learned about researching vulnerabilities in previous labs. Finally, the Debian machine was also not researched as deeply as it could have been; running Nessus and finding no vulnerabilities isn’t a reason to just walk away.
The findings section was pretty much a restatement of the methodology. The only things it added were answers to the questions in the latter part of the lab. The answer to the question of bias in the tools is avoided: the group merely acknowledges the presence of bias but gives no further discussion. The answer regarding exploiting OSI layers is similarly vague.
Team 1 began by stating the purpose of the lab: to use the information gained from the previous labs to attempt an exploit of three different systems on the first try. Their intention was to plan each exploit and report their findings. For the third system, they intended to use a Nessus scan to evaluate the system prior to attempting an exploit.
Team 1 began their literature review by discussing their purpose for lab 5 and the previous labs. Team 1 then stated that the common thread tying the lab 6 articles together (except for Robin Snyder’s article) is the need for vulnerability assessments and the observation that combining them with penetration testing makes the process even more efficient. They then restated the purpose of lab 6: to use tools to discover vulnerabilities and identify potential areas of exploitation.
Team 1 reviewed the article Firewall Penetration Testing (Haeni, 1997). They misquoted Haeni as stating “firewalls are often regarded as the only line of defense in securing information systems”. What Haeni actually stated was “Firewalls are often regarded as the only line of defense needed to secure our information systems”. The missing word “needed” drastically changed the meaning of the sentence.
Team 1 then discussed Root Kits-An Operating Systems Viewpoint (Kuhnhauser, 2003), stating how root kits are a major security threat. In mid-paragraph they suddenly began a review of previous labs and then restated their purpose for lab 6: to “attempt network penetration testing, using various methods and tools of identifying the components in a target network”.
Team 1 included a review of Ethical Hacking and Password Cracking: A Pattern for Individualized Security Exercises (Snyder, 2006). Team 1 stated that this article “switches gears from our previously discussed articles about vulnerability assessments and exploit tools”. I would have to disagree: I believe this article is about vulnerability assessments and exploit tools. Although it states that it is intended to describe web-based learning exercises for students in the area of security education, in actuality it provides some very good information on how passwords are stored and how to recover them. Although it doesn’t provide the algorithms themselves, it does discuss the hashing methods MD4, MD5, and SHA-1. It also describes how the brute force method works in password recovery and explains how passwords can be salted to provide better protection.
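To make the salting point concrete, here is a brief, hedged Python sketch of one common way a salted password hash can be computed and verified. It is not taken from Snyder’s paper or the SecureS package; the choice of SHA-256 and the 16-byte salt are assumptions made purely for illustration.

# Minimal sketch of salted password hashing (illustrative only; the hash
# algorithm and salt size are assumptions, not Snyder's design).
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for the given password."""
    if salt is None:
        salt = os.urandom(16)  # a random per-password salt defeats precomputed tables
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the salted hash and compare it to the stored digest."""
    return hashlib.sha256(salt + password.encode("utf-8")).digest() == digest

if __name__ == "__main__":
    salt, stored = hash_password("example password")
    print(verify_password("example password", salt, stored))  # True
    print(verify_password("wrong guess", salt, stored))       # False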
Team 1 proceeded to discuss the methods used in this lab exercise. They began by using BackTrack and Ettercap against their Windows XP SP0 VM to determine the operating system. After successfully identifying the operating system, Team 1 attempted to compromise the Windows XP VM using the Metasploit Framework. They used the Microsoft RPC exploit with the win32_reverse payload and successfully obtained a remote shell. They again used Ettercap to determine the operating system of their Windows XP SP3 VM; Ettercap reported it as Windows 2000 SP4. They attempted the RPC attack against this VM and were unsuccessful. They then ran Ettercap against Debian 4.0 Etch, which correctly identified the Linux kernel, and ran Nessus to determine if there were any vulnerabilities; none were discovered. From this lab, Team 1 determined that a planned attack is better than an unplanned attack.
Team 1 begins with their abstract and gives an overview of what is to be accomplished within this week’s lab. The team then goes into the literature review. This section started well, going over different tools, relating them to the lab, and explaining their roles in exploiting vulnerabilities, but then it becomes broken down again article by article. When reading the articles, did the team think of creating an overall plan before implementing the testing later in the lab? Would attacks be more successful with such planning? In my opinion a project is usually more successful with preparation and planning. Yes, there are times where luck comes into play and works out for the attacker, but just as in war, attacks are planned to take out weak areas of the enemy and then break apart the infrastructure so it cannot function. The team then goes on to the methodology section and describes what will occur in the hands-on part of the lab. Here the group explains that they will be using Windows XP SP0, Windows XP SP3, and Debian Linux as their test operating systems to exploit; they also describe what tools they will be using and a little about how they plan to use them. The team then goes into the results section and gives their findings. They go into detail on the attacks they used, but it did not seem like they used a wide variety of tools. Would the testing for each group be more valid if more tools were used against each system? Are there tools that might be more useful against one system than another, rather than using the same attack against all three? The team goes on to discuss the OSI layers and the difference between attacks against the higher and lower levels. At some point, does it matter what type of attack is used as long as data is destroyed? Would dropping a bomb on a data center still be within the scope of cyber warfare? That would be exploiting the weakness of the building the system is kept in. After the findings the team goes into the issues that occurred and notes that planned attacks go over better than unplanned attacks. The team ends by briefly discussing planning attacks for the next lab and what they learned from lab 6.
In the abstract section of the laboratory report, team one gave a brief overview of what was to be accomplished in the laboratory assignment. The group also mentioned that this laboratory assignment is a precursor to the last assignment that will be conducted by the class.
In the literature review section, I did not understand why the group summarized the root kit article and then stated “In lab 5 we didn’t use any technical tools to discover vulnerabilities in a system, but instead used the vendor’s security documentation to identify potential areas of exploitation.” I presumed that the group was trying to contrast the rootkit article with the requirements of previous labs; however, there were no transition statements to indicate such a contrast. Group one also needed to connect the DDoS article with penetration testing or the lab assignments.
In the methods section, group one listed Windows XP Professional Service Pack 0, Windows XP Professional Service Pack 3, and Debian 4.0 as the three operating systems they would test in this laboratory exercise. Ettercap was used to determine which operating systems were in use, but Ettercap incorrectly identified the Windows XP Service Pack 3 machine. Team one, like some of the other groups, could not get Metasploit to obtain a Windows shell on any virtual machine except the one running Windows XP Service Pack 0. Nessus was used to find vulnerabilities in Debian, but no vulnerabilities were discovered.
In the findings section, group one, like the other teams, has come to realize that exploit tools such as Metasploit have limitations and that other avenues of attack beyond penetration testing are needed for exploiting operating systems with service packs. Group one, just like the other groups, agrees that penetration testing tools have a bias towards the upper layers of the OSI model.
In the issues section, team one stated that “The team did not have many issues with performing this lab experiment. The team did not expect that the exploits would work on every system on the first try. Planned attacks seem to go much smoother than unplanned attacks.” However, I was surprised that they did not list the limitations of the tools used as an issue, since this affected group one’s and the other groups’ ability to exploit certain operating systems.
In the conclusion section, group one briefly restated the results of the testing that was performed on the different operating systems. The team also stated that “The team now knows that more research and planning needs to go into an attack or test before beginning.”
I think this team had a fairly nice literature review. I noticed that tie-ins with the lab exercises were made, and even a bit of comparison between articles occurred. There appeared to be some analysis of the articles, notably the discussion of the “Firewall Penetration Testing” paper with respect to its age and relevance. Additionally, the team’s methods were spelled out fairly well; I was left with no real questions as to what had occurred. I cannot fault the conclusions drawn from this exercise, as they appear to align with the results of all the other teams performing it.
That is not to say some problems do not exist with this report, however. Foremost, I found the ‘forcing’ of traffic from the target machines to be somewhat controversial. I believe that passive fingerprinting can be accomplished from methods which are not as ‘contrived’ as this appeared to be. True, a typical network may have a much higher volume of usable host traffic than what was available in the test setup, but significant exceptions exist. It may be that the element of greatest importance in the scope of a ‘real’ test, for instance a network monitoring system, will generate very little outbound traffic. In this case, skill in passive reconnaissance ‘without’ user induced traffic becomes crucial. I simply point out that this team may have missed an important opportunity to explore the concept of ‘pure’ passive reconnaissance possible in this test environment.
Furthermore, I found little description of the research done to evaluate other means of attack. Metasploit is not the ‘only’ attack program available; in fact, most of the newest exploits will be standalone programs. If this team, as implied, called the test finished simply because the Metasploit framework had no plug-ins listing XP SP3 as a possible target, then I must conclude that this is poor methodology. I know for a fact that recent application exploits released within the last month work against XP SP3; as this group embraced user interaction as valid in the test, why were some of these not evaluated? I believe team five, which chose methods somewhat similar to this team’s, clearly showed that this could be done. Finally, a bit more discussion of the results of the Linux target test probably would have been in order.
A further criticism: I found the discussion of the laboratory questions to be a bit on the ‘light’ side. The discussion of the possible Nessus bias did not really address ‘if’ a bias existed, or ‘why’ the team believed the bias must necessarily exist. The discussion of the OSI model and exploits was also somewhat simplistic: just because more exploits exist at the upper layers, why should penetration testers use these? I would suggest that these types of upper-layer exploits will always be present: finding these ‘first’ is fast and easy, but more serious vulnerabilities existing at lower layers may be missed in the eagerness to show speedy results. Finally, I thought the discussion of layer exploitation relationships was confusing. I found no conclusive answer presented, only a hand wave to “every layer is vulnerable to exploits” and a jumble of somewhat random and contradictory statements. What exactly is meant by “skipping lower layers”? Perhaps an example to illustrate this would add credibility to the statement. The team asserts that the layers below an exploited layer must necessarily be compromised also, but this seems to contradict the previous statement about “skipping layers.” I confess I found this section to be particularly devoid of coherent discussion; perhaps in the future it would be wise to use a structured, logical approach when dissecting concepts such as this.
The team starts out with a strong abstract and talks about how they did in the lab. I like the methods used by team 1 for discovering what operating system was running on the host by using network traffic: most users do use a web browser for various reasons, which generates network traffic. Ettercap was used on the two Windows machines and had a fifty percent success rate in their results. After Ettercap reported the wrong information for the Windows XP SP3 box, would running Windows Update, or some other trigger for network traffic, have given Ettercap better results? Nessus was used for the Debian 4.0 Etch system, and while an exploit was available for the Windows XP SP0 system, Nessus found none for the Debian system. All groups say that exploiting Windows XP SP0 is simple, and this group exploited that system using the win32_reverse payload, gaining a remote shell with administrator privileges. After receiving the wrong information from Ettercap about the Windows XP SP3 system, they continued on believing it was a Windows 2000 SP4 machine and attempted an exploit for Windows 2000. This exploit did not work, which shows that the underlying vulnerability was fixed in a newer release.
Team one’s submission provides a general overview of the experiment, but lacks detail throughout the work.
The abstract sounds like it was taken directly from the learning objectives for the lab. It tells me what you will ideally do, but doesn’t give me much more. If the pedagogy calls for the labs to build on each other, do you really need to mention it in the abstract?
The first paragraph of your literature review is completely unnecessary and tells me nothing. The group claims that the common thread is the importance of vulnerability assessment. Is this so? How? The team makes an attempt to relate the articles back to the lab, but does so only in very broad terms. The literature review lacks any real evaluative content.
Your methods are complete, but could use more detail in order to make them repeatable. Though it was not directed in the lab, you browsed the internet using the target machine. Why?
Your findings are insightful. What would make the XP service pack 3 exploitable? Does the environment accurately reflect real-world situations? Why or why not?
The group’s issues section makes it unclear if the group actually experienced an issue. Were there issues? If so, what were they? The group’s conclusion simply summarizes the findings without expanding on them. What did the group learn? Was the experiment valuable? Why?