April 23, 2025

10 thoughts on “TECH 581 W Computer Network Operations: Laboratory 6, Team 5”

  1. This group’s abstract only gives a summary of what is involved in this lab. The abstract could have been expanded to include more on the purpose of this lab and how it relates to the rest of the course. In their literature review, the group starts off by summarizing all the labs up to and including this one and giving an overview of the entire course. That whole section could have been condensed and placed in the abstract, since it did not pertain to any of the readings. The rest of the literature review examines how each of the articles fits into this lab and how they relate to each other. The group does a very good job at this, but they do not review each article individually. Since the group does not review the articles individually, they do not discuss any of the papers’ methodologies or the research done by the writers. The group does expose some discrepancies in the articles and clarifies them.
This group approached the lab differently than everyone else in the class. In the methodology, the group describes setting up this lab as a simulated red-teaming exercise between the two members of the group. One member changed IP addresses and passwords, and the other attempted to discover the IP addresses and associate them with operating systems. The team used a Windows XP SP0 system, a Windows XP SP3 system, and a Debian Etch system as their three test computers. They used a fourth machine with Backtrack installed as the attacking computer. In the methodology, the group used Nmap to determine what the operating systems were. This goes against the rules of the lab exercise, which state that the gathering of information about the operating system needs to be done passively. The group does explain how this could have been done passively, though.
Throughout the methodology the group does a very good job of explaining each step and how they configured each of the tools and exploits to get the results they needed. The group did include a lot of results in the methodology section of this paper. They could have just given the method they were going to follow and then explained in the findings what happened when they attempted the different exploits. The findings section seemed to just rehash what was already said in the methodology section. The second part of the findings explains how even firewalls can be used to exploit a network if they are not patched properly. Also noted was that remote services are the basis for most of the exploits out in the world. The last statement in this section talks about leaving attacks that can be run locally on a machine to insiders or others who have gained physical access. This came after explaining that the tools have a bias toward remote exploits. These statements seem to contradict each other. How can a remote exploit be a local attack by a local user? Lastly, the group explains that higher layers in the OSI model place trust in the lower layers. They say this would mean that if an attack happens at a lower layer, the upper layers will not know about the attack and will be compromised. One question that I have is: what happens when you introduce encryption into the lower layers?

  2. Team five begins their lab with an abstract that gives an overview of the steps that will be accomplished in the lab. Unlike previous abstracts, this one meets the requirements of the syllabus, including the two-paragraph length. The literature review that team five presents for lab six demonstrates a high level of cohesion between the articles as well as a high level of understanding of the topics presented. They also do a good job of relating the topics of the literature review to the exercise of the lab. Once team five is done with their literature review, they explain the methods they will use to complete the steps of the lab. They explain in their methods that, in order to prepare for lab seven, one team member changed IP information on the VMs, and afterwards the other team member had to fingerprint the VMs in order to attack them. They start by performing a ping scan. While I see this as a needed step, it does violate the passive nature of the first few steps of the lab design document. A ping scan is generally a dead giveaway of an attack. Making use of the new version of NMAP was a very nice touch, and one that other teams did not make use of. The rest of their methods section detailed the majority of steps they were going to follow. Some of the information they presented in the methods section, like most of the teams, did belong in the findings section. They list exploits actually attempted in the methods section, which in my opinion is not correct. However, I do enjoy the discussion around the varied nature of the exploits they attempted. In the findings section they explain that, like the other teams, they were only able to exploit the Windows XP SP0 machine; however, they did mention the actual exploits they attempted, including updated and current exploits that are part of the metasploit framework. I do not see any discussion of the total number of exploits attempted before achieving success or failure.
Again, like the other teams, they were only able to exploit the XP SP0 machine. Like team two, once they discovered that the Debian machine had no running daemons, they did not just give up. While not directly listed in the lab design document, adding services to the Debian machine shows a very scholarly level of involvement in the outcome of their lab. This lends credence to their results. Even though there were obviously issues with their lab, there is no issues section to be found. This is a direct violation of the syllabus as per the lab report design. I do not totally agree with a “trust” model placed between the layers of the OSI model as it pertains to the implementation of the model in a computer system. The addition of the Windows firewall is, in my opinion, a direct result of the no-trust nature of the layers of the OSI model. Even though the firewall is technically a layer seven application, it interacts as low as layer two of the OSI model. I agree with the conclusions presented by team five.

  3. The team had a nice introduction in their literature review, nicely summing up the past labs and telling the audience how they have all added up to this lab experiment. Like other teams, this team had very long paragraphs in the literature review. For the first time in any lab report, team 5’s lab 6 reads as though written by one person. Was this because only one person wrote the lab report, using information from the other member and rewriting it for cohesiveness? The approach of having one team member change IP addresses and then having the other one perform a penetration test on the systems is unique. I like the approach this team took; why are they the only ones to think about performing the lab this way? It seems like many teams did not bother to attack the Debian machines; is this because other teams do not know how to attack a non-Windows machine?
    I think that this team had one of the most detailed methods sections of all the teams. It is nice to see that this team was able to admit that they could not compromise some of their systems, and eventually gave up. So there was no count of attempts it took to compromise the system, since they never did. Like other teams, this team realized the reason why it was difficult to compromise the system: no services are running, and no users are using it. Basically, it is a clean system. So to me, this means that adding functionality to a system makes it more vulnerable. I have to ask, where is the Issues section? Was it that this team had no issues, or that the team did not have time for the issues? The lack of an issues section is an issue, but where do you put that? What a catch-22. After seeing other teams include screenshots of their lab experiment, I would like to have seen some from this team. Screenshots can add to the ability to duplicate the lab that the team performed. Overall this is a good lab, but not one of the better labs that this team has written. I hope to see the findings from this lab help this team, and all teams, with the next lab experiment.

  4. Team 5 begins their lab report by stating their objectives: to exploit the target hosts in as few attempts as possible. They intended to do this by placing additional emphasis on the tool selection process. They state that the added benefit would be a lowered interaction with the target host and therefore less chance of detection.

    In the first paragraph of the literature review, Team 5 reviewed all of the labs that they had completed up to this point. They then stated that the focus of lab 6 is selecting the proper tool for exploitation on the first try. Although it appears to be the introductory paragraph for the literature review, it didn’t introduce the literature review or tie it to the lab assignments.

    They proceeded to review A Taxonomy of DDoS Attack and DDoS Defense Mechanisms (Mirkovic & Reiher, 2004). They related this article to lab 6 in the way that the author studied previous attacks and current vulnerabilities to create a plan prior to conducting the attack. They also related the lab to Attack Net Penetration Testing (McDermott, 2000) by discussing how penetration testers can model the penetration using attack trees. They mentioned the article A Distributed Network Security Assessment Tool with Vulnerability Scan and Penetration Test (Chen, Yang, & Lan, 2007) and stated that it uses the DDoS model; however, they don’t explain how it fits this model. Perhaps they are comparing the handler-agent design of the DDoS with the distributed computing system described by Chen, Yang, & Lan. They do, however, make a good point concerning the communication within the Distributed Network Security Assessment tool. If the data transferred between the agent and controller within the distributed system is not encrypted, it may cause information leakage to someone who is passively sniffing on the network.

    They discuss Modeling TCP/IP Networks Topology for Network Vulnerability Analysis (Zakeri, Shahriari, Jalili, & Sadoddin, 2005) and discuss how it is a model for vulnerability analysis. They point out that it could be beneficial in this lab by providing a technique for modeling the system to determine vulnerabilities. Likewise they discussed Firewall Penetration Testing (Haeni, 1997) and how the methods described, such as attacking behind a firewall, will be beneficial in conducting lab 6.

    In the methodology section, Team 5 stated that they intend to conduct lab 6 as a simulated red team exercise between the two members of Team 5. One team member changed the IP addresses and passwords of the target machines so that the other team member could attempt penetration. They began with nmap to discover the target systems. Team 5 makes a good point that if the machines had been in use they would have been able to identify them passively through network traffic. They successfully exploited the Windows XP SP0 machine using the MS03-026 exploit and the shell_bind_tcp payload. They planned their attack against the Windows XP SP3 VM by first determining the date that service pack 3 was released and finding vulnerabilities that were discovered since then. They were unsuccessful in compromising the system. Team 5 made a good point that had the system been in actual use, they may have been able to target the system by placing malicious code in a web page or email. Likewise they were unable to find any exploits against their Debian machine, even after installing an Apache and SSH server.
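
    Team 5’s report doesn’t give their exact nmap invocation, but the discovery step described above amounts to checking which hosts and ports accept connections. As a hypothetical illustration (my own sketch, not the team’s actual method), a minimal TCP connect scan can be written in Python; here it runs against a throwaway listener on localhost so the example is self-contained:

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection (nmap -sT style)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the three-way handshake completes
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Throwaway listener so the demo doesn't depend on any real service.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: the OS assigns a free port
listener.listen(5)
open_port = listener.getsockname()[1]

found = tcp_connect_scan("127.0.0.1", [open_port])
print(found)   # the listener's port is reported as open
listener.close()
```

    A full connect scan like this completes the handshake on every probed port, making it noisy — exactly the detection trade-off that the lab’s passive-reconnaissance requirement was meant to avoid.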

    Team 5 makes a good point that the systems are overly secure due to the lack of applications that are running and the lack of human interaction within the target systems. Our own research has shown that many of the known vulnerabilities occur within the application layer of the OSI model, and many require human interaction on the target system.

  5. Team 5’s abstract was well written and set the stage for what they were going do in lab 6. The team did a nice job with their introduction and writing a cohesive literature review in that they tied their lit review back to previous labs. This was good because it shows an understanding that this course has been set up for each lab to build upon the previous lab. I thought the approach team 5 took with one team member changing the IP addresses and passwords on three of the virtual machines and then notifying the other team member that the work was completed, so that the second team member could identify what the IP addresses were changed to was truly a team effort.
    Team 5 had one of the most detailed methods sections of all the teams. Their findings section was very detailed as well and helped me to understand more about this topic. I didn’t see an issues section, so I have to ask: were there any issues? Overall this is a good lab, well written and cohesive.

  6. In the abstract section of the laboratory report, team five viewed the constraints of the assignment as successfully exploiting the designated targets in as few attempts as possible, thus making tool selection a crucial element in the success of such an objective.

    In the literature review section of the laboratory report, team five was able to intertwine the different articles to create a cohesive explanation of denial of service (DoS) and penetration testing. The only problems I could find with the section were that the summaries of the articles were very brief and the articles were not always related to the laboratory assignment. Team five was able to relate the denial of service (DoS) article to the lab assignment by stating, “The authors identify an attack path used for DDoS propagation that will be similar to the one utilized in this lab.” Team five also found the password cracking article to be of poor quality, stating, “Other than some steps on running John the Ripper, the paper isn’t very useful as far as depth into the topic.”

    In the methodology section, team five split their team in half, and one group member changed the IP addresses and passwords of the Windows XP SP0, Windows XP SP3, and Debian Etch virtual machines. When the group stated, “After that was completed, that team member shut down the team of virtual machines and notified the other team member that the work was completed,” I was somewhat unclear on this statement, for why would one shut down the virtual machines that the other team member would need to access? The group then used nmap to identify the operating systems on the virtual machines. Group five was able to exploit Windows XP service pack 0 with metasploit, just as all of the other teams did. Just like all of the other groups, team five was unable to exploit Windows XP service pack 3. The group also was unable to exploit Debian, even after Apache server was installed on it.

    The findings section of group five’s laboratory report showed that the group came to the realization that there are limits on the effectiveness of the tools that all of the groups have been so heavily reliant upon. Group five, just as some of the other groups including my own, has realized that some of the layer eight techniques are beginning to look pretty appealing now. Group five stated, “Through social engineering, it would be possible to direct the user of a system to a malicious web server hosting exploit code that would inject into the browser and give us remote access.”

    In the conclusion section, I had to agree with team five when they stated, “Instead of indicating that the host isn’t exploitable, a failure simply narrows our focus even further.” This reminds me of what Thomas Edison said: he did not fail, he just found numerous ways not to make a light bulb. The group also concluded that lower layer attacks would be harder to defend against.

  7. I thought this team’s report was, overall, quite excellent. The literature review was clearly superior, in my opinion. The methodology was well described, displaying considerations with substantial thought underlying them. I was especially impressed by the experimental design, which used members of the group in a ‘blind test’ setup. I think this indicates a high degree of cohesion and cooperation among the team members: this is something other teams should seek to emulate.

    Despite the overall excellent nature of the write-up, a few omissions and oversights appear to be present. Foremost, while I cannot criticize the ‘blind user’ setup, as it appeared to be well conceived, I do wonder if the methods used for the first two hosts were really an attempt at ‘passive’ reconnaissance. Despite having the advantage of user-generated traffic by which to fingerprint hosts, this team resorted to using ‘nmap’ to actively interrogate the targets. This seemed totally unnecessary, as simple sniffing tools would have produced equally useful results. It was not mentioned whether ‘nmap’ was run at a slow scan rate; if it was, I would likely consider this ‘passive enough’ and withdraw my criticism. This should have been discussed in the methodology in any case.

    I thought it interesting that the team mentioned identifying the browser type by traffic. I would suggest that this is probably a method prone to error if HTTP headers are being used for identification. It is well known that many browsers ‘lie’ with regard to this; in fact, many allow the user to set the browser ID string to any number of commonly used configurations (such as Opera, Firefox via plug-ins, and I believe Konqueror also). If HTML injection is being done, it might be better to utilize JavaScript or CSS processing behavior characteristics (a common technique in web design) to determine browser type: this is usually quite accurate. This is just a suggestion, and not really a criticism: I found this team’s research to be quite interesting with regard to this topic.
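
    The unreliability of header-based sniffing described above is easy to demonstrate. The sketch below is my own illustration (not anything from the team’s report): a naive substring check against an Opera “identify as IE” User-Agent string, whose exact tokens here are approximate, misattributes the browser entirely.

```python
def naive_browser_sniff(user_agent):
    """Naive substring sniffing -- the error-prone approach warned about above."""
    if "MSIE" in user_agent:
        return "Internet Explorer"
    if "Firefox" in user_agent:
        return "Firefox"
    if "Mozilla" in user_agent:
        return "Mozilla-family"
    return "unknown"

# Opera's classic "identify as IE" mode sent a string along these lines
# (illustrative; exact tokens varied by version):
opera_as_ie = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1) Opera 8.5"

print(naive_browser_sniff(opera_as_ie))   # reports Internet Explorer -- wrong browser
```

    A rule that checked for “Opera” first would get this case right, which is the point: header matching is only as good as its rule ordering, whereas behavior-based checks exercise the rendering engine itself.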

    Furthermore, I do question the wisdom of tampering with a test system once the experiment has begun. Installing Apache on the Linux machine seemed a somewhat controversial choice. To be ‘fair,’ why not install Apache on Windows XP SP3 also, as it is cross-platform? It just appears that the ‘baseline’ of functionality was not maintained across machine types. Additionally, why not attempt to browse the web from the Linux machine via a text-based browser, or install an X server and graphical browser so that similar exploits can be attempted across machine types? Really just a suggestion: these steps would require a fair amount of additional time to complete, which may not have been realistic in the test situation.

    Finally, I note that while the discussion of the OSI layer compromise situation and the exploit/tool biases was relatively thorough, I did not see the team address the question regarding the merit of NESSUS versus passive means as discovered in the exercise. Furthermore, I believe some factors are missing from the OSI layer exploit discussion. The team asserts that there is ‘implicit trust’ among layers: what are the implications of good encryption techniques, whereby trust becomes explicit and reliable?

  8. I think that group 5’s write-up for lab 6 was fair. The abstract for this lab was good and provided a good overview of the lab. The literature review was very good in terms of summarizing the readings. Group 5 chose to write the literature review as one big comprehensive review, which is good; however, most of the required questions were not answered. It seemed as if the literature review was nothing more than a summary of the required readings and did not include any speculation about the research methodology or any errors or omissions, though they did indicate how each reading relates to the laboratory. All of the citing for the literature review was done well, and all of the page numbers were included. For this lab, the group answered all of the required questions and provided a good amount of detail about the steps they performed to attack the target systems. However, there are some errors with the way the lab was performed. It appears that they did NOT use passive methods of OS detection, nor did they report the correct information about their active OS detection. The group used nmap instead of a passive scanner such as Ettercap or P0f. They also indicated that the –A argument was used to detect the OSs when nmap uses the –O argument. This makes me wonder if they read the instructions correctly. However, their analysis of the greater ease of exploiting TCP/IP rather than the actual computer is a good point. This is very true, as TCP/IP attacks can work against fully patched systems. The conclusion was well written and accurately summarizes what was covered. Overall, most of the required questions were answered and answered well; however, in the lab there seemed to be a little confusion about how to perform passive scanning.
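
    Passive OS detection of the kind recommended above (p0f-style) works by matching fields of sniffed packets, such as initial TTL and TCP window size, against a signature table. The sketch below is my own illustration: it uses a toy signature table with commonly cited default values, not p0f’s real database, and omits the packet capture itself, which requires raw-socket privileges.

```python
# Toy passive-fingerprint heuristic in the spirit of p0f: illustrative
# signature values only, not p0f's actual database.
SIGNATURES = {
    (128, 65535): "Windows XP",
    (128, 8192):  "Windows (other)",
    (64, 5840):   "Linux 2.6 (e.g. Debian Etch era)",
    (64, 29200):  "Linux (newer)",
}

def guess_os(ttl, window):
    """Guess an OS from the TTL and TCP window size of a sniffed SYN packet.

    TTL decrements once per hop, so round up to the nearest common
    initial value (32, 64, 128, 255) before matching.
    """
    for initial in (32, 64, 128, 255):
        if ttl <= initial:
            ttl = initial
            break
    return SIGNATURES.get((ttl, window), "unknown")

print(guess_os(126, 65535))  # a TTL of 126 rounds up to 128: a Windows-style value
print(guess_os(62, 5840))    # a TTL of 62 rounds up to 64: a Linux-style value
```

    Because the heuristic only reads values an attacker would see anyway in normal traffic, it generates no packets of its own — the key difference from nmap’s active -O probing.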

  9. This team, like team 1, chose Windows XP SP0, SP3, and Debian Etch. Unlike group one, this team changed the IP addresses in an attempt to mask the target clients from the other group member who was doing the testing. This reflects an approach some attackers take: they may not know what they are targeting and must first find out information such as the targets’ IP addresses. This group used nmap to scan the network and locate the target clients. Nmap discovered the clients, but similar to the scan tools used by other groups, identifying the operating systems is an educated guess. The Windows XP SP0 machine was easily identified, while the Debian and XP SP3 systems were given false or too many matches for an OS fingerprint. This team did something different to attempt to exploit the Windows XP SP3 system: they launched a web browser on it and pointed it at their web server, which exploited the machine. Would the same result have happened if the web browser had been different, say Mozilla Firefox with the NoScript add-on? This type of attack can be difficult against users the attacker is unfamiliar with; specifically, attacking an unknown target node this way would be difficult. The Debian system was also approached differently: they used “apt-get” to install Apache and SSH. They soon discovered that this did not help, and they were unable to exploit it.

  10. Team five’s report is well done and accurately describes what took place in the lab. The only place where it runs a bit thin is in the methods section.

    Team five’s abstract is a quick overview of the lab. It could use more detail. What exactly are you doing to choose tools? What are the general steps in the process?

    The literature review is well developed and relates the articles back to the lab as well as the class. It provides in-depth evaluative content. The articles are summarized so that the reader knows what they discuss, without trying to convey too much information.

    The Methods section could be more detailed in order to provide a repeatable process. Some of the data contained in this section belongs in your findings. I’m curious as to why you deviated from the assignment.

    The group’s findings are complete and well reasoned. Do you think that improvements could be made to the lab environment in order to more accurately reflect a real-world scenario?

    The group’s conclusion accurately reflects the lab, and gives a good explanation of what was learned. The report is missing an issues section. Did you have any issues?
