April 23, 2025

9 thoughts on “TECH 581 W Computer Network Operations: Laboratory 6, Team 1”

  1. The group starts off with a good abstract. The abstract explains the purpose of the lab by describing how it builds on the previous labs and uses tools and exploits chosen from those labs to attack three systems. The group gives a quick summary of the lab and explains each of the parts of the paper. The abstract could have gone a little further into the purpose of this lab and given a description of why the lab is important. The group starts off the literature review by tying the previous labs into this one while also tying in the general concept of the readings assigned for this lab. This is a good start because it introduces what to look for while reading the assigned articles. The literature review this group gave discusses each individual reading separately from the rest of the readings. For each reading the group starts off with a brief summary of the article. Later they relate the article to the other labs and the current lab, but still do not give any real review of the article. Near the end of the literature review the group offers some opinions of the articles and points out some errors and omissions. The group still did not present any information pertaining to the methodology or research, or the lack thereof, in each paper. For most of the articles they do a good job of relating the piece to past labs and to the other articles in this lab. The group could have done a better job of describing the articles and giving a commentary-style review of each one. The methodology the group put together seemed to lack some details on how certain attacks were done. Also, a lot of results were given in the methodology that could have been put in the findings section. In the methodology the group explains that they selected three systems to run penetration tests on: Windows XP SP0, Windows XP SP3, and Debian 4.0 Etch. They then talk about how they gathered information on each of the systems and used an exploit to try to penetrate that system. The methodology should have included only the steps that were taken to perform each part of the lab, not the results. The group also seemed to have put little effort into trying to exploit some of the systems; it looks like they just tried once and then gave up. The group should have tried a few different tools and exploits to see if anything would work. This methodology could have been expanded to include the plan the group was going to use on each system, the commands used to run each tool, and the configurations used in the exploits. Examining just the methodology, it looks like very little went into each machine. In the findings section the group gives a brief description of what happened with the Windows XP SP0 system, even though they do not say which system they were penetrating. They do explain that in the attack on the first system the only active part of the attack was the exploit itself, which would be beneficial to an attacker trying not to leave any trail behind. For the next test the group discovers that the fingerprinting used in the previous test did not work properly. The group attempted to exploit the wrong operating system to show that the exploit would not work. The group did a nice job of showing what would need to be done to actually exploit a Windows XP SP3 system. They explain that these attacks are out of the scope of this lab and leave it at that.
    I would have been interested to see whether those attacks would have worked, but I understand the time constraint on getting this lab done in the allotted time. For the last test the group used Ettercap again to identify the operating system. The operating system was identified, and the correct version of Linux was revealed. Next the group used Nessus to test for vulnerabilities and came up with none. The group explained that the operating system did not give up any vulnerability because there were no extra services running on that system. The group then gave a good explanation of what would have needed to be done to actually penetrate the Linux system, such as a man-in-the-middle attack. One thing the group could have done to get more results out of a penetration test using Nessus was to use the Windows Server 2003 SP2 system created on their network. The group probably would have gotten more results out of it than out of the Debian 4.0 Etch system. The group then revisits the second test to explain how, without a proper plan and proper research into the system being attacked, the exploit will fail even with a tool such as Nessus. The group then does a nice job of explaining that, because of the bias of the tools and exploits toward the upper layers of the OSI model, more care needs to be taken in securing the upper layers than the lower. They are also careful to note that this does not mean ignoring the lower layers and concentrating solely on the upper layers. Last in the findings, the group does an excellent job of explaining how exploiting one layer can affect the layers around it. They give a couple of good reasons why this is true and explain them briefly. In the issues section the group claims that there were no issues in doing this lab. The group states that they tried the attack once on each system and that was all. The lab states that the attempts should have been tried several times. This group could have put more effort into attempting different ways to penetrate the systems rather than trying once and quitting. In the conclusion the group does a nice job of explaining what was done in the lab and, more importantly, what they learned from it. They also do a good job of tying this lab into the next lab and showing how what they learned here will help them there.
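
    The point about the Debian target giving Nessus nothing to work with could have been checked directly: if almost no TCP services are listening, a scanner has little to report. A minimal sketch of such a check follows; the target address and port list are assumptions for illustration, not values taken from the team's report.

        import socket

        # Assumed target address; the lab's real addresses are not given here.
        TARGET = "192.168.1.50"
        # A handful of common TCP ports worth checking before or after a Nessus scan.
        PORTS = [21, 22, 23, 25, 53, 80, 111, 135, 139, 443, 445, 3306]

        def port_is_open(host, port, timeout=1.0):
            """Return True if a TCP connection to host:port succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        open_ports = [p for p in PORTS if port_is_open(TARGET, p)]
        print("Open TCP ports:", open_ports or "none found")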

  2. Team one presents a good lab report for lab six. The abstract explains that the purpose of lab six was to build on the previous labs, especially lab five. While the abstract explains the steps that will be completed in the lab, the statement that this lab builds on the previous ones is rather obvious; according to the syllabus the entire course is additive in nature. The abstract meets the requirements per that syllabus. The literature review presented by team one is an improvement upon last week's literature review. It contains a level of cohesion that team one usually does not show. However, it still reads like a list of tool categories with an explanation by each of the authors who discussed those categories. While this is effective for the lab report and course, it does not provide an academic or scholarly level of review as it applies to the state of the literature on the topic. They did do a good job of relating the articles, as they were discussed, to the lab exercises for lab six. While this literature review was an improvement over their previous one, hopefully they will show a very high level of cohesion in their lab seven literature review. The methods section presented by team one lays out the steps the team used to complete the exploits of the three systems. The information presented was rather specific as to the steps used to exploit each system. This information belongs in the findings section rather than the methods section, as it details the success and failure of the exploits. With that information in the methods section, this is not an academic or scholarly methodology. It explains the how of what is going to be done, but it does not explain the who, what, where, and when, which is required to complete an academic methodology. The findings presented by team one show more of the specifics they began in the methods section. They explain that using Ettercap they were able to determine the operating system on all of the machines that were their targets. Using Ettercap was a good idea, and agrees with the steps that team four used for OS fingerprinting. Team one's result of being able to exploit only the Windows XP SP0 machine also agrees with the other teams' results. In my opinion this means that their results are very reasonable and plausible. I did not, however, see any discussion of the number of exploits tried before success or failure. The issues that team one discussed explain in very limited detail that while they didn't expect to exploit all three systems on the first attempt, they did expect to actually be able to do so successfully. This seems somewhat presumptuous on the part of team one; assuming they are better than the collective security team at Microsoft is just somewhat arrogant. They also state that one of their issues was that this lab was different from the others. If the lab were the same as the others, then why would we have even done it? The conclusions presented by team one are acceptable.

  3. The literature review has a good introduction that appears to identify the common idea among all of the articles from this week’s readings. The rest of the literature review, however, abandons this common-thread idea and treats each article separately, some without any ties to this week’s lab exercises. The “Mobile Test” article is only given a brief summary; it isn’t tied to the lab exercise, nor does the group state whether or not they disagree with the findings in the paper. In the discussion of the “Distributed Network Security Assessment Tool” paper, it’s hard to tell whether the statement that “there are issues with using these types of vulnerability assessment tools, in that they often provide false positive scores and/or may not completely detect certain types of problems” is a summary of the paper’s opinions or the group’s. The group makes references to performing penetration tests and vulnerability assessments but doesn’t discuss the differences or similarities between the two. The articles would have been a good source of information on this and would have helped the depth of the literature review. In the review of Haeni’s paper, “Firewall Penetration Testing,” the group comes very close to a discussion of the core of this week’s lab, the difference between automated and “manual” testing. Mention is made of the previous paper’s topic of automated testing, but it appears from the assessment of Haeni’s article that the group thinks automated testing is better. Wasn’t the idea behind lab six to do “manual,” sniper-like penetration testing using knowledge and cunning rather than brute force?

    The methodologies are pretty good, but there are a few holes that leave some questions. In using Metasploit for the first attack, the version of Metasploit isn’t mentioned. One might assume the most recent version, but Metasploit 3 doesn’t have a payload called “win32_reverse.” Also, no mention is made of what was done to the machine once the remote shell was activated. While it could be assumed that you have complete control of the system and could do whatever you wanted, it would have been useful to have a discussion of what exactly was done and why. The testing of the SP3 machine was extremely weak. There are, in fact, post-release SP3 vulnerabilities within Metasploit 3. Some research into the release date of Service Pack 3, along with the release dates of vulnerability fixes, would have yielded some fairly critical security holes in a machine with just Service Pack 3 installed. In your testing you would have found that none of those would have worked because the machine is fairly recently patched, but it shows a lack of depth to the testing and a failure to apply what we’ve learned about researching vulnerabilities in previous labs. Finally, the Debian machine was also not researched as deeply as it could have been. Running Nessus and having it find no vulnerabilities isn’t a reason to just walk away.
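
    Before concluding that a particular exploit or payload is at fault, it is also worth confirming that the MSRPC service the DCOM exploit targets is even reachable. A minimal pre-check sketch follows; the target address is an assumption for illustration, not one taken from the report.

        import socket

        TARGET = "192.168.1.10"   # assumed Windows XP target address, for illustration only
        RPC_PORT = 135            # MSRPC endpoint mapper used by the DCOM exploit

        try:
            with socket.create_connection((TARGET, RPC_PORT), timeout=2.0):
                print("Port 135 is open; the RPC exploit at least has a service to talk to.")
        except OSError as exc:
            print(f"Port 135 unreachable ({exc}); no choice of payload will matter here.")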

    The findings section was pretty much a restatement of the methodologies. The only thing it added was answers to the questions in the latter part of the lab. The answer to the question of bias in the tools is avoided; the group merely acknowledges the presence of bias but gives no further discussion. The answer on exploiting OSI layers is similarly vague.

  4. Team 1 began by stating the purpose of the lab: to use the information gained from the previous labs to attempt an exploit of three different systems on the first try. Their intention was to plan each exploit and report their findings. With the third system, they intended to use a Nessus scan to evaluate the system prior to attempting an exploit.

    Team 1 began their literature review by discussing their purpose for lab 5 and the previous labs. Team 1 then stated that the common thread tying the lab 6 articles together (except for Robin Snyder’s article) is the need for vulnerability assessments, and that combining them with penetration testing makes the process even more efficient. They then restated the purpose of lab 6: to use tools to discover vulnerabilities and identify potential areas of exploitation.

    Team 1 reviewed the article Firewall Penetration Testing (Haeni, 1997). They misquoted Haeni as stating “firewalls are often regarded as the only line of defense in securing information systems”. What Haeni actually stated was “Firewalls are often regarded as the only line of defense needed to secure our information systems”. The missing word “needed” drastically changed the meaning of the sentence.

    Team 1 then discussed Root Kits - An Operating Systems Viewpoint (Kühnhauser, 2003), stating how root kits are a major security threat. In mid-paragraph they suddenly began a review of previous labs and then restated their purpose for lab 6: to “attempt network penetration testing, using various methods and tools of identifying the components in a target network”.

    Team 1 included a review of Ethical Hacking and Password Cracking: A Pattern for Individualized Security Exercises (Snyder, 2006). Team 1 stated that this article “switches gears from our previous discussed articles about vulnerability assessments and exploit tools”. I would have to disagree; I believe this article is about vulnerability assessments and exploit tools. Although the article states that it is intended to describe web-based learning exercises for students in the area of security education, it in actuality provides some very good information on how passwords are stored and how to recover them. Although it doesn’t provide the encryption algorithms, it does discuss the hashing methods MD4, MD5, and SHA-1. It also describes how the brute force method works in password recovery, and explains how passwords can be salted to make cracking harder.
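
    The storage, brute-force, and salting ideas the article covers can be made concrete with a short sketch; the password, salt, and character set below are made up purely for illustration and are not drawn from Snyder's paper.

        import hashlib
        import itertools
        import os
        import string

        # Made-up password and random salt, purely for illustration.
        PASSWORD = "abc"
        SALT = os.urandom(8)

        # Store a salted SHA-1 digest of the password.
        stored_hash = hashlib.sha1(SALT + PASSWORD.encode()).hexdigest()

        def brute_force(target_hash, salt, max_len=3):
            """Try every lowercase candidate up to max_len characters against the hash."""
            for length in range(1, max_len + 1):
                for chars in itertools.product(string.ascii_lowercase, repeat=length):
                    guess = "".join(chars)
                    if hashlib.sha1(salt + guess.encode()).hexdigest() == target_hash:
                        return guess
            return None

        # The salt defeats precomputed tables, but not this exhaustive search.
        print("Recovered:", brute_force(stored_hash, SALT))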

    Team 1 proceeded to discuss the methods used in this lab exercise. They began by using Backtrack and Ettercap against their Windows XP SP0 VM to determine the operating system. After successfully identifying the operating system, Team 1 attempted to compromise the Windows XP VM using the Metasploit Framework. They used the Microsoft RPC exploit with the win32_reverse payload and successfully obtained a remote shell. They again used Ettercap to determine the operating system of their Windows XP SP3 VM; Ettercap reported it as Windows 2000 SP4. They attempted the RPC attack against this VM and were unsuccessful. They then ran Ettercap against Debian 4.0 Etch, which correctly identified the Linux kernel. They ran Nessus to determine if there were any vulnerabilities; however, none were discovered. From this lab, Team 1 determined that a planned attack is better than an unplanned attack.
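
    Ettercap's operating-system guess comes from passively reading fields in the traffic it sniffs rather than from actively probing the target. The sketch below illustrates that idea in a deliberately crude form; it assumes Scapy is installed and uses only a simple TTL heuristic, so it is nowhere near as detailed as Ettercap's own signatures.

        from scapy.all import IP, TCP, sniff  # assumes Scapy is available

        def rough_os_guess(pkt):
            """Very crude passive guess from the TTL and TCP window of observed SYN packets."""
            if IP in pkt and TCP in pkt and pkt[TCP].flags & 0x02:  # SYN flag set
                ttl = pkt[IP].ttl
                win = pkt[TCP].window
                if ttl <= 64:
                    guess = "likely Linux/Unix (initial TTL around 64)"
                elif ttl <= 128:
                    guess = "likely Windows (initial TTL around 128)"
                else:
                    guess = "unknown"
                print(f"{pkt[IP].src}: ttl={ttl} window={win} -> {guess}")

        # Needs packet-capture privileges; real tools such as Ettercap or p0f use far richer signatures.
        sniff(filter="tcp", prn=rough_os_guess, count=20)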

  5. Team 1 begins with their abstract and gives an overview of what is to be accomplished within this week’s lab. The team then goes into the literature review. This section started well, going over different tools, relating them to the lab, and describing their roles in exploiting vulnerabilities. Then it becomes broken down again article by article. When reading the articles, did the team think about creating an overall plan before carrying out the testing later in the lab? Would attacks be more successful with such planning? In my opinion a project is usually more successful with preparation and planning. Yes, there are times where luck comes into play and works out for the attacker. Just as in war, attacks are planned to take out weak areas of the enemy and then break apart the infrastructure so it cannot function. The team then goes on to discuss the methodology section and what is going to occur in the hands-on part of the lab. Here the group describes that they will be using Windows XP SP0 and SP3, and Debian Linux, as their test operating systems to exploit. They also describe what tools they will be using and give a little information on how they plan to use them. The team then goes into the results section and gives their findings. They go into detail on the attacks they used, but it did not seem like they used a wide variety of tools. Would the testing for each group be more valid if more tools were used against each system? Are there tools that might be more useful for one system than another, rather than using the same attack against all three? The team goes on to discuss the OSI layers and the difference between attacks against the higher and lower levels. At some point, does it matter what type of attack is used as long as data is destroyed? Would dropping a bomb on a data center still be within the scope of cyber warfare? This would be exploiting the weakness of the building that the system is kept in. After the findings the team goes into the issues that occurred and notes that planned attacks go over better than unplanned attacks. The team ends on the note of what they had briefly discussed about planning attacks for the next lab and what they learned from lab 6.

  6. In the abstract section of the laboratory report, team one gave a brief overview of what was to be accomplished in the laboratory assignment. The group also mentioned that this laboratory assignment is a precursor to the last assignment that will be conducted by the class.

    In the literature review section, I did not understand why the group summarized the root kit article and then stated, “In lab 5 we didn’t use any technical tools to discover vulnerabilities in a system, but instead used the vendor’s security documentation to identify potential areas of exploitation.” I presumed that the group was trying to contrast the rootkit article with the requirements of previous labs. However, there were no transition statements to indicate such a contrast. Group one also needed to connect the DoS article with penetration testing or the lab assignments.

    In the methods section, group one listed Windows XP Professional Service Pack 0, Windows XP Professional Service Pack 3, and Debian 4.0 as the three operating systems they would test in this laboratory exercise. Ettercap was used to determine what operating systems were in use, but Ettercap incorrectly identified the Windows XP Service Pack 3 machine. Team one, like some of the other groups, could not get Metasploit to obtain a Windows shell on any virtual machine other than the one running Windows XP Service Pack 0. Nessus was used to find vulnerabilities in Debian, but no vulnerabilities were discovered.

    In the findings section, group one, like other teams, has come to realize that exploit tools such as Metasploit have limitations and that other avenues of attack, beyond penetration-testing tools, are needed for exploiting operating systems with newer service packs. Group one, just as the other groups, agrees that penetration testing tools have a bias toward the upper layers of the OSI model.

    In the issues section, team one stated that “The team did not have many issues with performing this lab experiment. The team did not expect that the exploits would work on every system on the first try. Planned attacks seem to go much smoother than unplanned attacks.” However, I was surprised that they did not list the limitations of the tools used as an issue, since this affected group one’s and the other groups’ ability to exploit certain operating systems.

    In the conclusion section, group one briefly restated the results of the testing that was performed on the different operating systems. The team also stated that “The team now knows that more research and planning needs to go into an attack or test before beginning.”

  7. I think this team had a fairly nice literature review. I noticed that tie-ins with the lab exercises were made, and even a bit of comparison between articles occurred. There appeared to be some analysis of the articles, notably the discussion of the “Firewall Penetration Testing” paper with respect to its age and relevance. Additionally, the team’s methods were spelled out fairly well; I was left with no real questions as to what had occurred. I cannot fault the conclusions drawn from this exercise, as they appear to align with the results of all the other teams performing it.

    That is not to say some problems do not exist with this report, however. Foremost, I found the ‘forcing’ of traffic from the target machines to be somewhat controversial. I believe that passive fingerprinting can be accomplished by methods which are not as ‘contrived’ as this appeared to be. True, a typical network may have a much higher volume of usable host traffic than what was available in the test setup, but significant exceptions exist. It may be that the element of greatest importance in the scope of a ‘real’ test, for instance a network monitoring system, will generate very little outbound traffic. In this case, skill in passive reconnaissance ‘without’ user-induced traffic becomes crucial. I simply point out that this team may have missed an important opportunity to explore the concept of ‘pure’ passive reconnaissance possible in this test environment.

    Furthermore, I found little description of the research done to evaluate other means of attack. Metasploit is not the ‘only’ attack program available: in fact, most of the newest exploits will be standalone programs. If this team, as implied, called the test finished simply because the Metasploit framework had no plug-ins which listed XP SP3 as a possible target, then I must conclude that this is poor methodology. I know for a fact that recent application exploits released within the last month work against XP SP3: as this group embraced user interaction as valid in the test, why were some of these not evaluated? I believe team five, which chose somewhat similar methods to this team, clearly showed that this could be done. Finally, a bit more discussion on the results of the Linux-based test probably would have been in order.

    A further criticism: I found the discussion of the laboratory questions to be a bit on the ‘light’ side. The discussion of the possible Nessus bias did not really address ‘if’ a bias existed, or ‘why’ the team believed the bias must necessarily exist. The discussion of the OSI model and exploits was also somewhat simplistic: just because more exploits exist at the upper layers, why should penetration testers rely on these? I would suggest that these types of upper-layer exploits will always be present: finding these ‘first’ is fast and easy, but more serious vulnerabilities at the lower layers may be missed in the eagerness to show speedy results. Finally, I thought the discussion of layer exploitation relationships was confusing. I found no conclusive answer presented, just a hand wave to “every layer is vulnerable to exploits” and a jumble of somewhat random and contradictory statements. What exactly is meant by “skipping lower layers”? Perhaps an example would add credibility to that statement. The team asserts that the layers below an exploited layer must necessarily be compromised as well, but this seems to contradict the previous statement about “skipping layers.” I confess I found this section to be particularly void of coherent discussion: perhaps in the future it would be wise to use a structured, logical approach in dissecting concepts such as this.

  8. The team starts out with a strong abstract and talks about what they did in the lab. I like the methods used by team 1 for discovering what operating system was running on the host by using the network traffic. Most users do use a web browser for different reasons, which generates network traffic. Ettercap was used on the two Windows machines and, based on their results, had a fifty percent success rate. After Ettercap reported the wrong information for the Windows XP SP3 box, could running Windows Update, or some other trigger for network traffic, have given Ettercap better results? Nessus was used on the Debian 4.0 Etch system, and while an exploit was available for the Windows XP SP0 system, none was found for the Debian system by Nessus. All groups say that exploiting Windows XP SP0 is simple, and this group exploited that system using the win32_reverse payload. By using this exploit the team was able to gain a remote shell with administrator privileges. After receiving wrong information from Ettercap about the Windows XP SP3 system, they continued on believing it was a Windows 2000 SP4 machine and attempted an exploit for Windows 2000. This exploit did not work, which shows that the vulnerability was fixed in a newer release.
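
    On the question of triggering traffic for the fingerprint, any outbound connection from the target gives the sniffer fresh packets to profile. A trivial sketch of generating such traffic from the target follows; the URL is only a placeholder, not something used in the lab.

        import urllib.request

        # Any outbound request works; the URL here is only a placeholder.
        with urllib.request.urlopen("http://example.com/", timeout=5) as resp:
            data = resp.read()

        # The SYN, ACK, and data packets from this request are what a passive
        # fingerprinting tool such as Ettercap examines.
        print("Fetched", len(data), "bytes of response data.")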

  9. Team one’s submission provides a general overview of the experiment, but lacks detail throughout the work.

    The abstract sounds like it was taken directly from the learning objectives for the lab. It tells me what you will ideally do, but doesn’t give me much more. If the pedagogy calls for the labs to build on each other, do you really need to mention it in the abstract?

    The first paragraph of your literature review is completely unnecessary and tells me nothing. The group claims that the common thread is the importance of vulnerability assessment. Is this so? How? The team makes an attempt to relate the articles back to the lab, but does so only in very broad terms. The literature review lacks any real evaluative content.

    Your methods are complete, but could use more detail in order to make them repeatable. Though it was not directed in the lab, you browsed the internet using the target machine. Why?

    Your findings are insightful. What would make the XP Service Pack 3 machine exploitable? Does the environment accurately reflect real-world situations? Why or why not?

    The group’s issues section makes it unclear if the group actually experienced an issue. Were there issues? If so, what were they? The group’s conclusion simply summarizes the findings without expanding on them. What did the group learn? Was the experiment valuable? Why?
