April 22, 2025

11 thoughts on “TECH 581 W Computer Network Operations: Laboratory 4, Team 5”

  1. In team five’s lab report, the abstract does an extremely good job of explaining the steps of the lab that will be undertaken, and it meets the length requirement set by the syllabus, a point that is lacking in most teams’ lab reports. The literature review that team five presents is well considered and shows a high level of cohesion among the articles presented for this week’s lab. Team five even goes beyond the scope of the required literature and examines other articles on the same topic. Where the literature review is lacking is in its tie-in to lab four. They present ideas that a general penetration tester would or should perform based on the literature, but no direct correlation is made between those ideas and the steps of lab four. The methods section provided by team five is broken into distinct parts and, in my opinion, does not make for a lab report that flows well from beginning to end. To my mind, it actually shows that the lab was completed independently by two parties and brought together at the end. They state that all of the virtual machines used in the lab were not patched. While I believe the intent was to convey that team five had not patched the machines, it reads as though the machines were never patched, which is not the case. The findings section of team five’s lab report reads much like the methods section: the findings for each part are broken down with no cohesion between them, which again makes for a report that does not flow well from beginning to end. One point that team five did touch on was that without a Windows XP SP1 system, the vulnerability related to the Sasser worm was not an issue. I found that point to be quite interesting and indeed true. In part two of their findings, team five lists the vulnerability databases that teams three and two listed, as well as a number of others. By including this set of common vulnerability sources, they raise questions about whether team four’s and team one’s research into vulnerability databases was entirely complete. The single table presented by team five does seem lacking when compared to the tables presented by the other teams for this lab. It would also make sense to put the table itself in a tables and figures section at the end of the lab, rather than directly in the findings section, and to reference that table in the discussion of the results. All in all, team five presents a well-balanced report on its findings in lab four and goes a little above and beyond in its literature review and findings section, only to be drawn back down by its short table. If team five had issues with the table presented, that should have been listed in the issues section of the lab report. I do agree with team five’s conclusions, as they could conceivably be drawn logically from the results presented in the lab.

  2. This group starts off with a good abstract. The abstract covers what they believe this lab is trying to convey and what each part of the lab entails. I do believe the abstract asked a lot of questions instead of simply defining the lab and its goal. In the literature review the group does a great job of tying each of the assigned articles to the others. They talk about the process of developing a test plan to conduct a penetration test. The literature review covers the topic of each paper indirectly, but the research methods and questions of each article are not covered. Also, there is no mention of any errors or omissions in the articles. The group indirectly explains how these articles tie into the current lab, but does not actually cover this in the literature review. Next the group gives their methodology. The methodology is split into two parts, one for each part of the lab. In the first part the group explains how they set up the lab to run a Nessus scan of each of their virtual machines. They did not give any specifics of how they configured Nessus. The group mentions that the virtual machines were not patched or secured in any way prior to the Nessus tests. They then mention that each of the vulnerabilities was matched up with an exploit and classified in a table using the OSI model and McCumber’s cube. The last part of the methodology explains how the group did its research into vulnerability databases. I had a tough time trying to figure out what the group was trying to do in this part of the lab. According to the lab, all we had to do was research and describe vulnerability databases and discuss how much expertise was involved in using them. This section starts off describing what seems to be part of the first section of the lab. They talk about how they will research the vulnerabilities found on a system, but I could not make out what the group meant by “system.” Did the group mean the systems that were scanned in the first part of the lab by Nessus? Why was this in the second part of the lab? They then tie both parts of the lab together by examining the CVE entries of the vulnerabilities previously found and looking them up in vulnerability databases they found on the internet. In the findings section of the lab report the group starts by explaining how they got different results from each virtual machine they scanned with Nessus. The group then goes into detail about each virtual machine and the vulnerabilities found on that machine. The group does make an interesting discovery in the scan of the Windows XP SP0 machine: they do not find the vulnerability that was the cause of the Sasser worm. They stated that SP1 is what introduced the vulnerability, showing that even patching can introduce vulnerabilities into a machine. In this part the group did not mention anything about the table that was created to show the relation between the vulnerabilities, the OSI model, and McCumber’s cube. In the next part of the results the group explains the last part of the lab. This section starts off explaining the difference between CVE and CCE and noting that most of the vulnerabilities discovered were at the application layer. The group does give a list of vulnerability databases that are not vendor specific. They describe each site, but they do not explain the level of expertise each requires.
    At the end of the results the group gives a description of the table that was compiled from the vulnerabilities found in the first section of the lab. In the table the group has all the vulnerabilities at layer seven. I do not agree that all the vulnerabilities belong at the application layer. For example, many of the vulnerabilities in the Windows XP SP0 virtual machine utilize RPC requests to accomplish some type of malicious activity, and RPC sits at the session layer of the OSI model. Also, because the scan is examining vulnerabilities of a machine, wouldn’t the attack be aimed more at the integrity of the machine? The group mentioned that they had a great deal of trouble executing the tests on the virtual machines, saying they had to coordinate with each other on when each of them was using a machine. Why couldn’t they have gotten together and run the tests from a single computer? That could have alleviated the problem. This group did do a very good job of putting together a conclusion. It brings together everything that was found, in this lab and in previous labs, and gives a description of how a penetration test is carried out.
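    To make the layering argument concrete, here is a minimal sketch of the kind of per-layer tally such a table implies. It is written in Python purely for discussion; the layer assignments are my own illustrative (and debatable) readings of CVEs mentioned elsewhere in these comments, not authoritative classifications.

        from collections import Counter

        # Illustrative layer assignments -- debatable by design; the point is
        # that not every Nessus finding collapses into OSI layer seven.
        osi_layer = {
            "CVE-2003-0533": 5,  # LSASS flaw reached over RPC/SMB -- arguably session layer
            "CVE-2003-0715": 5,  # RPC/DCOM interface flaw -- arguably session layer
            "CVE-2006-1314": 5,  # Server service mailslot overflow -- arguably session layer
            "CVE-YYYY-NNNN": 7,  # placeholder for a genuine application-layer finding
        }

        tally = Counter(osi_layer.values())
        for layer in sorted(tally):
            print(f"Layer {layer}: {tally[layer]} finding(s)")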

  3. Team 5 begins by defining their objectives. Their first stated objective is to use Nessus to discover vulnerabilities in their lab environment. Their second stated objective is to classify the vulnerabilities discovered in the first objective by their corresponding layer of the OSI model. They state a further objective of locating public vulnerability databases and analyzing the data to look for patterns.

    Team 5’s literature review treated all of the articles together as a single synthesis. They discuss the need for a test plan and refer to Network Penetration Testing (He & Bode, 2006, p. 5), where there is a chart for the general process of penetration testing. They also state that the testers need to work with the customers to develop the requirements for the test, and that the testing plan should be comprehensive. They refer to Vendor System Vulnerability Testing (Davidson, 2005, pp. 3-4) and two other articles as supporting documentation. They discuss using scanning tools such as Nessus, nmap, SATAN, or SAINT to identify vulnerabilities, citing Network Penetration Testing (He & Bode, 2006, pp. 5-9) as a supporting reference. They also refer to the need to find unknown vulnerabilities, citing Red-Team Application Security Testing (Thompson & Chase, 2003, pp. 20-24). Their literature review, although mentioning the assigned articles and discussing how they apply to penetration testing, did not show any correlation to our current laboratory assignment. I believe they missed several important points from our current literature assignments that pertain to this lab.

    In the methodology section, Team 5 discusses using Nessus to test for vulnerabilities within the test environment. They describe the systems that they will be running it against. They further explain that they will tabulate the resulting vulnerabilities and classify them within the OSI model. For part two, they state that they will research the vulnerabilities they had located.

    In the results section, they state that most of the security vulnerabilities come from third-party applications and that most of them occur in the Windows operating systems. They proceed to list each of the operating systems that they tested and the CVE number for each of the vulnerabilities that was located. They listed the following web sites containing vulnerability databases: nist.gov, cert.org, secunia.com, lwn.net, securityfocus.com, osvdb.org, iss.net, and net-security.org. They also refer to Bugtraq and the Bugtraq mailing list as resources for known vulnerabilities. They state that the common theme among the security vulnerabilities is open-source applications. Their OSI model table clearly shows that the majority of their vulnerabilities lie within the application layer of the OSI model.
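    Most of those sites can also be queried programmatically. As a hedged sketch of what pulling one of the team’s CVE entries might look like, the following assumes Python 3’s standard library and the field names of NIST NVD’s REST API (version 2.0); both the endpoint and the JSON layout should be checked against https://nvd.nist.gov/developers before relying on them.

        import json
        import urllib.request

        CVE_ID = "CVE-2003-0533"  # one of the CVEs team five reported
        url = f"https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={CVE_ID}"

        # Fetch the record and print the English-language description.
        with urllib.request.urlopen(url, timeout=30) as resp:
            data = json.load(resp)

        for vuln in data.get("vulnerabilities", []):
            cve = vuln["cve"]
            desc = next((d["value"] for d in cve["descriptions"] if d["lang"] == "en"), "")
            print(cve["id"], "-", desc[:120])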

  4. Once again, team five’s laboratory report is relatively brief and to the point: a positive characteristic in many respects. Also, I would consider the literature review to be excellent in its synthesis of ideas; contrasted with the more common mode of ‘review’ amongst other groups which has leaned toward straight summarization. In my opinion, I believe this group’s approach to be a superior method of review. I found the ‘Methodology’ section somewhat abbreviated, yet I was left with no questions as to what had occurred: a good sign.

    This team’s report had some serious omissions, though. I found no ‘specific’ research of attack tools related to vulnerabilities discovered on the experimental systems; it is therefore unsurprising that no reference to any ‘real’ test of attack tools exists. I believe this to be a serious omission, as simply listing vulnerabilities in no way defines an ‘exploit’; an ‘exploit’ must include the specific means by which to take advantage of the security flaws discovered. I would submit that this is a fairly significant problem with this write-up.

    Continuing, as our team (team three) took an approach very similar to team five’s methodology, it seemed obvious through personal experience that a few mistakes were made in the ‘Nessus’ scanning configuration. Team five noted surprise at the discovery that CVE-2003-0533, the infamous ‘Sasser worm’ vulnerability, was not found on the Windows XP SP0 machine. I simply note that our team found this vulnerability in our ‘Nessus’ scan, and it appears that team five failed to enable the ‘dangerous’ scanning options in the ‘Nessus’ control panel. This points out that more care should be taken in configuring the tools used, and perhaps more research devoted to ‘what’ these tools are capable of doing. Similarly, because of these configuration omissions, many other high-risk vulnerabilities remained undiscovered (such as CVE-2002-0724, CVE-2003-0715, CVE-2006-1314, etc.). This would be a ‘very’ serious mistake in a professional security evaluation exercise.
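    One cheap sanity check, independent of the scanner’s configuration, is to verify that the service a given flaw travels over is even reachable before trusting a clean result. A minimal Python sketch follows; the target address is a hypothetical stand-in for the lab’s XP SP0 virtual machine, and TCP 445 is used because the LSASS flaw behind Sasser is reached over SMB.

        import socket

        TARGET = "192.168.1.10"  # hypothetical address of the XP SP0 lab VM
        PORT = 445               # SMB over TCP, the transport for the LSASS flaw

        # If the port is closed or filtered, a "no findings" scan result
        # says nothing about the host actually being safe.
        try:
            with socket.create_connection((TARGET, PORT), timeout=3):
                print(f"{TARGET}:{PORT} is open -- the scanner should at least see the service")
        except OSError as exc:
            print(f"{TARGET}:{PORT} unreachable ({exc}) -- scan results may be incomplete")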

    One is left puzzled at the presentation of the results of the ‘Nessus’ scan, with all the CVE listings compressed into layer seven of the OSI model with little justification, except for some vague reference to ‘cross-platform’ issues. Of course, given the previous issue of improper scan configuration, the assertion of a ‘universal’ signature of ‘application vulnerability’ is erroneous: in actuality the Microsoft platform exhibits ‘far’ more OSI layer five vulnerabilities than the Linux platform due to NetBIOS concerns. It is pointless to criticize this group’s conclusions with respect to the information they believed true; but it is a demonstration of how a false start can lead to results which are thoroughly mistaken. The obvious solution: take more care in the preliminary phases of the exercise, as sound conclusions can only come from ‘good’ experimental practice.

    Finally, a significant omission was obvious in that team five essentially failed to classify how the vulnerabilities listed in the public databases were patterned on the OSI construct. In fact, the section on database vulnerabilities was intermixed with the results of part one of the experiment; the reader was left to infer that the subject under examination had suddenly reverted to the results of the ‘Nessus’ scan: from all appearances this discussion had not been completed before being ‘interrupted.’ Further searching revealed no analysis ‘anywhere’ in the report which addressed this concept: this is a substantial problem.

  5. This team’s lab report started off really differently from the other teams’. They start asking questions right from the beginning, which is a unique approach, and the team answered all of the required questions that they should have. While this team had a really cohesive literature review, it is very short, about three hundred words shorter than the requirements. Also, while very cohesive, the literature review did not answer most of the required questions. The articles were not compared to each other; basically the team just provided a summary of the articles and that was it. Why did this team choose to use Nessus over Nmap? There was no discussion as to why they made this choice. I would like to have seen more detail in the methodology section. Screenshots of the steps of the process would have been nice.
    One thing to mention: before you can use acronyms, you must tell the audience what they stand for first. While this team found numerous websites that have exploit databases, they did not go into much detail about them. More detail is always good to have in lab reports. The team missed most of the items that they were supposed to do in part 2 of the lab, and they never state why they chose Nessus over Nmap. This team is the only team that seems to have had issues working with the virtual environment. Did no other teams have this problem, or did only one person per group perform this part of the lab experiment? It was nice to see that this group was able to search the Internet for the year of publication of He & Bode’s article. The sentence “When planning a penetration test the testers need to develop a test plan” just doesn’t seem to fit in with the rest of the literature review. There is another sentence shortly after that cites three different articles. Does this mean that all three articles state exactly the same thing, or is the idea compiled from all three? If the latter is the case, I find it very odd that no single author of the required articles would have noticed this and developed the plan; instead it took numerous articles to state what goes into a good penetration test plan. This group, once again, is the only group that incorporates other articles into the literature review. I think this shows that the group actually researches the topic of the main articles and further expands its knowledge by looking elsewhere for information to help with the laboratory experiment. The one comment I have about their table is that it seems to have the least information in it compared to the other groups’ tables. For future lab reports, the group needs to expand its ideas, write in more detail, and definitely write a longer literature review that meets the requirements.

  6. Team five does a good job in their abstract explaining exactly the steps they are going to take to complete the requirements of lab 4. Their literature review is well organized, and the summary of the articles is well written with proper citations. The methods section was broken into two parts, making it easy to read and understand what they were trying to accomplish. Their findings section was also separated into two parts and was well detailed. In part two of the lab they list many of the same vulnerability sources as the other teams did. However, I feel they did not adequately explain the levels of expertise needed to run the exploits found in the sources they listed. It seemed, as with the other teams, that they were more concerned with providing a description of each of the vulnerability sites than with the expertise needed to run the exploit tools listed on the sites. My understanding was that Professor Liles wanted us to describe the level of expertise needed to run the exploits; perhaps I misunderstood. The table presented was also lacking in comparison to the other teams’ tables, although they did admit that their table looks empty. All in all, team five presents a well-written report on their findings and does a good job once again with their literature review. I agree with team five’s conclusions.

  7. In the abstract section within the laboratory report, team five gave a brief overview of what was to be accomplished in the laboratory assignment.

    In the literature review section, team five was able to relate the articles to each other, but gave only a brief summary of a few of the articles. Team five did not relate the articles to the laboratory assignment, describe their methodologies, or state whether there were any errors or omissions in the articles.

    In the methodology section of the laboratory report, I did not understand what the team meant by “Each of the virtual machines was added to a team in VMware so they could be easily started all at once and connected to the same internal virtual network to allow communication between the hosts while keeping the network traffic from the scans inside the virtual environment.” The team used the Windows Server 2003 virtual machine to run Nessus.

    In the second section of the methodology, I was not sure why the group performed the step in which they stated, “In this section we will research vulnerabilities that were found within the systems and then determine if attacks against the vulnerabilities would work or would not work.” This was not a requirement of part 2 of the laboratory assignment, but was a step in part 1. Part 2 required the teams to locate sources that listed newly discovered vulnerabilities and exploits.

    In the part 1 findings section, the group listed the identification codes of the vulnerabilities that were discovered but did not elaborate on what vulnerabilities those identification codes represented. The table associated with part one suffered from the same problem. How did the team know that all of the vulnerability codes fell within the application layer of the OSI model?

    In the part 2 findings section, team five listed the National Vulnerability Database, US-CERT, Secunia, LWN.net, SecurityFocus, OSVDB, SecurityTracker, X-Force, and Net-Security as their vulnerability discovery sites. A brief description was given of each, but team five did not address the expertise required to use the sites. Team five also did not create a table tabulating the vulnerabilities discovered from the vulnerability discovery websites in relation to the OSI model and McCumber’s cube. It seems that several of the teams forgot to do this step of the laboratory assignment.
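    For what the missing table would amount to: each entry pairs an OSI layer with a coordinate of McCumber’s cube (security goal, information state, safeguard). A minimal Python sketch of that tabulation follows; every entry below is a placeholder showing the shape of the table, not a real finding.

        from collections import defaultdict

        # Placeholder entries: (CVE id, OSI layer, McCumber coordinate).
        entries = [
            ("CVE-YYYY-0001", 7, ("confidentiality", "processing", "technology")),
            ("CVE-YYYY-0002", 5, ("integrity", "transmission", "technology")),
        ]

        table = defaultdict(list)
        for cve, layer, cube in entries:
            table[(layer, cube)].append(cve)

        for (layer, cube), cves in sorted(table.items()):
            print(f"Layer {layer} / {' / '.join(cube)}: {', '.join(cves)}")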

    In the issue section, team five stated “We encountered issues working in the virtual environment collaboratively. Instead of working on the machines simultaneously, we’d have to communicate with each other to find out when the other had paused or stopped the virtual machines and closed out of the VMware Workstation program so the other could log in and continue the work.” It sounded like there was a communication breakdown between group members.

    In the conclusion section, team five did a nice job relating the laboratory assignment to the articles and to the concept of sharing discovered vulnerabilities via the vulnerability discovery websites.

  8. I think that group 5’s write-up for lab 4 was very poor. The abstract was adequate and provided a good overview of the lab. The literature review was very good in terms of summarizing the readings. Group 5 chose to write the literature review as one big comprehensive review, which is good; however, absolutely none of the required questions were answered. It seemed as if the literature review was nothing more than a summary of the required readings: it did not include whether the group agreed or disagreed with the readings, any speculation about the research methodology, or any errors or omissions, nor did it indicate how the readings relate to the laboratory. All of the citations for the literature review were done well, and all of the page numbers were included. For part 1, it becomes apparent that another individual wrote this part. At this point, the group simply states some of the methods they will perform in the lab, without explaining why. The findings section of part 1 has many problems. At first glance, the section appears to be very short; the findings for part 1 answer only about two of the seven required questions. No research was done on the exploits; they were only listed as output of Nessus. These exploits were also NOT included in the grid for discussion. The only information included was CVEs for the vulnerabilities found when running Nessus against Windows SP0/SP3 and a Debian system. There are a whole lot more exploits (or vulnerabilities; honestly, the difference is negligible at this point due to their constant misuse) that could have been included. The directions asked teams to list the vulnerabilities by system, not just by the virtual machines used in this course. Absolutely no stand-alone tools were listed for part one, and since they were not listed, they were clearly not tested either. No conclusions were made for this section, let alone explained. No strategy was included for how this knowledge was gained, only a list of the vulnerabilities found and a short description of each. This section was very weak and seemed as if the group did not even read the lab questions before writing it. Part 2 also seems very weak. The group simply listed links with a few sentences (if that) about each, and did NOT discuss the level of expertise involved. Some vulnerabilities were included in the grid, but apparently there are only twelve and they all operate at the application layer. Also, the links included are for vulnerability repositories, not exploit repositories; a vulnerability is not an exploit, and the lab required research on exploit code repositories. Much like part 1, it appears that the group did not read the lab requirements for part 2 before turning in their write-up. The conclusion was well written and accurately summarizes what was covered. Overall, almost none of the required questions were answered, and once again it seems as if group 5 was struggling to finish on time.

  9. Team five’s report suffers from inconsistencies and vague statements that detract from the reader’s ability to comprehend the information the group is trying to relate. The inconsistent-voice issues that plagued their last report are still present, though less noticeable. The abstract is direct, with little added fluff, but it does not entice the reader to continue.

    The group has attempted to present a cohesive literature review. The added touch of pulling from additional sources helps to flesh out the meaning of the writings as well as tie them to the current lab and the series of labs. That said, the discussion of the articles feels a bit thin. What is it that Davidson is ultimately doing? It is close to the other two articles, but how do they differ? How does this apply to the current lab? How can the information be applied outside of the lab environment? Is the information given useful? Are the authors off track in any way? This is a good start but could be built up even more.

    The team’s methods are vague and not easily repeatable. In this section you use “CVE”; define abbreviations in each new section. It appears that in part two you deviated from the assigned procedure. Is there a reason for this? CVE is clearly not the only vulnerability database in existence. Is there a reason you used it exclusively?

    In the group’s findings you state that the tests of various operating systems returned different results. Wouldn’t you expect this to be the case? You list several vulnerabilities by CVE number for each operating system tested, but I have no idea what these mean. When discussing XP service pack 0 you say that you expected to find a vulnerability that was introduced with service pack 1. Why would you expect to see a vulnerability introduced by a patch on an unpatched system? For all operating systems listed, did you actually attempt to exploit the vulnerabilities, or do you just theorize that the methods you suggest will work?

    The findings for part two of the lab have no substance and provide little information of value. You mention CCE. What does it stand for? You state that this database covers software in development. I’m not sure that’s entirely accurate. You then go on to list several websites with a very brief description of what each contains. This more closely matches the original assignment, but doesn’t mesh well with your methods and doesn’t provide anything useful. Your table is pointless. Just say all the vulnerabilities you found were in layer seven and be done with it.

    Your conclusion summarizes the intent of the lab. It tells the reader what the group learned. I’m not certain how this was accomplished given the discrepancies throughout the report.

  10. The team started with a strong abstract indicating the key points of their laboratory. They identified which tool the team was going to use, Nmap or Nessus. The team decided on Nessus over Nmap; however, I was wondering why they went with Nessus over Nmap. The team also talks about researching exploit code: they are going to use the results from their vulnerability testing on their test systems to do research on possible vulnerabilities. Their literature review is in depth and gives a good overview. The team also talks about common vulnerabilities and exposures, and they use these vulnerabilities and exposures in the lab like team 3 did. They use Windows Server 2003 to run Nessus against target machines, which are Windows XP SP0 (according to the team, “unmodified”), Windows XP SP3, and Debian. The team provides the CVEs after doing a Nessus scan. They seem to have found several vulnerabilities for XP SP0, fewer for XP SP3, and one for Debian. The team also has a chart like team 4’s; however, all of this team’s vulnerabilities are in layer 7, the application layer, while team 4 has theirs spread across layers 2 through 7. The team also did vulnerability research from repositories that are not commercial.

  11. @all who complained about acronyms – Agreed, we took too many liberties assuming these were well known; while I’m sure most of you knew what they were, they should be spelled out for those who don’t.
