May 13, 2025

11 thoughts on “TECH 581 W Computer Network Operations: Laboratory 4, Team 3”

  1. Team three presents, as usual, a well thought out and well designed report for lab four. While their abstract lists the steps that will be completed as part of the lab, it falls short of the length required by the syllabus. The introduction defines a number of terms that will be covered in the lab and really helps the reader understand what is going to be discussed in more detail. Their explanation breaks down the scope of what will be discussed, almost to the point of over-explanation. In team three’s literature review they present an introductory paragraph that I feel is an attempt to bring the articles together and relate them to the lab. This attempt is flawed, as the literature review itself lacks any kind of cohesion and is nothing more than a list of the articles assigned for lab four, with APA-style citations appearing only in the headings for each article. This is the area where team three needs to improve: their literature reviews lack cohesion and even in-text citations. Please visit the Purdue Online Writing Lab (OWL) to gain a better understanding of how to conduct a proper literature review with proper in-text citations.

    The methods section of team three’s report details the process they went through to complete the findings section. They list a problem with the Nessus web interface, which in my opinion should be in the issues section, not the methods section, as it is in fact an issue with their strategy. In the findings section team three first explains that Metasploit worked only against the Windows XP SP0 host and had little luck with the other hosts in the VM network. This should be quite obvious, as Windows XP is the longest running version of Windows, and “hackers” have had plenty of time to generate good, sound automated exploits for the RTM release of Windows XP. Team three then lists the National Vulnerability Database, US-CERT, Secunia, and OSVDB as their chosen vulnerability databases. These seem to be the databases of choice for most of the lab four reports, and these results are in line with team two and team five as well. I agree with team three’s issue that most of the vulnerabilities listed by Nessus were largely non-exploitable, as no exploit code could be found to actually perform the exploit. The conclusions presented by team three are sound and logical based on the report they presented.

    Team three’s first table is remarkably short on exploits for many of the lower layers of the Windows XP protocol stack; the Debian list is shorter still. Either Microsoft and the other vendors have done an extremely good job of protecting their latest releases, or team three did the lab wrong. If the former is the case, I foresee an issue when it comes to lab seven. Finally, the Windows Server 2003 table is much like the Windows XP SP3 table, which suggests the former explanation is the more likely one. Team three has presented a well balanced report for lab four.

  2. This group’s abstract is put together very well. The group gives the purpose of the lab and then breaks the lab down and explains each part of it in a very informative and professional manner. Next the group gives an introduction to the lab. In the introduction the group gives a complete explanation of what an exploit is. They break the definition of the word exploit down into three components and define each of the components. I really like the way that the group did this. This definition helped explain what an exploit is and opens up a way to look at the word in respect to the lab. In the last part of the introduction the group again explains what will be done in this lab, but in a simpler way. The literature review starts off by giving a brief explanation of each of the readings given in the lab. Then the literature review goes into more detail on each of the readings separately. Each of the descriptions of the articles seems to be just rehashing what was said in the article. The group does not explain the research question, research methodology, or any errors or omissions. The group also does not tie the articles to each other or show how they are related. At the end of the literature review the group does show how the articles relate to the current lab. I believe that the group should have spent more time explaining how the articles were put together and how they tie into each other and into the lab itself, rather than simplifying what was said in the articles. In the methodology the group gives a very good explanation of how they put together the scan with Nessus and classified the results according to the OSI model and McCumber’s cube (a rough sketch of what such a classification might look like is given after this comment). They explained all the steps in this process very thoroughly. The methodology was missing a lot of other parts, though. The group did not cover any stand-alone tools or how they tested them. Nor did the group describe how they were going to research the vulnerability databases called for in the last part of the lab. The group seemed to concentrate on just one section of the first part of the lab. The group could have added what terms they were going to use when searching for sites that have a current database of vulnerabilities and how they were going to test how much expertise each site required. In the results section of this group’s lab they start off by describing what they discovered while creating the table of exploits. The group does a nice job of describing how the tests were carried out and the results they discovered once the tests were done. They could have added some of this to the methodology section, though. The next part of the results section covered the last part of the lab and started off with a very nice introduction. They give a definition of what a vulnerability database is and how it can aid in penetration testing while at the same time enabling malicious use. The next part of this section describes how they went about finding the vulnerability databases and how they categorized them; this should have gone up in the methodology section of the paper. The group goes on to describe each of the four sites they found, explaining how they fit into the OSI model and McCumber’s cube using samples of the vulnerabilities found on each site. The group does a nice job of describing the pros and cons of each of the sites they researched.
    At the end of the results the group states that the sites did not share very many similar vulnerabilities, so penetration testers need to be aware of this and need to use multiple sites to stay current. In the issues section the group had trouble running some of the exploits with Nessus against the Windows XP SP3 virtual machine. They found out that the firewall was blocking them. Wouldn’t that tell them that, because the firewall was on and actively blocking the exploits, the targeted virtual machine was secure against those vulnerabilities, and that turning off the firewall defeated the purpose of the test? Also, if I remember correctly, the professor mentioned at the meeting we had that we only had to test against one machine. The group does do a nice job in the conclusion. They wrap up how they went about each section of the lab and give a brief description of the results. Then they explain what they learned in each part of the lab.
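
To illustrate the kind of classification being discussed, here is a minimal sketch of how scan findings might be tallied by OSI layer and by McCumber cube cell. The service-to-layer mapping and the example findings below are hypothetical placeholders, not the team’s actual data or scheme.

```python
# Hypothetical sketch: tally scan findings by OSI layer and by McCumber cube cell.
# The mappings and findings below are illustrative placeholders only.
from collections import Counter

# Rough service-to-OSI-layer guesses; real classification takes human judgment.
SERVICE_LAYER = {
    "http": 7, "smb": 7, "ftp": 7, "dns": 7,   # application-layer services
    "tcp": 4, "udp": 4,                        # bare transport-layer findings
    "icmp": 3,                                 # network-layer findings
}

# Each finding: (service, McCumber goal, McCumber information state, McCumber measure).
findings = [
    ("http", "confidentiality", "processing",   "technology"),
    ("smb",  "integrity",       "processing",   "technology"),
    ("icmp", "availability",    "transmission", "technology"),
]

layer_counts = Counter(SERVICE_LAYER.get(f[0], 7) for f in findings)
cube_counts = Counter(f[1:] for f in findings)

print("Findings per OSI layer:", dict(layer_counts))
print("Findings per McCumber cube cell:", dict(cube_counts))
```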

  3. The discussion in the introduction regarding the term ‘exploit’ could use some literature to back it up. The discussion of the three components that make up an exploit, according to your definition, could have been tied back to previous lab assignments. The literature review was merely a listing of the articles that were assigned for reading and a summary of each. The summaries were well written and complete but didn’t add much value to the lab. Tying the concepts into the lab exercises, and the discussion from the introductory section into the literature review, would make the content more relevant to the task at hand.

    The methodology section had good details about the environment used to do the initial testing for part one of the lab exercises and even contained a good pun. Part two was handled very briefly, though the example given at the end gave good insight into the procedures. It was nice to see the information from the introduction on the parts of an exploit brought back into the process. The results section was well written and described the process of testing the exploit tools very frankly. Instead of glossing over why some tools weren’t used or didn’t work, the authors plainly state the information and move on.

    The results of the vulnerability database research provide some interesting insights into alternative uses of these sources. By using them as a source of information in this lab, we see how they can be used for bad as much as for good. The discussion of how the databases were selected and how articles were picked at random would have fit better in the methodology section, where this part was handled sparsely. The percentages of layer 7 vulnerabilities, given with supporting literature, show good depth of research for this section. While the data was good, I think some of the core concepts of this section’s research were missed, particularly the expertise involved in finding and possibly exploiting the vulnerability data listed in the databases.

    The issue with the Windows XP SP3 firewall could use some further discussion. If it is on and you are unable to scan the machine, the vulnerabilities still exist within the system, but is having the firewall turned on a sufficient mitigation strategy? Having the firewall turned on would probably have saved thousands of users’ PCs from the Sasser worm. Would it be better, in the context of these lab exercises, to test with it on or off?

    The conclusion lacks cohesion between the various sections of the lab; instead, it states the main points and the fact that they were successful.

  4. The proper tags were not submitted with the lab report, which is part of the directions for submission. Like many other teams, this team uses the word “we” a lot. I would like to see “team” or “group” instead of “we”. The verb tense in the abstract is grammatical incorrect; since this is the abstract, it should be in the future tense. I found it interesting that the group called the OSI network stack model theoretical; what makes it theoretical? The group nicely tied some of their previous findings into their introduction paragraphs. A lot of terms have been defined. I am assuming that these definitions are in the group’s own words and not from some other source, since nothing is cited with them. If they are from another source, or are summaries of another source, please cite. The group’s last paragraph sounds like it belongs in the abstract or the methods section. The group’s literature review read and looked like a list instead of the cohesive format required. The citations for each article, which were not in APA 5 format, were placed after the section heading, which was the article title. If the following paragraphs are a summary of, or even text taken directly from, the articles, the citations go after all of that, not before. While the literature review does summarize the articles, it does not answer all of the questions that are required for a literature review. A literature review is more than a summary of an article; it is a critique of what was presented in the article. The group did talk about how the articles relate to the lab, but never stated how the articles relate to each other.
    It was nice to see that the group had various methods in mind for how to perform the laboratory experiment. The group explained why they chose the path they did, which is different from what the rest of the groups did. The group clearly stated what steps were taken to perform the lab experiment. I liked that the group went back to using the virtual Citrix environment instead of using their own machines, as they did for lab 3. Why is this group the only one that performed these attacks on multiple machines? Did the rest of the groups miss something that this group did not? I found the differences in the vulnerabilities from one operating system to the next interesting. I would like to have seen more screenshots, preferably within the methods section. Most of the groups found the same repositories for the databases that contain lists of vulnerabilities. One item missing from the results section is the findings on the level of expertise required by the exploit repositories, so part 2 is not fully answered. The group’s conclusion section is for the most part a rehash of the abstract rather than a discussion of their results. That said, I found it great that this group is the only one that noticed the circularity among the open-source exploit databases.

  5. As always, team 3 does an excellent job of writing their abstract in such a way as to let the reader know exactly how they are going to proceed with the lab. The introduction, as always, was very insightful. The literature review contains their discussion of sources and is organized by publication. The summaries were well written but didn’t really tie the authors’ ideas into the lab exercise.
    The methodology section was well written and had good details about the environment used to do the testing. The results section was well written and described the process of testing the exploit tools in much detail.
    Their vulnerability database research section was interesting in that it explained how the sources were used. However, as with some of the other teams, they spent a lot of effort describing the vulnerability database sources as opposed to detailing the levels of expertise needed to run the exploits listed on those sites.
    The number of vulnerabilities in layer 7 seems to fit with the findings of the other teams’ research.

  6. In the abstract section of the laboratory report, team three gave a detailed overview of what was to be accomplished in the laboratory assignment.

    In the introduction, team three gave a description of what an exploit is. The group states, “First, we must state that an ‘exploit’ is not inherently separate from other concepts such as ‘active’ or ‘passive’ reconnaissance, but can be a complementary part of these categories.” They went on to say that the term ‘exploit’ is simply a response to the concept of ‘opportunity,’ and therefore used this to develop a definition within the scope of the exercise. The team also described the traits of an exploit, in that it must have three necessary components: opportunity, employment, and gain.

    In the literature review section of the laboratory report, team three gave thorough summaries of the articles that were reviewed and related the articles back to the laboratory assignment. Team three did not seem to find any errors or omissions in any of the articles.

    In the methodology section, team three was able to find out which vulnerabilities affected which virtual machine by running Nessus against all of the virtual machines. This approach was used by other teams as well, including the team I am on. I was not sure why the team performed the step where they stated that “The tests were performed, and the result saved as an HTML report, which was then mailed to and accessed via an ‘offsite’ mail account for ease of examination.” The version of Nessus that my team was running allowed us to view the results within the Nessus program and save them so that the report could be referenced again in the future. Perhaps the configuration of the team’s version of Nessus was different from the one that was executed by my team.
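
As an aside, an exported report does not have to be sifted by hand at all. Below is a minimal sketch of summarizing a scan per host, assuming the results are exported in the .nessus (XML v2) format rather than the HTML report the team mailed offsite; the element and attribute names follow the common NessusClientData_v2 layout, and the file name is a placeholder.

```python
# Minimal sketch: per-host summary of a Nessus export, assuming the .nessus
# XML v2 layout (Report / ReportHost / ReportItem). File name is a placeholder;
# attribute names should be checked against the actual exported file.
import xml.etree.ElementTree as ET
from collections import Counter

def summarize(path):
    root = ET.parse(path).getroot()
    for host in root.iter("ReportHost"):
        items = list(host.iter("ReportItem"))
        severities = Counter(item.get("severity") for item in items)
        print(host.get("name"), dict(severities))
        for item in items:
            if item.get("severity") not in (None, "0"):   # skip purely informational findings
                print("   ", item.get("port"), item.get("svc_name"), item.get("pluginName"))

summarize("lab4_scan.nessus")   # hypothetical export file
```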

    In the results section team three listed Vulnerability Notes Database, Secunia, the National Vulnerability Database (NVD) and The Open Source Vulnerability Database (OSVDB) as on-line vulnerability identification sources. In determining the amount of expertise involved, team three concluded “Secunia was the only one of the four that appeared to have internal sources for vulnerability discovery and verification of both the vulnerability and the solution.” Team three did not appear to tabulate the vulnerabilities that were found online into a table based on the OSI model and McCumber’s cube.

    In the issues section, I did not understand what the team meant when they stated “Foremost, some vulnerabilities reported by NESSUS appeared largely ‘theoretical’ in nature, as no known exploit code was found capable of utilizing these vulnerabilities.” Nessus did find vulnerabilities in some services without going into much detail about them; is that what the team meant, that it gave no way to actually correct the vulnerability?

    In the conclusion section, team three came to a similar conclusion as most of the other teams when they stated “we have found the majority of vulnerabilities to lie in the OSI model upper layers, largely in layer seven; and to be overwhelmingly associated with the technology-processing subspace of the McCumber cube construct.”

  7. I think that group 3’s write-up for lab 4 was good. The abstract and introduction for this lab were very good. The literature review was reasonably good: the group answered all of the required questions for the literature review. Citations for the literature review were present but not proper throughout the lab; the literature review was cited properly except when page numbers were included, where the author and year of the reference should have appeared in addition to the page number. For part 1, many of the required sections were missing. The group basically ran a vulnerability scan against a target machine as their only form of research. The group’s findings lacked analysis of the scan, and most of what was included in their findings seemed more like methods. For part 2, the group did a good job of answering all of the required questions. However, the group only discusses vulnerability databases and not exploit-code databases. The conclusion to this laboratory was also well done because it accurately sums up their procedures and findings.

  8. The team starts off with their abstract and explains what is going to happen within the lab. They then describe what is going to be accomplished using different tools to test any exploits that they encounter. Next they go on to discuss possibilities for testing the systems. They define exploitation by their own standard to solidify their views going into this lab. One question that came to my mind as they went further into defining exploitation was: what if a user accidentally stumbled upon a vulnerability and did not know it? Would it be a form of exploitation if they modified or gained access just by accident? An example that I could think of would be someone using a “time machine” to go back in time and land on a plant, altering the outcome of the future. It is understood that the “time machine” would be the tool; the user or users, however, did not know that they would be altering the future by squashing a plant during their time travels.

    Next the group goes on to review the literature, and again they treat the pieces of literature separately. Yes, at the end they do discuss some of the overarching themes, but there are no cohesive arguments or thoughts connecting the literature. It makes the literature review seem robotic, as if sections of the papers were pulled out, listed, and then described. Comparing and contrasting in a cohesive rather than a split review would give readers the understanding that the team understands the topic and is not just listing details that have been found.

    The team then goes on to discuss the methodologies and processes used within the hands-on portion of the lab. This was one of the few reports that actually acknowledged testing more than one operating system and the different exploits found on each. They also stated that most of the vulnerabilities were found within the application layer of the OSI seven-layer model. I would have to agree with this point, as many of the attacks that exploit systems start at the application layer and then reach down. I am not saying that there are no attacks that exploit other layers; I just agree with what was found. Does the team feel that many attacks below the seventh layer are not discovered as often, or not detected? Or is it that many systems are more secure at the lower levels, but when it comes to the application layer developers do not keep security in mind during the development life cycle? The team then goes on to discuss the issues that they had with the lab and the problems with the firewall on Windows XP SP3. Does the firewall provide a false sense of hope in many cases? Once the firewall is down the attacks can really begin, and do the developers of updates keep these exploits in mind? The team concludes with what they learned from the lab and the finding that many of the exploits are found in the upper layers of the OSI seven-layer model. This was a good lab report; the methodology was the strong point of the report and has given this reader additional thoughts on the subject.

  9. The team started with a strong abstract indicating the key points of their laboratory. They explained that the lab was meant to explain the purpose of security exploits: they propose a theoretical definition and then demonstrate exploits on the virtual hosts. The team then has an introduction, followed by their literature review. Both are in depth and provide a great response. The team uses the phrase “One cannot win a contest which does not exist.” To explain this they use the example that a password sniffer is an attack acted upon the network which exploited the system. I have a few questions: why must the password travel through the network? According to the setup of your virtual machines, local logins seem to be used, and they do not require network authentication. The explanation does not seem to answer or explain the phrase. I might be missing something, but a clearer explanation would be that one cannot gather a password with a password sniffer because the environment does not require network logins, or because certain exploits require a user’s input, or perhaps a user’s carelessness.
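
To make the sniffer point concrete: a passive sniffer can only capture credentials that actually cross the wire, which is why purely local logins are out of its reach. The sketch below uses scapy and cleartext FTP logins purely as an assumed illustration; the team did not name a specific tool or protocol.

```python
# Minimal passive credential sniffer sketch (scapy and FTP are assumptions made
# for illustration). It only sees traffic on the wire; local-only logins never appear.
from scapy.all import sniff, IP, TCP, Raw

def show_ftp_creds(pkt):
    # Only cleartext FTP control traffic (port 21) is of interest here.
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt[TCP].dport == 21:
        payload = pkt[Raw].load.decode(errors="replace").strip()
        if payload.upper().startswith(("USER ", "PASS ")):
            print(f"{pkt[IP].src} -> {pkt[IP].dst}: {payload}")

# Requires elevated privileges; the BPF filter keeps the capture small.
sniff(filter="tcp port 21", prn=show_ftp_creds, store=False)
```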

    The team had a choice of which tool to use, Nmap or Nessus. The team indicates that they are going to use Nessus, like some of the other groups. I did wonder why the team decided to go with Nessus over Nmap. Since the team picked Nessus, they then went to the Nessus plugin listing. It was then discovered that this route was untenable because of the web interface, so they decided to use common vulnerability and exposure (CVE) listings to attack vulnerabilities. The team indicates that Nessus would be run on their Windows XP SP3 virtual machine, mainly because it was already installed. The team says the Nessus scan was run with all plug-ins enabled. A question that I have is: what is considered “all plug-ins”? Are there certain exploits that can be left out? What is considered “dangerous”? Does this mean that the plug-in is going to break the target system?

  10. @All: expertise is discussed, we just didn’t use the word. Perhaps it should have been made more clear.
    @nbakker: seriously, what is your obsession with abstract length?
    @mvanbode: Again with the tags? Really? Can you give me an example of any network stack that strictly follows OSI? You state, “The verb tense in the abstract is grammatical incorrect”. ‘Nuff said.
    @tnovosel: the table was irrelevant.

  11. In general, to all questions about turning off the firewall: the decision was somewhat arbitrary. We noted that neither of the other two Microsoft based machines had a firewall enabled, so we thought it fitting that the Windows XP SP3 machine also should be without a firewall. It also allows examination of the evolution of the ‘base’ OS with respect to Windows XP, which we thought was useful, if not directly related to the lab exercise. Additionally, I think it makes sense to examine the security status of the unprotected OS in that the firewall is essentially a first line of defense. Once the firewall has been disabled (which is by no means a rare occurrence), the core OS must ‘stand on its own’ against any attack being made. Is the firewall a ‘false sense of security?’ No, it is an important defense mechanism, but it does not allow one to ignore other issues present with the OS under examination. Defense in depth is a time-tested concept which ‘works.’

    @nbakker: The issues with regard to lab seven were also on my mind when these ‘short lists’ were discovered. It will be interesting to see how it all plays out.

    @jeikenbe, mvanbode with regards to testing multiple machines: was it wrong to test more than one machine? It didn’t seem to be that much more work, and it was something useful in planning for future lab exercises (i.e. lab seven).

    @tnovosel with respect to emailing results offsite: if you hadn’t noticed, we are not big fans of VMware Workstation via Citrix. The biggest problem in this case is lack of ‘screen real estate.’ It is simply easier to open up the results on a ‘big’ monitor with multiple tabs when sifting through them, versus trying to have multiple windows open in the tiny viewport presented by VMware Workstation. With respect to ‘theoretical’ exploits: as we were looking for attack methods (and not corrective measures, as you suggest), this meant that a search for attack programs addressing the vulnerability turned up nothing. Hence, exploitation of these vulnerabilities is mostly ‘theoretical,’ at least at this time, as there are no known working implementations of attack. We would put these in the ‘could work’ category of exploits.

    @prennick: We did a fair bit more than ‘just running a vulnerability scan’ as witnessed by our tables with specific attack methods discovered, and the descriptions of successful exploits using some of these means. What exactly do you mean by lack “…of analysis for the scan?” Should we have proposed reasons why some systems did not have certain vulnerabilities? I did not see this requirement in the lab instructions: indeed, this might have been considered ‘unrelated to the lab’ by some.

    @shumpfer: I’m not really following the “time machine crushes plant” example (this is science fiction: how is it anything more than raw speculation?), but as far as the ‘accidental’ nature of opportunity, I would suggest many vulnerabilities are found ‘accidentally’ or randomly (e.g. fuzzing program inputs, etc.): it really is a matter of what is done with this knowledge after it is discovered.
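
For what it’s worth, here is a minimal mutation-fuzzing sketch of that ‘accidental’ discovery process: random byte flips applied to a seed input and fed to a program under test, logging any crash. The target binary and seed file names are placeholders, not anything from the lab.

```python
# Minimal mutation-fuzzing sketch: flip random bytes in a seed input and watch a
# target program for crashes. The target binary and seed file are placeholders.
import random
import subprocess

TARGET = "./target_parser"                 # hypothetical program under test
SEED = open("seed.bin", "rb").read()       # hypothetical seed input (assumed non-empty)

for trial in range(1000):
    data = bytearray(SEED)
    for _ in range(random.randint(1, 8)):  # a handful of random byte flips
        data[random.randrange(len(data))] = random.randrange(256)
    try:
        proc = subprocess.run([TARGET], input=bytes(data),
                              capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        continue                           # hangs are interesting too, but skipped here
    if proc.returncode < 0:                # negative return code: killed by a signal (e.g. SIGSEGV)
        with open(f"crash_{trial}.bin", "wb") as out:
            out.write(bytes(data))
        print(f"trial {trial}: crash, signal {-proc.returncode}")
```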

    @chaveza: We arbitrarily chose Nessus because it was one of two valid options; need we rationalize it further? The various configuration parameters of Nessus are well documented: the ‘default’ settings disable any plug-ins which can take down the remote host. The ‘all’ is, well, ‘all plug-ins available.’ This becomes an important distinction from the ‘default’ configuration, which I believe is demonstrated by our results. The question on the ‘password’ scenario: it was a general example, and not specifically meant to relate to our test setup. It does not attempt to address ‘all’ password attack scenarios. I think what you are referring to is a ‘weak password’ exploit: this is using a different vulnerability than ‘unprotected’ passwords or data, and so is another category altogether.
