April 22, 2025

10 thoughts on “TECH 581 W Computer Network Operations: Laboratory 4, Team 2”

  1. Team 2 begins with an abstract describing the lab requirements. In the sentence “Also a knowledge of exploits and there target victims,” I believe the correct word would be “their” instead of “there”. There were a few other spelling and grammatical errors, but I am using this one as an example. They proceed to state that they will be using tools contained in BackTrack 3 as well as others that are not precompiled in it, that they will be attempting to exploit vulnerabilities reported by Nessus, and that they will also research publications of current exploits.

    They begin their literature review with a discussion of application vulnerabilities and then discuss the common theme of the reviewed articles, “performing a structured penetration test through proper documentation for the benefit of the vendors writing new or updating existing software”. The first article they discuss is Network Penetration Testing (He & Bode, 2005). They compare this article to Vendor System Vulnerability Testing Test Plan (Davidson, 2005) and discuss how both cover the steps in the process of penetration testing. I agree with that assessment. Although these articles were very informative, I found that the areas of process and documentation are the most applicable to our current lab. Network Penetration Testing (He & Bode, 2005) also lists a large number of tools and vulnerabilities that will be helpful in this and future labs. They proceed to review Red-Team Application Security Testing (Thompson & Chase, 2003). Although I feel that one of the major points of the article was the functional decomposition of applications to test each module separately, this review did not mention it at all. They seem to equate application testing to system testing and use that to find relevance to our current laboratory assignment. I believe that it can be summed up in a sentence from the introduction paragraph of the article, “In this article, we describe a methodology for finding the underlying causes of these vulnerabilities—bugs in software” (Thompson & Chase, 2003, p. 18). Since part of our laboratory assignment was to find stand-alone exploitation tools, compare them against the OSI model, and look for patterns, this statement is a hint that we may find most of them in the application layer.

    In the methods section they discuss the results of the Nessus scan from lab 3 and the vulnerabilities that each open port represents. They go into an in-depth discussion of what each open port does. They proceed to describe vulnerabilities that apply to each open port and discuss Medusa, which exploits services on port 139. They also discuss using Nmbscan, which will “show all domains, master browsers, and servers”. Nmbscan is more of a reconnaissance tool than an exploit tool. They did not document any other exploits. They list two web sites containing vulnerability data in this section, Bugtraq and SecurityFocus. They list four additional security and vulnerability databases in their findings section.
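
    The distinction between the two tools is easy to see from how they are invoked. The following is only a sketch with placeholder values; the target address, username, and wordlist are my assumptions, not values taken from the team’s report:

        # Hypothetical Medusa run: brute-forcing SMB credentials on port 139.
        # Target IP, username, and wordlist path are placeholders.
        medusa -h 192.168.1.10 -u administrator -P passwords.txt -M smbnt

        # Nmbscan only enumerates NetBIOS/SMB information (domains, master
        # browsers, servers); -a is, as I recall, the scan-everything flag.
        nmbscan -a

    Medusa actively attempts logins, which is why it counts as a password-attack tool, while nmbscan merely gathers information.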

    The descriptions of the various open ports and their associated services were very good. They also included a good discussion of the various vulnerability databases. They seem to conclude that the vulnerabilities listed in the databases are evenly distributed throughout the OSI model, whereas our own conclusions were that more than 90% fall in the application layer. Also, they only found and tested one stand-alone penetration tool to use against the vulnerabilities identified in their Nessus scan. Their document could also use some proofreading to eliminate some of the minor errors.

  2. Group 2 starts off with a decent abstract. The group talks about what is going to be done in each part of the lab. One thing missing from the abstract is some type of tie-in to the rest of the labs in this course. The group could have described how this lab relates to the other labs and what the overall goal of this lab was.

    The group did a nice job opening the literature reviews. The group explained how vulnerabilities will always be part of the process of developing software because of human error, and that these vulnerabilities can lead to people developing exploits to take advantage of them. The group then relates the articles given in this lab to this previous statement by explaining that the articles are about “performing structured penetration tests through documentation.” The rest of the literature review was done very well. The group takes each article, describes what it is about and where in the current lab it fits, and shows how the articles fit together in relation to this lab. The group also explains what each of the articles lacks and how they could have been improved. The group covered the literature review very thoroughly, except that I did not see anything that related to the research question of each of the articles.

    The beginning of the methodology section gave a brief overview of the whole lab and explained how the literature review tied into it. In the next section of the methodology the group gives their findings from a scan done by Nessus in lab 3, then gives a description of each of the ports that were open. I do not understand why this is in the methodology section; I believe it should be in the results section of this lab report. I did not find anywhere in the methodology a description of how they came up with the table of exploits given at the end of the report; there should have been a mention of how they created that table. The group did a nice, detailed job of explaining the process of testing tools, but did not explain what the purpose of these tests was. The last part of the methodology described the last part of the lab. The group covered almost all of this section of the lab and how they were going to go about accomplishing each part. The only part not mentioned was the explanation of the evaluation of the level of expertise involved.

    The first part of the findings describes what was discovered in the first part of the lab. This part seemed to lack much of the information needed to cover the questions given in the lab. The group failed to cover any discussion of the table that was created to show the relationship of the exploits in Nessus or Nmap to the OSI model and McCumber’s cube. The group does not even mention Nessus or Nmap; they cover a couple of other programs, like Medusa and PwDump. The questions in the lab ask about the exploits used in Nessus or Nmap, and those were not covered. The group did a nice job of explaining how they tested Medusa and the results they got from running it, but nothing more than that. The next part of the findings gives a good definition of what a vulnerability is. This part seemed out of place, though; it seemed it should have gone at the beginning of the results or even in the abstract. The rest of the findings were about the last part of the lab. The group discusses the different sites they found and did a nice job of describing the level of expertise of each site. They gave very good descriptions of each of the sites. Examining the findings in this group’s lab, it seemed that the group spent a lot of time on the last part of the lab and hardly covered the first part at all.

    In the issues section the group talks about how one piece of software was not installed in Backtrack. I think this should have been included in the methodology section of the report, explaining how they installed it and ran it. In the conclusion the group explains that they had trouble running a lot of the exploits due to not having the right services to use the exploits against and the lack of use of the computers in the network. This should not have been in the conclusion, but should have been brought up in the issues section. The second part of the conclusion focuses on the table created in the first part of the lab; I do not understand why this was not given in the results section. The conclusion lacked any closing on the lab. I did not see anything that talked about the overall experience of the lab and what was learned.

  3. The abstract is a good summary but contains a few spelling and word choice errors. The literature review is an excellent treatment of the topic, subject matter, and the assigned literature. The discussion of the papers focuses more on how the articles treat penetration testing than on how they relate to the topic of the current laboratory exercises. Penetration testing is important, but the discussion in the papers about how that relates to finding exploits, researching them, and ultimately fixing them is the real focus of these exercises. I had a hard time following the point of the third paragraph of the literature review about application development and testing. You mention the Thompson/Chase paper saying that security testing isn’t a part of the development process. Should it be? Surely in this security class we should decide whether or not we agree with this statement. The discussion of the ads that appeared in the Thompson paper was interesting. While the ad is mentioned as conflicting with the opinion of the paper, the authors don’t give their own opinion on this conflict. One thing to watch in future literature reviews is the “person” that is being used. It switches between singular and plural and makes it hard to read from the perspective that this was written by a group.

    The methodologies seem to be almost entirely skipped over, and the report moves straight into the discussion of the results before we even know what is being run and what it is being run against. What machine is Medusa being run against? Presumably this would be a Windows machine, based on the protocol that Medusa works with. How is the nmbscan tool an exploit tool? Would that be more of a reconnaissance tool if it discovers SMB servers on a network? The methodologies for the “third part” make no mention of the findings from the previous section and how they would tie in to this process.

    The findings section is as confusing as the methodologies. The findings for running the Medusa tool are given, followed by definitions of “vulnerability” and “exploit.” These would have been better in the literature review rather than the findings section. There is no mention of how the 21 CVE vulnerabilities were selected, and the decision to use Apple and Cisco vulnerabilities doesn’t fit with the lab environment. Were these tested at all on other equipment, then?

    The sentence in the “issues” section “Simply downloaded the [sic] RMP and installed using rpm -I and done” is really informal and grammatically incorrect.
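
    For comparison, the correct version of that step is short. The package filename below is a placeholder, since the report doesn’t give the exact file:

        # Hypothetical example: installing a downloaded RPM package.
        # -i (lowercase) is rpm's install flag; as far as I know,
        # uppercase -I is not a standard rpm flag.
        rpm -i nmbscan-1.2.5-1.noarch.rpm
        # Or, to upgrade if an older version is already installed:
        rpm -U nmbscan-1.2.5-1.noarch.rpm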

    The conclusions section misses a lot of key points that should have been drawn from the lab data. Simply missing a DHCP and DNS server in our test environment doesn’t mean that the test machines can’t be exploited.

  4. Among the commendable attributes of team two’s lab write-up, I found the discussion of the concepts of ‘vulnerability’ and ‘exploits’ in the ‘Findings’ section to be nicely worded. Additionally, I found the literature review refreshing in that it was not merely a summary, but a serious attempt to analyze the concepts in the articles and how they related to the lab exercise. The ‘Methods’ section was extensive, although some of the material appeared out of place. The screen shots were a nice visual addition, and the vulnerability listing table was nicely formatted.

    Some substantial deficiencies can be found in this team’s report, however. Foremost, the literature review, while admirable in conceptual aim, had significant issues with grammatical subject identifiers. The author(s) slipped between the first person singular and plural at seemingly random intervals. This was distracting, and likely out of place in a document purported to be of an academic nature. Certainly, a consistent use of grammatical rules would greatly improve this section.

    Additionally, I thought the ‘Methods’ section had material that properly belongs in the ‘Findings.’ Since using ‘Nessus’ to automate vulnerability detection on the target host was indeed a part of the methodology, it only seems logical that the results of this scan should be displayed in the ‘Findings’ section. This is a relatively minor detail, but one which should be an obvious target for improvement in future lab write-ups.

    In regard to the first part of the experiment, in the use of ‘Medusa,’ I must ask: did you really obtain any results? I see screenshots of the tools running, and saw a description of the tool’s capabilities, but observed no indication that a real ‘exploit’ was achieved. Granted, this is a contrived situation, as the share passwords were surely already known to the testers; but could a more realistic experiment, using a share password unknown to the ‘attacking’ part of the team, have been implemented? As it appears in the current report, nothing was really accomplished beyond running ‘Medusa’ against the target machine and sidestepping the issue of significant results. Was anything learned beyond what was already apparent from the reconnaissance scan; or furthermore, what exactly was ‘exploited?’

    Finally, perhaps the most significant criticism lies in the ‘self-selection’ nature of the vulnerability database content analysis. I believe the aim of the laboratory exercise was to determine, if possible, whether any pattern appeared in the database listings ‘in entirety’ with respect to the OSI model. Choosing an equal number of samples for each layer, or category, of the OSI model and then drawing general conclusions from those samples is a most grievous error in statistical analysis. I would submit that this team did not answer the question of ‘general patterns’ with respect to the OSI layers; furthermore, the assertion that ‘most of the listings are DOS exploits’ cannot be taken seriously. This assertion may be true of ‘your’ self-selected data, but it certainly cannot be applied to anything other than this. I believe these lab exercises leave much of the implementation choice up to the individual teams; but I see no real purpose or usefulness in the approach adopted by this team in the vulnerability database analysis portion of this lab; it is simply a ‘nice’ orderly list which proves nothing.
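
    To make the point concrete: an unbiased look for ‘general patterns’ would sample the listings at random rather than hand-picking three per layer. A sketch, assuming the listings had been exported to a text file (the filename and sample size are placeholders of my own):

        # Hypothetical: draw a uniform random sample of 50 entries from the
        # full exported listing, instead of choosing three per OSI layer.
        shuf -n 50 osvdb_listings.txt

    Classifying a random sample like this, then extrapolating, would at least give the team’s per-layer percentages some statistical footing.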

  5. One of the first items noticed about this lab report is the continuous use of the words “We” and “I”. The team should not be using the word “I” at all in a team lab report. Another issue was the writing itself: “Also a knowledge of exploits and there target victims” is not a complete sentence, besides the continued misuse of “there” for “their”. There were many inconsistent verb tenses throughout the entire lab report. While the abstract did state exactly what the team was going to do during the lab experiment, it read just like the objectives of the lab that were given to all the teams. The only exception is that the group does point out that they will be using Nessus and not Nmap. This lab report had one of this team’s best beginning paragraphs; they did not just dive right into the literature review, but first gave a brief synopsis of the main idea of all the articles. The citations for the literature review are still not in the proper APA 5 format. I am not too clear on what the following means: “This forms the basis of their article as well as the basis behind the other articles for this lab”. I do not think that all of the articles deal with the fact that IP networks are not sufficiently secure, but rather only the He & Bode article. It is true that the other articles hold that standard equipment and software are not secure enough, but the other articles are not only about IP networks.
    In the second paragraph of the literature review, the team states an opinion and then places a citation after it. Is the opinion in the article? The group states “This figure or flow chart…”, but they never give the figure number from the article, or even a picture of the flow chart. I found it interesting that the group talks about how some of the authors of the articles could have used the other required readings for this lab as a baseline for their methodology. I wish the group would have gone into more detail on why the authors should have read each other’s articles before beginning their work. Even though this literature review was somewhat cohesive, it still seemed like a list. I had a hard time getting past some of the grammatical errors, for example “The only real issue I saw with the article was on how they chose no list possible expansion into non-SCADA systems (Davidson, 2005)”. Please proofread your paragraphs before submission. The group has a decent methods section, but it loses readability when listing all of the ports; the separated information could have been combined into one paragraph. I do not think that the group clearly finished part 2 of this lab. The level of expertise is only discussed for one of the sources of exploits; all of them need to be discussed. While the group did put exploits into the proper grid, they did not draw any conclusions from it, nor did they have a sample to prove their case. While part 1 was just about all there, most of part 2 was not. Once again, please proofread your lab report before submission.

  6. The abstract is a good summary of what team 2 intends to do. Spell check and grammar check should be used before submitting your paper. The literature review contains your discussion of sources and is organized by publication. The literature review is a good summary of what the authors are trying to convey to us; however, I do not believe team 2 does an adequate job of tying the articles back to how they relate to the lab exercise.
    The methods section is very short and seems to blend in with their results section. This made the paper hard to follow. Part 2 seems to be lost in the findings section. Again, as with some of the other teams, they take the route of explaining what the vulnerability databases do as opposed to the level of expertise needed to run the tools on the listed sites. I don’t think we needed a definition from Merriam-Webster as to what vulnerability and exploit are. Rather than extensively describing what the vulnerability sites were used for, research into what levels of expertise were needed to run the exploits on these sites should have been provided.
    Their conclusions were good, and I agree with their thoughts on the third conclusion; however, that conclusion should be a given considering the number of courses taken with Professor Liles.

  7. Team two gave a detailed overview of the laboratory exercise within their abstract section.

    In the literature review section, I was not sure what article team two was referring to when they stated “This figure or flow chart seems to be a visual representation of what James R. Davison wrote about in his 2005 article on a Vendor System Vulnerability Testing Test Plan”. I agreed with team two’s analysis of the article Vendor System Vulnerability Testing Test Plan, in that, as team two stated, “The article lists a very structured and complete documentation path for performing those tests, and is something that we should most likely look towards for our own red-teaming exercise at the end of the semester.” I had noticed that other teams interpreted the article as being somewhat unscholarly. Team two described the article’s omission as “the parts that are truly lacking from his article are the decision to find or develop a new tool, and a good scoring system to assign to each feature to test in terms of importance.” However, I must disagree, because the development of a new tool was out of the scope of the paper, for existing tools were to be used.

    In the methods section, when team two stated “The Nessus scan in lab 3 reported that target machine having five open ports and seventeen low vulnerabilities,” they did not specify what virtual machine was being targeted. Since they listed the open ports as Port 137-UDP netbios-ns, Port 445-TCP Microsoft-ds, Port 135-TCP epmap, Port 123-UDP NTP, and Port 139-TCP netbios-ssn, it was some type of Windows machine, but they did not specify whether it was Windows Server 2003, Windows Service Pack 0, or Windows Service Pack 3.
    Some of the tools that were used to find similar exploits included Medusa and Nmbscan.
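
    As an aside, the question of which Windows machine was targeted could have been settled with an OS fingerprinting scan. A sketch only; the target address is a placeholder of mine:

        # Hypothetical check: nmap OS detection (-O) plus service version
        # detection (-sV) would distinguish Windows Server 2003 from a
        # Service Pack 0 or Service Pack 3 target.
        nmap -O -sV 192.168.1.10
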
    In the findings section team 2 listed the online sources they found for identifying exploits, which included the Common Vulnerabilities and Exposures (CVE), the National Vulnerability Database (NVD), the US-CERT Vulnerability Notes Database, and the Open Source Vulnerability Database (OSVDB). Team two did not describe the expertise involved with the different sites. Team two did not place the exploits that were discovered on the Internet into the lab 1 style table/grid.

    In the issues section team two stated, “There were some tools that were not complied [sic] in Backtrack, such as NMBScan,” but mentions that the software is available under Information Gathering. My team has also come across problems installing some tools in Linux environments. I do not know why they cannot install as easily as web browsers such as Mozilla Firefox or SeaMonkey, or as Windows applications do. That is one reason why UNIX or Linux will not be replacing Windows anytime soon in the mainstream.

    In the conclusion section, I had to disagree with the statement “The first conclusion we drew was that finding vulnerabilities for each layer of the OSI model was not difficult, as three are listed for each layer.” Most of the vulnerabilities were found in the Application layer of the OSI model.

  8. I think that group 2’s write-up for lab 4 was very good. The abstract for this lab was good and accurately described the laboratory. The literature review was good and adequately reviews the material. Group 2 answered all of the required questions for each reading. All of the citations for the literature review were done correctly. For part 1, the group answered all of the required questions and looked at and tested many different stand-alone tools to back up their claims. Part 2 answered all of the required questions as well; however, the group did not find exploit databases but rather vulnerability databases. Even still, the group actually discussed exploit code and how it differs from a vulnerability. The group also included a very extensive table that indicates many vulnerabilities and how they relate to the McCumber Cube. Finally, the conclusion was written well and accurately sums up the laboratory.

  9. Team 2 starts off with their abstract, defining what is going to be accomplished within the lab and what tools they will be using. They chose Nessus and Backtrack 3, based on what they had learned in the previous lab. They then go on to their literature review, and in the first paragraph discuss security vulnerabilities and how they present issues to systems and services. Then they discuss the ranking system within the different articles. This raised the question: what does the team think would be a good system for ranking vulnerabilities? Would it be useful to include the McCumber cube coordinates and OSI seven-layer model locations?

    They then go on to the methodologies and describe what they are going to be working with in this section. They were smart in taking the same exploits found in the last lab and reusing them here, along with this information for the additional tools. But was Nessus up to date enough to find vulnerabilities within the system? Also, were any other tests run against the operating systems within their virtual environment? They then go on to describe the databases that they had found and discuss each of them. With all the different databases that are available, does the group believe there should be a single standard database? Or is it good that there are numerous databases, some paid and some not? The group also explains that some of the tools do not work the same because they are in a controlled environment. Would the teams get different results if these networks were connected to the Internet? They then go on to discuss the issues that they had with Backtrack and the limitations they ran into when they tried to use some of the tools.

    At the end they conclude their lab with what they had found for the various layers of the table. When looking at the vulnerabilities for the applications and operating systems, what is the biggest issue across them all? Could some of the problems that each group found be resolved before release? One opinion holds that when teaching programming, security should be considered throughout the development life cycle as the application is built (http://www.devx.com/security/Article/30637). Is this a good idea for future programmers to keep in mind? Overall the team did a good job and met most of the lab requirements; there were just a couple of things missing. I noticed that only one operating system was tested, or at least that is what I gathered from the reading. Were the other operating systems tested? If they were, it would be good to see any differences between them.

  10. Team two’s abstract feels like a list of learning objectives. I wouldn’t read any further if I weren’t required to. Watch your syntax; poor grammar and spelling hurt the overall delivery.

    I understand that the team is trying to integrate the articles into one cohesive literature review, but the way your discussion bounces between them is confusing. What do you think of what He and Bode have to say? Is it of any value? Are they on the right track? You transition quickly and rather poorly into Davidson’s article. The process Davidson lays out is as dissimilar to He and Bode’s flowchart as it is similar. I don’t think Davidson was talking about just reconnaissance with the baseline test; I think he was looking at a full penetration test of the system with the manufacturer’s default settings. Why are we assuming Davidson didn’t look at Thompson and Chase? Are you saying that Davidson didn’t have a vulnerability scoring mechanism in place? Did you read to the end of the article? I think you are trying to make artificial comparisons between Davidson and Thompson and Chase. They were really doing two separate but related things.

    I have no idea what the group is trying to accomplish in the first part of the methods section. What am I looking at? What’s the point? I could reproduce the steps in Backtrack, but what am I attacking, and again, why? The methodology for part two of the lab is better, but still not entirely clear.

    Your findings for the first part of the lab are as bad as the methods section. Retracting information? Do you mean retrieving? I understand you used Medusa. Shouldn’t you have explained (clearly) how it works in the methods section? I think you were also trying to explain the lab setup. This too belongs in methods. What was the end result? What did you get from performing the tests you theoretically outlined in the methods section?

    When discussing part two, you keep referring to Bugtraq as a code repository. Is it really or is it something else? I like the table you created for part two, and I agree with the majority of your findings listed in it. The problem I see comes from the team looking for exploit code in the methods, but ending up with vulnerability databases in the findings. You attempt to define vulnerability and exploit and then appear to use them arbitrarily for the same thing. Is there a difference?

    The issues section is garbled. Something was wrong with Backtrack and you had to recompile? The conclusion overall is simple but well stated. It lets the reader know what you learned and what was accomplished. Your conclusion for part one of the lab is surprisingly good given the rest of the information. In part two, I think your methods may have led you to false conclusions regarding vulnerabilities. While denial of service is surely common, what about exploits used to gain access (authentication) and/or breach confidentiality?

Comments are closed.