April 22, 2025

9 thoughts on “TECH 581 W Computer Network Operations: Laboratory 4, Team 1”

  1. Team one begins their lab with an abstract that meets the requirements of the syllabus, both in explaining what is going to be done and in length. The abstract listed all the steps of the lab that will be completed without sounding like it came directly from the lab design document. Upon completion of the abstract the team goes directly into the literature reviews. The introduction to the literature review does not flow with the rest of the section. It explains how the literature review was used in relation to the lab and nothing more, and its explanation of how NMAP was used over NESSUS should be in the methods or findings rather than the literature review. In the literature review team one does tie the lab to the individual articles, but there is little cohesion between the three (or four?) articles reviewed for this lab. They seem to use “et al.” for any source that has more than one name associated with it. I’m not sure on this, but if there are only two authors, “et al.” is probably overkill, unneeded, and against APA style rules. Please check the latest APA rules online and confirm this. They close out the literature review by saying that all articles relate directly to the steps of the process in the lab design. I disagree with these findings. The topic of the lab was researching exploits, and while the Thompson & Chase article as well as the Davidson article explain a process of documenting a path to performing a penetration test, the third article primarily lists exploit tools. While researching exploits should’ve led the team to the tools, they really do not directly relate to the results of the lab. The methods section of the lab is short almost to the point of nonexistence. 
This is not a scholarly or academic approach to listing the strategy and technique used in completing the lab, and the methods are also broken into sections, making cohesion in this section impossible. In the findings section I agree with their opening statement that NMAP is a recon tool and not an exploit tool. I question, however, that it took team one until now to realize that, making them switch tools to ettercap mid-lab. Based on the previous labs they should have already known this and not used NMAP as an exploit tool at all. They list strategy in their findings section related to running an actual test attack, which should be in the methods section. Their first table has no label, making it hard to ascertain its actual use, and it is mostly empty, making me question their completion of part 1 of the lab. They list a number of supposed vulnerability databases in their part two research, but said research is not formatted well, nor is there a table supporting any conclusions based on sampling possible exploits. They fail to list the OSVDB; since it was in the literature, I question the actual research that went into selecting vulnerability databases. Their conclusions are complete, and I agree with them, except for the final part about noticing that each lab builds on the previous one; that should’ve been apparent from the beginning.

  2. Team 1 begins with an abstract of the lab that describes the tasks in the lab assignment. They list their objectives for this lab in two parts. Part one is to find standalone tools that work with nmap. Part two is to find depositories where exploit code can be found, compare them to the seven-layer OSI model, and find any patterns that may exist. They discuss their decision to use nmap, namely because the team members are familiar with its operation.

    The next section is a literature review. It begins by giving a general description of the readings as dealing with red teaming and penetration testing. They begin with the article Red-Team Application Security Testing (Thompson, Chase, 2003). They make the statement “the authors setup a plan for red teaming, just like the team did for this lab”. I would have to disagree with this statement since this article discusses decomposing applications into functional components and testing each individually. That is something which we are not going to be able to do in this lab. In fact, the article states “Using firewalls and testing at the network layer is not the answer.” This statement goes against our method of conducting testing by performing network scans to identify vulnerabilities. Perhaps a better assessment might be that it establishes the need for testing at the application level, since we’ve found in this lab that this is where most of the vulnerabilities originate.

    It continues by reviewing Network Penetration Testing (He, Bode, 2005). They relate this to part two of our lab by stating that the article lists making a list of vulnerabilities and exploits as one of the basic steps in red teaming. Part two of this lab was to research exploits. I believe there were several other ways in which this article could assist us with this lab. The most obvious is the list of published-vulnerability web sites that are available (He, Bode, 2005, p. 4). This article also includes a listing of penetration testing tools and known exploits that may assist us in future labs.

    The next article that they reviewed is Vendor System Vulnerability Testing Plan (Davidson, 2005). I agree that this article differs from our current laboratory assignment in that it applies to SCADA/EMS systems. This article does serve as a good example for organizing, scheduling, and documenting penetration tests. These concepts will assist us in our labs and in ‘real world’ testing.

    In the findings section, they discuss the conclusion that nmap does not actually perform any exploits. It just sends ICMP and SYN packets to a host to determine what services are available on that host. They compare nmap to “knocking on a door” to determine vulnerabilities. This, I feel, is a good analogy of port scanning. They include a brief explanation of how operating system fingerprinting works within nmap, and how this process can be considered an exploit (a denial of service attack) on rare occasions because certain systems will fail if they receive a malformed packet. They also include a discussion of how nmap can spoof IP and MAC addresses. They proceed to include a script for using nmap and Ettercap to perform MAC address spoofing. Although they provided an interesting discussion concerning whether or not nmap can be considered an exploit tool, I believe they missed the point of the laboratory assignment. My understanding of the laboratory assignment was to use nmap or Nessus to scan the network and determine what vulnerabilities are present, then to locate and test “stand alone” tools that exploit these vulnerabilities. Although they state their objective at the beginning of this lab report is to “find stand alone tools that work with Nmap”, they didn’t find any exploit tools for the exploits that nmap discovered.
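
    The “knocking on a door” analogy can be made concrete with a minimal sketch, a plain TCP connect check in Python (a simplification: nmap’s SYN scan uses a half-open handshake rather than a full connect(), and nothing here is taken from the team’s actual lab):

```python
import socket

def knock(host, port, timeout=1.0):
    """'Knock' on a TCP port and report whether anyone answers.
    nmap's SYN scan makes the same probe with a half-open handshake
    instead of this full connect()."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"            # SYN/ACK came back: a service is listening
    except ConnectionRefusedError:
        return "closed"          # RST came back: host up, nobody on that port
    except socket.timeout:
        return "filtered"        # silence: possibly dropped by a firewall
    finally:
        s.close()

print(knock("127.0.0.1", 9))     # port 9 (discard) is usually closed on a workstation
```

    The three return values mirror nmap’s open/closed/filtered port states, which is exactly the “is anyone home?” information the analogy describes.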

    They then discuss seven websites that contain information about security exploits. For each, they give a brief discussion of the skills needed to perform the exploits listed on the web site. They do not list any of the specific tools found on the different web sites. They do state that most of the exploits are in layers 6 and 7 of the OSI model; however, they do not provide any statistics to support this claim.

  3. This group starts off with an abstract that ties into the rest of the previous labs in class and briefly describes what is going to be done in this lab. The group decided on using Nmap for the examination of the types of exploits used in Nessus or Nmap. They chose Nmap because the group was already familiar with it. I believe that the group could have done less describing of the lab and more explaining of why this lab is important and how it pertains to the whole class. Next the group goes into their literature review. At the beginning of the literature review the group does a nice job explaining how this lab ties into the previous labs and what this lab is about. They also tell how all the articles tie into each other by explaining that the articles are about setting up a penetration test. Throughout the paper I keep seeing grammatical errors, but not too many. The group steps through the lab and ties the papers into how they apply to that part of the lab. The group first describes how the article Red-Team Application Security Testing (Thompson, Chase, 2003) is similar to how the group is to test and verify the tools that they find in this lab. The group’s literature review continues on about each of the articles given in the lab. The group does a great job in explaining how each article pertains to the lab. They also briefly explain the research that the writers did on the article. The group also gave a description of the theme of the article along with the explanation of how it ties into the lab. The group did leave out whether any of the articles had a research question, the methodology of the paper, and any errors or omissions. Next the group gives their methodology for this lab. They explain that in part one, the exploits used by Nmap were researched and categorized in accordance with the seven-layer OSI model. The group does a good job in describing how they set up the machines that they were going to use to research the exploits that are used in Nmap. 
They also did a decent job in explaining how they found stand-alone tools and tested them against their systems. They did not go into any detail about how they went about the previously mentioned test. Then the group talks about how they went about accomplishing the second half of the lab. They missed out on some details of how they did this. The group could have described how they determined if a site was worth using. They also could have explained how they determined the level of expertise of the site. The group could have done a much better job in explaining the methodology of the second part of lab four. Next the group gives their findings. The group explains that Nmap does not actually use exploits, but rather gains information by sending ICMP and SYN packets and studying the responses sent back. The group then justifies that Nmap is not using exploits because it uses legitimate means to gather the information that it needs to determine a way to use an exploit to gain access to that system. The group does a nice job in giving an analogy for this by comparing gaining access to a system using Nmap with gaining access to a house by first knocking on the door to see if anyone is home. The group then explains how Nmap does use a limited number of exploits in determining what operating system is on a system and in spoofing MAC addresses and IP addresses to gain information. The group explains that in order for Nmap to determine the operating system, Nmap modifies a packet with certain flags that different operating systems respond to in different ways, thus exposing the operating system. The group then explains how almost all operating systems would be vulnerable to Nmap’s exploits, because they target TCP/IP. The group also does a good job in explaining ways to avoid giving away information when Nmap scans a system and how to detect if Nmap is scanning a system. 
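
The fingerprinting idea described above can be sketched as a tiny signature lookup. The table below is hypothetical and vastly simplified compared to nmap’s real nmap-os-db matching, though the default TTL rules of thumb (64 for Linux, 128 for Windows) are genuine:

```python
# Hypothetical, heavily simplified signature table. Real nmap fingerprinting
# compares dozens of attributes of the responses to oddly flagged probes
# against the nmap-os-db database; here we use only reply type and TTL.
SIGNATURES = {
    ("RST", 64): "Linux (guess)",      # Linux stacks default to TTL 64
    ("RST", 128): "Windows (guess)",   # Windows stacks default to TTL 128
}

def guess_os(reply_type, ttl):
    """Match one probe response against the toy signature table."""
    return SIGNATURES.get((reply_type, ttl), "unknown")

print(guess_os("RST", 64))    # a TTL-64 reset suggests a Linux stack
print(guess_os("RST", 255))   # unrecognized signature
```

This also shows why the group’s hardening advice works: a stack that normalizes or suppresses its responses to unusual flag combinations simply never matches a signature.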
Next the group does a very good job in showing how Nmap can be incorporated into another program as a command-line command in a script that does a specific job. The group shows this by creating a script that combined the use of Nmap and Ettercap. Nmap is used in the script to spoof an IP address and a MAC address as the source addresses that Ettercap uses to perform an attack. The group then summarizes part one at the end of the findings for part one. There the group explains that Nmap is not a tool that uses exploits but rather one that runs attacks that accumulate information without compromising the target system. The one exploit that Nmap does use is IP/MAC address spoofing. It was also shown that the exploits were effective, for several reasons that the group gives in the paper. Because of the small number of exploits found in Nmap by the group, the table that was created did not contain very much. Next the group gave their findings on the second part of the lab. In this part the group gives eleven sites that provide lists of current exploits. For each site the group gives the level of expertise the user needs in order to use these exploits. This part of the lab could have been interpreted in a couple of ways. One way would be the way this group interpreted it: the level of expertise of the user of the exploit. The other way to interpret the step would be the level of expertise of the company or individuals that created the list of exploits and how they described it. The group then finishes the results by explaining that the exploits that were found affect layers six and seven of the OSI model. The group then points out that if you can affect the lower layers, like the physical layer, you can have more control of the system than if you were to affect the upper layers. On the other hand, if the upper layers are not secured correctly, the lower layers will fail. The group had almost no issues. 
The only one was a problem finding non-vendor-specific depositories for part two of the lab. Last, the group gave a conclusion on the lab. The first line of the conclusion seemed as though it was not complete. They claimed that most of the exploits existed around the sixth and seventh layers of the OSI model. They also claim that most of the exploits used in Nessus and Nmap are “man in the middle” attacks and are not necessarily attacks but a way to gather information. They also showed where this lab fits into the class compared to the rest of the labs.
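
The layer-six-and-seven claim discussed above is exactly the kind of thing a short tally could have supported. A sketch with made-up sample data (the exploit names and counts are illustrative only; real entries would come from the repositories surveyed in part two):

```python
from collections import Counter

# Hypothetical sample: exploit categories mapped to the OSI layer they target.
sample_exploits = [
    ("SQL injection", 7), ("cross-site scripting", 7),
    ("web server buffer overflow", 7), ("SSL downgrade", 6),
    ("malformed image decoder crash", 6), ("TCP session hijack", 4),
    ("ARP poisoning", 2),
]

def layer_tally(exploits):
    """Count how many sampled exploits target each OSI layer."""
    return Counter(layer for _name, layer in exploits)

tally = layer_tally(sample_exploits)
print(f"layers 6-7: {tally[6] + tally[7]} of {len(sample_exploits)} sampled exploits")
```

Even a rough table like this, built from a sample of each repository, would have turned the group’s assertion into a supported finding.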

  4. The literature review is a good improvement over previous labs but still has some issues. In the first section of the literature review the authors mention an article on red teaming and say it relates to the activities in the lab exercises, but they don’t go into enough detail about how researching exploits equates to red teaming. The relationship of the Network Penetration Testing paper to the lab activities was handled much better, matching it to the specific part of the lab where it was useful. I don’t think it’s true that the Davidson article wasn’t related. The Davidson paper, while analyzing a SCADA system, still provides a good framework for a test plan that relates to the plans presented in the previous two articles. The review of Creating the Secure Software Testing Target List just misses the point of the lab exercises. It was even set up with the quote given about “standardized knowledge” regarding vulnerabilities; that’s precisely what a vulnerability database is. This section of the lit review could’ve been tied into the other papers and the lab exercises a little better.

    The methodologies section was very brief. While there was a short mention of the activities that were going to be performed, there wasn’t enough detail to make this reproducible. For part one, the use of Wireshark from the BackTrack VM wasn’t necessary for these lab exercises; if it was going to be used for a specific part of the vulnerability testing, that should’ve been mentioned. The second part was even more brief and missed the heart of the lab exercises: tying the information from part one to part two. A minor word-choice error in discussing the vulnerability databases: “repository” would be a better choice than “depository,” though that could be because two of you are bankers.

    The findings section had much more detail than the methodologies section. Quite a bit of the information found in this section could’ve gone in the methodologies section instead, particularly the script used for nmap. The use of nmap and the output described in the findings don’t really appear to fall into the scope of the lab. How can this be related to a specific vulnerability in a public vulnerability database? I agree with the statement that this isn’t actually an exploit but, rather, an information gathering tool. How is spoofing an IP an information gathering tool? If you’re spoofing the packet with another IP and MAC address how are you going to get the reconnaissance data back? The list of vulnerability databases was pretty extensive and many of them mentioned high levels of expertise in many areas. What could be concluded from this? The conclusions section is very brief and doesn’t tie a lot of the lab data together.

  5. I found a number of positive aspects of this team’s lab write-up worthy of mention. I admire the bold move to analyze ‘Nmap’ as an ‘exploit tool,’ certainly this is a departure of note as compared to all of the other teams performing the exercise. The definition of ‘exploit’ and the rationalization of the term with respect to ‘Nmap’ showed a nice amount of creativity. I also thought the literature review to be reasonably well written, with tie-in to the laboratory exercise noticeable for each article. Finally, I found the inclusion of an ‘attack script’ an interesting detail: it was informative, perhaps in more ways than first apparent.

    A number of problems were apparent in this report, though. I believe that, as the team found little to address as far as ‘exploits’ within this exercise, the bulk of the write-up essentially became an exercise in ‘self-justification’ as to how this examination of ‘Nmap’ was significant with regard to the initial instructions given in the lab. I question the assertion that ‘setting the flags in a packet header’ could be considered an exploit ‘in some cases’ due to issues with ‘malformed packets.’ I think this is a stretch: though a danger might exist for certain systems, the primary goal of ‘Nmap’s’ fingerprinting is not to take down the host; this is more or less an accidental occurrence. It becomes even more obviously a ‘reach’ if one notes that the reason ‘Nmap’ is being run on a network is that the number and configuration of hosts on the network is at this point ‘unknown.’ If someone should connect to a busy server, and by doing so cause an overloaded service to crash, would this chance occurrence also qualify as an ‘exploit?’ I believe that for an exploit ‘to occur,’ there must be intent associated with a known opportunity; otherwise such things as cosmic rays corrupting memory might be considered an ‘exploit.’

    The experimental setup I judge at the very least to be ‘contrived.’ The ‘Nmap’ documentation from http://nmap.org/book/man-bypass-firewalls-ids.html indicates that with a spoofed IP address “… you usually won’t receive reply packets back (they will be addressed to the IP you are spoofing), so Nmap won’t produce useful reports.” How, then, is this script “tested to be working” as you describe? Does it work by using ‘Nmap’ to detect broadcast packets triggered by the port scan? If not, and the packets never return to the ‘Nmap’ tool host, how is it that a list of hosts is generated at which to target ‘Ettercap?’ While I respect the proficiency indicated by this script, I confess doubt of its real usefulness. For a better effect, one could in theory use another host which has the actual spoofed IP address to redirect the packets back to the ‘Nmap’ tool host. In fact, the machine running ‘Wireshark’ might prove an ideal host to use in this ‘spoof and redirect’ scheme. As it is, with the limitations experienced in the last lab due to switched networks, I wonder how this team’s test configuration for this lab was actually implemented; this was also unexplained.
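
    The objection can be shown with a toy model of where replies travel (the addresses and the all-ports-open assumption are illustrative; this is not the team’s script):

```python
def run_scan(scanner_ip, claimed_source, open_ports):
    """Toy model of a SYN scan. The target addresses its SYN/ACK to the
    packet's *claimed* source field; the scanner only learns about ports
    whose replies actually come back to it. For simplicity every probed
    port here is assumed open."""
    results = {}
    for port in open_ports:
        reply_goes_to = claimed_source   # replies follow the source field
        if reply_goes_to == scanner_ip:  # otherwise the SYN/ACK is lost to us
            results[port] = "open"
    return results

# Honest scan: replies return to the scanner, so ports are reported.
print(run_scan("10.0.0.5", "10.0.0.5", [22, 80]))
# Spoofed scan: replies go to 10.0.0.99, so the scanner sees nothing,
# matching the documentation's "won't produce useful reports."
print(run_scan("10.0.0.5", "10.0.0.99", [22, 80]))
```

    The ‘spoof and redirect’ scheme suggested above amounts to controlling the host at the claimed source address so the lost replies can be forwarded back.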

    I also wonder at the assertion made in the last paragraph of the ‘Findings: Part 2’ section. The implication of the statement made is that ‘upper layer flaws’ of the OSI layer would be hard to “defend against …at lower levels.” Is this not the function of firewalls, where ports can be monitored and even completely ‘sealed’ from outside usage at the transport level? I don’t quite understand this section (perhaps due to the poor grammar); certainly, vulnerable ‘high layer’ services are a problem, but these problems ‘can’ be addressed from a ‘lower level,’ even if it means simply concealing the running service from exterior utilization.

  6. In the abstract section of the laboratory report, the statements “This lab will build on the research and findings from the previous labs” and “This lab will go into more detail about some of the exploits researched previously” seemed too vague.

    In the literature review section, team one did a good job analyzing the article Red-Team Application Security Testing. The team was able to point out an important omission: the authors did not have any references. The team was able to relate all of the articles to the laboratory exercise. In regards to the article Vendor System Vulnerability Testing Test Plan, I could not figure out why the team stated “it seems that is actually less of a scholarly article but rather a document to be used by a penetration tester and can be used by any organization that uses a SCADA system.” The document had a methodology to it, but did not actually include the results of the experiments.

    In the method section, I was not sure why group one chose to capture all of the traffic on another virtual machine running BackTrack 3 using Wireshark. This was a requirement for a previous lab, but it was not required for this lab exercise.

    In the part 1 findings section I had to disagree with the statement “When considering the McCumber Cube, Nmap does not attack confidentiality, integrity or availability but rather confirms availability (for the most part).” A reconnaissance tool such as Nmap, which does not attack a system but reports information about it, would still reveal what ports are open on a particular machine, and thus reveal information that would be confidential. However, the group is right that it does not directly attack any of the three pillars of security. The findings section seemed contradictory at times. In the first part of the findings section, the group stated “After researching Nmap’s exploits, it became apparent that Nmap, while it performs many functions, does not actually perform many “exploits”.” However, in the third paragraph, team one stated “When considering the Nmap scanning tool, there are some features of tool that can be classified as exploits.” The script that was created by team one was impressive; it combined Nmap with Ettercap. The group stated “The above script allows an attacker to run an Nmap scan, which offers better results and firewall evasion, instead of using Ettercap’s built-in host discovery.”
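
    As a rough guess at the shape of such a wrapper (the team’s actual script is not reproduced in these comments), one could build the two command lines like this. The nmap flags shown (-sn, -S, --spoof-mac, -e) are real options; the Ettercap target syntax varies by version, so it is left out and should be checked against the man page:

```python
import shlex

def build_commands(spoof_ip, spoof_mac, iface, target_net):
    """Build, without executing, an nmap host-discovery command using a
    spoofed source IP and MAC, followed by an Ettercap ARP-poisoning
    command. A hypothetical sketch of the pipeline the team describes."""
    nmap_cmd = (f"nmap -sn -S {spoof_ip} --spoof-mac {spoof_mac} "
                f"-e {iface} {target_net}")
    # -T: text interface, -i: interface, -M arp: ARP man-in-the-middle mode.
    # With no target specification some Ettercap versions poison the whole
    # LAN; consult the man page before running anything like this.
    ettercap_cmd = f"ettercap -T -i {iface} -M arp"
    return [shlex.split(nmap_cmd), shlex.split(ettercap_cmd)]

cmds = build_commands("10.0.0.99", "00:11:22:33:44:55", "eth0", "10.0.0.0/24")
```

    Separating command construction from execution also makes the root check the team mentioned easy to place: verify privileges once, then run both commands.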

    In part 2 of the findings section, there was not a table containing some of the exploits that were discovered on the numerous sites researched by the group. The group interpreted expertise as the skill level required to use the exploits. Other groups interpreted expertise as the quality of the description of the exploit, as some sites gave only mediocre descriptions of the newly discovered exploits. Some of the sites that were chosen included Public Advisories List/iDefense, Microsoft TechNet, Secunia, Security Focus (Bugtraq), Argeniss Information Security, Application Security Inc., Red Database Security, SecuriTeam and Security Docs.com.

  7. The team starts off with their abstract explaining what is to be done and the tools that they will be using. They decided to use NMAP as their main tool for researching vulnerabilities within systems. The team then goes into their literature review and describes how the lab and the pieces of literature combine. At the end of the literature review they state that the articles were cohesive in how the lab was to be performed. They then go on to the methods and explain what tools were being used and against what system. One thing I noticed here is that later in the lab they have bash scripting for the root user, but Linux is not described within the first part of their methodologies. They then go on to describe what the second part of the lab will be and how the exploits compare to the OSI seven-layer model. Upon going into the findings section for part one, the testing that was stated in the first part of the methodologies changed from Windows XP SP3 to Ubuntu and a more Unix/Linux environment. There was a brief description about exploits across different operating systems, but what they said they were going to try to exploit just disappeared. Part of the findings almost felt like a review of NMAP instead of finding the exploits for this section and using tools against the vulnerabilities. The bash scripting they added was a nice touch, but questions came from it. Are there any other tools out there that can do the same thing within BackTrack’s list of 300 tools? The section just seemed to drop off after showing the results from the script that was run. Is there any way that this issue can be resolved, perhaps by denying the script the ability to run against the targeted system? Next they go into their table of exploits. One thing I noticed was missing was application-layer exploits. Many of the exploit databases they found list application exploits. 
Yes, it was hard to fill the entire table with different types of exploits, but attacking applications to gain access to other parts of the system is something that happens many times. There have been many incidents of Internet Explorer being exploited and users being able to get to the core of the Microsoft Windows directory. They then end with their conclusions and their thoughts on the attacks against targeted systems. Why would exploiting a system not “really” attack the system? The malicious user is knowingly going after these vulnerabilities; would this not be considered an attack on a system? For example, a thief on the street going up to a person and asking how much money is in their wallet before mugging them would be part of the thief’s attack strategy. Overall the team did a good job and did what they could for lab 4.

  8. Team one submitted an awkwardly written lab that covers the basic points but suffers from inconsistencies and subdued but noticeable changes in writing style. The abstract adequately explains to the reader what will be presented. The team discusses the decision to use Nmap over NESSUS here. This is oddly placed; it should be in the methods section.

    The literature review appears to be an awkward combination of review and method that is hard to follow and inappropriately placed. I understand the group is trying to relate the articles to the lab, but some of the connections are dubious. Are Thompson and Chase talking about red teaming in the same sense as we use it, or are they considering it to be part of application development? You say that they follow the same steps as those in the lab. Are these steps analogous to best practice? You state that He and Bode “just” listed some tools, but do you think maybe there was something important about their methodology? Davidson’s “article” is a test plan; that’s why it doesn’t seem to be much of an article. The team made an attempt to add evaluation into the review this time, which is good to see, but it needs to be more clearly stated and expanded upon.

    The methods section is vague and unrepeatable. Perhaps this is because the methodology is strewn about the rest of the paper.

    The findings for part one of the lab are very well explained. I especially like the fact that “exploit” was clearly and correctly defined, and that you back your findings with outside examples. Some of this information belongs in the methods section, however. I can separate out steps in your results that I could follow in order to duplicate your findings. For part two, the list of sites is fairly extensive. The group makes judgment calls about the level of expertise needed to use the “exploits” listed on each site. What are these based on? The sites are listed but not really discussed or evaluated in terms of usefulness. The use of the term “exploit” in this section somewhat contradicts the definition given in part one. What does the term mean? I’m unclear as to how you got these results. Again, a more detailed methods section would serve you well.

    The group’s conclusion does a poor job of summarizing the research. What did you learn, other than the fact that the labs are building on each other, which should have been self-evident? You mention here that the majority of attacks are at layer 6 or 7 and deal with SQL injection. This should have been in the findings section, but I saw nothing to support the idea.

  9. The team started with a strong abstract indicating the key points of their laboratory. They identified which tool the team was going to use, Nmap or Nessus. The team decided on Nmap over Nessus; however, I wondered why the team decided to go with Nmap over Nessus. The team also talks about researching exploit code, but not from an anti-virus vendor. Their literature review started off talking more about the lab methods. The team does cover the reading about verifying tools in the article Red-Team Application Security Testing. They also talk about how firewalls and other protective services are not the only protection for a system, and that penetration testing offers more of an idea of how an attacker would attempt to gain access. Knowing this information can better protect an organization’s infrastructure. Penetration testing includes running exploits, which the team does in part 1 of the laboratory.

    In part 1 the team discovers that Nmap actually does not perform exploits. They back up their finding by providing a link to insecure.org, which talks a lot about Nmap. The team goes on to talk about how Nmap works and how there is not a way to perform exploits with it. I do ask this question: is it possible to send a customized packet using Nmap to exploit a system? Nmap does send ICMP and SYN packets, which the team describes as common packets. The team then goes on to talk about how stand-alone tools were examined and a script was created to use Nmap and Ettercap to do an ARP poisoning attack, which becomes a man-in-the-middle attack. This script is well written, provides comments, and does checks. The script checks to see if the user is root and, if not, tells the user to rerun the script in super-user mode. The team also provides proof of this script functioning by showing a portion of what was captured in Wireshark on a different virtual machine. The team then goes on to talk about the findings for the research done on exploit code. The team claimed to have found several pieces of SQL injection code. With this finding, would the team believe that SQL is vulnerable and unsecured enough to be exploited?
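
    The SQL injection question can be illustrated with a self-contained sketch (sqlite3 with a hypothetical table; not from the team’s findings): the vulnerability lies in string-built queries, not in SQL itself, and parameterized queries close it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "nobody' OR '1'='1"   # classic injection string

# Vulnerable: user input concatenated straight into the SQL text, so the
# OR clause becomes part of the query and matches every row.
vulnerable = conn.execute(
    f"SELECT * FROM users WHERE name = '{payload}'").fetchall()

# Safe: the same input passed as a bound parameter is treated as a plain
# string to compare against, so nothing matches.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)).fetchall()

print(len(vulnerable), len(safe))   # injection leaks a row; the bound query leaks none
```

    So the abundance of SQL injection code in the repositories says less about SQL being inherently unsecured and more about how commonly applications build queries the vulnerable way.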
