May 13, 2025

10 thoughts on “TECH 581 W Computer Network Operations: Laboratory 4, Team 4”

  1. Team four presents a lab that does fit all of the requirements of the lab design document, but it is lacking in a number of areas that need to be improved upon for future lab reports. The abstract, while explaining what will be performed in the lab, is not nearly long enough as per the syllabus. They seem to imply that NMAP and NESSUS are the tools used as a first step in penetration testing; I question that, as those are not the only two tools available.

    The literature review is lacking. While it does explain each article that was part of lab four, that is all it does. Each of the articles is laid out and individually explained. There is a total lack of cohesion among the literature reviewed, and it does not give a good explanation of the state of the literature on the topic. It also does not in any way tie into lab four itself; since all of the articles related in one way or another to the steps of the lab, I question the overall completeness of the lab on the part of team four. The literature review is nothing more than a list with APA citations, and needs to be improved upon in future labs. If assistance is required, the Purdue Online Writing Lab is a good resource.

    The methods section as provided by team four is lacking. Three short paragraphs do not denote a scholarly or academic discussion of the strategy and technique used in the completion of the lab requirements. Team four claims that the second part of the lab was mostly researching vulnerability databases. I would argue that it was actually all of part two; anything that was not research was reporting on the findings of that research.

    In their findings and results section, team four begins by stating that they discovered the best way to learn about system vulnerabilities through NESSUS was to run NESSUS against the system in question. This equates to discovering that getting results out of the tool requires running the tool. This is beyond obvious for graduate work, and needs to be explained better. In team four’s first table, they list a number of server message block (SMB) exploits at layer six, the presentation layer. Unless the ISO has made a change I am unaware of, SMB is a layer five, or session layer, protocol. This calls into question whether the research performed by team four was scholarly, as this should be obvious. For layer two they list Ethernet card brand as an exploit. I fail to understand how the brand of Ethernet card by itself could be an exploit. In their second table, team four lists Sun Solaris as the technology behind a presentation layer exploit. I was not aware that Solaris was a layer six protocol or tool; I was under the impression Solaris was an operating system. Finally, the tables should have been listed in a figures and tables section after the conclusion, and referenced in the lab discussion, not placed in the lab itself.

  2. Team 4 begins their lab report with an abstract stating their objectives: they will be using two tools, Nessus and Nmap, to discover vulnerabilities that are present in the target machine. This will lead them to determine what tools are needed to perform the penetration tests.

    They begin their literature review with Red-Team Application Security Testing: Testing techniques designed to expose security bugs (Thompson & Chase, 2003). They summarize the article as an argument for securing software applications rather than trying to secure the network around them. I believe this is a pretty fair assessment of the article. They continue on to describe how the article discusses breaking down the components of an application by function and testing them separately. They relate this to our current lab by stating that we are learning to research exploits, and that we need to look for ways to exploit applications through their normal activities. I believe this article is also a hint for a portion of the second part of the lab: since it states that applications are the source of the most vulnerabilities, it stands to reason that we may find that most vulnerabilities lie within the application layer of the OSI model.

    They proceed to discuss Vendor system vulnerability testing test plan (Davidson, 2005). They listed the procedures and methods outlined in the testing plan. They related this to our current laboratory assignment only in that the testers have a strong understanding of the systems being tested prior to the test. They state that the document is in error because it did not specify whether or not the operator or developer consoles had removable storage. I disagree that this is an error. In the abstract it is described as a “generic test plan to provide clients (vendors, end users, program sponsors, etc.) with a sense of the scope and depth of vulnerability testing performed at the INL’s Supervisory Control and Data Acquisition (SCADA) Test Bed and to serve as an example of such a plan”. As a generic plan, it leaves room for modifications to fit the testing of a particular system. I also believe that it can serve as an example to help us model and document our own penetration testing.

    The next article that they review is Network Penetration Testing (He & Bode, n.d.). They discuss the explanations contained within the article, such as announced and unannounced testing, and the difference between black-box and white-box testing. They mention that the article contains a list of exploits and tools that can be used to perform penetration testing on those exploits. They consider the lists of exploits and penetration testing tools to be a benefit to our current lab assignment.

    They continue with their methodology section. They made sure that Nessus had the most recent plug-ins and then ran it against the target machine. They tabulated the results and sorted them by OSI layer. Then, they took two of the exploits that they discovered and ran them against the target Windows XP SP0 virtual machine. They concluded that Nessus was the better way to discover vulnerabilities, and that the vulnerabilities it found were in the upper layers of the OSI model, namely the application and session layers. They also found that Nessus and Nmap found many of the same vulnerabilities; however, Nessus found more.
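    For readers who want to reproduce the tabulation step described above, here is a minimal sketch of sorting scan findings by OSI layer. The service-to-layer mapping and the sample findings are hypothetical, since the report does not state how the team classified each Nessus result:

```python
from collections import Counter

# Hypothetical service-to-layer mapping; the report does not say how the
# team assigned each finding to an OSI layer.
LAYER_OF_SERVICE = {
    "smb": 5,          # SMB rides on the session layer (see comment 1)
    "netbios-ssn": 5,
    "http": 7,
    "msrpc": 7,
}

def tally_by_layer(findings):
    """Count findings per OSI layer; findings are (service, description) pairs."""
    counts = Counter()
    for service, _description in findings:
        counts[LAYER_OF_SERVICE.get(service, 7)] += 1  # default to layer 7
    return counts

# Invented sample data, not the team's actual Nessus output.
sample = [("smb", "NULL session"), ("http", "outdated server"), ("msrpc", "DCOM flaw")]
print(tally_by_layer(sample))  # Counter({7: 2, 5: 1})
```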

    For part 2 of the laboratory assignment they identified Obcomputerrepair.com, SECURINFOS, Securitytracker, and Insecure.org as sites that contain databases of known vulnerabilities. They tabulated the vulnerabilities that were discovered for the month of June and sorted them by OSI layer. They concluded, as did our team, that the majority of the vulnerabilities lie in the application layer.

  3. The literature review only treated each of the assigned readings individually. Instead of a cohesive write-up tying the literature to the lab tasks, each reading is simply summarized in paragraph form. Stating that the literature would be a benefit in the lab isn’t enough. The only article that was given any further thought beyond a summary was Vendor System Vulnerability Testing Test Plan. The critique of the assumptions of the article is hard to agree or disagree with because little explanation or logic is given behind why this particular section was selected. Removable storage capabilities could certainly pose a risk to these particular systems: valid users could inadvertently attach a removable drive containing a virus or worm, which could affect the availability of the SCADA network. Concerns like this are also the job of the person evaluating the risks in the system; insider threats pose a significantly higher risk factor than outside entities, as we’ve seen in previous labs and literature.

    The methodologies section is too brief to be reproducible. For the first part, why was only one VM tested? If you’re looking to have more data to support your findings for section two, running Nessus and nmap against at least two different operating systems would be a good start. The method of evaluating a vulnerability database’s expertise isn’t sufficient to make a good determination. What were the authors’ methods for determining if the descriptions given were “good”? Is linking to another database a bad thing? What if that database was the original source? Wouldn’t it be better to simply catalog the basic data and refer users to the source?
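    As a sketch of the multi-target suggestion above: with Nmap this could be scripted as below, with Nessus scans queued against the same hosts through its own interface. The target addresses are hypothetical placeholders for two guests running different operating systems:

```python
import subprocess

# Hypothetical addresses of two VMs running different operating systems;
# substitute the guests actually present on your virtual network.
TARGETS = ["192.168.1.10", "192.168.1.11"]

for target in TARGETS:
    # -sV probes service versions, --script vuln runs Nmap's vulnerability
    # detection scripts, and -oX saves XML output for later comparison.
    subprocess.run(
        ["nmap", "-sV", "--script", "vuln", "-oX", f"scan-{target}.xml", target],
        check=True,
    )
```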

    The table in the findings contained quite a bit of data for layer seven, which would fit with the findings the group discussed. The data in the lower layers doesn’t really constitute exploits or vulnerabilities, particularly the items in layers four and two. The sentence immediately following the table, “The group then ran Nessus and Nmap against the Windows XP SP0 machine and discovered that the two scans picked up the same vulnerabilities, except Nessus picked up more vulnerabilities than Nmap,” should be removed; the two statements conflict with each other. One of the vulnerability databases listed, obcomputerrepair.com, can hardly be considered a vulnerability database; the critique of it is valid, but based on that alone it should’ve been replaced with something more reputable. The extensive table showing vulnerabilities would’ve been better with information on the sources used to compile it, and possibly links to the entries in those databases. One item totally missing from the findings was the handling of the tools used to test the exploits that were found. Even if the tools were found to work locally only, as listed in the issues section, it would’ve been good to list them and their corresponding vulnerabilities along with a discussion of why you didn’t believe these were valid in the context of the lab exercises.

    The conclusion is a good summary of the activities of the lab along with results but doesn’t tie the topic, the literature, and the results from the lab exercises together.

  4. Once again, I must admit that I found this team’s lab write-up to possess substantial depth in investigation and analysis. I found the literature review informative, with a few good questions raised. Additionally, the research of vulnerabilities for the first and second part of the exercise appeared the product of substantial effort. Finally, I found the ‘Methodology’ section to be reasonably detailed, and the ‘Results’ section to be informative, if brief in discussion.

    That is not to say that some problems cannot be found with this report, however. The literature review, while being of generally respectable literary quality, lacked anything more than a trivial reference to application within the scope of the lab exercise. Additionally, the ‘very’ long paragraphs used, while cohesive in subject matter, should have most likely been broken down into smaller excerpts. It also appears that the reviewer ran out of creative drive toward the end of the review, as we see a progression of sentences which read: “Next the writers…, Then the writer, Next the writer…, Then the writers…” Not a strong finish from such a promising start.

    Also of note was the missing discussion of attack tool testing. One of the requirements of the lab exercise was to ‘test’ tools found which match vulnerabilities discovered in the team’s experimental system: this was totally absent. I would submit that the heading on the result table for part one should read ‘Vulnerability’ rather than ‘Exploit,’ as an area of vulnerability is not in itself an exploit. An ‘exploit’ proper would need to include a means by which to utilize this security flaw: i.e. an attack tool. In fact, I could not locate ‘any’ attack tools listed in the entire lab write-up. I admit the research done on vulnerabilities appeared well done, but without matching tools (where available) to take advantage of these opportunities, it is hard to classify any of this work as truly describing ‘exploits.’ I also thought the table for the first part of the exercise to be somewhat poorly formatted. It is obviously mostly a copy-and-paste from a ‘Nessus’ report: a little more care in organization and presentation would be appropriate.

    Of minor note, I noticed that ‘screen shots’ were mentioned, but did not appear in the report anywhere. It appears the only way to include images of any kind on this blog is to use an image hosting service, such as http://imageshack.us/, and then link to the uploaded image via an image frame or HTML object in WordPress. It is unfortunate that this team was ultimately unable to share their results because of posting issues: this significantly affects the ability of a reviewer to evaluate the research this team has done.

    Finally, although I found the database exploit table interesting, I thought that confining the listing to “the month of June” was an odd choice. I must confess I don’t believe one month to be a ‘significant’ representation of the ‘known vulnerabilities’ in these databases. I would say that, due to the huge number of ‘prior’ vulnerabilities existing in these databases, a listing such as this is heavily biased toward the OSI application layer. Consider this: operating systems generally decrease in the number of security flaws as they age, therefore ‘recent’ vulnerability snapshots will likely not show the ‘substantial’ number of ‘OSI lower layer’ vulnerabilities accumulated over time by aging operating systems. In fairness, it appears this team used appropriate methods on the information obtained: I do not question the procedure. I simply suggest that the data set chosen is a heavily biased sample, and therefore questionable in the scope of making accurate statistical measurements.

  5. This group did not follow the tags that are required for submission of lab reports. Right off the bat I need to ask the question: are these lists really up-to-date? Who is in charge of updating these lists, and wouldn’t a list only have the known exploits? If that is the case, then we really can’t call the lists up-to-date, but rather lists of currently known exploits. The team went from having too many citations to basically having only one citation (not in APA 5 format). If you use text from the articles you MUST cite them; don’t go overboard, but cite when you must. This literature review reads like a list. It needs to be more cohesive; combine the articles when you write. It is obvious that different people wrote different sections of the literature review. Before submission, the literature review needs to sound like one voice. Make the changes, or if needed, have one person write the literature review. The SCADA article got a lot more attention than the other articles. All of the questions were answered for this article, but not for the other two. I think this is because of the different team members writing the literature review.
    I am wondering why this team was the only team to use both Nessus and Nmap; I thought the lab stated to choose one of them. The methodology seems like a rehash of the objectives of the lab experiment, which belongs in the abstract. The group keeps mentioning the utilization of Nmap, but there is little detail on what was done with it; Nessus seemed to be the primary tool that this team used. How is it possible that the two scans picked up the same vulnerabilities, but Nessus picked up more than Nmap? If they picked up the same vulnerabilities, then the counts should not be different. I question the validity of Obcomputerrepair.com; this does not seem like a vulnerability database at all. I think the second table was not necessary, nor was it required. The team did not give any reason as to why the patterns they found occurred. There needs to be more detail in the results section, with more explanation as to why the group believes their findings are correct. What did the group do to get around the issues they had? I don’t think we can really count the websites that were just using the Bugtraq list; the lab stated that the groups needed four different databases, not ones that are basically mirror images of each other. The team should have researched the He & Bode article to see when it was published; all they needed to do was Google it and they would have found the year. I don’t find n.d. an acceptable date for the article.
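    On the Nessus/Nmap question raised above: one reading that makes the team’s claim internally consistent is that Nmap’s findings were a subset of Nessus’s, so every vulnerability Nmap reported also appeared in the Nessus results while Nessus reported extras. A minimal sketch, with invented finding identifiers:

```python
# Invented identifiers for illustration; not the team's actual findings.
nmap_findings = {"smb-null-session", "ms03-026-dcom"}
nessus_findings = {"smb-null-session", "ms03-026-dcom", "weak-lm-hashing"}

print(nmap_findings <= nessus_findings)   # True: Nmap found nothing Nessus missed
print(nessus_findings - nmap_findings)    # the extra items only Nessus reported
```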

  6. I think team 4’s abstract was well written and explained in detail what they were going to accomplish in lab 4. Their literature review contains discussions of the sources and is organized by publication as opposed to combining all the publications into one massive summary.
    Their methodologies section is too brief in comparison to how lengthy their literature review was; perhaps more time could have been spent on this section. The table in the findings section contained a lot of data for layer seven, which is representative of the group’s findings. I don’t agree that the data in the lower layers accurately depicts exploits or vulnerabilities. The table displaying vulnerabilities was lacking information about the sources used to find the data; perhaps a link to the sources would have been beneficial. In addition, I did not see any mention of the specific levels of expertise needed to run the exploits listed in the table. The conclusion is a good summary of the activities of the lab along with results, but falls short of tying the whole lab together. I must say, though, that the writing skills in this lab were much improved over their past labs.

  7. I think that group 4’s write-up for lab 4 was poor. The abstract for this lab was adequate and provided a short overview of the lab. The literature review was good and adequately reviews the material. Group 4 answered all of the required questions for each reading, and all of the citing for the literature review was done well, with all of the pages included.

    For part 1, there seem to be many problems. At first glance, the section appears to be very short; when reading, I got to part 2 and didn’t realize that part 1 was over already. Part 1 consisted of only three paragraphs. The findings for part 1 only answer about two of the seven questions required. No research was done for the exploits; they were only listed as output of Nessus. Also, the group included both Nessus and Nmap when the directions only ask to choose one. The directions asked to list the vulnerabilities by system and the group did not. Absolutely no stand-alone tools were listed for part one, and since they were not listed, they were clearly not tested either. No conclusions were made for this section, let alone explained. No strategy was included for how this knowledge was gained, only a list of the vulnerabilities found and a short description of those vulnerabilities. This section was very weak and seemed as if the group did not even read the lab questions before writing it.

    Part 2 also seems very weak. The group included BugTraq, which was not supposed to be listed, and only discussed the level of expertise involved for one site. The group did add many vulnerabilities into the grid, all of which were related to the McCumber Cube. However, the links included are for vulnerability repositories and not exploits; a vulnerability is not an exploit, and the lab required research on exploit code repositories. It also appears that the group had trouble finding sources when they include “Kansas City’s most celebrated ‘no wipe-out’ computer repair specialist,” which looks like a GeoCities page. The conclusion was adequate and summarizes what was covered. Overall, few of the required questions were answered and the lab could have been much better.

  8. The team starts off by describing what they are going to do within the lab and the steps that will be taken to accomplish the task. They then go on to the literature review, and again this week it is not cohesive. They just describe what happens within each article, and there is little argument within the literature review. They do relate the articles to the lab and to some things that can be implemented, but this does not make for a well-rounded literature review. It makes it really hard to review this section when there are few arguments between the articles or opinions for or against the different points expressed within them. Do not be afraid to back up your team’s opinion. It will not always be correct, but that makes for interesting discussion and creates learning experiences. If there are questions left from an article, try to answer them and research different possibilities to gain more knowledge on the topics.

    They then go on to the methodology section and describe what is to be done within each part. In this part they define the tools that were going to be used within the testing environment. Then, for the second part, they described that they were going to be finding databases with exploits. Their results and findings section was a little disorganized, and the findings were not described in detail, just that they had found exploits within Windows XP SP0. The other operating systems on their virtual network are not even mentioned. Yes, they did provide a list of exploits that were found within the month of June, but what they provided within the section does not back up their findings. Within the second part of the lab they listed some of the databases but did not describe them, and they seem to have found only the minimum required. There were many databases available; why wasn’t a government-standard vulnerability database such as the National Vulnerability Database discussed? This would allow a standard to be discussed, along with how each of the other databases has been affected by that standard, or a comparison and contrast of them.

    The team goes on to describe their issues, and states that their biggest problem was sifting through the exploits with Nessus. After this they go on to their conclusion and discuss what they learned. They included some information within the conclusion that could have been better placed in the findings. The conclusion could have used some revising to better summarize the whole of the lab.

  9. Team four presents a report that is much improved over previous attempts, but still has a long way to go. The writing flows much better, and the information is for the most part coherent. An obtuse methods section and unclear results mar the effort. The group does not differentiate between vulnerability and exploit and uses the terms interchangeably throughout the lab, to their detriment.

    In the abstract, this team states that the lab is about performing scans to detect vulnerabilities. In actuality, this is what the last two labs covered; this portion is really a precursor to the actual learning objectives of the lab. The remainder of the abstract simply states the objective of the exercises, without going into great detail.

    The literature review is much improved over previous weeks. The writing style is coherent, and the articles are well summarized. The team attempts to relate Thompson and Chase’s article to the assignment, but could use more detail. What are the similarities between researching exploits, as the lab objectives require, and what Thompson and Chase propose for application-focused red teaming? How is it different? The team evaluates Davidson, pointing out flaws that they perceived in the test plan. Are these actually flaws? Might there have been some requirement that made seemingly superfluous tests necessary? The group only weakly relates the document back to the labs. What is it that is different about what Davidson is doing? The group summarizes He and Bode, but neither evaluates nor relates to the writing more than superficially. Is there something special about their methodology?

    The methods section is vague, unrepeatable, and flawed. In part one, is it NESSUS that actually performs the exploits? I’m not really certain what you did other than run NESSUS and NMAP. In part two, what were the terms you used to find sites? You use vulnerability and exploit interchangeably; are they the same thing? Did you really look at every entry on all the sites? How did you fit that into a week? Is a reference to another source indicative of a lack of expertise, or an attempt to pass along accurate information?

    I’m not sure how the table supplied for the results in part one is any different than what was done in the previous lab. The group discusses running the tools against XP Service Pack 0, but I’m not sure what the results were. This piece really belongs in the methods section, but what were your results? For part two you list among your sources Obcomputerrepair.com, which is the site for a PC repair shop in Kansas. How is this in any way authoritative? You criticize SECURINFOS for having a lack of detail. Is this perhaps because the original site is in a foreign language? Perhaps something is lost in translation? Why do you suppose that insecure.org and SecurityFocus share so much information? The group’s methods state that you will analyze the expertise of the various sites, but your findings don’t reflect this.

    I’m not quite sure what the group is trying to say in the issues section. I think you were attempting to use the hard way alluded to in the assignment. What, if anything, did you do to overcome your issues? The conclusion does a decent job of recapping the information contained in the lab.

  10. In the team’s abstract they indicate some key points of their laboratory and claim that they were going to utilize two tools, Nmap and Nessus. The abstract was short and only mentions discovering vulnerabilities, tools, and exploits. The team does cover the reading about verifying tools in the article Red-Team Application Security Testing. The team says the writing is mainly about using a piece of software to secure a network. This is similar to what the other teams say. More than software is needed to protect a network; penetration testing can help show where areas of a network need more attention and can also show where an attacker would attack.

    It seems unclear which tool the team chose to use, or if the team perhaps intended to use both tools. In the findings and results section they have a chart titled Exploits Found on a Windows XP SP0 Using Nessus. The exploit that seems to stand out the most to me is the one at layer 2: Ethernet card brand. The question I am wondering about is whether Nessus provided this information as an exploit, or whether it was just extra information that was returned during the scan. The team also talks about researching exploit code, but not from an anti-virus vendor. They provide a chart with vulnerabilities that were discovered within the month of June. Where did this come from?
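    On the layer-2 question above: scanners typically derive “Ethernet card brand” by matching the first three bytes of the target’s MAC address against the IEEE OUI registry, which makes it informational fingerprinting rather than an exploit. A minimal sketch of that lookup, with a toy vendor table (the prefixes shown are common virtualization defaults):

```python
# Toy OUI table; real scanners ship a full copy of the IEEE registry.
OUI_VENDORS = {
    "00:0C:29": "VMware",      # common VMware guest prefix
    "08:00:27": "VirtualBox",  # common VirtualBox guest prefix
}

def vendor_of(mac: str) -> str:
    """Return the NIC vendor implied by a MAC address's OUI prefix."""
    return OUI_VENDORS.get(mac.upper()[:8], "unknown vendor")

print(vendor_of("00:0c:29:ab:cd:ef"))  # VMware
```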
