April 23, 2025

10 thoughts on “TECH 581 W Computer Network Operations: Laboratory 3, Team 4”

  1. The first item that glared at me was the formatting of this lab report. In numerous places question marks appeared in the middle of words; for example, the first sentence of the second paragraph of the lit review contained “e?ective”. What happened to these words? Was there an issue posting to the blog, and did it not show up when the group previewed their post before submitting? In other places the question mark was followed by “le”. These errors made reading the first part of this group’s lab report very difficult. In the future, please make sure that everything looks good before submitting the post. The literature review reads like a list: the group talks about one article, and then talks about the next. There needs to be more cohesiveness within the literature review. Do not make the literature review sound like you are answering the list of items that are required to be put into it. You can address those items in the course of the review instead of saying “The article related to the lab” and then stating it; integrate everything together. The group also quoted the articles a lot. The only statements about the articles that were in their own words were the answers to the required items. Read the articles and understand the process. Jung et al. reviewed 26 applications, not 20. I do not know whether, because of the formatting issue, the group actually wrote “twenty-six” and it was garbled, or whether they simply wrote twenty.
    The group did not separate the methods section from the results or discussion section. It appears that all of their information was placed in the methods section and that there were no results. The first two paragraphs of the methods section read more like an abstract, stating what was going to happen in the lab rather than the actual steps taken to get the results. I liked that the unneeded technology column is finally gone from the table of tools and how they relate to the McCumber cube. In the methods section, while the tools are being discussed, the group once again takes basically everything from a source; the information seems to be just sentences lifted from other places. The group needs to use their own words more. The group does not discuss how to slow down the tools; this question was completely missed. I would also like to have seen more detail in their results. This is the first lab in which this group actually had issues, and problems are a way to learn from our mistakes. Did this group ever think about performing this lab on actual equipment like team 3? Team 3 thought that there might be problems using the Citrix environment, so they used their own equipment. For future labs, please make sure that the formatting is correct before submission and make the literature review more cohesive.
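    Since the question of slowing the tools down was missed entirely, here is a minimal sketch of the sort of thing I would have expected the group to discuss. It assumes nmap is installed and on the PATH, that Python is available, and that 192.168.1.10 is a hypothetical host the tester is authorized to scan; the -T2 timing template and --scan-delay option are standard nmap flags for stretching a scan out so it blends into background traffic.

        # Minimal sketch: throttling an nmap scan so it is harder to notice.
        # The target address is a hypothetical placeholder; only scan hosts
        # you are authorized to scan.
        import subprocess

        target = "192.168.1.10"
        cmd = [
            "nmap",
            "-T2",                 # "polite" timing template: fewer parallel probes
            "--scan-delay", "5s",  # wait five seconds between probes
            "-p", "1-1024",        # limit the port range to reduce total traffic
            target,
        ]
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout)

    Even a short paragraph to this effect, with the team's own timing choices, would have answered the question.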

  2. The literature review still treats each of the articles separately and does not relate any of the content to the activities of the lab (except for one mention of Wireshark) nor to the topic of passive reconnaissance. The beginning of the literature review contains a direct quote from one of the assigned readings for the lab. Instead of copying the text with quotes surrounding it and a citation, it appears the text was copied via OCR software, turning the “ff” in “effective” into a “?”. This shows a lack of proofreading; surely that wouldn’t have passed a built-in spell checker. The same problem is seen later in the paragraph with more text that was directly copied from the source material. Overcitation is a major issue, especially in the paragraph about the methodology used in (Jung et al., 2008). If the source material is being quoted that often, this is more of a literature summary than a review.

    The methodology for part one of the lab doesn’t have enough detail. Where are the tools coming from? How did you find them? How are you classifying them? Links to the tools in the table would have been helpful too. The answer to the lab question about the effect of time on the process seems out of place and doesn’t really relate to any of the surrounding material. The methodology for part 2a of the lab doesn’t say how you overcame the limitations imposed by using a virtualized switched network; the traffic wouldn’t have been visible to the other hosts on the virtual switch.

    The findings section seems to be included under the methodologies heading. The findings for part 2a of the lab don’t discuss the vulnerabilities found in the machines that were scanned. It would have been helpful to see the vulnerabilities found in each scanned machine compared and contrasted, particularly the two different service pack levels of Windows XP. There weren’t really any methodologies present for part 2b, but there was an extensive list of tools with security flaws; only a few of those listed actually contained exploits from malicious tool authors intended to compromise a host. The discussion on methods of verifying tools mentions source code auditing and says that it would be difficult in an enterprise, but the authors fail to identify the issue of source code availability. Some commercial tools won’t have source code available; if you’ve made the decision to audit the code of all tools, how will this be dealt with?
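    One alternative the group could have raised: when source code is not available, the tool's integrity can at least be checked against a checksum published by the vendor through a separate channel. A minimal sketch follows, assuming the vendor publishes a SHA-256 digest for the download; the file name and the placeholder for the published digest are hypothetical.

        # Minimal sketch: verifying a downloaded tool against a vendor-published
        # SHA-256 checksum when no source code is available to audit.
        # The file name and expected digest are hypothetical placeholders.
        import hashlib

        def sha256_of(path, chunk_size=65536):
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        expected = "<paste the vendor's published SHA-256 digest here>"
        actual = sha256_of("scanner-installer.exe")
        print("OK" if actual == expected else "MISMATCH - do not install")

    Of course this only helps when the published digest comes from a channel the attacker does not also control, which is, as I recall, what failed in the dsniff/fragroute backdoor incident the team lists, so it is a complement to auditing rather than a replacement for it.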

  3. Team four begins their lab with an abstract that meets all of the requirements of the syllabus. It is the required length and explains the topics that will be covered in the rest of the lab. The literature review presented by team four does not represent a scholarly or academic literature review. While both of the required readings are analyzed and the questions presented in the syllabus answered, the literature review lacks any kind of cohesion and is nothing more than a listing of the articles to be reviewed, with the reviewer’s comments and APA style citations. Team four needs to do a better job in this area. We have completed three of seven labs this semester, and thus far all three literature reviews have been completed in this manner. There were also a large number of in-text citations, which made the literature review distracting and slightly difficult to read. I must question the statement that there were no errors or omissions in the Godefroid article. Since that article was nothing more than a reference to what Ms. Godefroid was going to speak about, there is no data given to support her conclusions; this should in and of itself constitute an omission.

    Team four lists a unified methods section that makes the lab simpler to read. However, there is no findings section, and all of the data that should have been listed in the findings section is scattered without explanation in the methods section. While the methods presented are lengthy, they do not list the strategy or technique used to complete the lab, but rather just what was performed. This does not represent a scholarly or academic methods section. The table that is presented should also not be in the methods section; it should be listed at the end in a figures section, with a reference to it given in the methods section. The table itself is shorter than the active recon table presented by team four, which on the surface suggests an evaluation of the entire toolset, but upon further examination it becomes obvious that team four just removed the tools they considered to be active, without much consideration of actual use or the layer of the OSI model the tools work at.

    The tools listed in layer 6 relating to HTTP are questionable, since HTTP is actually a layer 7 protocol in the OSI model. I also question the layer 5 tool nbtscan, which is not passive in nature and would be discovered by most intrusion detection tools, as well as an IP spoofer at layer 2. Last time I looked, IP was a layer 3 protocol, not a layer 2 protocol. The lack of a findings section makes the lab difficult to read and understand and also calls into question the scholarship of the lab and the team. Team four did, however, agree with team three and disagree with team one on the bias shown by NMAP and Nessus. Team four has stated that there is a UNIX system bias for these tools, a conclusion that I also agree with.

  4. Team four’s abstract was unclear. The muddled writing style makes it difficult to sift out the content. Where did your definition of passive reconnaissance come from? The first sentence of the second paragraph is unclear: is the tool passive, or is the traffic flowing passively? I don’t understand the meaning of the phrase “Cause the target to become available.” Are you saying that vulnerabilities in tools can be exploited to gain control of the system? The last two sentences of the abstract are also unclear. You are analyzing and performing analysis, but for what?

    The literature review, though verbose, offered very little. I could have (and did) read the articles; a shorter synopsis would have sufficed. You don’t relate the Godefroid article to the lab at all, and give a really thin explanation of how the Privacy Oracle article is related. You don’t evaluate either of the articles. What’s good about them? What is bad? Are they useful? How? If I didn’t know better, I would think you are trying to cover a lack of understanding of the articles with extremely long retellings in the hope that no one would notice the missing evaluative content.

    The group’s methods section is weak. The first paragraph is vague and in no way gives a repeatable methodology. Several of the tools listed in the table are suspect: do they all have a reconnaissance function, or are some used for other things? You have an IP tool at layer 2; why? You discuss slowing down an attack: how did you get your information? You mention the sieve rate when discussing part 2A. What is it? Where did you get this term?

    Several issues come to mind in regard to part 2A. Do you think running NESSUS against Backtrack had anything to do with the fact that the vulnerabilities found were focused on UNIX-based operating systems? Why does it matter that NMAP is designed for “those with experience in UNIX/Linux environments”? That should be a common skill set for graduate students in an information technology program. You didn’t really need to conduct the tests to find out what port NESSUS uses; it’s in the documentation. Would the information gained from passively scanning a network while an active tool is running be at all useful?

    In section 2B, how did the team find its case studies? You say that the cases you detail show a pattern of DoS vulnerability, but only one tool is listed as being vulnerable. How is that a pattern? You offer several vulnerabilities but are they all “incidents”? Is there a difference? You state that you are trying to determine whether or not a tool is hostile. Can a tool be hostile? Hostility implies intent. How would a vulnerability scanner detect malicious content? Vulnerabilities, yes, but would it show an altered tool? Why all the quotes about source code auditing? Was there a point to that paragraph? Are there alternatives to outsourcing source code auditing that might be more viable for an enterprise? It seems like auditing every piece of code would be expensive, outsourced or not. How do you verify that the contractor was diligent if you do outsource?

    In your issues section you state that problems with Citrix limited your ability to get results in your lab. When you had problems with Citrix did you contact anybody? The technology not functioning should never be an excuse from technology students, especially graduate students. What trouble did you have installing the tools? What problems did you have with the commands? If you add detail here, it helps if someone tries to repeat your work.

    Your conclusion says that the tools are mostly in the upper layers of the OSI model, but your table does not reflect this. You assert that the tools mostly attack confidentiality. Doesn’t reconnaissance by definition attack confidentiality? The last paragraph makes no sense. I get that you are trying to recap section 2B of the lab, but it is unclear what you are trying to say.

  5. Group 4 begins with an abstract that compares the lab 2 assignment to the lab 3 assignment and describes passive reconnaissance. They then explain each part of the lab assignments. They list passive recon tool selection, how passive recon tools can be used to test active recon tools, and case studies of recon tools that had been exploited. They further describe what knowledge is to be gained from each one of these activities.

    Next, Group 4 included their literature review. They begin their lit review by briefly explaining the articles that were read this week and the common theme between the two. The first article they reviewed was “Random Testing for Security: Blackbox vs. Whitebox Fuzzing” (Godefroid, 2007), although they didn’t actually list the title in the review. Although this review covered the article well, in my opinion too much of the wording was copied directly from the article rather than written in their own words. Next, they reviewed “Privacy Oracle: a System for Finding Application Leaks with Black Box Differential Testing” (Jung, Sheth, Greenstein, Wetherall, Maganis, & Kohno, 2008). Although this review was very thorough as well, again much of the wording appeared to be copied directly from the original document. They then relate the article to our labs by simply stating that “Wireshark could also be used to analyze the same application programs to determine if valuable information is being leaked”. Although I believe that this is part of how it applies to our labs, I believe there are other inferences in the article that apply as well. One example is the use of virtual machines as a testing environment to isolate the tests from outside influences. Another is the use of snapshots, so that the system can be returned to its previous state before another test is performed. A third example is how seemingly innocuous applications can send private information over the network; passively listening on the network may therefore reveal this private information.

    Group 4 then described the methodology of the lab. A list of passive recon tools was created. They gave a very valid explanation of how slowing a tool can make it more passive; they still, however, consider it an active recon tool because it sends information over the network. Group 4 gives a detailed explanation of the test environment that was used in this lab and further describes the processes they used to accomplish the lab. They continue with a detailed discussion of issues that they encountered, such as the slowness of the scanning tools and the large amount of data that has to be sorted through in order to arrive at a conclusion. They discuss what was discovered from this portion of the lab assignment; for example, that Nessus uses port 4482 when requesting information gathered from the network. They also discovered the types of packets sent over the network when the scan was being performed.

    Group 4 was able to find five recon applications that had been exploited: Wireshark, nmap, Snort, Ettercap, and Dsniff. They further discussed procedures that could be used to test for these vulnerabilities. They concluded with a discussion of what knowledge was gained in this lab.

    There were some issues with this lab as I’ve previously stated. One issue that I didn’t mention is the number of question marks that seem to be randomly placed throughout their document. This, admittedly, may be an issue with WordPress. In this report they alluded to a dislike for the command line interface of nmap. I recommend using Zenmap, which is a graphical front end for nmap. This will help to avoid using the messy command line interface.

  6. I would like to comment that I thought this lab was exceptionally well worded and organized. The literature review was an excellent summary of the articles under review. The discussion of results was ‘extensive’, with content which was generally descriptive and informative. I thought the section on ‘exploited’ security tools very well done: the case studies and examples were well chosen. I judged the conclusions drawn to be substantially perceptive, especially with regard to ‘tool’ operating system bias and exploit layer patterns on the OSI model. I believe this group to have addressed directly every area which was required in the lab research: well done.

    Upon examination, however, some issues and questions can be found with this write-up. There appeared to be some character encoding problems which made certain passages hard to decipher: more care in using WordPress should eliminate this flaw. Additionally, while the literature review presented an excellent summary of the articles, a summary is really mostly ‘all’ that it is. A significant comparison of concepts common to both articles (and contrasts, if found), as well as more than a trivial reference to their application in the lab exercise, would improve this section. Also, it appears that the ‘Results’ heading is missing, possibly due to WordPress submission issues.

    I searched for a clear definition of what this team defined ‘passive reconnaissance’ to be, and found a vague description in the abstract section set out as “the target being [un]aware” of the reconnaissance. This definition seems to be further refined in the discussion of ‘slow’ active tools, where the team asserts that ‘active’ implies that “[data is] being sent to the target” and that this characteristic is not changed by the rate at which it is performed. This leads to the converse property, that under no circumstances may ‘passive’ type tools send data to the target. Given this, I am uncertain as to the rationale for the inclusion of some tools in the ‘passive’ tools table, since they do not fit the definition being evolved within this write-up. For instance, why are ‘Nbtscan,’ ‘Unicornscan,’ ‘Spoofed IP 5.1,’ and ‘ARP spoofing’ included in this list? These all violate the implied ‘no data sent to the target’ definition, and therefore would fall under an ‘active reconnaissance’ definition. This is by no means the ‘only’ problem seen with the tool list: there are a fair number more.

    In a further discussion of the results, I would agree with the assertion that ‘Nmap’ and ‘Nessus’ are primarily biased toward UNIX-like operating systems; however, the reason given for ‘Nmap,’ specifically that the “setup of the command environment” implied that it was UNIX biased, is unsatisfactory. In my experience, no consistent patterns really emerge in regard to command-line executable parameters and operating systems (especially in UNIX-like systems), if this is what is meant. If it is being implied that ‘use’ of the command line is ‘UNIX’ indicative, Microsoft’s ‘PowerShell’ relies on the “setup of the command environment” exclusively for its operation (and is in itself a powerful security testing implement). Does this imply that it is also biased toward UNIX-like systems? I do not see a significant correlation between these two independent attributes.

    Finally, I would comment that the description of the ‘meta exploit’ tests was rather vague. From the description of the setup, it appears the test was done correctly, and it is obvious from the write-up that an entire network scan was done. This certainly would generate a large amount of captured data: what is missing is a description of the hosts and connections found in the traffic encountered. I believe that the data which was analyzed was most likely largely the product of the single ‘monitoring’ host being scanned by the ‘attacking’ host. I am certain that if the traffic were filtered by ‘conversation’ in ‘Wireshark,’ significantly different interpretations would be made. I will not recount the issue of ‘switched networks’ once again in this review, but if curious, it is addressed in (our) team three’s write-up. I would submit that since the bulk of information being gained in the ‘observer’ and ‘attacker’ scenario is from the single event of the ‘observer’ host being remotely scanned, and as the attacker already ‘owns’ the ‘observer’ if ‘Wireshark’ is being run from it, then no real information of use is gained in this situation. This is similarly true if a system network administrator is ‘observing’ from this machine: the only thing learned is that a certain machine is scanning the network, but by that time the IDS would be far into alarm mode already.
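    To make the ‘conversation’ point concrete, here is a minimal sketch of the kind of filtering I mean. It assumes the capture was saved to a file (the name capture.pcap and the two addresses are hypothetical stand-ins for the ‘observer’ and ‘attacker’ hosts) and that tshark, the command-line companion to Wireshark, is installed.

        # Minimal sketch: isolating a single "conversation" from a capture
        # using tshark. The file name and IP addresses are hypothetical
        # placeholders for the observer and attacker hosts.
        import subprocess

        observer = "192.168.1.20"
        attacker = "192.168.1.30"
        display_filter = f"ip.addr == {observer} && ip.addr == {attacker}"

        cmd = ["tshark", "-r", "capture.pcap", "-Y", display_filter]
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout)

    If the overwhelming majority of the captured packets survive that filter, it supports my suspicion that the ‘interesting’ data was really just the observer host being scanned.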

  7. I thought their abstract was well written and explained what they were planning to do in Lab 3. Reading through the post I saw a lot of formatting errors: there were question marks in words and grammar mistakes. I think this group should make sure their paper is spell checked and grammatically correct before posting it to the blog; it makes it difficult to read when there are such errors. I liked the way they organized their lit review. I thought it was easy to read, and they tied the articles together nicely with the lab exercise.
    The group talked about the methods used to test Privacy Oracle, but then they had a separate Methods section. Shouldn’t this have been in the Methods section? They did not separate the methods section from the results or discussion section. I would have liked them to separate part 2a from part 2b. Again, they discussed issues in the literature review but then had a separate issues section, which was confusing to me. Their discussion of the tools that have been exploited and the associated cases was good. Also, their discussion of source code auditing was well documented. Just a reminder that in the future you should spell check and grammar check before posting to the blog.

  8. The first thing that cannot be ignored before I go into the abstract is what happened with the question marks in the literature review. They stuck out like a sore thumb and should have been caught before being posted. When reading the abstract I found that near the end the writers put something that looks like it belongs in a conclusion rather than the abstract: “This analysis will how analyzing these tools are not viable in an enterprise environment.” The purpose of an abstract is to set up the lab, and this sentence would make more sense in the conclusion, because at this point it is supposedly just the setup for what is going to happen. Sorry if I am wrong, but it just seemed out of place.

    Now back to the literature review. Again, this section was hard to read because of the issue with the question marks. This was just a review of the two pieces of literature and could have used more depth. Try finding more information to put within the literature review to help support or argue with the opinions of the authors. When writing the literature review at the required length using only the two assigned pieces, it becomes extremely hard to avoid repeating the same points and beating them to death. The literature review also needs to be more cohesive; it is still broken up, and the pieces need to compare and contrast with each other, with what is going on within the lab, and with any conflicts that may arise.

    Next we go on to the methodology section. In this section the methodology and findings sections are merged together. Part of the findings is what the students found out while doing the lab, not just “we did this and then this.” That just makes it sound robotic, as if nothing is being gained from actual hands-on experience with penetration testing. The findings section could also have stated what view the students take, to make the viable-or-not-viable argument for the enterprise environment understood. After this the lab goes into the conclusion section. I found that the conclusions section had some information that could have been used in the findings. It also seems to drag on some; using “In conclusion” and then “lastly” in this section made the reader believe that there might be another point. Also, describe why the testing tools are not viable in an enterprise setting. Are there any tools that could be used in corporate environments? In my opinion there are tools that could be useful when working on a system. I would not likely choose to run anything like Nessus across my network due to the large number of attacks it sends at the network, but there are other tools that can help find issues, and then the engineers can resolve them. Why would Wireshark not be a viable tool if packets need to be analyzed going across the network? In what ways do these tools leave the environment exposed? What makes a tool hostile or non-hostile? Next time, include not just how the tool may work but also ask whether or not it could be used outside a lab environment.

  9. I think that group 4’s write-up for lab 3 was good overall. The abstract for this lab was adequate in terms of length and consistency. The literature review was good overall. Group 4 answered almost all of the required questions: the group did discuss how the readings related to the lab, but did not discuss whether or not they agreed with each reading. All of the citing for the literature review was done well, and the page numbers for the references were also included. Once again, I feel that the literature review was cited too much and seemed more like cliff notes than a comprehensive analysis. The thing that strikes me most when reading this lab is the formatting; the group did not check this over before submitting it. There seems to be too much spacing in some places and none in others, and random question marks and other characters are also common. While the content is good, the formatting makes it difficult to read. The group seems to have an accurate analysis for part one. It seems a bit short, but accurate nonetheless. For the methodology for part 2A, the group covers the process well and performs a good analysis. Group 4 decided to use filters in Wireshark to capture only the correct packets (good idea!) and determined that a longer attack is less likely to be detected given the large number of packets on the network. The group also assessed different types of scans and the relationship between speed and information obtained. When comparing the vulnerabilities to the grid, the analysis had the correct direction but came up short. It seems that for section 2A all of the correct testing was performed and the analysis seemed to be very accurate; however, the group did not go very far in depth with their findings and could have had a very good analysis if they had elaborated more. For part 2B, the articles chosen were very good and pertain to the lab. The issues and problems section accurately described the issues the group faced, and the conclusion summed up the group’s findings well. Also, it seems that only the literature review suffered from the strange formatting.

  10. The team started with a strong abstract indicating what they were going to talk about in their laboratory. Their literature review was in depth. They talk about what passive reconnaissance is and how they will use it, and they covered the different tools the team was going to use for their reconnaissance and monitoring. In the methods for the second part of the lab, the team indicated that they installed Nmap on Backtrack, Nessus on Windows XP SP3, and Wireshark on Windows XP SP0. Per the instructions, one machine was to scan a target machine while another listened, and Nessus and Nmap were to be installed on one machine. Backtrack already has Nmap installed; how was Nmap installed on this VM, or was Nmap updated to a newer version? How was Nessus installed? Other groups showed how the installs of Nessus and Nmap were done. Wireshark seems to be the most popular capture tool across the groups, and it was mostly run in Backtrack.

    The team then talks about Nmap being prone to potential insecurities. They get this information from Symantec and only mention that the problem exists. Does this problem exist on the local host if running on a Windows machine? Does the file creation vulnerability happen in Backtrack, where the local hard drive starts out unmounted? The group then goes on to talk about Snort, Ettercap, Dsniff, fragroute, and fragrouter. They mention that Dsniff, fragroute, and fragrouter contained a backdoor if downloaded on May 17, 2002. Source code auditing can be a great way to review mistakes that the coder has made in the software.

    In the team’s issues section they mention that Wireshark was originally installed on Windows Server 2003 but ran out of hard drive space, so they switched to a different VM and then had later problems with Wireshark overflowing the memory and locking up. Would it perhaps have been better to use a different VM such as Backtrack? Could this have solved your memory problems, since Wireshark is pre-installed on Backtrack?
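    On the disk space and memory problem specifically, another option besides switching VMs would have been to let the capture tool rotate its own files with a ring buffer. A minimal sketch follows, assuming tshark (the command-line capture tool that ships with Wireshark) is available; the interface name, file size, and file count are assumptions for illustration only.

        # Minimal sketch: a long-running capture with a ring buffer so the
        # capture never exhausts disk space. Interface name, file size, and
        # file count are hypothetical values.
        import subprocess

        cmd = [
            "tshark",
            "-i", "eth0",                # capture interface (assumed name)
            "-b", "filesize:51200",      # start a new file after ~50 MB (value is in kB)
            "-b", "files:10",            # keep at most 10 files, overwriting the oldest
            "-w", "passive_recon.pcap",  # base name for the rotated capture files
        ]
        subprocess.run(cmd)

    Applying a capture filter up front would also have kept the amount of stored data down.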

Comments are closed.