April 22, 2025

9 thoughts on “TECH 581 W Computer Network Operations: Laboratory 3, Team 1”

  1. The abstract, while not a major part of the lab exercises, is more of a restatement of the objectives than a summary of the activities performed in the lab. The literature review has a few spelling and grammatical errors that make it slightly difficult to read. One major criticism is that the difference between blackbox and whitebox testing is not explored in enough depth. Saying that whitebox testing “looks at the inside perspective” doesn’t really tell the reader much about whitebox testing. Inside of what? Another major criticism is that passive reconnaissance is only examined in light of the readings given out with the lab exercises. Passive reconnaissance is much more than blackbox and whitebox application testing. The lab exercises are a good example of passive reconnaissance against a network.

    It would’ve been nice to see the methodology before presenting the table of passive attack tools, to give the reader a reference for how the information was gathered and formatted. Maybe that would explain the presence in the table of tools that are most definitely active reconnaissance tools, such as XProbe2 (a port scanner and TCP/IP fingerprinting tool) and GFI LanGuard (a network security scanner).

    The findings paragraph mentions tools that are not in the table. Are these findings separate from the work done to create the table? The last part of the findings section, regarding timing of the attack or script traffic, is weak. It describes what may happen when delaying the traffic from a program or script of an active reconnaissance tool, but doesn’t view this information in light of the subject of passive reconnaissance. The statement that it will be “easier to detect a script or tool if it takes milliseconds” needs some backing. What if the attack is so fast that it’s missed by an IDS that only samples every couple of seconds?

    The findings for part 2a were insufficiently detailed. How does the fact that Nessus scans for 1000s of vulnerabilities allow an attacker to sieve through the information more quickly? How do you know it only takes a few packets to discover a vulnerable port? What patterns might emerge from putting the vulnerabilities Nessus finds into a table? The operating system bias is valid but what other patterns might emerge? Are there any cross platform vulnerability types that occur more than others?

    I think the point of 2b was missed entirely. The point of this exercise was to evaluate tools that had been advertised and published as security tools but had secret backdoors installed that compromised the systems of the attacker. The section of 2b on determining whether or not a tool is hostile is on the right track with respect to the purpose of this part of the exercise. Finally, the citation for “Precision and accuracy of network traffic generators for packet-by-packet traffic analysis” isn’t properly formatted in APA5.

  2. Team one did a decent job of explaining what was going to be accomplished in the lab. The abstract did not meet the length requirement in the syllabus and read more like the list of objectives in the lab three design guide; however, it did explain the tasks to be performed in lab three. In the literature review section, team one began with an introduction explaining the general topics of the articles, and then went right into the first article, the one from Patrice Godefroid. From the literature review it is apparent that the team made an attempt to create cohesion between the articles, which should have been simple since there were only two of them and the second seemed to pick up where the first left off. That attempt did not end in success; rather, team one has once again created a list of the thoughts presented in each article, almost entirely independent of each other, explaining what the article was about and following that with the literature review author’s personal opinion on each article. There appears to be no attempt to answer the questions presented in the syllabus for the literature review, and no attempt to integrate the literature into the lab. I call into question the sentence that begins “Some of the findings that Privacy Oracle found.” The tool found findings? In my experience researchers generate findings from data created by the tools they use.

    The methods section of team one’s lab jumps right into the table required by part one of lab three, with no explanation of what is going on outside of the abstract at the beginning. I question team one placing stealing mail as a passive attack. Stealing mail would be the same as intercepting and diverting a network packet; the act of not getting the packet (or parcel) means that the victim would be aware of the attack, making it active rather than passive in nature. Each part of the lab is broken down into a methods and findings section for that particular section of the lab. This does not create a lab document that flows from beginning to end or allow others to recreate the experiments of lab three in any meaningful way, nor does it follow the guidelines in the syllabus. There are no unified methods or findings sections, and that left me confused on first reading. The methods that team one does present are lacking, are not a form of academic or scientific method, and fail to explain the strategy or techniques used to answer the questions presented in the lab design document. A unified findings section appears to have been at least attempted, but after review it is apparent that the three members of team one did not collaborate well in the creation of the final lab document. Team one states in their findings that Nessus and Nmap have a bias towards Windows machines; team three later states that Nessus and Nmap have a bias towards UNIX-style machines. This calls both teams’ results into question as either a guess or a lack of understanding of the lab. Team one’s case studies seem to suggest a lack of understanding of that particular part of the lab. They present tools that “hackers” are known to generally use, and how they use them. This was not the goal; rather, the goal was studying tools used as security defense tools that were themselves at one point or another the attack vector in an exploit, tools that turn the attacker into the attacked.

  3. Team one’s effort this week is an improvement over last week’s. There is an issue with a change in voice, and the authors should be wary of writing in the first person singular.

    Your literature review thoroughly summarizes the articles, and the group makes an attempt to evaluate the literature as well as relate it to the labs. You use first person singular voice in the literature review, but there are three of you in the group. It wouldn’t be a big deal, except that the author submits an opinion. Do all of you agree or has one of your members gone rogue? I find it interesting that this group was able to glean so much from the Godefroid article, given that it was an abstract to a presentation. Is there a relationship between Jung et al and passive scanning in particular?

    The section covering Part 1 is very vague. I’m unclear as to what your methods are here and what the table represents. Are all the tools listed supposed to be passive reconnaissance tools? I don’t understand how your findings relate to the table.

    Part 2A is completely different. The methods section is written well. Screenshots would be nice but aren’t really necessary since these are command-line tools. I think I could repeat the experiment with the given steps. I like that you based your thoughts on the tools’ biases on numbers from documentation rather than one test of one operating system. Your setup for the second scan should be in the methods section, but it is still very detailed and easy to follow. Your analysis of the passive scan was exceptionally detailed and very well done.

    In Part 2B, where did you get the information for your case study? What about exploiting vulnerabilities in penetration tools themselves? You give good examples of tools that are used in or to create exploits, but the idea was to look into the issue of the tool being used against the operator. In your prevention methods, you advocate the use of digital signatures. Is there a chance the signature could be forged? Can a reliable source be compromised without their knowledge? Creators of open source tools may have perfectly legitimate reasons for wanting to keep their identity a secret. What if the tool created is considered by some governments to be a weapon, for example? I don’t think it’s valid to use a desire for anonymity as a basis for judging the safety of a tool. If you outsource code auditing, what is the potential that something will be missed? Who is liable? Does this process become more difficult with increased complexity of the code? You ask the question, “What are the risks of using untested or exploited penetration tools?” but the paragraph that follows doesn’t really answer the question, or even have much to do with it.
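    On the digital signature point, the mechanics of checking a signature are simple enough; the hard part is deciding whether to trust the key behind it. As a rough sketch of the checking step (my own illustration, not taken from the team’s report; the file names are placeholders), GnuPG can be driven from a small Python script:

        import subprocess

        # Placeholder file names for illustration only.
        tool_archive = "netcat-1.10.tar.gz"
        detached_sig = "netcat-1.10.tar.gz.sig"

        # gpg exits non-zero if the signature does not verify against a key
        # that has already been imported into the local keyring.
        result = subprocess.run(
            ["gpg", "--verify", detached_sig, tool_archive],
            capture_output=True,
            text=True,
        )
        print(result.stderr)
        print("verified" if result.returncode == 0 else "NOT verified")

    Note that a successful verification only proves the archive matches a signature made by some key; it says nothing about whether that key belongs to a trustworthy author, which is exactly the forgery and compromised-source concern raised above.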

  4. I would comment that I thought the literature review to be decently put together. I believe this group excelled in comparing the two articles given in a head-to-head fashion: something which few other groups attempted. The way in which this literature review was written made for fairly easy reading; although I found the use of the first person in certain areas to be out of place. I thought the ‘Findings’ section to attempt a reasonably detailed examination of Part 1: certainly a step in the right direction. Additionally, I found the discussion of Part 2B, that on exploits in security tools, to be nicely done, even if I cannot agree entirely with some of the ideas presented. Finally, commendable effort in ‘presentation’ is noticeable from this team; in this regard I believe this team to be continually improving.

    These positive points aside, numerous problems were detected in this write-up: some relatively trivial, others severe, possibly fundamental errors. First, I found the use of ‘the student’ a strange choice of wording for the abstract section: this reads like a lab instruction sheet, and not an abstract; consider using ‘we’ or an impersonal passive construction, such as “the biases were determined” or “tools were identified.” Additionally, I found no ‘methodology’ or ‘procedure’ detailed under the ‘Methods’ section for Part 1, but found what appeared to be the ‘results’ listed instead. Furthermore, this table of ‘passive’ tools did not appear to be well conceived. I noted ‘ping’ listed, along with a number of ‘spoofers’ and active scanners (‘superscan,’ ‘unicornscan,’ etc.). I would ask: how can you possibly classify ‘ping’ as a ‘passive’ tool? It appears to me that little research effort went into assembling this tool listing, as many of the tools included are of an obviously ‘active’ nature. Additionally, I tried to reconcile these dubious entries with some logically consistent pattern in the report, i.e. did the team present that ‘slowing’ an ‘active’ tool could reclassify it as ‘passive’? In the end, this could not be the explanation: they are adamant in asserting that ‘speed’ does not change the nature of a tool; hence, these are likely errors.

    The lab exercise appeared to be performed improperly for some stages of testing. For the ‘meta exploit’ test, which was to involve three hosts, the data presented almost certainly points to only two hosts being used. I find it likely that ‘Wireshark’ was run on the same machine which was designated to be the ‘attacker,’ and so find the reported data to be flawed. I think it should be obvious that if a ‘sniffer’ is run on the same machine as a ‘scanner,’ this ‘sniffer’ will naturally be privy to all the network traffic the ‘scanner’ is. This is not true when the ‘sniffer’ is run on a separate host on the network (at least in the virtual ‘switched’ network environment used in this experiment), and so I deem the results of the ‘meta exploit’ test to be fundamentally flawed. Furthermore, I question the assertion that ‘Nmap,’ and more so ‘Nessus,’ are predominantly biased in favor of Microsoft-based operating systems. I, too, initially thought (before researching it) that this would be the case: but a cursory examination of the ‘Nessus plug-in’ link that is provided proves this is ‘resoundingly’ not true. I would ask: what evidence leads to this conclusion, as nothing which supports the ‘Microsoft’ argument is found in the write-up?

    I find a number of problems present in the ‘hostile tools’ discussion (2B). Foremost, I do not find the case study chosen in relation to the ‘Metasploit’ framework to be relevant in the discussion: no flaw was found in the framework itself; rather, it was used by security professionals ‘to find’ a flaw in VML. How is this representative of a ‘hostile’ or ‘exploited’ penetration tool? I believe the ‘Netcat’ case study legitimate, but take exception to the ‘gaping security hole’ assertion. The abuse of a tool is not really ‘a security hole,’ nor do I think ‘Netcat’ a case of exceptional note. Any application, such as Mozilla Firefox, or telnet, or SSH, or notepad, ad infinitum, can be used ‘abusively,’ yet I do not believe that many would classify those programs ‘to have gaping security holes’ because of this. Does a screwdriver have a ‘gaping security hole’ because it is often used to force locks?

    Finally, I found the discussion on ways to counter these ‘hostile’ tools to be a bit vague. Many ‘name-able,’ concrete methods are available in this area (MD5 hashes, Tripwire, jails, etc.), so why resort only to generalities? Additionally, the section which addressed the risks of untested or exploited penetration tools seemed to miss the point of the question entirely. The question asked is not about the dangers of penetration testing, but about the dangers of ‘trojan’ or ‘compromised’ tools used in penetration testing. I would submit that the ‘real’ danger in this case is that nothing unusual happens during the test and no machines are taken down; because of this, no one notices that sensitive data has been altered or stolen.
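    To name one such concrete control (my own sketch, not something from the write-up under review; the file name and expected digest are placeholders): comparing a downloaded tool against a published checksum, whether MD5 or, better, SHA-256, before running it takes only a few lines of Python:

        import hashlib

        # Placeholder values for illustration only.
        downloaded_file = "nmap-4.76.tar.bz2"
        published_sha256 = "<digest copied from the vendor's site>"

        # Hash the file in chunks so large archives need not fit in memory.
        digest = hashlib.sha256()
        with open(downloaded_file, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)

        if digest.hexdigest() == published_sha256:
            print("checksum matches the published value")
        else:
            print("checksum mismatch: do not run this tool")

    Tripwire and similar integrity checkers automate the same idea across a whole filesystem, recording a baseline of hashes and reporting anything that changes afterward.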

  5. The group’s abstract talks about the different steps in the lab and how they will accomplish each part. The group briefly describes each part of the lab, but they do not convey very strongly what this lab is trying to demonstrate. In the first sentence they say that the lab is about passive reconnaissance, but nothing more. Instead of just going over each part of the lab in the abstract, they should have discussed the importance of this lab, given a brief definition of what passive reconnaissance is, and briefly described the results of the lab. Next the group goes into their literature review. The group starts off with a good description of what both papers are trying to convey. They talk about what passive reconnaissance is and the difference between blackbox testing and whitebox testing. The group does a good job of comparing the two papers that were given in this lab. The group gives their opinion by saying that they agree with Godefroid’s ideas of using whitebox testing instead of blackbox testing. The group then continues with an explanation of what Privacy Oracle is and how the Jung et al. paper used Privacy Oracle to find information leaks in many applications. Then the group talks about how Jung et al. set up their test and the applications used in the test. Last in the literature review, the group concludes with a discussion of the ups and downs of using blackbox testing on applications. In the literature review the group does a great job of describing the methods of the papers and how they compare with each other. The group does not mention how these papers tie into this lab, though. They do mention the theme of each of the papers at the beginning of the literature review, but they do not mention the question each paper is trying to answer. Also, the group does not go over any errors or omissions in the papers.

    Next the group starts the first part of the lab by creating a table that contains the passive tools from the first lab’s table. The table was put together nicely. The table shows how the passive tools used in this lab tend to lean toward the application layer more than any other layer. It also shows how most of the tools attack confidentiality more than integrity or availability. The group then gives their findings on a couple of questions given in the first part of the lab. They discovered that a good tool for passively recreating packet streams is Snort. Also in the findings they talk about how to slow down a tool or script and why that would aid in disguising the attack.

    Next the group goes into the second part of the lab. In this part the group started off by explaining how they obtained, installed, and ran Nessus and Nmap on one of the virtual machines set up in the first lab. The group dedicated a lot of this section to describing how they set up and ran Nessus, but did not cover much of Nmap. This could have been because Nmap was easier to set up and run. Next the group discussed their findings. They start by saying that because Nessus has a lot of tools it is easier to sieve through the information. They also mention that the attacks from Nessus will fit into the OSI model and McCumber’s cube, but they do not give any data on how these attacks fit into the OSI model or McCumber’s cube. Next the group talks about the bias of Nessus and Nmap toward operating systems. The group mentions that Nessus is biased toward Windows operating systems because of their popularity. The group also mentions that Nmap does not have a bias, but could be used on Windows systems more than others. Next the group described how they ran Wireshark against Nessus and Nmap. They did a nice job of explaining the commands used to run the programs and how the test was set up. They explained how all the packets, both sent and received, were captured, showing an example of each. The group then discusses how the data from Wireshark can be used by an attacker to follow Nessus and gain the information they need. They also mention briefly how an attack from Nessus can be seen using Wireshark. The group could have discussed more on how passive tools could be used to detect active attacks on a system. The group is looking at this from the perspective of how an attacker can use this data to perform an attack; they do not look at it from the perspective of someone who is trying to stop an attacker from compromising their network. In the next section the group does show how Snort can be used in conjunction with Nessus and Wireshark to help prevent security testers from giving away important information when scanning a network. Last in this section, the group does a nice job of concluding this part of the lab. They give some nice examples of how this information could help in preventing attacks on a network and some ideas of what needs to be done to reduce the risks on a network.

    Next the group went into the last part of the lab. The group starts off explaining how tools that were once used to help networks are now being used to break into and exploit networks. They also explain that these tools can be obtained easily by both sides. The group puts together three case studies to show how penetration tools were used in a harmful way. The three tools that the group used were Metasploit, John The Ripper 1.0, and NetCat. In each case study the group does a good job of describing the tool. Then they tell how each tool can be used in a bad way, backing that up with a case study that uses the tool to exploit a computer. Next the group discusses ways to ensure that the tools you are using are not hostile. The group gives some nice examples of how to determine whether a program is hostile. They also mention that these methods do not ensure that the program is safe, but are indicators. They mention that the best way to ensure the safety of a program is to allow users to examine the source code. The group then explains the process of auditing source code, concentrating on the use of a tool called Pscan. I believe that the group could have expanded on this part of the lab and shown how source code auditing occurs. In the last section of this part of the lab, the group discusses how untested penetration tools can harm a company’s network. The group explains how the consequences of using untested penetration tools can range from just slowing down a network to damaging the network. The group could have explained and given some examples of how this could happen.

    At the end of the lab the group gave a description of some issues that they had with using the Debian VM. Then the group gave the conclusion to the lab. The conclusion does a nice job of going over what was done in each part of the lab. I believe that the group could have done a better job of describing the findings of each part of the lab, and they could have given a better summary of what was learned in this lab.

  6. I have to disagree with the statement “Whitebox testing is the same method as blackbox testing but it looks at the inside perspective.” While whitebox testing involves help from the staff, blackbox testing is closer to what a hacker would do without the knowledge of the organization’s Information Technology team. Regarding the statement “I think the best way to test the system is to see it from the inside perspective as opposed to blackbox testing, while looks at it from the outside perspective, which Privacy Oracle does”, what was the rationale for thinking that the inside perspective was better than the blackbox testing method? Without the rationale, that section seemed incomplete.

    I have to disagree with the statement “The author purposed an alternative to whitebox fuzz testing.”

    In the paper, Patrice Godefroid did not propose an alternative to whitebox fuzz testing, but gave an alternative to blackbox testing, which was whitebox fuzz testing.

    Group one’s methods section for part one did not contain any explanation; it consisted only of a table. The group should have stated how the table was set up and gone into more detail about the passive reconnaissance tools.
    In the findings section the group stated that “One can make a script or tool slow down by setting the time for the attack to take longer”, but the group did not explain how this could be accomplished. However, I do agree with the group’s statement “Though, it will still be considered active as opposed to passive.” Slowing down the attack would not change the nature of the tool; it would only change the amount of time it takes to perform its functions.
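    One concrete way the slowdown the group mentions could be accomplished (my own sketch, not drawn from the group’s report; the target address and port list are made up) is simply to insert a pause between probes in whatever script generates the traffic:

        import socket
        import time

        # Hypothetical target and port list, for illustration only.
        target = "192.168.1.10"
        ports = [21, 22, 23, 25, 80, 110, 139, 443, 445]
        delay_seconds = 15  # wait between probes to spread the scan out over time

        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(2)
            try:
                # connect_ex returns 0 when the port accepts the connection
                state = "open" if s.connect_ex((target, port)) == 0 else "closed/filtered"
                print(port, state)
            finally:
                s.close()
            time.sleep(delay_seconds)  # this pause is the "slowing down" step

    As the group says, the traffic is still active probing; spacing it out only makes it less likely to trip rate-based detection thresholds.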

    I found it somewhat odd that group one had two separate methods sections. What was the rationale for splitting the methods section into two separate sections? The group did a good job describing how they set up Nessus in their virtual environment, but the description of Nmap was somewhat skimpy.
    In the findings section, I have to partially disagree with the statement “This does allow an attacker sieve the information more quickly. This is because it only takes a few packets for Nessus to discover a vulnerable port on the target system.” With the ability to check for over 1000 vulnerabilities, Nessus would find more vulnerabilities, but the more plug-ins that are installed, the longer it would take Nessus to locate all of the potential vulnerabilities. In the statement “I think that if the Nessus vulnerabilities were put into a grid, like the tools have been in previous labs, patterns would emerge”, the group says a pattern would emerge, but they do not say what that pattern would be. I have to disagree with the statement “Based on the numbers obtained from http://www.nessus.org/plugins/index.php?view=all, most of the vulnerabilities pertain to Windows operating systems (Tenable Network Security, 2009)” because several of those plug-ins were Unix/Linux based as well. In the statement “When an attacker performs an Nmap or Nessus scan against a network, it’s a good idea to slow down the attack to prevent an IDS system from detecting the scan”, the group again did not explain how this could be accomplished.
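    For the record, and in addition to the hand-rolled delay noted above, Nmap itself already exposes the timing controls the group left unexplained: the -T0 (“paranoid”) timing template and the --scan-delay option both exist precisely to spread a scan out. A minimal illustration (the target address is hypothetical), invoked from Python:

        import subprocess

        # Hypothetical target for illustration only.
        target = "192.168.1.10"

        # -T0 is Nmap's slowest built-in timing template, and --scan-delay
        # forces a minimum wait between probes; both are meant to keep the
        # scan under rate-based IDS thresholds.
        subprocess.run(["nmap", "-T0", "--scan-delay", "30s", target])

    Nessus offers something similar through its scan policy settings, such as limiting the number of simultaneous checks per host, though the exact options depend on the version.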

    The group did a good job answering the questions in section 2b.

  7. Team 1 begins with an abstract describing the goals and procedures of their lab project. They proceed with a literature review of the articles that were assigned reading for this week. Team 1 describes the difference between blackbox and whitebox testing. They make the statement that they believe that whitebox testing is better. I don’t believe it’s a matter of choosing whether blackbox or whitebox fuzz testing is better. The methods have their own specific purposes. In the article on Privacy Oracle, they are testing proprietary software, and therefore likely don’t know the internal structure of the source code. Their goal is not to test the internal operation of the program for errors, but to determine if it is sending private information to a third party. Team 1 included a list of applications that were tested using Privacy Oracle. The list of applications is a bit unnecessary here since our goal isn’t to learn what applications they tested but what techniques they used in testing, how those techniques relate to our lab, what the testing results were, and what we can deduce from the results.

    Team 1 describes the procedures and findings from testing Nessus and nmap. They discuss running Nessus and nmap alongside Wireshark and how the packets from the scan could be captured by an attacker. They introduce Snort, which is an intrusion detection and prevention system (IDS/IPS). They discuss how slowing a tool may help it avoid detection by an IDS. Their finding is that it is possible to gain telemetry from an active tool by using a passive tool, because both sides of the conversation are captured. Our own tests show that effective packet sniffing depends largely on where the packet sniffer is placed. To effectively capture all of the packets, the packet sniffer needs to be placed in a position that the packets pass through.

    Team 1 includes a case study of an Internet Explorer vulnerability that was found. I believe that Internet Explorer falls outside the bounds of “network penetration tools” as described in our assignment. They also include a discussion on how attackers can use Netcat and John the Ripper as attack tools. I believe the goal of the lab was to find network penetration tools that had been exploited to be hostile against the users of the tool. These are simply tools that can be used for good or evil, depending on whose hands they fall into.

    They then include several good suggestions for protecting the organization against exploited tools. They include a brief discussion of tools and procedures for testing source code. They conclude this section with a discussion of the risks of using untested or exploited penetration tools. They make a comparison between whitebox penetration testing and blackbox penetration testing.

    In my opinion Team 1 spent too much effort describing how they installed the tools and not enough effort explaining what they did with the tools and what was discovered. I believe their inclusion of Internet Explorer falls far outside the definition of a network penetration tool as required by the lab assignment.

  8. The team first starts with their abstract and identifies that they will be looking at passive attacks for this lab. They then go on to the literature review and describe the papers that were read. They do a good job comparing and contrasting, and even put their own thoughts into the discussion of the topics. One thing that could help, as I told the other groups, is to seek out additional information that may support your stance or strengthen the arguments between the pieces of literature. The team then goes into the methodologies and explains what they are going to do and the tools they will be using. The thing that threw me off was the multiple findings sections. In the future, putting the findings together and then splitting them into sections will create a more organized lab. Upon further reading it was noticed that this group had installed Nessus onto a Linux operating system rather than one of the Microsoft operating systems. Was there a particular reason this was done? Also, the statement that Nessus is more geared toward finding vulnerabilities in Windows is agreeable. However, do you think that this might change in the future when another operating system becomes king? Are Windows systems subject to an onslaught of attacks because of their wide use? An attacker would want to get more “bang for his or her buck” when going after a system.

    Next the authors go on to discuss the second part of the methodologies section. It was noticed that the findings for this section were intertwined within the section and were not presented in response to the actions they took. This made reviewing the lab a little more difficult and may have caused some confusion for this reader. They did, however, describe some of the criteria by which a tool would be deemed appropriate for use within an enterprise or company setting. Too often there are administrators who become lazy and try to find the quickest way to do things, which may not be the best or standard way. This helps others create a checklist of tools approved for uses such as checking for vulnerabilities. Which of the tools that were used in the lab are considered hostile or non-hostile? What would allow for standard use outside the lab environment? The team then goes on to discuss the issues that they had with the lab; it was an issue with storage on the VMs that affected multiple teams. They then finish up with their conclusion and what they found based upon the lab. The conclusion seemed a little simple, but it did serve the purpose of describing what they learned in lab three.

  9. The team started with a strong abstract, indicating key points of their laboratory. They covered the different tools that the team was going to use for scanning packets. Their literature review was very in-depth. They covered blackbox and whitebox testing as the main topics of the reading. At the end of the literature review the team asks a question: does the software itself need to be tested before it can be used to test other applications?
    Software should be reviewed and tested against known results beforehand; otherwise, software that compiles correctly may not function as desired, or may function incorrectly. Using software that has not been properly tested is like taking a prototype car out for a high-speed test, at speeds exceeding 110 miles per hour, without checking whether the ball joints are properly installed or whether they can handle speeds greater than fifty-five miles per hour. Putting products into production without testing them can result in improper telemetry readings. These improper readings can give a false positive or a false negative. If a tool that is supposed to keep a user from being detected is used without testing and reports a false negative, then the tool has failed at the purpose it was created for. Another way to look at this: if a tool was created for defense and was not tested or reviewed, there would be no way of knowing whether the tool is accurately serving its purpose. This defensive tool may not be defending at all, and attackers would then have a sure way into the network.
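    A tiny, self-contained example of what “testing against known results” can look like (my own sketch, not from the team’s report; it assumes nothing else is using local ports 8081 and 8099): open a listener on a port you control, point the scanner-style check at it, and confirm it reports what you already know to be true.

        import socket
        import threading

        # Open a listener on a known local port so the expected result is known in advance.
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("127.0.0.1", 8081))
        listener.listen(1)
        threading.Thread(target=listener.accept, daemon=True).start()

        def port_is_open(host, port):
            """The 'tool under test': a minimal TCP connect check."""
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(2)
            try:
                return s.connect_ex((host, port)) == 0
            finally:
                s.close()

        # The known-open port must report open; a port with nothing listening must not.
        assert port_is_open("127.0.0.1", 8081) is True
        assert port_is_open("127.0.0.1", 8099) is False
        print("tool behaves as expected on known inputs")

    If a tool cannot pass even a check this simple, its telemetry on a real network should not be trusted.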
    The team then has a chart in methods part 1. The chart is well organized and easy to read. Perhaps a short paragraph explaining the chart would have been helpful for readers, especially readers who have not read the laboratory report; readers outside this class would greatly appreciate an explanation. For part 2a, I understand that the Debian VM is short on space after installing packages, but why choose Ubuntu as the replacement VM? If you already have Backtrack, why create another VM when most of the tools are already installed on Backtrack? Backtrack was used later in the lab report. Nessus was the only program missing from Backtrack, and there are tutorials available for installing Nessus on Backtrack; one was even posted on Blackboard. Another question would be: why use a Linux VM at all? If the VM was too short on space, why not switch to a Windows machine? The software required can be obtained for both operating systems.
