April 22, 2025

10 thoughts on “TECH 581 W Computer Network Operations: Laboratory 3, Team 2”

  1. The first item I noticed right away with this lab’s submission was that it was missing the proper tags that each lab report is supposed to have per the requirements. I found one particular sentence in the first paragraph hard to interpret. This sentence was “While laboratory two dealt with the topic of active network recon, laboratory three focuses on the topic of passive network recon, meaning performing information gathering on a target network with limited possibility of being discovered”. The group needs to thoroughly read the sentences to make sure that the wording sounds correct before submission, and look for things that Word might not find. The formatting of the citations in the literature review does not follow the APA 5th edition formatting. Page numbers are to be included when citing a source. I do not agree with what the group stated about Godefroid’s article. In this article Godefroid presents the downsides of blackbox fuzz testing and proposes using whitebox fuzz testing instead because of those downsides. The group keeps saying “personally identifiable information” and I think they mean personal identifiable information. Reading the literature review once more, I found that numerous words were missing from the sentences, making them harder to read.
    Instead of listing each author of the Privacy Oracle paper every time, the group, after citing the authors in full the first time, could have put Jung et al. to save some space and avoid repeating all of the authors’ names in each sentence. The last part of this group’s literature review looked like it was just a list completing the required answers as per the syllabus. For the next literature review the group should work these items into their discussion of the articles rather than throwing everything together at the end. Make the literature review more cohesive. I would have liked to see the group research Godefroid’s article further, looking into the references that the author used and maybe even the conference at which the article was presented. This would be the supporting data for this article. Do MORE research for literature reviews.
    The group had a very thorough methods section, but the screenshots would have been better placed in the methods section instead of at the end of the lab report. According to what the group said about getting their tools for the table from the first two labs, they should have been able to subtract the 2nd lab’s table from the 1st, and that should have given them the 3rd lab’s table. I agree with the group’s point of view on whether it is harder to detect an attack if it takes longer. The group hits on the major point that recreating the packet stream is difficult because of the size of the attack and the fact that it can hide in all the “noise”. The screenshots the group included were very hard to read; it looks like they were shrunk down from a larger size.

  2. The literature review is very narrowly focused. Instead of evaluating the topic of passive reconnaissance in light of the literature provided along with other literature found about the topic, the concepts of blackbox and whitebox fuzzing are expounded in excruciating detail using only two sources. Following a summary of each of the articles, the reader is shown a brief comparison between the two testing methods but never any ties to the exercises for this particular lab. Also, one minor quibble about the citation of the Privacy Oracle’s authors: with that many authors, it should be cited as (Jung et al., 2008).

    The methodologies section for part 2a shows promise with screenshots. The Nessus section lacks detail. What host was it installed on? What is ultimately disappointing with the level of detail given is the lack of actual data that was captured by the host running Wireshark. The screenshot provided as figure 1-11 shows broadcast traffic that was sent out through the virtual switch. None of the core traffic of the scan was captured by this host due to the switched network they were connected to. Only capturing ARP traffic and SMB browser announcements hardly accounts for capturing attack traffic. How would this be differentiated in any way from normal network traffic? Is any broadcast traffic then to be classified as possibly malicious? One of the unspoken requirements of this section of the lab was overcoming this limitation of our virtual environment, and the statements made in this section show a lack of basic networking knowledge. The targeting of the XP SP3 VM is incorrectly detailed. The authors state that the machine has “little use”, making it more difficult to break in to. What the authors don’t say they considered is the fact that the XP SP3 VM has the firewall turned on by default, making reconnaissance against it much more difficult than against the RTM one.
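
    A quick classification of the capture would make this concrete. Here is a minimal sketch, assuming the Wireshark capture was saved to a file (the filename is hypothetical) and that scapy is available; it simply sorts frames by destination MAC:

        from scapy.all import rdpcap, Ether  # assumes scapy is installed

        packets = rdpcap("capture.pcap")     # hypothetical name for the saved Wireshark capture
        broadcast_or_multicast = 0
        unicast = 0
        for pkt in packets:
            if Ether not in pkt:
                continue
            dst = pkt[Ether].dst.lower()
            # Broadcast is ff:ff:ff:ff:ff:ff; a multicast MAC has the low bit of its first octet set.
            if dst == "ff:ff:ff:ff:ff:ff" or int(dst.split(":")[0], 16) & 1:
                broadcast_or_multicast += 1
            else:
                unicast += 1
        print("broadcast/multicast frames:", broadcast_or_multicast)
        print("unicast frames:", unicast)
        # On a switched segment without port mirroring, a third host should see almost no
        # unicast scan traffic, which is exactly what figure 1-11 suggests happened here.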

    The findings of part 2b, regarding the exploited security tools, deserve some scrutiny. The authors state that the purpose of the exploited tools is to “…turn them into spring boards for counter attack by the attacked.” While this is certainly possible, and is detailed in this article on SecurityFocus, http://www.securityfocus.com/infocus/1857, the primary goal of the programs studied was to load a Trojan horse and surreptitiously notify the tool’s author of the infected host so they could use that machine. Counterattack was never an objective, nor was it seen in either of the case studies. The finding that both of these exploits were made possible due to “changes in the C based source code that need to be included during compilation…” doesn’t make sense. This would be the method by which the exploit was done but hardly indicates a pattern between the two.

    The methods suggested for auditing security tools are sound and look at all aspects of the problem, from the network traffic the tools generate after the attack to the auditing of the source code. One thing that is not addressed is source code availability: what if the tool is commercial and closed source?

  3. Team two did an excellent job with this week’s lab. Most of the faults I can find are minor. In your abstract, the team says that reconnaissance is necessary so an attacker won’t get caught, but couldn’t an attacker get caught in the reconnaissance process? The team says passive means “performing information gathering on a target network with limited possibility of being discovered.” I took passive in this case to mean something that is acted on rather than acting. If your definition were used, you would need to include several tools normally considered to be active. You also state that you will be gathering information without interaction with the target network. Is that really possible? Do all of the tasks in this week’s lab relate directly to passive reconnaissance or are there multiple learning objectives?

    The group’s literature review is excellent, but you leave me hanging. How can both articles be used? Why is the Privacy Oracle article more useful? Is it maybe because what they did amounts to a very complex form of passive reconnaissance? Would this actually be useful, or are there easier ways to do this? What gaps does it fill?

    In the methods section, the team states that you considered every tool in BackTrack. Are they all reconnaissance tools? Wouldn’t it have been better to just use those in the relevant groups? When running the scanning experiment, I like that you describe how to install and use the tools in question, except Wireshark. Are you certain that NMAP and NESSUS are both based solely on ARP packets? Would you have gotten better results if your target were not essentially idle? Did the packets captured by Wireshark mean anything? When discussing the safeguards for using open source tools on an enterprise network, your group mentions fuzzing. Isn’t fuzzing really just a form of code audit on the quick? Also, you suggest using Privacy Oracle. Where are you going to get Privacy Oracle, and how are you going to determine that it will detect efficiently across your systems since it is itself experimental? The discussion of your three (actually two) safety measures is fairly flimsy. Tell me more about each. What are the pros and cons? They are all technology based. Are there other methods that could also be applied? Is code auditing practical in the enterprise? You never answer this question, which was required as part of the lab.

    In your issues section you state that problems with Citrix limited your ability to get results in your lab. When you had problems with Citrix did you contact anybody? The technology not functioning should never be an excuse from technology students, especially graduate students.

  4. I found this lab write-up to be presented in a visually pleasing manner and also significantly informative in content. The abstract was particularly well worded (however, I would say this section read more like an ‘introduction’ than an ‘abstract’ proper: but this cannot be faulted by the ‘canonical’ lab structure). The literature review was in depth and quite lengthy. The procedure section was detailed, and the results discussed and clearly illustrated by screenshots. Also, I found the ‘passive’ tools selection in the results table to be informed, if brief. Furthermore, I thought the discussion of ‘compromised’ tools to be interesting and well documented. Finally, the section addressing the ‘safety of penetration tools’ discussion was concise and well laid out.

    Some significant omissions and oversights did exist with this write-up, however. Of first concern, the literature review appeared disorganized both in structure and conceptual placement. The overly long first and last paragraphs were distracting, and the individually definable sections of the review, while reasonably good by themselves, were diminished by the patchwork nature of the body taken in its entirety. In sum, the review lacked internal consistency and organization: something which can be easily remedied in future write-ups, however.

    I found the ‘Methodology’ and ‘Findings’ sections to be intermixed, with a discussion of results proper being done in the ‘Methodology’ section. Here too, better organization would benefit the content presentation in this write-up. I would also submit that the overly detailed description of the Nessus installation (which for all intents and purposes appeared to have been a slightly modified copy-and-paste from an instruction sheet) was relatively superfluous in the scope of the exercise, and could have been greatly condensed. Also, the question of patterns in Nessus exploits is ‘given a hand wave,’ but amounts to only a long-winded ‘yes,’ with no description of what these patterns might be.

    The discussion of the findings (in the methodology) raises some serious questions about the conclusions drawn. The assertion is made that: “[Nessus and Nmap]… use small amount of packets to gather large amounts of information such as open ports and vulnerabilities…” I would ask, as witnessed by the screenshot provided (1-11), how can ‘connection-based’ services be engaged by the packets shown, which are obviously ‘connectionless’ in nature? I am certain that the correct data was gathered, but the conclusion drawn is wrong. Could it be that the actual ‘connection-based’ traffic is invisible to the third observer (and therefore unrecorded), rather than some ‘incredibly efficient’ capacity inherent in the two scanners coming into play? I think it becomes fairly obvious that only broadcast and multicast packets are being recorded by the third party: the bulk of the traffic remains ‘unseen’ to the remote observer. This, of course, makes any conclusions drawn from this ‘meta exploit’ experiment of a dubious nature. To be fair, this team indirectly addressed this ‘switch hiding’ issue by referring to “the SPAN port,” or port mirroring setups, later in the lab: but the reason for use of these types of ports was never explained, and certainly not applied back to the experimental setup.
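
    A simple protocol tally of the third-party capture would settle the question. A minimal sketch, assuming the capture was exported to a pcap file (the filename is hypothetical) and scapy is available:

        from scapy.all import rdpcap, TCP, UDP, ARP  # assumes scapy is installed

        packets = rdpcap("observer.pcap")              # hypothetical capture from the third host
        tcp = sum(1 for p in packets if TCP in p)      # connection-oriented traffic (SYN scans, Nessus checks)
        udp = sum(1 for p in packets if UDP in p)      # connectionless traffic (SMB announcements, etc.)
        arp = sum(1 for p in packets if ARP in p)      # address resolution only
        print(f"TCP: {tcp}  UDP: {udp}  ARP: {arp}")
        # If the third observer really saw the scan, TCP should dominate the counts; if the
        # switch is hiding the unicast probes, the capture will be almost entirely ARP and
        # broadcast UDP, which is what the 'connectionless' packets in figure 1-11 suggest.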

    Furthermore, I wonder at the inclusion of ‘Locality Buffering’ in a listing for ‘passive’ penetration tools. According to a quick Google search, ‘locality buffering’ is simply a buffer algorithm for increasing the ‘speed’ of packet analysis: it does not really appear to be a passive reconnaissance ‘tool’ in and of itself. Additionally, tangentially related, the compromised ‘TCP Wrappers’ program mentioned later is essentially a firewall application: is this really an exploited ‘network penetration tool?’ Finally, I would note that the topic of the ‘operating system bias’ of the two penetration tools used in the lab is never raised.

  5. I thought their abstract was well written and documented what they were planning to do in Lab 3. I did find some grammatical mistakes in their writing, which made it very hard to follow at times. Be sure to check for these mistakes before posting your paper to the blog. The formatting of the citations did not follow the APA 5th edition formatting. They neglected to include page numbers in their citations.
    This group's Methods section was very thorough and included screenshots that were helpful in understanding what they were writing.
    The group did not organize their paper by separating out parts 2a and 2b. This made it hard to find and follow which tools they found that were originally intended for ethical use but eventually used for unethical purposes. Once I found this information, they did a good job of describing the tools that were exploited, how they were exploited, and provided two good case studies.
    They did a good job of explaining how tools can be tested for exploits before using them for penetration testing. I agree with their conclusions and would say overall that their paper was well written.

  6. Team 2 begins their lab report by defining passive reconnaissance as “performing information gathering on a target network with limited possibility of being discovered”. They then described what they are going to do in this lab. I did notice the word “where” where I believe they intended the word “were” in the second paragraph of this report. They also used the word “though” where I believe they meant to use the word “through”. There were a few other typographical and grammatical errors dispersed throughout the document, but I wanted to specifically point these out as examples.

    Team 2 proceeded with a literature review on the two articles that were our assigned reading this week. They described the Privacy Oracle system, which tests common applications to determine if they are sending private information over the network. One issue here is that they state that AutoIT “controlled the installation of each of the 26 applications” when the article they referred to states that AutoIT automated the data input for the applications (Jung, Sheth, Greenstein, & Wetherall, 2008, p. 280). It didn’t control the installation. The article on blackbox vs. whitebox fuzzing was also discussed. They describe the use of SAGE to gather constraints from the application so that all of the application paths can be tested. I disagree with the statement that SAGE and Privacy Oracle could be considered complementary to each other. They are two completely different testing environments with two completely different goals. The goal of Privacy Oracle is to test for private information that is deliberately sent over the network from the application, without the user’s knowledge or consent. SAGE, however, is a system to analyze program code to determine all possible execution paths. This is done so that fuzzing will follow all of the possible paths of execution. Although these two techniques are not necessarily mutually exclusive, their goals are vastly different. The sentence “These topics and themes work into lab three by showing us one possible way to gather vulnerability information in a possibly remote and passive way” needs explanation. What possible way are we talking about? This literature review never actually tells us how these articles relate to our laboratory assignments, only that they both have to do with conducting an experiment in an enclosed environment.
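
    To make the blackbox/whitebox distinction concrete, here is a minimal toy sketch of the idea behind SAGE (the target function and the numbers are illustrative only, not anything from the group’s report): random blackbox fuzzing almost never reaches a deeply nested branch, while a whitebox approach solves the observed path constraints to construct an input that does.

        import random

        def buggy(data: bytes) -> None:
            # Toy target, roughly the example used in the whitebox-fuzzing papers:
            # it only crashes when four nested byte checks all pass.
            if (len(data) == 4 and data[0] == ord("b") and data[1] == ord("a")
                    and data[2] == ord("d") and data[3] == ord("!")):
                raise RuntimeError("crash")

        # Blackbox fuzzing: random inputs with no knowledge of the branches.
        # The odds of hitting the crash are about 1 in 2**32 per attempt.
        crashes = 0
        for _ in range(100_000):
            try:
                buggy(bytes(random.getrandbits(8) for _ in range(4)))
            except RuntimeError:
                crashes += 1
        print("blackbox crashes found:", crashes)   # almost certainly 0

        # Whitebox fuzzing (SAGE's idea): record the path constraints from one run
        # (data[0] == 'b', data[1] == 'a', ...), hand them to a constraint solver,
        # and replay the solution.
        try:
            buggy(b"bad!")                          # the input a solver would produce here
        except RuntimeError:
            print("whitebox-derived input triggers the crash")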

    The methodology section reads like a very detailed how-to for installing and running nmap and Nessus. In my opinion this group should have put less effort into describing how to install and run the tools, and more effort into describing the information that was obtained when the tools were run.

    Their final section discussed their research into network penetration tools that had been exploited. The two tools that they mentioned are TCP wrappers and TCPdump. Although TCPdump is best described as a packet sniffing program, TCP wrappers does not fit into the classification of a penetration tool. TCP wrappers is better described as a network security program for Linux. It provides control over network services and which hosts are allowed to access them (see ITSO: TCP Wrappers – http://itso.iu.edu/TCP_Wrappers ).

    Team 2 lists three methods for verifying the safety of their tools: fuzz testing, measuring output with a packet sniffer, and code auditing. The second two methods are fairly straightforward concerning what they intend to accomplish. In their first method, however, they suggest using Privacy Oracle and SAGE together to test the tool. Privacy Oracle is a method for determining if a program is sending personal information over the network to a third party. It does this by repeatedly running the program in a virtual machine, inputting different data into the program each time, and capturing the output. The outputs are then compared to each other to discover changes that are caused by varying the input. SAGE uses a system of whitebox fuzzing to ensure that it follows all of the execution paths and tests the entire program. Most of the scanning tools that we’ve used so far simply require a target IP address, or range of IP addresses, and perhaps some configuration parameters. What do we expect to accomplish by fuzzing the IP address or configuration parameters, and how does this ensure that the program is safe? If we remove the input procedure from the first method (Privacy Oracle and SAGE), we are essentially left with the second method (virtual machine and packet sniffer). I just feel that this needs some further explanation.
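
    To illustrate what the differential step of that first method would actually buy us for a scanner, here is a minimal sketch of a Privacy-Oracle-style comparison. It assumes the tool has already been run twice in an isolated virtual machine with different inputs and that each run’s traffic was saved to a pcap; the filenames and the target placeholders are hypothetical, and scapy is assumed:

        from scapy.all import rdpcap, IP, Raw

        def destinations(pcap_path):
            """Collect the destination IPs and raw payloads one run produced."""
            flows = {}
            for pkt in rdpcap(pcap_path):
                if IP in pkt and Raw in pkt:
                    flows.setdefault(pkt[IP].dst, set()).add(bytes(pkt[Raw].load))
            return flows

        run_a = destinations("run_input_a.pcap")   # hypothetical: tool run against target/config A
        run_b = destinations("run_input_b.pcap")   # hypothetical: tool run against target/config B

        # A scanner should only talk to its target, so a destination contacted in both
        # runs even though the inputs changed (e.g. a "phone home" host) stands out.
        for dst in set(run_a) & set(run_b):
            if dst not in ("TARGET_A_IP", "TARGET_B_IP"):   # placeholders for the scan targets
                print("contacted in both runs regardless of input:", dst)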

    I believe that Team 2 needed to spend a bit more time performing research to verify the statements they made in this lab report. Some of their conclusions needed further explanation. They would have also done well to proofread the report to eliminate the grammatical and spelling errors. In my opinion, they spent far too much time and effort creating a how-to for the installation and running of nmap and Nessus and not enough documenting what knowledge was gained by conducting this experiment.

  7. Group two did a nice job in creating an abstract for the third lab. They start off the abstract with an introduction to why we would use passive reconnaissance and what passive reconnaissance is. They transition from the second lab into this lab nicely. Also in the abstract they do a good job of briefly explaining what is going to be done in this lab. Next the group goes into the literature review. This literature review was well put together. The group started off by explaining each of the papers and comparing them to each other. The group talks about the methods used, or not used, in each of the articles. They describe the programs used in each test in a detailed manner. The group also gives their opinion on how the writers could have used a virtual environment to help in the testing. The group then goes into a more detailed comparison of the two papers and ties them into the lab. They also do a great job of discussing the methodology, research data, and research question of each paper. The group stated that they found errors and omissions in the paper on the Privacy Oracle program, but they found a lot missing from Godefroid’s paper.
    Next the group started their discussion of the methodology of their lab. They quickly covered the literature review first. Then they talked about the second part of the lab. In this section they discuss how they put together the table that was required by the lab. They did a good job of explaining how they did the search using Google. They also explained in a detailed manner what was going to be included in the table. They then quickly discussed the other questions of slowing down the programs and how that would affect the detection of the attack. They could have expanded on this by explaining what they researched and how they did the research. Next the group goes into a detailed explanation of how they installed and ran Nessus. They gave step-by-step installation instructions complete with screenshots that were included at the end of the lab paper. Then the group goes into an explanation of how Nmap was installed and run. Because of the simplicity of Nmap this explanation did not take as long. Next the group very briefly talks about how there would be similarities in Nessus exploits if put into a grid and how Nmap would have a low bias toward operating systems. This part of the lab was lacking. The group seemed to just skim over these questions. They did not explain what patterns would occur if the exploits in Nessus were put in a grid. They simply said that there would be patterns. Also, the group gave only a brief explanation of how Nmap has a low bias toward operating systems, but they did not mention Nessus at all. The group could have done a much better job on these questions.
    After the questions above were addressed, the group goes into a decent explanation of how they used Wireshark to capture the packets sent and retrieved by Nessus and Nmap. They also gave a brief explanation of the types of packets sent and the information that was retrieved by these packets. Next the group discussed the results of using Nessus and Nmap on their Windows XP SP0 machines. They mention what ports were exposed and what operating systems were found. Then they explain that Nessus broke down its findings to make it easier for the user to read. They also state that if Nessus exploits were put in a grid they would create a pattern, but they do not go into any details on the pattern that would be discovered.
    Last in the methodology, the group discusses the final part of the lab. The group did a nice job explaining what this part of the lab was about. They also gave an explanation of what case studies they were going to use and why. Then they explain that they were going to go into how source code auditing would be used to detect if a particular program was being used by an attacker in a malicious manner toward its user.
    Next the group goes into their findings. The first part of their findings talks about what is included in the table of passive reconnaissance tools. They mention that tools that recreate the packet stream passively are mostly found in layer two or above of the OSI model. This explanation seemed very vague. They could have expanded on this part some more. Then the group goes into explaining what happens when a script or tool is slowed down. They give a good explanation of why slowing down an attack can prevent detection. They also give a good explanation of the difference between an active attack and a passive analysis. They explain how tools that passively re-create a packet stream are of high value.
    Last, the group talks about the final part of the lab. In this section the group talks about choosing two tools that were used to do counterattacks against the user of that tool. The group chose TCP wrappers and TCPdump for their case studies. They started with the explanation of TCP wrappers. They gave a good explanation of what a TCP wrapper was. Then, using a case study that involved a program developed at a university, they demonstrated how the TCP wrapper was altered to allow root access to any system using the TCP wrapper program. Then they use a second case study to explain how TCPdump was infected to create a backdoor in a system and allow a remote shell to be installed on the compromised system. The group then talks about how each of these compromises shows common issues and patterns in the exploits. This section was a nice way of explaining what to look for in a compromised program.
    Next the group explains three ways to verify that a tool is not infected with malicious code aimed at its user. These three detection methods are using a virtual environment with fuzz testing software on it, using an isolated virtual network with passive tools on it, and using source code auditing. The group then does a nice job of explaining the first two steps. They tied all the previous parts of the lab into this section, which nicely showed how the rest of the lab could be applied. Then the group gives a detailed explanation of what source code analysis is. They give three ways of doing source code analysis. Then the group explains how source code auditing is viable in an enterprise environment, and backs it up with some data. Then they explain how a contractor that is assigned to penetration testing is liable for any negative effects of their software. The group next explains that they had a couple of issues with finding passive tools and with access to the Citrix network. In the conclusion of this paper the group explains that even though passive tools are not as prevalent as active tools, the passive tools are more powerful. The lab also revealed to the group that attack tools could be turned against an attacker or vice versa. They also showed how passive tools are valuable in gathering information on what is running on a network. They also explained how this lab was valuable to the rest of the semester. At the end of the paper was the table that was created.
    The table was fairly short, as the group said. The table did reveal that most of their tools were located in the data link layer of the OSI model and that all of them attacked confidentiality, transmission, and technology. The table even included some nice tools in the kinetic and people layers of the extended OSI model.

  8. Team two gave a good overview of passive reconnaissance in the abstract section of their laboratory report.

    Group two did an effective job summarizing the articles in the literature section and addressing the faults in both articles. Group two pointed out what other groups did not with regard to the Godefroid article. Group two stated “The long abstract explains what white box fuzz testing is, that is better than black box, but does not present any data to support that claim.”
    In the methods section, I was surprised to find that Group 2 had to go through many other steps besides installation to get Nessus to work. My group also downloaded Nessus onto Windows XP, but the version we used did not require all of these additional steps. I have to disagree with the statement “Nmap database contains 1684 signatures, which means some 1684 different operating systems versions” because Nmap located numerous open ports in the same operating system, so there are a certain number of signatures per operating system version. When the group stated “On a different machine Wireshark was used to capture packets that were sent by Nmap and Nessus”, which virtual machine did the group place Wireshark onto? I had to partially disagree with the statement “This gives the reader easier and quicker access to the data the user requires because the more plug-ins that are installed, the longer the program would take to gather the information about the vulnerabilities.” In the statement “If Nessus exploits were put into a grid there would be several similarities.”, what similarities was the group referring to?
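
    On the signature question, a quick look at Nmap’s own fingerprint database shows why signature count and “operating system versions” are not the same thing. A minimal sketch, assuming a local copy of the database (nmap-os-db in recent releases; the path below is a guess and the format can differ between versions):

        # Count Nmap OS fingerprints vs. distinct OS class lines.
        fingerprints = 0
        os_classes = set()
        with open("/usr/share/nmap/nmap-os-db") as db:    # typical Linux path; adjust as needed
            for line in db:
                if line.startswith("Fingerprint "):
                    fingerprints += 1
                elif line.startswith("Class "):
                    os_classes.add(line.strip())
        print("fingerprints:", fingerprints)
        print("distinct OS classes:", len(os_classes))
        # Several fingerprints can describe the same OS version, and one fingerprint can
        # match a whole range of versions, so 1684 signatures does not translate directly
        # into 1684 operating system versions.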

    In the findings section group 2 came to a similar conclusion to group 1 when they stated that “The act of reviewing packets of the course of a network attack is passive; the act of speeding up or slowing down a network attack is not passive.” I had to partially disagree with the statement “The speed at which a particular attack runs can play a huge role in the ability to detect an attack or not” because while slowing the speed down may make an attack harder to detect, some attack tools produce certain types of network traffic that could be distinguished from legitimate traffic regardless of how fast or slow the attack tool is running.

    Group two was able to identify two types of penetration tools that were compromised: TCP wrappers and TCPdump. According to group 2, “Both of these tools are commonly used to intercept information, in a passive manor, off of a network.” These tools that were identified were both plagued by Trojan horses.

    Group two did not address what risks there would be to a production network if untested penetration tools were executed upon it.

    In the Issues section, when the group stated “First it is apparent that the list of passive network recon tools is much smaller and harder account for than active network recon tools”, I did not see this as an issue, since some tool types are simply more plentiful than others.

  9. Team 2 did an overall good job, and this lab made me think about the items within the lab other than the grammar and editing. They started off with their abstract, and it explains what was going to be done and what they will be looking for. They then go into their literature review and do a good job at comparing and contrasting the papers, as well as explaining what happened within the documents. One thing that could have made the arguments stronger: in the future, if there are only two articles, do not be afraid to research additional papers or articles to prove or disprove the authors; this will make the review even stronger. With just the two papers it is hard to argue any other point, because it almost comes down to one side or the other. Do you think that Godefroid could have done something, or waited to release more information within the abstract to support his argument?
    The authors then go into the methodology, explaining what was going to be done and what resources were going to be used. After this section they go into their findings and results from the lab. There were a couple of things that were left out, such as whether there is a bias toward either of the systems. Secondly, would any of these tools be useful in a corporate environment, or would they all be prohibited from use?
    One thing that stuck out and that gave me a question was about the slow attacks. Would there be a way to prevent these types of attacks? Are there any tools currently that will help monitor this type of traffic coming into a network? This made me research further, and a Google search gave some results. It brought up a few different papers and tools that would assist in detecting these types of attacks. A company called EiQ puts out a tool that helps detect these types of attacks (http://www.eiqnetworks.com/solutions/Low_and_Slow_Attack_Detection.shtml). Their tool works by monitoring the system over a period of days and correlating the information. Would this be a useful program, or would it take too long to detect a “slow” attack?
    Then the authors go into the issues section, and it is understandable that the passive tools would be difficult to account for. Is it because, when attacking, many people think the best way is just to go all out at a system? Or could many of these tools be self-made by the “elite hackers” and not distributed among the many script kiddies? This then leads to another question: would a “home brewed” tool be more effective when attacking a system than one that is already made? This section really sparked thought. They then went on to their conclusion and described what was done. Is it harder to find passive recon information because it has not been explored as much? This team is not the only one to have trouble finding additional information, and it is agreed that this is a big step toward the class and what is to be learned in the future labs. This team’s lab was good and pushes the reader to expand their mind and want to know more.
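
    As a rough illustration of the kind of long-window correlation such a tool performs, here is a minimal sketch. It assumes connection attempts have already been parsed out of firewall or flow logs into (timestamp, source, destination port) records; the window and threshold values are arbitrary:

        from collections import defaultdict

        def slow_scan_suspects(events, window_seconds=7 * 24 * 3600, port_threshold=100):
            """Flag sources that touch many distinct ports over a long window, even if
            each individual probe is spaced far enough apart to evade per-minute alerts."""
            ports_by_source = defaultdict(set)
            latest = max(ts for ts, _, _ in events)
            for ts, src, dport in events:
                if latest - ts <= window_seconds:
                    ports_by_source[src].add(dport)
            return [src for src, ports in ports_by_source.items() if len(ports) >= port_threshold]

        # One probe every few hours still accumulates into a wide port footprint over a
        # week, which is exactly what a fast, per-minute detector would miss.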

  10. I think that group 2’s write-up for lab 3 was good but lacking adequate assessment in some sections. The abstract for this lab was good and accurately described the laboratory. The literature review was good. Some of the diction in the literature review was repetitive at times (black-box, white-box, fuzzing), but it didn’t really make the section difficult to read. Group 2 answered all of the required questions for each reading. All of the citing for the literature review was done correctly.
    For part 2A of the lab, the initial description of installing and using Nmap and Nessus is very accurate and easy to follow. Information about the vulnerabilities Nessus scans for was shown and cited correctly. However, once the group started discussing packets that were captured in Wireshark from Nessus and Nmap, everything changes. Grammatically, it would seem as if that portion of the lab was outsourced to South Korea and then translated back to English using http://babelfish.yahoo.com/ (example: “The method that Nmap and Nessus were that they send ARP packets to the target hosts and information that wireshark return about the ARP packet was asking how is ”). Grammar aside, there is another obvious problem with the methodology used to analyze the packets captured during the test. ARP packets are NOT the packets used by these tools. For instance, Nmap uses packets such as TCP SYN/ACK, UDP and ICMP. ARP packets are automatically sent between hosts to determine the MAC addresses that match other hosts’ IP addresses. I’m not sure how this was overlooked, because even looking at figure 1-11 (also note, when changing the size of a screenshot, the pixels-per-inch should also be increased so it’s not so fuzzy when you shrink the images) the ARP packets are being generated from hosts other than the ones being tested.
    The analysis of the actual results of Nmap was good. When discussing Nessus, the group assessed that the vulnerabilities would fit into our grid, but gave no indication how or why. The assessment about the likelihood of attacks being discovered based on the timing was done well and accurately answers the question. The last part of the section was done well and covered many vulnerabilities in security tools that pertain to the lab. Finally, the conclusion was written well and accurately sums up the laboratory.
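
    To see what the scan traffic actually looks like, here is a minimal single-probe sketch of the kind of half-open (SYN) probe Nmap's default scan sends; the target address and port are placeholders, scapy is assumed, and it needs administrator/root rights to craft raw packets:

        from scapy.all import IP, TCP, sr1  # assumes scapy is installed

        # The OS (or scapy) resolves the next hop's MAC with ARP automatically; that ARP
        # exchange is a side effect of delivering the probe, not the scan itself, which is
        # why a capture full of ARP frames says very little about what the scanner did.
        target = "192.168.1.10"                     # placeholder for the XP victim's address
        probe = IP(dst=target) / TCP(dport=80, flags="S")
        resp = sr1(probe, timeout=2, verbose=False)

        if resp is not None and resp.haslayer(TCP):
            flags = int(resp[TCP].flags)
            if (flags & 0x12) == 0x12:              # SYN+ACK: port open
                print("port 80 open")
            elif flags & 0x04:                      # RST: port closed
                print("port 80 closed")
        else:
            print("no response (filtered or host down)")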

Comments are closed.