Abstract
The purpose of this exercise was to create an experimental environment in which penetration testing could be conducted in a controlled way. Four virtual machines, three Microsoft variants and one Linux distribution, were selected and linked together via a virtual subnet in the VMware Workstation software suite. Various sources of penetration tools were found and evaluated, and a subset of these tools was selected for future use. The tool selection was categorized by a nine-layer extension of the Open Systems Interconnection (OSI) model, with a model layer chosen to classify each tool. Additionally, the tools were then assigned to a specific coordinate of the standard McCumber three-space information security model. This ‘laboratory’ setup will be the basis of future penetration testing experiments.
Introduction
Security of sensitive information has been a concern that knows no timeframe. From the earliest recorded accounts of human interaction, we find the concept of ‘data security’, the protection of knowledge which would grant an adversary a substantial advantage, along with the consequences of failing to protect that knowledge. In the current era, these concerns have become increasingly common, even to the point of being relatively mundane. This is due partially to the sheer amount of information generated every day, which exceeds by orders of magnitude that of a similar timeframe even just fifty years past. Additionally, the vast array of contemporary information has brought with it a much wider range of consequences when the security of sensitive data is breached, many unknown or even impossible in the past. These consequences can be dire, including instantaneous poverty via the theft of electronic financial assets, the total collapse of national transportation infrastructures, and the immediate execution of huge population groups via modern weapons systems.
It is with the severe consequences of these security breaches in mind that many researchers have chosen to adopt a more proactive stance toward data security, namely in the concept of penetration testing. Penetration testing typically involves a group of professionals chosen to conduct outside attacks against the security of a network system which contains substantial sensitive data assets. This ‘red team’ uses various methods and exploits known to exist in real adversaries’ repertoires to attempt a penetration of the ‘secure’ network’s defenses. Any breaches attained are described to the target network’s maintainers, with the intent that these defects can be rectified before a truly sinister entity exploits them for real harm. This concept of penetration testing is the motivation for this laboratory experiment, its fundamental concepts and procedures being simulated in the microcosm of a virtual networked environment. In this exercise, we wished to address three general research concerns. The first was the question: what are the steps and materials necessary to create the basic virtual penetration mockup? Second, we sought to examine the scope and relevance of penetration tools available to the security community. Third, we attempted to classify the specific threat area of tools judged suitable for use in the test environment in relation to both network and security theoretical models.
Literature Review
Article: Red Teaming
One of the primary techniques used to test a system is Red Teaming. Red teams help us to think like a computer hacker, using the same resources and knowledge. One red team simulates a malicious attack while another team tries to defend against it. An example of Red Teaming in use occurs at Darmstadt University of Technology in Germany, which since 1999 has conducted the Hacker Contest, a lab course in which teams attack other systems and analyze the attacks on their own systems. Similarly, the US Military Academy at West Point conducts cyber defense exercises similar to capture the flag. Red team hacking should become more difficult over time, raising the bar on security.
Article: Components of Penetration Testing
There are various components of penetration testing. External network hacking involves attacks from outside the local network. The most common of these attacks involve firewalls, routers, and web servers. Internal network hacking involves hacking from within the local area network; this is often done within the company using its own equipment. Application testing tests custom software for security vulnerabilities. Wireless LAN assessment involves war driving, the act of looking for insecure wireless networks. Social engineering is also sometimes used; it involves making contact with people within the organization to try to gain sensitive information that would allow access to the network. Trashing simply involves looking through the garbage in an attempt to find sensitive information that has been discarded.
Article: Creating a Testbed for Red Teaming and Experimentation
Since March 2004, the University of Southern California has maintained a testbed known as DETER for testing network security. Because of the containment of the system, it has been particularly useful in testing malicious code and denial of service attacks. The level of containment can be scaled based on the level of threat. Experimenters are required to state the nature of their experiments prior to conducting them so that the appropriate level of containment can be set. The system can provide remote access for experimenters while not allowing the system to route packets outside the nodes of the testbed; this is accomplished by a firewall between the testbed and the Internet. The system is designed to mimic a small version of the Internet. The original disk images containing the operating system, configuration files, and input/output states of nodes can be restored to return the system to its original state. This allows for experiments to be repeated precisely or modified in a controlled manner. The results of experiments are archived for future use. The system includes tools to measure the various metrics of interest for comparison with other experiments. The system has been used for testing worm behavior and both open source and commercially available antivirus software.
A similar testbed system is located at Indiana University of Pennsylvania. Their lab, known as a cyberwar lab, is a standalone lab using Linux on all of the machines. The goal is for one team of students to try to gain root access while the other team tries to defend against it. The attacking team attempts to map the network using ping, nmap, nslookup, traceroute, and dig, and uses Nutcracker to attempt to find passwords. A third team works on forensics. Although Linux was chosen because it was free and open source, it would have been more realistic to build a network containing various operating systems. They also found that a Linux server that does not provide any services is extremely secure, however unrealistic that is in a business environment.
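To illustrate the kind of network mapping described above, the short Python sketch below wraps a few of those commands; it assumes ping, traceroute, nmap, and dig are installed on the attacking host, and the target addresses and domain are hypothetical lab values rather than anything taken from the article.

# Hedged sketch of the mapping step: probe a few assumed lab hosts with the
# standard reconnaissance commands named above and print the raw output.
import subprocess

TARGETS = ["192.168.3.10", "192.168.3.11"]   # assumed victim VMs
DOMAIN = "lab.example"                        # assumed internal DNS zone

def run(cmd):
    """Run a command and return its text output, ignoring failures."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True, timeout=60).stdout
    except (OSError, subprocess.TimeoutExpired):
        return ""

for host in TARGETS:
    print(run(["ping", "-c", "1", host]))     # is the host up?
    print(run(["traceroute", host]))          # path to the host
    print(run(["nmap", "-sS", "-O", host]))   # open ports and OS guess (needs root)

print(run(["dig", "ANY", DOMAIN]))            # DNS records for the assumed zone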
The University of Arkansas at Little Rock has developed a testbed using virtual machine technology. The advantage is a cost-efficient system that can be isolated. It allows for the simulation of a large network with minimal hardware requirements, and for disposable operating systems that can be discarded and replaced once they are infected. Examples of virtual machine products are VMware, Xen, QEMU, and Microsoft Virtual PC.
A course named Cyberattacks is available at Washington and Jefferson College for non-IT majors. In addition to classroom activities, students conducted labs involving viruses and antivirus software, spyware and adware, and password cracking. Students use ‘script kiddie’ style malware creation kits to create custom viruses and worms, then use antivirus software to determine whether their custom malware can be detected.
Article: Threat Assessment using Modeling and Automated Attack
An article by Ray, Vemuri, and Kantubhukta suggests using object-oriented modeling techniques to assist red teams in understanding and planning for attacks. A threat model is a functional decomposition of an application using a dataflow diagram to demonstrate possible vectors of attack. The types of threats for each vector are then identified. An attack tree is constructed to model the steps in the attack: the root of the tree represents the compromised system, the child nodes are the possible steps involved in conducting the attack, and the leaves are the different approaches to achieving the goal of compromising the system. The automated attack model uses sequence and state-chart diagrams along with XML to represent the attack and defense methods. Pseudocode is generated to show the programming logic, and information from the diagrams is stored in a database in XML format.
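To make the attack-tree structure concrete, a minimal sketch (our own illustration, not code from the article) is shown below; the node labels are hypothetical, the root is the compromised system, interior nodes are attack steps, and the leaves are the concrete approaches.

# Minimal attack-tree sketch; all node names are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    goal: str
    children: list = field(default_factory=list)   # no children => leaf (concrete approach)

    def paths(self):
        """Enumerate root-to-leaf paths, i.e. complete ways to reach the root goal."""
        if not self.children:
            yield [self.goal]
            return
        for child in self.children:
            for path in child.paths():
                yield [self.goal] + path

root = AttackNode("Compromise web server", [
    AttackNode("Obtain credentials", [
        AttackNode("Brute-force SSH password"),
        AttackNode("Phish administrator"),
    ]),
    AttackNode("Exploit application flaw", [
        AttackNode("SQL injection in login form"),
    ]),
])

for path in root.paths():
    print(" -> ".join(path))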
Article: Common Omissions
In a paper entitled “Broadening the Scope of Penetration Testing Techniques”, Ron Gula identifies several areas that penetration testers typically miss. The 14 things that he lists are DNS spoofing, third-party trust, custom Trojan horses, database exploits, routing infrastructure, testing the intrusion detection system, web site server-side applications, TCP hijacking, testing the firewall, ISDN phone lines, network brute-force testing, testing non-IP networks, Ethernet switch spoofing, and exploiting chat tools. One of the reasons some of these are overlooked is that testing them may cause instability in the production system. Another reason is that the tester may not want to involve unsuspecting third parties; an example would be a custom worm attached to an email which makes its way to an employee’s home computer. Another area that is not always explored is zero-day exploits, which are new attack vectors not previously discovered. Because the degree of difficulty is greater, they are most often created by experienced network penetration testers.
Article: Vulnerability Testing Using Fault Injection
In his paper “Vulnerability Testing of Software Using Fault Injection”, Aditya P. Mathur describes fault injection: changing the software environment to test the fault tolerance of software systems. Fault injection is “the deliberate insertion of faults into an operational system to determine its response”. The faults are designed to mimic faults that may occur during the intended use of the system. The Environment-Application Interaction (EAI) fault model emulates environment faults that are likely to cause security violations; in other words, it emulates what a real hacker would do. The environment fault becomes input into the application, thus causing a fault within the application. An environment fault may also be malicious code that is called by the application, thus causing a security fault directly in the environment.
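As a rough illustration of injecting a fault at the environment-application boundary (our own sketch, not Mathur's EAI implementation), the Python snippet below runs a target program with deliberately malformed environment values and records how it responds; 'target_app' and 'CONFIG_PATH' are hypothetical placeholders.

# Hedged sketch of environment fault injection: invoke a program with a
# deliberately corrupted environment variable and observe its response.
import os
import subprocess

def inject_env_fault(command, var, bad_value):
    env = os.environ.copy()
    env[var] = bad_value                      # the injected environment fault
    result = subprocess.run(command, env=env, capture_output=True, text=True)
    return result.returncode, result.stderr

# Empty, bogus, and oversized values stand in for faults the environment could present.
faults = ["", "/nonexistent/path", "A" * 10000]
for fault in faults:
    code, err = inject_env_fault(["./target_app"], "CONFIG_PATH", fault)
    print(f"fault={fault[:20]!r} exit={code} stderr={err[:60]!r}")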
Methodology
The first task was to set up a lab environment. In order to make the same systems available to the entire group, we used VMware Workstation over Citrix. We used the pre-created virtual machines (VMs) provided by the CIT&G department. We used the virtual network settings tool to create an additional network with an address of 192.168.3.0/24. We then disabled the virtual adapter for the host machine in order to free the .1 address normally taken by that adapter. Each of the Windows VMs was then given a static address as directed in the lab assignment using the Windows networking utility. We edited the /etc/network/interfaces file in the Debian VM to assign the appropriate address to it.
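For reference, the static-address stanza edited in /etc/network/interfaces takes the usual Debian form shown below; the .4 host address is an assumed example from the 192.168.3.0/24 subnet rather than the exact address used in the lab.

auto eth0
iface eth0 inet static
    address 192.168.3.4
    netmask 255.255.255.0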
The scope of the security tools available to the security community was examined in a tree-based search pattern. Initially, three individual root tree nodes were represented by three distinct sources: one root node consisted of the security tool release compilation ‘Backtrack’, the second root node was the FreeBSD ‘Security’ and ‘Net-tools’ ports collections, and the third root node became a tool list on the security website ‘www.insecure.org.’ The sources and the tools presented in each were evaluated with regard to application in this exercise. Additionally, each tool examined was treated as a possible link to unknown tools by direct means, such as websites listed in tool documentation, or indirect means via the results of a web search engine query. Through this method, a tree-structured pattern of search emerged, with new sources emerging from those already discovered.
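Conceptually, this search amounts to a simple worklist traversal over sources: start from the three roots, and whenever a tool's documentation or a web search surfaces a new source, add it to the frontier. The Python sketch below only illustrates that bookkeeping; discover_linked_sources() is a hypothetical stand-in for the manual process of following documentation links and search results.

# Sketch of the tree-structured source search described above.
def discover_linked_sources(source):
    # Placeholder: in practice, follow links in tool documentation and
    # run web search queries on the tools found in this source.
    return []

roots = ["Backtrack tool compilation",
         "FreeBSD 'Security' and 'Net-tools' ports collections",
         "www.insecure.org tool list"]

frontier = list(roots)
visited = set()
while frontier:
    source = frontier.pop(0)          # breadth-first over sources
    if source in visited:
        continue
    visited.add(source)
    # ... evaluate the tools listed by this source here ...
    frontier.extend(discover_linked_sources(source))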
The classification of each tool with regard to security and network models was closely related to the above-mentioned search process. It must be confessed that the entire search-evaluation exercise was begun with the premise of achieving a near uniform distribution among the seven theoretical network layer classifications of the OSI network model (taken as security ‘threat areas’). This led to a perceivable bias in the classification of some tools, as a number of layers exhibited a much greater variety of applicable tools than other layers. Some tools suffered from ‘repurposing’ simply due to the order in which they were discovered. In theory, ‘repurposing’ of tools occurred if and only if the application of the tool was judged to be of equal worth at a different OSI network layer than first apparent, but due to time constraints and lack of knowledge of tool operation, some tools may not have been classified strictly by their strongest properties. Security model classification by the McCumber cube model was done after the tools were classified into OSI layer ‘threat areas,’ though classification often reflected the inherent difficulties in applying a theoretical framework to ‘messy’ real world functionality.
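For bookkeeping, each classification can be recorded as a mapping from tool name to an OSI layer plus a McCumber coordinate, as in the sketch below; the three sample entries mirror rows of Table 1.2, while the structure itself is simply one possible representation, not part of the lab procedure.

# Sketch of the classification records behind Table 1.2: each tool maps to an
# extended-OSI layer and a McCumber cube coordinate.
from typing import NamedTuple

class Classification(NamedTuple):
    osi_layer: int    # 0 (kinetic) through 8 (people)
    dimension: str    # technology / policy-practice / human factors
    state: str        # transmission / storage / processing
    goal: str         # confidentiality / integrity / availability

taxonomy = {
    "Nmap":            Classification(3, "technology", "transmission", "confidentiality"),
    "John the Ripper": Classification(7, "technology", "storage", "confidentiality"),
    "GNU MAC Changer": Classification(2, "technology", "transmission", "integrity"),
}

# Example query: all tools classified as layer 7 confidentiality threats.
layer7 = [name for name, c in taxonomy.items()
          if c.osi_layer == 7 and c.goal == "confidentiality"]
print(layer7)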
Results and Questions
The creation of the VMware Workstation based test environment proved straightforward. Virtual network configuration was easily accomplished through the ‘VMnetcfg’ utility, and appears to accurately reflect a ‘real world’ equivalent switched network topology. Furthermore, it appears trivial to add additional virtual machine images to the VM network, which will prove useful for the addition of specific guest machines configured to run standalone penetration testing distributions. The total number of images the current VMware Workstation host machine can run concurrently is unknown, but could be determined empirically if necessary.
The general OSI model classification criteria used are illustrated in Table 1.1. The detailed results of the tool evaluation procedure are presented in Table 1.2. As noted above, some OSI level ‘threat areas’ were addressed by many more tools than others, and some of this can be seen in the distribution of tools in the table categories. Additionally, the extended layers, layers zero and eight, represent ‘tools’ of an abstract or solely theoretical construction. It is assumed that these will lie outside the bounds of experimental scope, as actual application of many of these ‘tools’ would be unethical and illegal, issues of practicality aside.
It is notable that nearly all of the experimentally viable tools fall into the McCumber ‘technology’ category. This is to be expected, as ‘technology’ is a locally predictable force which is known to have definable limitations and strengths. Technology is also a subset of reality which has been crafted by human minds; it is logical in its processes and controllable in the scope of its utilization. In many respects, technology resembles an ideal ‘virtual world’: it is designed to function according to the scientific method, it can logically be constructed and deconstructed at will, and it is nearly uniform in behavior among subclasses of entities due to standardization. In this respect, technological devices present the ideal ‘mass victim’ in that one exploit is nearly guaranteed to work across the board for all same-class devices. This stands in contrast to the other McCumber classifications, as ‘policy/practices’ and ‘human factors’ exist in the ‘real world’ and exhibit neither standardization nor universal exploits, much less consistent functionality.
It is also true that many of the current security vulnerabilities exist only ‘because’ of the use of technology (i.e. electronic transaction vs. real goods bartering). Therefore it follows that these vulnerabilities can only be exploited by utilization or misuse of the technological construct—hence we use technology to defeat technology. It is true that non-technical devices can ‘defeat’ technology, but often ‘defeat’ is not synonymous with ‘exploit’ (e.g. a hammer versus a circuit board).
There can be no doubt that the true effectiveness of penetration testing is compromised by a bias resulting from commonly accepted tools and known exploits. The standard penetration test uses the standard tools, and so finds the standard problems. To paraphrase a well-known saying: one never hears the fatal shot; so too, the truly effective and dangerous security exploit arrives suddenly and with no prior warning. While the use of existing penetration tools may be useful in preventing ‘copycat’ attacks, real proactive assessment requires innovation ‘beyond’ the standard accepted procedure. The self-selecting nature of current penetration testing techniques in reality provides the same class of protection as most signature-based anti-virus software: today’s vulnerabilities will only be detected by tomorrow’s update.
Problems
Of foremost concern, it was noted that a virtual test environment imposes limitations upon the extent to which ‘real’ penetration testing can be simulated. For example, many of the available tools and exploits addressed certain characteristics of hardware based Cisco routers. At this time, it does not appear possible to simulate a Cisco router in the virtual environment; hence some of the more powerful network exploits will remain unexplored. This could be remedied by using real hardware in conjunction with VMware Workstation in ‘bridged’ network mode, but this begins to stretch the definition of a ‘virtual’ test environment, and hence violates one of the primary aims of the experiment. A somewhat acceptable solution may be to use the VMware Workstation ‘NAT’ network configuration, which must by its nature implement some routing functionality; or, a virtual machine could be configured to act as a router on the virtual network: but this still excludes the use of specific platform tools.
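If the router-VM option were pursued, a minimal sketch of the required guest configuration (assuming a Debian-style guest with two virtual NICs named eth0 and eth1, run as root; the interface names and the NAT step are illustrative assumptions, not part of the completed lab) would be:

# Turn a Linux guest into a router on the virtual network: enable IP
# forwarding and masquerade traffic leaving the second interface.
import subprocess

commands = [
    ["sysctl", "-w", "net.ipv4.ip_forward=1"],
    ["iptables", "-t", "nat", "-A", "POSTROUTING", "-o", "eth1", "-j", "MASQUERADE"],
]
for cmd in commands:
    subprocess.run(cmd, check=True)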
Additionally, many of the tools examined within the scope of this exercise simply do not appear to fit within the scope of the virtual test environment. SIP eavesdropping, instant messaging snooping, email attacks: all of these assume a dynamic environment filled with exploitable real-time human generated content. This seems impossible, or at the very least impractical, to simulate within the virtual test environment. Here too, many powerful tools may go untested due to the nature of the test environment.
Finally, as stated earlier, some problems were encountered in fitting the ‘attack area’ of tools into the general theoretical frameworks of the OSI model and the McCumber cube. A ‘closest fit’ was always chosen out of necessity, but many tools fit equally well in multiple categories, the difference lying only in the intent of the attacker. Theoretical models with higher levels of granularity would solve many of the classification problems, but much of the strength of the current models is in their relatively simplistic elegance. Yet again, one encounters the dissonance of elegant theory with ‘real world’ application.
Conclusions
In conclusion then, it was found that with the exception of specific hardware simulation and real dynamic human factors, VMware Workstation can be configured to provide a viable and ultimately useful virtual penetration testing environment. Additionally, a myriad of penetration tools were evaluated, selected, and classified by OSI layers and McCumber cube coordinates in preparation for penetration testing simulations. Even though some tool classification ambiguities were encountered, there is every expectation that each tool will be effective in regard to its targeted ‘threat area’ designation.
Charts, Tables, and Illustrations
Table 1.1 OSI Exploit Classifications and Examples
OSI Layer | Technology | Host/Media Layer | Exploit Method |
Layer 8 / People | The protocol is cognition, politics, and process; the equipment is sneakers. | N/A | Social engineering, fraud, confidence games. Examples: Phishing – phony e-mails to solicit information; shoulder surfing – watching someone’s screen as they enter information; Trojans – applications that appear innocuous but carry a malicious payload; flattery; intimidation; theatrics; impersonation; stealth; resentment; financial reward; idealism; deception; camaraderie; romance; extortion; confusion; trust; ambition; ego; discord; fear; chaos; overconfidence; misdirection; gossip; boredom; congeniality; anger; pride |
Layer 7 / Application | This is the FTP/HTTP/SMTP protocols. | Host Layer | Buffer overrun / execution stack exploits, rogue executable insertion, DNS/DHCP/RIP based attacks, search engine based reconnaissance. Examples: Google mail enum – uses Google to find e-mail addresses in a given domain; GHDB – uses Google to find sensitive information or vulnerabilities in websites; pirana – scans for vulnerabilities in e-mail content filters; relay scanner – looks for open SMTP relay servers; xspy – key logger; RainbowCrack – hash brute forcer; pwdump – outputs local account password hashes; John the Ripper – password cracker; Cain and Abel – password recovery tool; Dsniff – traffic monitor, extracts potentially useful data; Metasploit – exploit framework; whisker – CGI vulnerability scanner; Brutus – password cracker; inguma – vulnerability scanner aimed at Oracle; Amap – application scanner; Nikto – web server vulnerability scanner; Ophcrack – Windows password cracker; pantera – web server vulnerability scanner; paros – web server vulnerability scanner; Scanhill – MS Messenger scanner; Slurpie – distributed password cracker; uberkey – keylogger; VNCcrack – cracks VNC passwords; Braa – SNMP scanner; AIM sniff – reads AIM traffic; crack – UNIX password cracking; gwee – web exploit utility; THC Hydra – password cracker |
Layer 6 / Presentation | Data translation and encryption. | Host Layer | Man-in-the-middle attacks, key cracking, data payload eavesdropping (applies at many layers, but arguably fits best here). Examples: Aircrack – captures wireless packets to decode WEP keys; Ettercap – performs man-in-the-middle attacks, can be used to leverage exploits at other layers; Ike-scan – VPN/IPsec scanner; Psk-Crack – IPsec cracker; SSLScan – queries ports for ciphers and certificates; l0phtcrack – password utility; scanSSH – scans for the SSH protocol in use; PuTTY – terminal emulator |
Layer 5 / Session | Connections between machines. | Host Layer | NetBIOS / NetBEUI vulnerabilities, connection hijacking, credential impersonation. Examples: SMTP-vrfy – uses the verify command in SMTP to find user names; NMAP – network scanner, works at multiple layers to acquire information; Nbtscan – NetBIOS name scanner; KaHt2 – RPC Trojan; Unicornscan – network scanner, works on multiple layers; MBSA – security auditing tool, works across layers; Hackbot – vulnerability scanner; NBAT – NetBIOS auditor; KRIPP – displays clear text passwords |
Layer 4 / Transport | Reliability and message continuity. | Host Layer | Packet sequence exploits, denial of service attacks, open port / service discovery, host fingerprinting. Examples: Firewalk – scans gateways for open ports; Protos – protocol scanner, looks for protocols running on a given host; Amap – looks for protocols running on specific ports; Packit – can capture and spoof packets, also operates at layers 2 and 3; Superscan – port scanner; TCPdump – packet sniffer; Wireshark – packet sniffer; Hping2 – port scanner / packet modification tool with functionality similar to Nmap; Nessus – port/vulnerability scanner; Netcat – scanner, data tunneler; SARA – scans network traffic, used across other layers, can find trust relationships; PBNJ – network monitor; Strobe – port scanner; ISN prober – reviews packet header information; p0f – system fingerprinter; sinfp – OS fingerprinter; sbd – netcat clone; telnet; socket APIs |
Layer 3 / Network | Logical addressing such as Internet Protocol, routers and gateways. | Media Layer | IP address spoofing, router table modification, ICMP based exploits such as denial of service attacks or network reconnaissance. Examples: ASS – protocol scanner; Deep Magic Information Gathering Tool (DMItry) – scans for host information; DNS-ptr – resolves DNS names; DNS-walk – attempts zone transfers to gain the DNS database (deprecated); DNSMapper – maps subdomains; DNSpredict – uses Google Sets to find subdomains; Dig – finds name servers for a given domain; DNS enum – finds name servers for a domain and attempts zone transfers; TCtrace – performs traceroute using only SYN packets; SING – used to modify ICMP packets; route injection; Ping of death (deprecated) – malformed ping causes DoS; Fragrouter – manipulates packets to avoid IDS; Angst – active packet sniffer; TCPshow – packet decoder; THC IPv6 kit – tools for hacking IPv6; Nemesis – packet injection tool; Netsed – packet modifier |
Layer 2 / Data Link | Physical addressing such as MAC addresses, NICs, and bridges. | Media Layer | MAC address spoofing, low-level denial of service attacks, spanning tree / network logical topology reconfiguration exploits. Examples: GNU MAC changer – spoofs MAC addresses; Yersinia – can spoof several layer 2 and 3 protocols for various purposes; Kismet – passively collects packets to identify wireless networks. |
Layer 1 / Physical | The media layer such as radio or cables, dumb hubs and repeaters. | Media Layer | Wireless sniffers / jammers, cable taps, modified hardware, and premises infrastructure destruction / modification based on physical access. Examples: Netstumbler – detects wireless signals; schematics; specifications; case studies; blueprints; explosives; fire; liquid nitrogen; magnetic disruption; voltage spikes; ultrasonic disruption; radioactive isotopes; sledge hammer; conductive jumpers; wire cutters / razor knife; epoxies; disassembly; corrosion agents; hardware substitution; destructive resonance; multi-spectrum data recording / electromagnetic imaging (TEMPEST); ultraviolet light induced degradation; wedges; RFI; EMP; microwave; particle based disruption; theft |
Layer 0 / Kinetic | Cyber activity is diverted into kinetic energy by servos or devices | N/A | Access to physical hardware, manual overrides or parallel controls, reconfiguration of physical or electrical properties. Examples: HMI; PLC; RTU |
Table 1.2 Exploit Tool Listing
Layer 0 | Human Machine Interface (HMI) | Technology, Storage, Integrity |
Layer 0 | Remote Terminal Units (RTU) | Technology, Transmission, Integrity/Confidentiality |
Layer 0 | Programmable Logic Controller (PLC) | Technology, Transmission/Processing, Integrity |
Layer 1 | Schematics (know the system) | Technology, Processing, Confidentiality |
Layer 1 | Specifications (fault tolerance) | Technology, Processing, Confidentiality |
Layer 1 | Case studies (similar systems) | Technology, Processing, Confidentiality |
Layer 1 | Blueprints (as-builts) | Technology, Processing, Confidentiality |
Layer 1 | Explosives (unlimited physical disruption) | Technology, Processing, Availability |
Layer 1 | Fire (thermal disruption) | Technology, Processing, Integrity |
Layer 1 | Liquid Nitrogen (thermal disruption) | Technology, Processing, Integrity |
Layer 1 | Magnetic disruption ( Magnetic fields measured in Teslas) | Technology, Processing, Integrity |
Layer 1 | Direct energy injection (over-voltage) | Technology, Processing, Integrity |
Layer 1 | Ultrasonic disruption | Technology, Processing, Integrity |
Layer 1 | Radioactive isotopes | Technology, Processing, Integrity |
Layer 1 | Sledge Hammer (limited physical disruption) | Technology, Processing, Integrity |
Layer 1 | Conductive jumpers ( circuit alteration) | Technology, Processing, Integrity |
Layer 1 | Wire cutters / razor knife (circuit alteration) | Technology, Processing, Integrity |
Layer 1 | epoxies / adhesives | Technology, Processing, Integrity |
Layer 1 | Non-destructive disassembly (physical) | Technology, Processing, Confidentiality |
Layer 1 | Corrosion agents | Technology, Processing, Integrity |
Layer 1 | Hardware substitution (“Trojan” hardware) | Technology, Processing, Integrity |
Layer 1 | Destructive resonance ( see Tesla) | Technology, Processing, Availability |
Layer 1 | Multi-spectrum data recorders (“black box” reverse engineering) TEMPEST | Technology, Processing, Confidentiality |
Layer 1 | Electromagnetic imaging (“black box” reverse engineering) TEMPEST | Technology, Processing, Confidentiality |
Layer 1 | Ultra-violet light induced degradation (plastics, unprotected EPROM based controllers) | Technology, Processing, Integrity |
Layer 1 | Wedges (anything to jam gear meshes, solenoids, etc.) | Technology, Processing, Availability |
Layer 1 | Direct signal injection (analog local control, e.g. 4-20 mA systems) | Technology, Processing, Integrity |
Layer 1 | Radio Frequency Interference (RFI) | Technology, Processing, Integrity |
Layer 1 | EMP based attacks (destructive) | Technology, Processing, Availability |
Layer 1 | High power microwave disruption | Technology, Processing, Availability |
Layer 1 | Particle based disruption / surgical destruction (e.g. linear accelerators) | Technology, Processing, Integrity |
Layer 1 | Theft | Policy-practice, Confidentiality, Storage |
Layer 1 | NetStumbler | Technology, Transmission, Confidentiality |
Layer 1 | Aircrack | Technology, Transmission, Confidentiality |
Layer 2 | Kismet | Technology, Transmission, Confidentiality |
Layer 2 | ifconfig (MAC id) | Technology, Transmission, Integrity |
Layer 2 | GNU MAC Changer | Technology, Transmission, Integrity |
Layer 2 | Ettercap | Technology, Transmission, Integrity |
Layer 2 | Yersinia | Technology, Transmission, Integrity |
Layer 3 | Angst | Technology, Transmission, Confidentiality |
Layer 3 | SING | Technology, Transmission, Integrity |
Layer 3 | ASS | Technology, Transmission, Confidentiality |
Layer 3 | igrp (route injection) | Technology, Transmission, Integrity |
Layer 3 | Packit | Technology, Transmission, Integrity |
Layer 3 | ping (DoS) | Technology, Transmission, Availability |
Layer 3 | Fragrouter | Technology, Transmission, Availability |
Layer 3 | Superscan | Technology, Transmission, Confidentiality |
Layer 3 | Tcpdump | Technology, Transmission, Confidentiality |
Layer 3 | Wireshark | Technology, Transmission, Confidentiality |
Layer 3 | Nmap | Technology, Transmission, Confidentiality |
Layer 3 | Hping2 | Technology, Transmission, Confidentiality |
Layer 3 | DMItry | Technology, Transmission, Confidentiality |
Layer 3 | DNS-ptr | Technology, Transmission, Confidentiality |
Layer 3 | DNS walk | Technology, Transmission, Confidentiality |
Layer 3 | DNS mapper | Technology, Transmission, Confidentiality |
Layer 3 | DNS predict | Technology, Transmission, Confidentiality |
Layer 3 | Dig | Technology, Transmission, Confidentiality |
Layer 3 | DNS enum | Technology, Transmission, Confidentiality |
Layer 3 | TCtrace | Technology, Transmission, Confidentiality |
Layer 3/4/5/7 | PBNJ | Technology, Transmission, Confidentiality |
Layer 3/4/5/7 | Unicornscan | Technology, Transmission, Confidentiality |
Layer 3/4 | Strobe | Technology, Transmission, Confidentiality |
Layer 3 | tcpshow (tcpdump interpreter) | Technology, Transmission, Confidentiality |
Layer 3/4 | THC IPv6 Attack Kit | Technology, Transmission, Integrity |
Layer 3 | Nemesis | Technology, Transmission, Availability |
Layer 3 | Netsed | Technology, Transmission, Integrity |
Layer 4 | ISNprober | Technology, Storage, Confidentiality |
Layer 4 | Nessus | Technology, Transmission, Confidentiality |
Layer 4 | Netcat | Technology, Transmission, Confidentiality |
Layer 4 | Firewalk | Technology, Transmission, Confidentiality |
Layer 4 | Socat | Technology, Transmission, Confidentiality |
Layer 4 | p0f (Passive OS finger printer) | Technology, Transmission, Confidentiality |
Layer 4 | SinFP (Frame finger printer) | Technology, Transmission, Confidentiality |
Layer 4 | sbd (Improved netcat) | Technology, Transmission, Confidentiality |
Layer 4 | Telnet | Technology, Storage, Confidentiality |
Layer 4 | C/C++ BSD socket API | Technology, Storage, Confidentiality |
Layer 4 | Perl script with sockets | Technology, Storage, Confidentiality |
Layer 4 | Python script with sockets | Technology, Storage, Confidentiality |
Layer 4 | Protos | Technology, Transmission, Confidentiality |
Layer 4 | Amap | Technology, Transmission, Confidentiality |
Layer 4 | Packit | Technology, Transmission, Confidentiality |
Layer 4 | Superscan | Technology, Transmission, Confidentiality |
Layer 4/5/6/7 | MBSA (Microsoft Baseline Security Analyzer) | Technology, Storage, Integrity |
Layer 5 | hackbot | Technology, Storage, Integrity |
Layer 5 | SMTP verify | Technology, Storage, Confidentiality |
Layer 5 | Nbtscan | Technology, Storage, Confidentiality |
Layer 5 | kaHt2 (RPC exploit) | Technology, Storage, Integrity |
Layer 5 | NBAT (NetBIOS Auditing Tool) | Technology, Storage, Confidentiality |
Layer 5 | NMAP | Technology, Storage/Transmission, Confidentiality |
Layer 6 | l0phtCrack | Technology, Storage, Confidentiality |
Layer 6 | Ike-scan | Technology, Transmission, Confidentiality |
Layer 6 | PSK-Crack | Technology, Transmission, Confidentiality |
Layer 6 | SSLScan | Technology, Transmission, Confidentiality |
Layer 6 | ScanSSH | Technology, Transmission, Confidentiality |
Layer 6 | PuTTY | Technology, Storage, Confidentiality |
Layer 7 | Nikto | Technology, Storage, Integrity |
Layer 7 | Ophcrack | Technology, Storage, Confidentiality |
Layer 7 | Pantera | Technology, Storage, Integrity |
Layer 7 | Paros | Technology, Storage, Integrity |
Layer 7 | Scanhill (MS Messenger Sniffer) | Technology, Transmission, Confidentiality |
Layer 7 | Slurpie (Distributed pw cracker) | Technology, Storage, Confidentiality |
Layer 7 | uberkey (keylogger) | Technology, Storage, Confidentiality |
Layer 7 | VNCcrack | Technology, Transmission, Confidentiality |
Layer 7 | Braa (SNMP scanner) | Technology, Processing, Integrity |
Layer 7 | AIM Sniff | Technology, Transmission, Confidentiality |
Layer 7 | crack (UNIX based) | Technology, Storage, Confidentiality |
Layer 7 | gwee | Technology, Storage, Integrity |
Layer 7 | THC-Hydra | Technology, Storage, Confidentiality |
Layer 7 | KRIPP | Technology, Transmission, Confidentiality |
Layer 7 | Amap | Technology, Storage, Confidentiality |
Layer 7 | xspy | Technology, Transmission, Confidentiality |
Layer 7 | RainbowCrack | Technology, Storage, Confidentiality |
Layer 7 | Pwdump | Technology, Storage, Confidentiality |
Layer 7 | John the Ripper | Technology, Storage, Confidentiality |
Layer 7 | Cain & Abel | Technology, Storage, Confidentiality |
Layer 7 | Dsniff | Technology, Transmission, Confidentiality |
Layer 7 | Metasploit Framework | Technology, Processing, Integrity |
Layer 7 | Whisker | Technology, Processing, Integrity |
Layer 7 | SARA | Technology, Processing, Integrity |
Layer 7 | Brutus | Technology, Storage, Confidentiality |
Layer 7 | Inguma | Technology, Storage, Confidentiality |
Layer 7 | Google Mail-enum | Technology, Storage, Confidentiality |
Layer 7 | GHDB | Technology, Storage, Confidentiality |
Layer 7 | Relay Scanner | Technology, Storage, Confidentiality |
Layer 8 | Flattery | Policy-practice, Confidentiality, Processing |
Layer 8 | Intimidation | Policy-practice, Confidentiality, Storage |
Layer 8 | Theatrics | Policy-practice, Confidentiality, Storage |
Layer 8 | Impersonation | Policy-practice, Confidentiality, Storage |
Layer 8 | Stealth | Policy-practice, Confidentiality, Processing |
Layer 8 | (Other’s) Resentment | Policy-practice, Integrity, Processing |
Layer 8 | Financial Reward | Policy-practice, Confidentiality, Storage |
Layer 8 | (Other’s) Idealism | Policy-practice, Confidentiality, Storage |
Layer 8 | Deception | Human Factors, Confidentiality, Storage |
Layer 8 | Camaraderie | Policy-practice, Confidentiality, Storage |
Layer 8 | Romance | Policy-practice, Confidentiality, Storage |
Layer 8 | Extortion | Policy-practice, Confidentiality, Storage |
Layer 8 | (Other’s) Confusion | Human Factors, Integrity, Processing |
Layer 8 | (Other’s) Trust | Policy-practice, Confidentiality, Storage |
Layer 8 | (Other’s) Ambition | Policy-practice, Integrity, Processing |
Layer 8 | Surveillance | Policy-practice, Confidentiality, Processing |
Layer 8 | (Other’s) Ego | Human Factors, Integrity, Processing |
Layer 8 | Discord | Policy-practice, Integrity, Processing |
Layer 8 | (Other’s) Fear | Policy-practice, Integrity, Processing |
Layer 8 | Chaos | Policy-practice, Integrity, Processing |
Layer 8 | (Other’s) Over-confidence | Human Factors, Confidentiality, Storage |
Layer 8 | Misdirection | Human Factors, Availability, Processing |
Layer 8 | Gossip | Policy-practice, Confidentiality, Storage |
Layer 8 | (Other’s) Boredom | Policy-practice, Confidentiality, Processing |
Layer 8 | Congeniality | Policy-practice, Confidentiality, Storage |
Layer 8 | (Other’s) Anger | Policy-practice, Confidentiality, Storage |
Layer 8 | (Other’s) Pride | Human Factors, Integrity, Processing |
References
Arce, I., & McGraw, G. (2004, July/August). Why Attacking Systems is a Good Idea. IEEE Computer Society, pp. 17-19.
Benzel, T., Braden, R., Dongho, K., & Neuman, C. (2006, March). Experience With DETER: A Testbed for Security Research. IEEE, pp. 1-10.
Coffin, B. (2003, July). It Takes a Thief: Ethical Hackers Test Your Defenses. Risk Management Magazine.
Du, W., & Mathur, A. P. (1998, April 6). Vulnerability Testing of Software System Using Fault Injection. pp. 1-20.
Gula, R. (2001). Broadening the Scope of Penetration-Testing Techniques. Enterasys Networks White Paper, pp. 1-18.
Heien, C., Massengale, R., & Wu, N. (n.d.). Building a Network Testbed for Internet Security Research. Consortium for Computing Sciences in Colleges.
Holland-Minkley, A. M. (2006, October 21). Cyberattacks: A Lab-Based Introduction to Computer Security. SIGITE ’06, pp. 39-45.
Micco, M., & Rossman, H. (2002, March 3). Building a Cyberwar Lab: Lessons Learned. SIGCSE ’02, pp. 23-27.
Mink, M., & Freiling, F. C. (2006, September 22-23). Is Attack Better Than Defense? Teaching Information Security the Right Way. InfoSecCD Conference ’06.
Ray, H. T., Vemuri, R., & Kantubhukta, H. R. (2005, July). Toward an Automated Attack Model for Red Teams. IEEE Computer Society, pp. 18-25.
The abstract for this group starts off talking about the purpose of this lab. Then the abstract goes into how the lab will be set up and what is going to be used in it. Last, the abstract talks about the tools that will be used in the lab and how they will be categorized into the OSI model and the McCumber cube. The abstract for this group did leave out that there was going to be a literature review and how that literature review pertains to the lab. Also left out of the abstract were the questions that needed to be answered. Next the group created a very in-depth introduction to the idea of penetration testing and why it is important. The introduction talks about how people have been defending the security of information for a long time and how defending it is becoming increasingly difficult. Next the introduction goes into what penetration testing is and how important it is for keeping up with the increasing threat. At the end of the introduction the group stated that there were three general research concerns: the steps and materials to create the penetration mockup, the scope and relevance of the penetration tools available, and an attempt to classify the tools into network and security theoretical models. I believe that the introduction was a bit wordy and could have been simplified. Next the group went into the literature review. The literature reviews in this group’s lab were only a summary of the readings. The literature reviews didn’t explain the theme of each reading, point out its question, compare each reading to the lab, explain the supporting research, point out the methodology, or note what errors or omissions there were in the readings. The group could have done a much better job of comparing and contrasting the articles to the current lab. Also, I had to read each review and try to guess the article that the writer was talking about, because he didn’t label each of the articles properly. Next the group did their methodology. In the first part of the methodology the group explained how they set up their lab environment. They did a good job of explaining how they set up the virtual environment and each of the machines by assigning static IP addresses to them. Next the group defined how they were going to gather the tools they would need for this lab. The group divided the search into three groups: security tools released in Backtrack, the FreeBSD Security and Net-tools ports collections, and lastly a security tool list on the website http://www.insecure.org. They also took care in discovering new tools and links to tools when looking through these areas. With this collection they were able to create a tree-structured search. Next in the methodology they explained how they classified the tools they chose into the OSI model. They pointed out that because of a bias the tools could not be evenly distributed among the OSI layers. Last, the group explained that they were going to classify the tools according to the McCumber cube. Next in the lab the group covered the results. First they explain that the setting up of the lab was straightforward. Then the group gave the resulting table that was created from the collection of tools and the categorizing of them. This table was in two parts. The first part gave the criteria for each of the categories.
The second part actually categorizes the tools into the different layers of the OSI model and then applies the McCumber cube to each one. The way the group did this table worked out very well. The first table tells us what to expect from the tools in each layer, which makes categorizing the tools much simpler. Next the group talks about the question of why all the tools fall under technology in the McCumber cube. This group did an excellent job of explaining this question. Last in the results the group answers the question of whether there is a bias in penetration testing, and does a good job of explaining that there is one. Next the group discusses the problems that they faced with the lab. First they state that because this is a virtualized lab, a lot of tools that could be greatly beneficial will not be used. Next the group states that many of the tools mentioned work with situations that are impossible or impractical in a virtual environment such as this one. The last problem the group had was that a lot of the tools fit into multiple categories in the OSI model and the McCumber cube, and that it is the intent of the attack that determines where the tool fits. The group makes a good point here. Even though we are learning to think like a hacker and learning some of the newest tricks, the scope of this lab is not broad enough to cover a real-world view. It would be nice to go beyond the scope of a virtual environment and make this as realistic as possible, but there are realistic problems that come with that type of lab as well. Last the group gives a conclusion. In the conclusion the group states that even though the lab’s scope is not large enough to include all real-world situations, this lab is adequate to cover what we need. They also discuss the table and the tools that they found.
The group had a well-stated abstract. They stated what the lab was about as well as what they were planning to accomplish during the laboratory experiment. The next part of the group’s lab report was an introduction. Although the introduction was well written, with a background about penetration testing, it made this lab report read more like a technical paper as opposed to a lab report. I agree with the statements about security and penetration testing. The next step of the lab report was the literature reviews. The reviews of each paper could have been longer. The reviews did not have all of the components that were required for a literature review. Also, the literature reviews were missing citations with page numbers as well as the works cited used. The group did not compare and contrast the papers with the other papers that were part of the required readings.
The methodology section was very well written, with the steps of the process of performing the lab clearly stated. There were no screenshots of the steps of the process. The group then described the process of selecting the tools, putting them into categories, and then putting them into the proper spots in the table. The next step of the lab report is the results of the lab experiment and the questions. The answers were quite lengthy but addressed the questions fully and were well stated. The group did not answer the question stating the difference between Ethereal and Wireshark. The problems section was also lengthy, but the problems were well defined. Most of the problems that the group had were also brought up by most of the other groups. The most common problem seems to be finding the right sections to put the tools into. In the conclusion section, the final results were not really talked about, but rather conclusions about penetration testing. The group seems to have found more layer 9 tools than the other groups did. The one main issue I had with the group’s table is that it was separated into two different tables instead of following the example. This made for a hard time reading the table, and made me have to scroll up and down a lot to see the tools and where they fit into the OSI model and where they fit into the McCumber cube. I thought the group could have done more with layer 0. The group seemed to focus solely on tools that dealt with technology rather than simple tools that could produce a kinetic effect such as a bomb or even a hammer. The only links they had were for the required readings; I do not see any of the sites where they found the extra tools other than the Backtrack suite. Overall I felt that this group had one of the most detailed lab reports.
The third team also presented a complete and well thought out lab exercise. The lab met most of the requirements as per the syllabus. There were no real apparent issues or problems that stuck out at first examination. However, there were a few items that could be improved upon. The abstract did not meet the requirements of the syllabus in terms of length. Team three also took a different approach to the literature than any of the other teams. Team three placed their literature into six primary areas of focus, and discussed the merits of the papers that fell into those focus areas. Their introduction was complete and read very well. The literature review was rather cohesive and was a unique way to complete it. There were only three issues that jumped out upon reviewing it. The first was that there seemed to be no APA5 style citations as per the directions in the syllabus. The second was that the Arce & McGraw paper seemed to not have been reviewed at all, but rather was just a source listed in the works cited section. The third was that the literature review did not seem to take a stance as to what the reviewer thought about the particular reading. The questions that were presented in the syllabus to be answered in the literature review section did not seem to be present. Like the other teams, team three did agree with everyone as to how the objectives listed in the lab were worked out. The same format that was followed by all was included here in completing the technical portion of the lab. The technical merit of the team’s position cannot be questioned, as there was no real position taken on the literature, and the other tasks of the lab were completed as per the syllabus instructions. The VMs were built as per the lab instructions, the questions that needed to be answered were answered, and a complete taxonomy was presented. The only real enhancement that can be made in team three’s lab is in the literature review section. Including in-text citations, as well as answering the questions for the literature review in the syllabus, will show the stance the team has taken with regard to the literature. Like team one, as with all the teams, I’m sure material on working as a team could be helpful, as it seems to be the major problem with all the labs (including team two). Team three has the most complete methods section out of the five teams that posted labs, and those methods cannot be questioned as they follow the format of a complete lab found in the syllabus. They also had the most complete taxonomy of the entire group of five teams, and their answers for layer eight and layer zero were extremely well considered. The remaining seven layers were about the same as the other teams. All in all, team three’s lab was closely related to team five’s lab: complete and well thought out.
The introduction to the article was excellent. I hadn’t considered an introduction to my team’s lab report before, but this section sets the tone for the rest of the lab report and raises some interesting points about the history and necessity of security in general that provide a framework for the rest of the report. One small item lacking from the introduction was any citations from the literature; it appeared that some of the statements made were based on the readings and definitions given in the literature.
The literature review lacked cohesion between the various topics addressed in the readings and missed the connection to any of the work done in the class exercises. Each paper was only addressed individually and the output was basically a summary of the article, something that could probably be easily gained through each article’s abstract. Also missing from each article was a treatment of its methodologies and results, what the reviewer thought of the points the article made, and any errors or omissions that were evident in the readings.
The methodologies section of the report was very informative, particularly in relation to the configuration of the different NIC adapters on the VM hosts, especially for the Debian system, where some users may not be familiar with the specifics of configuring the network interfaces. The methodology for classification of the tools was very well thought out and presented. The discussion on legality and ethics issues about the relation of the tools to layers eight and zero was a good addition, contrasted against the purely technical nature of most of the tools in the stack. The designation of technical tools as a “locally predictable force” was interesting and sounds like it comes from military-based literature, but no citation was given for any frame of reference.
The problems area listed some problems with applying some of the tools found to the virtual lab environment. While some of these tools do not specifically focus on the environment created for purposes of this lab, they’re still useful in the context of the taxonomy and the class in general. One thing that should be mentioned is the number of tools focusing on Cisco. Someone tasked with securing a network would certainly be concerned by the sheer volume of tools that target Cisco equipment, and should ensure that configurations and patch levels are kept up to date.
Understandably there is a lot of information in the taxonomy to attempt to fit into a 3” space, and the inclusion of a description of each tool was beneficial, but not having links to the tools could make it difficult to directly access the tools the authors are referencing. The layer eight examples cover quite a range of tools that could be used, but one that stands out and doesn’t appear to fit is Trojans, which seems like more of a technical tool that would fit in layer seven, even with the component of deception involved in getting the user to run the tool. I was hoping to see its McCumber cube coordinates, but it is absent from the layer eight list.
Mvanbode, if you look at layer 1, you’ll see that we have both a hammer and explosives listed. I disagree with your idea of what level 0 should be.
Team 3’s abstract was excellent. They described their lab and how they were going to use it. I appreciated their introduction; personally, it helped me to better understand what we are trying to accomplish with this exercise. I liked how they organized their literature review. It made it very easy to read and understand how the articles tied back to the lab exercise. The methodology section was well written and the steps of the process of performing the lab were detailed. Just like the other groups, there weren’t any screenshots of the steps of the process.
I thought they could have done a better job of organizing their charts and tables. It seemed like they jumped around a bit, and that made it hard to follow. Their problems were well documented and seemed to be representative of the problems other groups encountered. The conclusions section was rather light given the detail of the report, but overall I thought their paper was well written.
At the beginning of your methodology you give a great overview of what was done to the virtual machines that were assigned to the groups. Given were the name of the virtual machine software, the virtual network the machines were put on, and the IP scheme of that subnet. Even the directory in Debian where the interface was changed to an assigned address was given. From there they talked about how they gathered different security tools from different locations. This is good because drawing from different locations such as Backtrack, FreeBSD Security, and Net-tools increases the number of different tools, although there will be repeats among the more popular tools. Their matrix chart is a little difficult to read, mainly because of the clutter in the last section, which might be the Host/Media or Exploit Method column.
The literature reviews were nicely divided into separate paragraphs and give a good, concise response to each article. I highly agree with the short paragraph about Indiana University of Pennsylvania: their cyberwar lab found that a Linux server that does not provide any services is extremely secure. I also agree that this is unrealistic in the business world.
The introduction section did a nice job relating red teaming to the realm of network security and reflecting on how the concept related to the lab assignment.
The literature section contained a few discrepancies that I noticed right away. The summaries did not contain in-text citations. Everything appeared to be paraphrased like it should be, but even paraphrases require in-text citations. The team did a good job summarizing the supporting data, but did not address the methodologies used, what the research questions were if any, or any errors or omissions found in the articles, and did not compare the themes of the articles to each other.
The methodology was quite thorough for it described in great detail how your team implemented your virtual environment and the rationale for the classification of tools within the exploit table. Your team described the difficulty of classifying some of the attack tools because they did not always fit smoothly into the theoretical framework of the OSI model, but did your team also have difficulty in determining what exactly the attack tool would affect in the McCumber cube?
In the Results and Questions section I had to somewhat disagree with the statement “Additionally, the extended layers, layers zero and eight, represent ‘tools’ of an abstract or solely theoretical construction. It is assumed that these will lie outside the bounds of experimental scope, as actual application of many of these ‘tools’ would be unethical and illegal, issues of practicality aside.” The objective was to find exploits for these layers, which may or may not be tools. Since Layer 8 is the people layer, there would not be any tools available for download to extract information from people, but as your team has also pointed out, there are characteristics and vices that make people susceptible to social engineering attacks, which could count as exploits. Just as a con artist could dupe people into scams, a hacker with people-centric skills could extract useful data from people just as he or she could from a computer. The kinetic layer is not really that abstract either, because if a computer is able to attack another system or network that contains devices that interact with other objects or have the ability to interact with an environment, this will cause a kinetic effect. I have begun to look at industrial networking as a theoretical means of creating kinetic effects, which falls in the realm of Process Automation and Control but shares similar protocols such as EtherNet/IP, which is very similar to Ethernet. Here is an article from ISA.org describing cyber threats to industrial devices on an industrial network: http://www.isa.org/CustomSource/ISA/Div_PDFs/PDF_News/Glss_2.pdf
There were a few discrepancies discovered in the section that had the exploits table. Some of the sections, such as the Kinetic layer and Data Link, did not contain the required number of 30 exploits. However, it seemed that everyone struggled to fill the Kinetic layer to the desired target number. I was also not clear on why attack tools and exploits were tabulated twice.
Beginning with the abstract, this team did a good job explaining what was going to be accomplished. But I did notice that the point about tools being put into a chart seemed almost repeated; I just think next time it could be more refined. Next the group set out to write an introduction, which was a more in-depth version of the abstract. I was thrown a bit by this section, because students are to explain the lab in brief in the abstract and then go into the lab itself. This part should have been placed after the literature reviews; that would also help make the lab sound less redundant. The next part of their lab covered the article reviews. I feel that this was this group’s strong point, as they did well with the reviews. They reviewed each paper and related it to class in a way that flows and is easy to read. One thing that could be improved is to give an overall topic of the articles and how they relate to each other. The next part of the team’s lab went into the actual lab environment and how it was created. One thing that I have noticed, not only with this group, is that everyone got into describing how the lab environment was set up, but this could be enhanced by the use of a flowchart of the environment giving the readers a visual aid. Then the lab concluded with what they learned during this exercise. One thing that was off was that the group had two tables at the end of the lab when they should have been put into their steps of process. Another thing with the tables was that it seemed like the group gave up on cleaning the second table up and making it more compact. They could have reorganized both and combined the tables into one. Another problem with the second table was that it seemed backwards: when going over the OSI model it is usually the norm to start at the highest layer and work down to the lowest. Other than that, they did a good job of explaining their reasoning for the position of the tools within the McCumber cube. Overall the group did what they were supposed to do with lab 1. They just need to clean up their lab documentation and they will have better lab write-ups in the future.
I think that group 3’s write-up for lab 1 is very good overall. The literature review was adequate, although it could have been a little lengthier and should have answered all of the required questions. I was unable to find any proper APA 5 citations in the text of the literature review. Also, the page numbers for the references should have been included. The setup portion of the lab describing the networking of the machines was well done. The group specifically reported how they were able to properly configure the Linux network interfaces. The table containing the penetration testing tools was very good. The layout for the table was good and easy to read. Also, there were many explanations of why a tool was chosen for a given layer. The group discussed which tools covered multiple layers, and also their reasoning for covering multiple layers.
@tnovosel: In answer to your comments/questions: We found that some tools have more than one possible coordinate set in the cube, as demonstrated by the table. What is the difference between “tool” and “exploit”? The point wasn’t that they weren’t feasible, just hard to test in a lab setting. I don’t really understand what you’re trying to say about level 0 of our table. If you’re trying to tell me it should be SCADA, you may want to read a little more; the things we have listed are generalized components of a SCADA system.
@shumpfer: A flow chart for a network setup at this level of the game feels a little pedantic. I can list the OSI layers any way I want to. The model describes data communication, so it doesn’t matter if we start at the top or the bottom; information flows both ways through the layers. Also, it’s a model, not a law of physics.