Cyber is shorthand for something we don't fully understand, but we lump it under the word cyber because it is sexy. Cyber can be networks, processing, storage of information, the information itself, cognition, social interaction, and much more. Technophiles like to think they own the domain of cyber, but really it is about human use of information. I like to think of cyber as shorthand for cybernetics, because cybernetics is about information and about understanding the inputs and outputs of systems. Those systems can be technological, human, or hybrids of both. Once you understand cybernetics as technological systems hybridized with human components, you start to understand why cyber appears to be such a huge issue: the systems are exceedingly complex. Cyberspace becomes the tool-mediated exploitation of an environment of information.
If understanding the cyber prefix is difficult, getting to the root of warfare is even harder. If you are a lawyer or an adherent of international legal convention, war can only exist between two nation states. What if you are making war on a nation but you are not a nation? Legal precepts suggest that it is not war. Yet anybody can see that war is occurring. Though I would be the last to suggest we give up the rule of law, since I like a clean, tidy, safe nation, I would suggest that when the law is in contravention of reality it is the law that is wrong. War is war, unless it is on the exam, and then it is between nations.
Is espionage war? Yes and no. Espionage shares many of the same features as war. Yet the goal of espionage is to define a mission set, create resource pools that can inform that mission set, exploit the resources, evaluate the resources, and then inform or report on the mission set. Wishy washy? Espionage is about deciding what information you want and getting that information through different techniques. Cyber is a terrain upon which numerous techniques will work. Just as a spymaster (whatever that is) can bribe people, use a lead pipe, burgle, or eavesdrop, a person operating in cyber can do the same things. Espionage is different from sabotage, but all of the same patterns apply.
The first strategy for cyber tactics, techniques, and procedures is a well defined element of red teaming. You identify a target, recon the target, identify a set of exploit paths on the target, launch an exploit, create an access, harden or protect that access through resiliency or redundancy, and, when needed, activate the access for a particular effect. This pattern reflects the sabotage and espionage aspects quite closely. The access creation and exploit will often use information that is not intended to be shared.
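As a minimal sketch of that chain, consider modeling it as an ordered sequence of phases that an operation walks through. The phase names below are my own hypothetical labels for the steps in the paragraph above, not any particular doctrine's terminology.

```python
from enum import Enum, auto
from typing import Optional

# Hypothetical phase names for the exploit chain described above;
# a real operation would use its own doctrine and terminology.
class Phase(Enum):
    IDENTIFY_TARGET = auto()
    RECON = auto()
    IDENTIFY_EXPLOIT_PATHS = auto()
    LAUNCH_EXPLOIT = auto()
    CREATE_ACCESS = auto()
    HARDEN_ACCESS = auto()        # resiliency or redundancy around the access
    ACTIVATE_FOR_EFFECT = auto()

def next_phase(current: Phase) -> Optional[Phase]:
    """Return the phase that follows the current one, or None when the chain is complete."""
    phases = list(Phase)
    idx = phases.index(current)
    return phases[idx + 1] if idx + 1 < len(phases) else None

# Walk the chain from the beginning and print each step.
phase: Optional[Phase] = Phase.IDENTIFY_TARGET
while phase is not None:
    print(phase.name)
    phase = next_phase(phase)
```

The point of the sketch is only that the chain is linear and well understood; the later paragraphs argue that this linearity is also its weakness.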
The second technique set is not as well defined but is still part of the red team tool suite. A system has a set of inputs and outputs. The inputs are protected in some way by either software (host intrusion detection) or hardware (firewalls and proxies). Operating against the system as it was designed, but for an effect nobody expects, can create an access. Using an unidentified backdoor that is part of the original code base but that few people know about is one example. Unidentified backdoors can be software or network in nature, and as simple as an inserted USB drive. Depending on the architecture, this can be done at any level of the network topology. The intelligence to create this kind of exploit can be developed by mocking up the adversary's network as closely as possible.
Finally, for this set of examples and discussion, another way (though there are still many more) is to back into the exploit by attacking the human operator's cognitive processes. There are some basic techniques that work well here. Subterfuge via technology, where the human thinks one thing is occurring while something else actually results, is the hallmark of spear-phishing and the Trojan horse. Subterfuge by impersonation is an entity masquerading as a trusted third party to get the operator to accomplish some task. Subterfuge is most daring when it appears in the guise of something normal, for nefarious purposes, in such a way that the operator would never detect the activity. Zero-day exploits based on a target's browsing habits are part of this technique. If you know somebody always browses a particular website and you inject a browser exploit into that website, you can create an access on the target.
All of these vulnerability and exploit chains of events require that the adversary have a vulnerability to exploit. Is that in any way a reliable expectation? What if an adversary has no known information communication and telecommunications systems? Does that mean we drop the use of the entire arsenal of cyber weapons? Using an exploit to attack requires activating an enemy's weakness. This is good tactical thinking but does not lend itself to an operational environment. The systems that could create an operational or strategic change are going to be the most protected information assets.
The issue with the exploit chain as discussed is that it reflects network centric warfare thinking. It is mature in its consideration, but the relative defenses of network centric warfare have been gamed heavily, and offense against the network can be mitigated. The information systems of an adversary have to be considered as a system of systems that is exploited on a timeline of access and disruption periods. The vectors of disruption start with the design of information communications and telecommunications equipment, run through the engineering and processes of software development, and include the maintenance and upgrade processes. The act of disruption may be engineered and activated by pre-determined input data or associated behaviors. It doesn't have to be a keystroke.
Another factor occluding the exploit and vulnerability chain is the associated bias toward terminus or end-point attacks. A holistic campaign against the entire technology stack of cyberspace disrupts and degrades the tools used to access the cyber domain. The end-point attack is obvious in execution and has specific behaviors associated with success. Unfortunately, it ignores the valid impacts that a well-planned campaign against infrastructure such as processing and storage devices might have. Consider the effects of an operation against the devices managing end-point connectivity, which has an impact on thousands of end-points rather than one. The mass media's focus on long-term disruption and degradation ignores the fact that mere seconds of disruption can cause catastrophic effects across multiple domains of conflict. Particular effects on cyber systems can have dramatic kinetic effects, but more importantly they can have cognitive effects.
There are other elements to how technology exploitation and human exploitation paths are planned. The result of planning is a set of steps that are fairly well understood.
In the planning process, a target is chosen based on a strategic, operational, or tactical necessity. Though a wide swath of technology accesses may be picked up as a set of access points, in general a target will be some entity associated with some effect. Technologists often confuse the target (as in a country) with the access (as in a network location). A target set should not be simply an entity with no rules of engagement. Though some would charge forward, it is to make the problem easier that we want to limit the engagement techniques. One reason is that there is a desired effect. Simply placing accesses on a network is exploitation, but what do those accesses achieve? Ensuring that an access has some specific capability that meets a goal statement of the operation is important.
There are significant issues in target identification that should be explored. Saying the target is Widget Company leaves a broad range of possible exploitation paths. Operationally it would be better to say that the target is the work of the research division of Widget Company. The same error in selectivity in the opposite direction would be to say the target is page 2, paragraph 3, subsection 9 of the plans of the Widget Company. That is a little too specific. The other error is making the objective the target: the exfiltration of the wonder express widget plans is the goal, not the target. In an espionage target set this is fairly easy to game out through directed graphs or other planning tools, as the sketch below suggests.
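One way to game this out is to treat accesses and the objective as nodes in a directed graph and enumerate the paths between them. The node names below are purely hypothetical illustrations, not real entities or a real planning tool.

```python
from typing import Dict, List, Optional

# Edges point from an access toward what it can reach; the objective is a node.
# All node names are hypothetical illustrations only.
graph: Dict[str, List[str]] = {
    "phishing_email": ["engineer_workstation"],
    "vendor_vpn": ["research_file_server"],
    "engineer_workstation": ["research_file_server"],
    "research_file_server": ["widget_plans"],  # the objective
    "widget_plans": [],
}

def enumerate_paths(graph: Dict[str, List[str]], start: str, goal: str,
                    path: Optional[List[str]] = None) -> List[List[str]]:
    """Return every acyclic exploitation path from an initial access to the goal."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths: List[List[str]] = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # avoid revisiting nodes
            paths.extend(enumerate_paths(graph, nxt, goal, path))
    return paths

for p in enumerate_paths(graph, "phishing_email", "widget_plans"):
    print(" -> ".join(p))
```

Keeping the target at the right level of abstraction (the research division, not the whole company or a single page of a document) is what makes a graph like this tractable to build and reason about.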
What if the goal is not merely espionage but breaking things and killing people? In a way, all cyber espionage is a form of sabotage. You are breaking systems (inclusive of people) to get a particular set of knowledge. This is a key point that is often missed: espionage that exploits computer systems is breaking them. When it comes to killing people, the target set is highly restricted. There are critical infrastructures and key resources that become targets. A subset of critical infrastructures are strategic targets but have little impact on the lives of people. Similarly, a set of key resources have limited impacts on people. To kill people, the set needs to be determined with a matrix of the key resources. Overlaying the cyber realm on top of the key resources allows a target set to start emerging. Espionage target sets will be the first to emerge.
The primary goal of target sets is to evaluate and find keystones in critical infrastructure that cascade with dramatic results. Fortunately, engineers design systems to fail to a safe state. So, contrary to most people's thoughts, the way to cascade faults through critical infrastructure is not necessarily to cause outright failure. As an example, is it better to cause the lights to go out in a city, or to selectively turn lights out so that people have traffic accidents? Is a distributed denial of service against a bank going to have more impact than an attack against the Society for Worldwide Interbank Financial Telecommunication (SWIFT) system? You might think a distributed denial of service against SWIFT, but that kind of thinking is at least two decades behind the times. It would be better to attack the integrity of SWIFT and thereby erode trust in the system. Attacking the tools, and thereby the trust mechanisms, through cognitive processes can have outsized impacts.
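A minimal sketch of the keystone idea is to model infrastructure as a dependency graph and ask which single node reaches the most downstream dependents. The nodes and edges below are crude assumptions for illustration; real infrastructure models are far richer and include the failure-to-safe-state behavior discussed above.

```python
from typing import Dict, List, Set

# node -> things that directly depend on it; names are hypothetical.
dependencies: Dict[str, List[str]] = {
    "swift_messaging": ["bank_a_settlement", "bank_b_settlement"],
    "bank_a_settlement": ["payroll_processor"],
    "bank_b_settlement": [],
    "payroll_processor": [],
}

def downstream(graph: Dict[str, List[str]], node: str) -> Set[str]:
    """Everything that transitively depends on the given node."""
    seen: Set[str] = set()
    stack = list(graph.get(node, []))
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.add(current)
            stack.extend(graph.get(current, []))
    return seen

# Rank nodes by how much of the system cascades from them.
ranked = sorted(dependencies, key=lambda n: len(downstream(dependencies, n)), reverse=True)
for node in ranked:
    print(node, len(downstream(dependencies, node)))
```

Note that reach is only half of the argument: the paragraph above suggests that degrading the integrity or trust of a high-reach node matters more than simply knocking it offline.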
We need to expand on this idea of failure modes a little. Think past the hype of power outages and failing dams. Most systems can be caused to fail into unknown states given enough analysis. Whether that failure state is useful in the context of an operational capacity may not be known. The engineering process will generally consider common modes of failure, but unknown modes can exist. Vulnerability and exploit chains often rely on these unknown modes.
A tool for the analysis of failure modes is to watch the systems you are interested in and how they naturally fail. Extensive knowledge can be gained from just watching what naturally happens to various systems when they fail. When it rains, the storm drains fill to overflowing, which results in electrical outages. What about filling the storm drains with water from a fire hydrant when somebody least expects it? If there is not enough water fast enough, what can you add to get to the expected threshold? It can be a technique of exploitation with a particular outcome that is unexpected by an adversary. Over time, a compendium of failures and causes can be engineered for use at will. Though you are looking for cyber effects, be willing to consider cross-domain failure modes.
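A compendium like that could be as simple as a catalog of observed failures, their natural triggers, and the domains they cross into. The record structure and the single entry below are illustrative assumptions only, drawn from the storm-drain example above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ObservedFailure:
    system: str
    trigger: str                      # the naturally occurring condition
    failure_mode: str                 # what actually broke
    cross_domain_effects: List[str] = field(default_factory=list)

# Entries and field names are illustrative assumptions, not a real catalog.
compendium: List[ObservedFailure] = [
    ObservedFailure(
        system="storm drains",
        trigger="heavy rain fills the drains past capacity",
        failure_mode="flooding shorts street-level electrical equipment",
        cross_domain_effects=["electrical outage", "traffic disruption"],
    ),
]

# Query the catalog for any failure that crosses into the electrical domain.
for entry in compendium:
    if any("electrical" in effect for effect in entry.cross_domain_effects):
        print(entry.system, "->", entry.failure_mode)
```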
National intelligence agencies create products based on the political landscape of their particular bureaucracies. Most nation state intelligence organizations reflect the culture, political will, and acceptable behaviors of their nation state identity. This cultural fusion is a normal part of government service. As such, each nation state has a set of blind spots it will not consider for attack and thus will likely ignore in defense. The secondary aspect of this blindness is that anything one level of abstraction out from a target will get even fewer resources. Aligning a cyber target with the real-world effects desired is a set of layer-cake problems.
If you consider an attack against an entity as a set of protection measures that are then analyzed for vulnerabilities, you get one aspect of the problem. When you consider that you are backing away from the actual target and attempting to find things that may be unprotected yet have the desired effects on your target, the problem set grows rapidly.
The question quickly arises: why would you need to think about this abstraction and secondary-effects scenario? Cyber targets that have physical effects are usually protected by design (engineered to fail gracefully) and have protection that keeps them from being negatively impacted easily. Thinking in only one dimension of cyberspace allows for neither active attack nor defense strategies. Conflict in cyberspace is a multi-dimensional set of capabilities that are enabled through tools and empowered by cognition.