Bruce Schneier has his Hollywood movie script for cyber terror, and Bob Gourley has a similar scenario at FedCyber. I wanted to answer the call before the presentations because I wanted to put it into the perspective of my own thinking process and put a little meat on the bones before I hear how others look at it. I’m not on the panel, so I’m not breaking any rules.
Judging a nightmare scenario to be a strategic-level effect, one that destroys, degrades, or disrupts a large proportion of society through technology broadly classified as cyber, is not easy. Cyber-physical effects through direct means are difficult to create. Engineers are smart and know that computers will screw up. They don’t trust the computers, so they build the physical assets, with catastrophe in mind, in ways that ensure they do not fail.
So, any effect has to take into account the overall goal, the capabilities in hand, and the unexpected secondary and tertiary results. Considering how little modeling or testing will be possible, you want something that works on the first try. One way to get there is to use past mistakes or accidents as patterns. In the Western world we have a tendency to write extensive reports on mistakes and accidents as a way of communicating how to do something right the first time. These can be digested into a format that yields a technology taxonomy, which in turn can illuminate structured attack methods. This is something previous students and I built, called “Black Belt Cyber”. In a nutshell it is nothing more than vulnerability analysis.
A lot of people will focus their idea of a horrible cyber event on an ICS/SCADA-type attack, the implication being that an obvious physical effect is created. Those attacks often focus on the electric grid or other large-scale industrial systems. But those systems are specifically built to withstand significant physical effects such as weather and foliage. Similarly, they work well with degraded or disrupted computing capability.
The goal is to disrupt, degrade, or destroy at a level large enough to create a societal impact. That impact must be sufficient to degrade leadership resources and allow an adversary to disrupt decision-making capability. The desired impact is one that can be enacted at a nation-state level.
With that given, my scenario is rather short but, I hope, impactful.
Scenario: A company is the largest manufacturer of air bags in the world. The air bag has become a system of air bags and is tied into numerous automotive systems beyond the crash sensors. The air-bag system is a cyber-physical system of systems with many safety-critical aspects. Timing, g-force, and passenger weight are all part of the system’s processing. The hypothetical company is about to conduct a large-scale recall due to a failure of the system. Some design flaw, once disclosed, requires rapid mitigation and repair. The recall will result in an update to the air-bag control system. Updates to these systems are normal and, for some manufacturers, even happen over the air via Wi-Fi or Bluetooth. An adversary injects a bug into the software being updated in the hurry to respond to the recall.
The software implant uses the control system of the vehicle to pull time, date, and speed data. If the speed of the vehicle is more than 30 mph, a driver is sitting in the car (duh!), and the door is closed (merely examples of malware decision criteria), the software sets off the air bag on a time and date set by the adversary. On a particular time and date, perhaps worldwide, the air bags deploy while vehicles are being operated. Vehicles not being driven are bypassed, but once a vehicle is driven the condition for deployment occurs and “BANG”. The overall effect could be that ALL vehicles are kept off the road for some amount of time. People with vehicles that don’t have air bags tell everybody, and black markets erupt for pre-air-bag vehicles.
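The decision criteria above can be sketched as a few lines of trigger logic. Everything here is invented for illustration; no name, field, or threshold describes any real air-bag system:

```python
from datetime import datetime

# Illustrative-only sketch of the hypothetical implant's trigger logic.
# All names, fields, and thresholds are invented for this blog post.
TRIGGER_DATE = datetime(2030, 1, 1).date()  # adversary-chosen activation date
SPEED_THRESHOLD_MPH = 30                    # "more than 30 mph" from the scenario

def should_deploy(speed_mph, driver_seated, door_closed, now):
    """Return True only when every example condition from the scenario holds."""
    return (
        now.date() >= TRIGGER_DATE
        and speed_mph > SPEED_THRESHOLD_MPH
        and driver_seated
        and door_closed
    )

# A parked, unoccupied vehicle is bypassed; a moving, occupied one is not.
print(should_deploy(0, False, True, datetime(2030, 1, 2)))   # False
print(should_deploy(45, True, True, datetime(2030, 1, 2)))   # True
```

The point of the sketch is how small and innocuous such a check would look buried in a rushed update, which is exactly why the scenario is plausible.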
The resulting chaos is large in scope. If only .001 percent of the affected vehicles in a 100-million-vehicle recall actually deploy, it would still be catastrophic. Since the systems in use have nation-state-aware controls in them (North American v. European), targeting is possible: a specific nation state can be singled out with this attack. There are controls in place to keep this from happening. Previously the safety mechanisms included a physical element, but in some vehicles that is now interfaced through software controls.
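The arithmetic behind that claim is easy to check:

```python
# Back-of-the-envelope check of the ".001 percent of 100 million" claim.
recall_size = 100_000_000        # vehicles in the hypothetical recall
failure_rate = 0.001 / 100       # ".001 percent" written as a fraction
deployments = round(recall_size * failure_rate)
print(deployments)               # 1000 vehicles deploying on one day
```

A thousand air bags firing in moving vehicles on the same day would be catastrophic on its own, before counting the second-order effect of everyone else refusing to drive.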
A note: The scenario is meant to lay out a pattern. Like a few other scenarios I worked out with my previous students, I don’t want to give a pattern for a real attack. I want to show how to think about an attack and then consider the defense strategies. As such, I know there are aspects of this particular scenario that would likely catch and remediate the malware implant in the automotive system. At least I hope those checks would work.
Oh, and I wrote this on my iPhone, so all typos are nothing more than hoping you find more interesting stuff.