Research Note: After CarShark and into ICS, SCADA, and embedded systems

After DefCon this year and CarShark last year I don’t know if there is any interest in exploring this aspect of cyberspace further, but I thought we might look at a few things. The reason you should care: though there is some testing of the systems described here, the area is poorly researched and not well understood by leadership. There seemed to be some surprise at the capability to control a car through its embedded systems. There also seems to be a continued misunderstanding of what supervisory control and data acquisition (SCADA), industrial control systems (ICS), and embedded systems look like. Finally, due to what I like to call “strategic blindness,” there is a gap between the knowledge of these systems and the risk they carry.

First, some caveats: I am not a military officer, nor have I ever been one. I was a corporal in the Marine Corps a few decades ago. I wasn’t a leader of men so much as I was an ammunition delivery mechanism of national power. So I won’t be playing at “this is how you fight in cyberspace” or hurling military acronyms like a major league baseball pitcher. I claim no expertise in SCADA, ICS, or embedded systems, though I have written software for and implemented those classes of systems. I do contend that the definitions of cyberspace used by government, and specifically the military, are artificially constrained, creating particular forms of risk.

SCADA is an element of the larger industrial control systems environment, which also includes distributed control systems (DCS). SCADA is the coordination and communication component, while ICS is the control and communication element. This is simplistic but workable as a definition for understanding that embedded systems are not SCADA or ICS. Embedded systems use some of the same architectures but are not necessarily industrial in nature (they are found in cars, trucks, planes, buses, devices and more).

Embedded systems may be part of a DCS that is inclusive of commodity communications like television, radio, cellular, satcom, and more. Embedded systems can be networked but are not necessarily part of the TCP/IP (Internet) environment, though their communications may be encapsulated (carried) within that environment. Finally, embedded systems can use standardized protocols like J1939 and NMEA 2000, and proprietary protocols like SeaTalk (Raytheon), to communicate between sensors and controllers.
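As a sketch of how approachable these protocols are, here is a minimal decoder for the 29-bit J1939 CAN identifier, based on the publicly documented bit layout (priority, parameter group number, source address). The identifier used in the comments is a standard J1939 value, not pulled from any particular vehicle:

```c
#include <stdint.h>

/* Decoded fields of a 29-bit J1939 CAN identifier. */
typedef struct {
    uint8_t  priority;  /* bits 26-28 */
    uint32_t pgn;       /* parameter group number, bits 8-25 (PS zeroed for PDU1) */
    uint8_t  dest;      /* destination address (PDU1 only), else 0xFF = global */
    uint8_t  source;    /* bits 0-7: source address of the sending ECU */
} j1939_id;

j1939_id j1939_decode(uint32_t can_id)
{
    j1939_id out;
    uint8_t pf = (can_id >> 16) & 0xFF;   /* PDU format */
    uint8_t ps = (can_id >> 8)  & 0xFF;   /* PDU specific */
    out.priority = (can_id >> 26) & 0x7;
    out.source   = can_id & 0xFF;
    if (pf < 240) {                        /* PDU1: point-to-point message */
        out.pgn  = (can_id >> 8) & 0x3FF00; /* PS is a dest address, not part of PGN */
        out.dest = ps;
    } else {                               /* PDU2: broadcast message */
        out.pgn  = (can_id >> 8) & 0x3FFFF;
        out.dest = 0xFF;
    }
    return out;
}
```

For example, the identifier 0x18FEF121 decodes to priority 6, PGN 65265 (the standard cruise control/vehicle speed group), and source address 0x21. The PDU1/PDU2 split is the point of interest for security: whether the low byte is a destination address or part of the group number depends entirely on a field the sender controls.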

When talking about “cyber” there is a focus on the World Wide Web (DNS), the Internet (TCP/IP), and once in a while the concept of OSI Layers 0 and 8 (bombs and people/politics). It is time to open our eyes slightly wider than that. To understand the risks to embedded systems we analyze the transmission (inputs and outputs), processing, and storage use cases.

Consider general aviation. The amount of technology involved in keeping even a small airplane in the air is significant. The communication and coordination required of a general aviation pilot is extensive and the technology substantial. Garmin created an avionics system called the G1000 that is basically a glass cockpit display system. It has the entire suite of navigation and instrument functions required to fly the airplane under instrument flight rules. With the addition of radios (UHF, HF, SATCOM, cellular) and various additional packages the pilot has substantial situational awareness. Where once there were stand-alone systems, the new system is a series of networked embedded systems utilizing telemetry from external and internal sources.

I claim no special knowledge of avionics, and in fact I’d say I have nearly none. All information here was gathered simply by reading the web page on the G1000. The expectation is to expose somebody to the principles rather than to specific proven forms of attack. In the case of the airplane, most people would look at a specific plane and think about how to attack it directly. Unfortunately that path is likely to be less than successful. Physical security is still important, but the interfaces (input and output communications) are likely fairly well protected too. Though there appear to be some examples of interfaces on airplanes being left open, and even an FAA directive detailing this vulnerability, the risk is unknown.

What is known is the avenue of a sideways attack against the navionics. The G1000 includes the ability to download weather data remotely. This appears to be a version of a web server that is accessed through a subscription. The data is downloaded and made available to several systems, including the autopilot, for navigating around severe weather (I’m not exactly sure how that works).

The sideways attack is to create a piece of malware, including a remote access tool, that could be downloaded along with the data. Any system accessing another system has to implement a set of protocols to negotiate the exchange, and those protocols must run at some supervisory level. This is a classic place where buffer overflows or injection-type attacks might work. Looking at each of the radio systems and the various services that run on them, similar vectors of exploitation can be found. Though none of the systems themselves run on the Internet, many of the associated data-providing systems are connected to the Internet.
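The actual data-link format is not public, so as an illustration of the overflow pattern only, here is a hypothetical length-prefixed weather record and two parsers for it: one that trusts the attacker-controlled length field (the classic bug), and one that validates it before copying. The record layout, function names, and `NAME_MAX` are all invented for the sketch:

```c
#include <stdint.h>
#include <string.h>

#define NAME_MAX 16

/* Hypothetical record: [1-byte name length][name bytes][...payload...] */

/* Naive parser: copies length-field bytes into a fixed buffer with no
 * bounds check. A hostile record with a large length byte overruns the
 * destination -- the classic overflow described above. */
int parse_naive(const uint8_t *rec, size_t rec_len, char name[NAME_MAX])
{
    (void)rec_len;                 /* received length ignored: part of the bug */
    uint8_t n = rec[0];
    memcpy(name, rec + 1, n);      /* n may exceed NAME_MAX: overflow */
    name[n] = '\0';
    return 0;
}

/* Hardened parser: validates the length field against both the destination
 * buffer and the bytes actually received before copying anything. */
int parse_checked(const uint8_t *rec, size_t rec_len, char name[NAME_MAX])
{
    if (rec_len < 1) return -1;
    uint8_t n = rec[0];
    if (n >= NAME_MAX || (size_t)n + 1 > rec_len) return -1;  /* reject */
    memcpy(name, rec + 1, n);
    name[n] = '\0';
    return 0;
}
```

The checked version rejects a record claiming 200 name bytes where the naive one would happily write 200 bytes into a 16-byte buffer; in an embedded supervisory service that difference is the whole attack surface.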

To make matters worse, though the cause (not enough altitude, speed) of an aviation accident might be detailed in the event data recorder (black box), the erroneous data leading to the poor pilot decision might not be detected. Since the sensors are aware of “reality,” only the displays need to be fiddled with to create an integrity attack. The event data recorder would show the reality while the pilot was seeing something else. There appears to be a substantial physical forensic capability (detecting whether a lamp was lit at the time of the crash, that kind of thing), but cyber forensics does not appear to be as well thought out.

Avionics systems appear to be tested against several physical and quality benchmarks, but I can find no details on this being done from a “cyber” aspect. Whether this specific attack is valid or not, the exploit pattern summarized here most assuredly will work. If you add in the social engineering component you have the pattern of most recent systems exploitation successes. Successful exploitation will prey on the trust relationships between the various systems and their owners. The real-time (speed) requirements of embedded systems will directly impact the amount and resiliency of information assurance measures.

For some reason nobody wants me to get near any military equipment, so I will hazard a guess that many of the above attacks are mitigated. Awareness or understanding of risk might be achieved by evaluating engineering proposals and other contract mechanisms that are open source. In the case of military vehicles and systems, evaluation of the embedded systems’ inputs and outputs, with their associated trust relationships, will help in understanding the threat and vulnerability aspects. The systems used to maintain sophisticated embedded systems are often ignored when red teaming or evaluating the security of systems. That allows for exploitation of an inherent trust relationship that might not even be documented.

To explain the vulnerability of undocumented trust relationships I use the unattributed anecdote that the way onto your super-secret corporate lab network isn’t through your ring of firewalls, or even social engineering. It is through the next software update of your Fluke local area network (LAN) analyzer. The access such a device has to a network is substantial, and the level of accreditation of such a device is minimal. The LAN analyzer is also itself a highly sophisticated embedded system.
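A minimal gate on that update channel shows the shape of the countermeasure. The digest function below is FNV-1a, a toy stand-in chosen only so the sketch is self-contained; a real device would verify a vendor-issued asymmetric signature rather than a pinned hash, but the decision point (refuse to flash anything that fails verification) is the same:

```c
#include <stddef.h>
#include <stdint.h>

/* FNV-1a digest: a toy stand-in for a real cryptographic signature check. */
uint64_t fnv1a(const uint8_t *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;  /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;           /* FNV prime */
    }
    return h;
}

/* Gate: refuse to flash an update image whose digest does not match the
 * value pinned at build time. In practice this must be an asymmetric
 * signature (the analyzer vendor signs, the device verifies), so that a
 * compromised update server cannot simply re-pin the digest. */
int update_allowed(const uint8_t *image, size_t len, uint64_t pinned_digest)
{
    return fnv1a(image, len) == pinned_digest;
}
```

The point of the sketch is where the trust lives: without a check like this, whatever the update server sends is implicitly trusted, which is exactly the undocumented relationship the anecdote describes.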


Just a quick note.
