Cyber security (cyber war) hype cycle writ large

In the late 1800s a new form of warfare was starting to rise into the collective understanding of the world's militaries. Command and control warfare through automated means began with the telegraph. This allowed network-centric conflict to rise and concepts like indirect fire to become accurate and nearly instantaneous. Warfare against command and control had existed long before that. To be sure, command and control is only a shadow of the deeper meaning of “cyber” as we’ve come to know it today, but command and control warfare exhibits many of the conflict elements that are important. The problem today is we are seeing the cycle move forward a notch further. We have done this a few times before, and the newbies (young’uns) haven’t seen it or haven’t studied it enough to recognize the cycle.

In the late 1950s the first automated/electronic computers started making inroads into the information processing environment. Electromechanical computers for breaking crypto had been used extensively during World War Two, but the idea of using computers for more than singular tasks just wasn’t on the horizon. By the early 1960s computers were becoming something that people and corporations could own at great cost. Concerns started to rise in late 1969 and early 1970 about the security of information in these centralized data repositories. In 1976 Bell and LaPadula wrote the first secure computing system paper, which would become a hallmark of the Common Criteria. Saltzer, Neumann, and many others writing about the Multics operating system defined the idea of secure coding standards. By the late 1970s the cost of securing these systems was determined to be too great, and by 1980 the effort was largely forgotten. In the span of a mere decade the security of information went from BIG problem to not worth anybody’s time.

In the mid-1980s personal computers were making their way onto desktops and being bought by individuals rather than corporations. The Apple II, the IBM PC, and the horde of clones swamped the environment. In 1983 the movie WarGames came to cinemas and people were SHOCKED that all of those computers were open to attack. There was great hand-wringing, and then the cute, cuddly teens were forgotten fairly rapidly. It is interesting to note that the magazine 2600 started publishing during this time (1984). The Trusted Computer System Evaluation Criteria (Orange Book) was written in 1983 and updated in 1985. The events of The Cuckoo’s Egg by Cliff Stoll were happening during this period, but the book would not be written until 1989. In that story you have foreign espionage, military plans being exposed to communist nations, attribution down to the house of the individuals involved, and an investigator who lost his job and was nearly imprisoned. Most assuredly in that period we were not doing computer security. In fact there were many barriers to the concept. There is a bit of a drought from the mid-1980s until the mid-to-late 1990s.

In 1991 John McCumber wrote a paper detailing the risk management model known as the McCumber Cube. This would be instantiated in policy, law, and procedures across the government, having by 1994 effects similar to the Bell-LaPadula paper of 1976. In early 1993 we got the World Wide Web, and then we also got the movie Hackers. Though the movie was hated by critics, who can complain about Angelina Jolie running around in leather? Government tripped to the issue about this time and realized it had a huge problem. In early 1995 military commands to study the problem were stood up. The Defense Information Systems Agency had its authorities changed. The big problem for the military was sensationalized by the publication of Cliff Stoll’s book (note he wasn’t a hacker or a security professional; he was an astronomer). Military leadership has always been worried about the concepts of information security (work in 1974 was supposed to fix it).

Consider that a general in 1994 was likely a captain in 1974 and you have a learning gap. Studying the demographics of leadership, a thread began (and has continued) that leadership wasn’t even aware of computers when they were in college, a notion erroneously used as an excuse still today. This leadership was more than willing to use the capability to wage network-centric warfare and decimate Iraq in the first Persian Gulf War, but they didn’t really understand the exposure the technology created. Here we see the onset of the cycle in the late 1960s peaking around 1974, a similar peak around 1984, and another peak around 1998.

In 1996-97 a set of exercises was started, and a particular set of intrusions was practiced against government systems by what would become known as the NSA Red Team (cue spooky music). Still, since those first big issues in the mid-1980s (and the creation of the Computer Fraud and Abuse Act), computer security was basically ignored. Even the reports from inside the military security apparatus were largely not seen as “big” issues. Then in 1998 a computer intrusion nearly duplicating the NSA Red Team approach entered government systems and began siphoning off information. This was a bit of a problem because a major operation was occurring. One of the largest computer investigations in history was begun. It is not flippant to say military leadership freaked out. This “freaking out” was one part “how come we didn’t know this was possible?” and two parts “we hope nobody realizes we just had an exercise that told us this was possible.”

When technologists look back at that period we have a tendency to focus on 1999 and Y2K. The people charged with protecting information assets focus on 1998. Y2K became a convenient excuse to buy lots of goodies and protect the infrastructure as much as possible. The diaspora of people from that moment in time is very interesting. Many people who are famous computer security experts today got their start at that place and time. After 9/11/2001 most of the fears about computer security were starting to be forgotten in the wake of the dot-com bomb, the World Trade Center collapse, and the ensuing global war on terrorism. We have a rising cycle of computer security interest, mirroring the earlier ones, from around 1997 to 2001, then a rapid drop-off again.

Around 2005 a series of events started to pique people’s interest in computer security again. A series of high-profile exploits started to be released. The SEC would soon require corporations to disclose customer data breaches. It is no surprise that HIPAA and other laws started to take confidentiality into account. Government started detecting technology-based intrusions (notice the dropping of “cyber”). The hyping of cyber as a security issue was starting. By the summer of 2008 most people were focused on the collapse of the stock market, but a select few had started focusing on “cyber” issues. In January 2008 Anonymous began Project Chanology and took on the Church of Scientology. Dorothy Denning had written about hacktivism nearly a decade previously (2000), and Anonymous was getting their hack on. In 2010 the wheels came off as previously unknown capabilities were being publicly disclosed. In June 2010 Stuxnet was discovered in the wild and labeled the first cyber weapon, an appellation lacking in meaning but likely the touch point for this cycle’s peak.

Each cycle a major story breaks, creating interest, and in general lots of equipment is sold to companies to “fix” the security holes. Yet the reality is that this equipment acquisition just increases risk and doesn’t take care of the actual problems in the technology stack. At no point in the last five decades have we actually fixed the security problems. In general, an engineering and technology problem is handled as a political problem and as such is never actually solved. Once the media cycle has moved on, the security guys are fired, the admins are paid less, and a set of experienced middle managers in information security moves on to banking, finance, or selling cars. There is a core of people who remain for whatever reason. I came into this world of information security in the mid-1980s, fully two decades after the first security papers were being written.

We can see the end of this cycle in the derogatory commentary coming from people who are relatively new to the space of information assurance and security. These are not “evil” or even wrong people. They are focusing on the hype rather than the issue, and that would drive anybody to distraction. If you focus on the technology stack, the principles that are important, and the inherent risk relationships, then appreciation for the issue is much greater. Politicians talk about the end of civilization, or generals spout off about great wars in cyberspace, and yet none of those people take any actions that are really significant in solving the problem. Some of the criticism from the contrarians is warranted. I’ve seen those stories at least three or four times before. I could give them the script about the issues with the current computer security culture.

That being said, there are huge risks. Principles that build on previous principles, and patterns that instantiate those principles, continue to be prevalent in the cyber/technology realm. That previous sentence is purposely recursive. Convergence, connectivity, the melding of the man-machine interface (cybernetics), and so much more are allowing for new threat-vulnerability vectors. Industrial control systems and commodity processing systems are not “new”, but the breadth of that knowledge is a threat-vulnerability vector. In general there are two ways to solve computer security. Minimize the threat-vulnerability tuple by spending vast amounts of money on countermeasures and kicking the problem down the timeline to the next generation, or spend money on minimizing the impact of an effectuated threat-vulnerability tuple so that an effective attack is meaningless.
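
To make that distinction concrete, here is a minimal sketch assuming a simple multiplicative risk model (expected loss as the product of threat likelihood, vulnerability likelihood, and impact). The numbers and names are purely illustrative, not drawn from the McCumber Cube or any particular framework.

```python
# Illustrative only: a toy multiplicative risk model, not a real methodology.

def risk(p_threat: float, p_vulnerability: float, impact: float) -> float:
    """Expected loss when a threat finds a vulnerability and causes impact."""
    return p_threat * p_vulnerability * impact

# A hypothetical baseline exposure.
baseline = risk(p_threat=0.6, p_vulnerability=0.5, impact=1_000_000)

# Strategy 1: spend on countermeasures that shrink the threat-vulnerability tuple.
countermeasures = risk(p_threat=0.6, p_vulnerability=0.1, impact=1_000_000)

# Strategy 2: spend on resilience so a successful attack barely matters.
resilience = risk(p_threat=0.6, p_vulnerability=0.5, impact=50_000)

print(baseline, countermeasures, resilience)  # 300000.0 60000.0 15000.0
```

Both strategies push the same number down; the argument is over which term you spend against.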

In some ways the dichotomy between those two ideas is exactly what the argument between the contrarians and the politicos hinges upon.

Those who are experienced and knowledgeable about the totality of information assurance and security will focus on both sides of this equation. To me it is interesting that hackers focus on vulnerabilities and politicians focus on countermeasures. Look at what the hacker collective publishes at DEF CON and Black Hat (exploitation). Then look at how laws are written by politicians to share information and control the technology (constraints). What some of the contrarians are reacting to is the lack of focus on the impact vector or on remediation of the technology stack used in commodity computing. Some will argue that impact is already negligible, but absence of evidence does not suggest absence of capacity to inflict impact. Succinctly: there are a lot of guns in America, but not everybody is shooting everybody else, regardless of the media accounts. There is substantial evidence to suggest capacity to inflict damage, but motive and opportunity have likely not aligned. Much like driving down the road, we are trusting that other drivers are not going to do stupid things, and in general we are right.

Just some thoughts, but also some caveats:

1) This kind of narrative fits evidence to a pattern. It is not good science, but I hope it resonates a tiny bit with the reader. It is explanatory while not expected to be empirical.

2) If you’re interested, mining the literature of computer security and piling bibliometrics up against known events rips this whole theory apart. Adding in high-level news stories (NY Times, LA Times, etc.) then gives some idea of this pattern. This is also likely due to the fact that legislative agendas follow the pattern pretty closely.

3) I’ve got no hate toward the contrarians. I think it is fun to watch what I’ve seen before and realize that this is just a repeat. I do think the wheels come off this pattern if there is ever a really big event, and I thought the Enron fiasco might have been that. No luck; it has already faded from memory.

4) I hate the term cyberwar. It should always be cyber war, like land war, sea war, space war, etc… Though I’m coming around again to cyberspace.

5) Even though we have had computers since before most generals were born, the thread continues that nobody can keep up with the technology. One of the effects is that we allow people who have no clue about information security to run information assets, and treat that as an acceptable expectation. In military terms we’d never allow somebody with no experience in a branch of arms to command a unit of that branch. Yet we do this with computers, the tool used to combine and control those arms.

6) This was written from memory in about five minutes, so if I’ve gotten the spelling of anybody’s name wrong, dates wrong, etc., my apologies.