Mr. Lewis Shepherd of Microsoft came to Purdue to give a talk for CERIAS a while back, and he talked about how equating the Manhattan Project to the world of cyber security is completely wrong. I liked his talk quite a bit, and it aligned closely with something I’ve been talking to people about for a while. Talking to people is important. I know my impact on the world is going to be negligible, but I’ve dedicated my life to infecting the youth of the world with a few stray ideas. They call it teaching, and it doesn’t pay much. I think Mr. Shepherd was making a good case that the scope, the cause and effect, and the process of one program might not align realistically with another program. The secrecy, single-mindedness, and type of problem that was the Manhattan Project have almost nothing to do with the quite different problem of cyber security. Much like I’ll never be able to equate my teaching to Socrates, the cyber security community shouldn’t really think “Manhattan Project.”
I wish I could make a billion dollars off my ideas, but as an academic that is almost certainly not going to happen. I do have a compensation path planned: if the students get rich, they’re supposed to buy me an endowed chair. But you, as a reader, showed up to find out how we can fix information security. The quick answer is we can’t.
All your concepts of information security, the hacker conferences, the college courses, the subject matter experts, the standards, the processes, the technologies, and so much more are simply wrong. The entire architecture of the information technology paradigm as currently instantiated is simply not capable of being secure. The political processes, the standards of the government, the principal actors of information security, and the perverse incentives of having an intelligence agency head up the primary infosec organization simply won’t allow information security to flourish.
You see, one of the lessons we can take from the Manhattan Project is that you’ve got to think of new and innovative things without being hampered by the naysayers and those who say failure is assured. I’m not saying you don’t take into account all the ways that didn’t work. What I am saying is that we spend a lot of time working on things we already know don’t work. Think about some of the standard security mechanisms.
Everything you likely think about defense in depth is wrong. All of the audit and compliance stuff is wrong. The firewall and intrusion detection and prevention technologies are wrong. The autocratic and dictatorial policies of information security are wrong. The underlying theories of robust and resilient programming are wrong. There is nothing about the current information technology infrastructure that is security oriented. The foundations of the technologies are fundamentally at odds with the creation of a secure information culture. Now, to be honest, I didn’t say this. Neumann, Saltzer, Cerf, Bernack, and so many other people said this long before I did. But maybe you haven’t read their stuff before.
How can I possibly support the claim that they are all wrong and don’t work? Pretty simple. They don’t work. Though we can secure systems to some point, we are almost always talking about security absent some failure in the system. Nothing is really secure. This is a huge problem that breaks most people’s “common sense” way of thinking about security. Simply put, the way we do things will never be secure, and we should stop trying to fix things in ways we already know don’t work.
Now the ignorant, the mentally lazy, and those incapable of handling the difficult reality of irreconcilable problems will simply throw up their hands and say, “What are we supposed to do, give up?” They will then take a holier-than-thou, moral-sounding stand, denigrate the concepts laid out here, and continue to fail to provide you, your organization, the government, and about anybody else information security. A lot of people did the same when making a nuclear bomb was first suggested. The role of information security practitioner and researcher is too often filled by arrogant idiots who claim near-mythic powers of security. Breaking things doesn’t make you good, and trying to secure technologies never destined to be secure is just as bad.
Some companies have network sanitation policies in place that completely obliterate the ability to do business. That is a symptom of the stupid expectation of security gurus that the business should subjugate itself to their needs. Other examples of this are the autocratic automatons who treat usability as a principle of information technology abhorrent to control, one that shouldn’t be considered in the equation of security. If it isn’t usable or doesn’t foster capability, we shouldn’t have it in the network. That includes archaic passwords, horrible network routing rules, and anything else that fails to be tied to business need. At about this point, if they are still reading, several CIO types are experiencing moments where their eyes bug out of their heads, blood seeps from their eyes, and they are cursing me loudly. Get it out of your system.
Imagine if security at the cost of business had been applied to the Manhattan Project. Why yes, we can make a nuclear bomb, and it is a real dandy. I’m sorry, though, you can’t use it as a weapon. It can sit here and be totally useless for the war effort. I know we broke the bank installing this war-winning piece of technology, but unfortunately the security team has disabled all of the functions that make it work.
None of this stuff is much better for the academics. The computer science types love formal methods. Formal methods attempt to apply mathematical principles to the creation of software. Computer science types like to talk about testing and work breakdown structures, along with engineering and so much more. Gag on a boolean function. So much software isn’t written by Google PhDs but by liberal arts students trying to solve problems. Software is a creative endeavor, and an artistic one at that. All of those people who think this is about engineering, and who try to foster that, have completely lost their minds. Software is more often a caffeine-induced lucid event between the sun setting and rising while keyboards are shredded. Software creation is writing a book, not appending an algorithm. There is heart and soul in the process that academic types ignore at their peril. But then again, computer science types don’t like engineering types, and they both end up working for some pin-striped, pencil-necked geek from the business school. The day of the “professional programmer” has given way to hordes of people creating a narrative upon which society runs.
The paradigm of information technology is a situated amalgamation of myths, fears, unfounded expectations, and politically motivated perverse incentives. This isn’t something you apply a government-run, super-secret, world-controlling weapons program to. It is a story you tell your children as you watch the narrative unfold. You can’t apply security to a story. You shouldn’t try. Yet in the world of information security we’ve built, that is exactly what is happening. Some of these problems are created by attempting to control that which can’t be controlled. Instead of dealing with the weather, we’ve tried to make rain and complained when we flooded ourselves into oblivion. The very principle Lewis Shepherd talked about illuminated for me that there isn’t much place for government in securing our companies.
Software and hardware are domains of the information technology construct that, in many cases, are separated by a glass wall within an organization and even across organizations. The application and operating system entities are often separated in much the same way. Specialization and the hyper-specificity of worker tasking leave these entities totally unaware of one another.
I’m not one to give up. I can tell you the first principle you should think about operationalizing within your organization is “assumption of breach.” This too is an old concept that has only recently gotten some attention. Most people don’t realize it isn’t about giving up but about figuring out how to operate. If you do this, you will figure out how to stay operational and profitable during a breach. You will likely re-architect your environment fundamentally, in such a way as to protect the information assets themselves. Strange and wholly stupid policy conundrums like bring your own device (BYOD) will evaporate. In a world where you control the information, you don’t care about the devices. The angst over BYOD is just an example of the arrogance of information security in thinking it has any hope of securing its network infrastructure from the outside world. We don’t need a Manhattan Project for cyber security. We need programming courses in kindergarten.
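The “control the information, not the devices” stance can be sketched as a toy authorization check: the decision is a function of who is asking and how the data is classified, while the device is recorded only for audit and never trusted. Everything here, the names, the classification levels, and the policy model, is an illustrative assumption, not any real product’s API.

```python
# A minimal sketch of data-centric access control under "assumption of
# breach": authorization follows the information asset, not the network
# perimeter or the endpoint. All names and levels are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    name: str
    classification: str  # "public", "internal", or "restricted"

@dataclass(frozen=True)
class Request:
    user: str
    clearance: str
    device: str  # kept for the audit trail, never used for authorization

# Ordering of levels: a higher clearance may read lower classifications.
LEVELS = {"public": 0, "internal": 1, "restricted": 2}

def authorize(req: Request, asset: Asset) -> bool:
    """Grant access based on the requester's clearance and the data's
    classification, ignoring which device the request came from."""
    return LEVELS[req.clearance] >= LEVELS[asset.classification]

payroll = Asset("payroll.xlsx", "restricted")
menu = Asset("lunch-menu.txt", "public")

alice = Request("alice", "restricted", device="personal-phone")
bob = Request("bob", "internal", device="corporate-laptop")

assert authorize(alice, payroll)    # BYOD phone, but cleared for the data
assert not authorize(bob, payroll)  # corporate device, still denied
assert authorize(bob, menu)
```

The point of the sketch is the shape of the decision: once the policy is attached to the data, the BYOD question dissolves, because a personal phone and a corporate laptop are judged by exactly the same rule.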
Remember, there is way more money in continuing the problem than in fixing it.