The following is a little thought experiment. It is not meant to answer any questions but to examine the capabilities of cyber warfare from a metaphorical position. Borrowing the principle of Schrödinger’s cat (is the cat alive or dead?), and without harming any cats, we ask a slightly different question: is the electric light in a room on or off? Of course you could enter the room and look. Maybe you could peek under the door, or perhaps through a window. But for our purposes, let’s say the room with the light is on the other side of the earth, and you have to determine the condition of the light by something other than direct observation.
If you consider the question of a light being on or off, you might jump to the conclusion that the chances are 50-50, and that there is no way to know for sure. The idea is to see whether it is possible to reason about the state of the light (certainty isn’t being measured). To improve on a blind guess we can inject a few variables, such as the amount of daylight in a given day, the occupant’s work habits, the weather, the cost of keeping a light on, and more, to help reason about the likelihood that the light is on. Each data point, if accurate, can help us draw conclusions about whether the light is on or off.
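The reasoning above can be sketched as a simple Bayesian update: start from a 50-50 prior and fold in each piece of evidence. Every probability below is invented purely for illustration; none comes from real data.

```python
# Naive Bayesian sketch of "is the light on?" reasoning.
# All likelihood values here are made up for illustration only.

prior_on = 0.5  # no information yet: 50-50

# Each observation: (P(observation | light on), P(observation | light off))
observations = [
    (0.9, 0.4),  # it is night-time locally; lights are usually on at night
    (0.8, 0.3),  # the occupant normally works at this hour
    (0.6, 0.5),  # weather is overcast, so indoor lighting is slightly more likely
]

posterior_on = prior_on
for p_obs_on, p_obs_off in observations:
    numerator = p_obs_on * posterior_on
    denominator = numerator + p_obs_off * (1 - posterior_on)
    posterior_on = numerator / denominator  # Bayes' rule, one observation at a time

print(f"Estimated probability the light is on: {posterior_on:.2f}")
```

Each observation shifts the estimate away from 50-50; with the invented numbers above, the posterior lands near 0.88, which is the "educated guess" the post is driving at.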
What does this light on/light off question have to do with cyber warfare? It is an inherently non-cyber controlled device (though that is changing), but more importantly it helps us look at a piece of information regardless of the computers that might or might not be used. We are not looking for the sure answer that so often eludes us, but we are looking to see the flexibility inherent in the exercise of thinking. I have seen a variety of these thought exercises in the literature and thought it might be nice to walk through one here just for the fun of it. In the end the state of the light is simply a piece of information.
Since we can’t see into the room, we might use methods other than direct observation to ascertain the state of the light. Infrared imaging might give some indication of whether a light is on or off. Simply looking at a power meter might help as well. In fact, we could remotely watch the consumption reported by the power meter and attempt to infer when certain activities occurred within the room. If thousands of watts are suddenly being drawn, it might be a microwave; if an entity enters the room and within seconds a hundred watts or thereabouts turns on, it might be a light.
Watching that usage remotely, we might infer a series of patterns in electrical consumption and learn over time that certain spikes correspond to a light being turned on, though it might be hard to say for certain which light. If we know the wattage of the target light and of every other device in the room, and the target’s wattage is unique, meaning no other device draws the same amount and no combination of other devices adds up to it, then we would know exactly when that light was turned on just by remotely reading the power meter (sensing is often unprotected on these meters).
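A toy version of this inference can be sketched in a few lines. The wattages and the meter readings below are invented, and real meters are far noisier, but the two checks are the ones the paragraph describes: verify the target wattage is unique (no combination of other loads sums to it), then flag every step in total draw that matches it.

```python
from itertools import combinations

TARGET_WATTS = 100                # the light we care about (hypothetical)
OTHER_DEVICES = [60, 750, 1100]   # other known loads in the room (hypothetical)
TOLERANCE = 5                     # meters are noisy; allow a few watts of slop

def wattage_is_unique(target, others, tol):
    """True if no combination of the other devices sums to the target wattage."""
    for r in range(1, len(others) + 1):
        for combo in combinations(others, r):
            if abs(sum(combo) - target) <= tol:
                return False
    return True

def light_on_events(readings, target, tol):
    """Return the indices where total draw steps up by the target wattage."""
    return [i for i in range(1, len(readings))
            if abs((readings[i] - readings[i - 1]) - target) <= tol]

# Simulated whole-room meter readings, one sample per minute (made up).
readings = [60, 60, 160, 160, 1260, 1260, 160]

assert wattage_is_unique(TARGET_WATTS, OTHER_DEVICES, TOLERANCE)
print(light_on_events(readings, TARGET_WATTS, TOLERANCE))  # → [2]: the 100 W step
```

The uniqueness check is just a brute-force subset sum, which is fine for the handful of devices in a room; if two loads could sum to the target wattage, the step detector would misattribute events, which is exactly the caveat the paragraph raises.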
All of this allows us to make an educated guess about the light. How many other ways can you derive the status of the light without breaking the rules? We have departed from the original structure of a “cat in a box” but our principles of unknown observation remain. I don’t want to mess up what might be a good point of discussion, but think about the problem and post in the comments if you have anything to add.
Sam, but what if I don’t want to be subjected to dataveillance? Do I have a choice any longer?
The implications of this question scare me, as they are so far from everything that I’ve been taught about the American ideal. Why should I throw that away? For what benefit am I, or society, obtaining via these advanced forms of ‘Technique’?
Great question. We know that security through obscurity is a thin, easily penetrated veil. So that leads to non-participation as an avenue. Still likely not even possible. To some, dataveillance is good. Fear drives abandonment of freedom. So that leads to the question of value, which is likely predicated on personal point of view. The value is in societal control, which is also where the scary stuff is. It is a loaded gun with a hair trigger pointed at freedom of censure. But what is unknown is who has their finger on the trigger. Dataveillance is how we secure society and provide safety. What is not discussed is the cost in eroded freedoms.
“But, what is unknown is who has their finger on the trigger. ”
Well, isn’t it whoever is best positioned to process OODA fastest on the widest possible data set, whether that data set is private or public? If my assessment is correct, then the “who” is the Pentagon and its universe of high-tech affiliate companies.
How does the individual, as we currently know it, survive this societal framework?
If privacy is a concern then there is very little to be done. Give up using credit cards and shopping cards. Go to a cash-based system (strange sounding only a decade into plastic, but still viable). Only use non-telemetry-equipped media and entertainment (no cable TV for you). Use data obfuscation and signal obscuration technologies like Tor and anonymous proxies. You don’t have to give up all technology, just moderate ahead of time how you use it. Each person has a digital footprint, and the use of any technology can provide (following the metaphor) a footfall that might be detected. There are a couple of books out there on how to minimize that footprint. Maybe a blog post is in order on that.
“If privacy is a concern then there is very little to be done.”
Thomas Jefferson is rolling in his grave. What did so many Americans give their lives for… a global network of sousveillance?
“Each person has a digital footprint and the use of any technology can provide (following the metaphor) a footfall that might be detected. ”
With dataveillance tech like Palantir.com and bit.ly, social systems can be modeled at quite a granular level. Feedback control systems of significant dexterity are therefore inevitable, especially as dataveillance migrates to real time, affecting everyone within that society, irrespective of any personal choice to opt out of any given single technique within that system.
Who are those that are asking the societal question about the impact of these techniques on humanity, and why is this entire discussion outcast to the desert of the real, where no one can hear it or participate in this discussion? Is anyone at Purdue, perhaps in the philosophy dept., exploring these questions publicly?
By the way, here’s bit.ly’s public facing data scientist in a video presentation showing off their stuff.
http://www.youtube.com/watch?v=G6_UtrZsiBo
Ok, the background music to the YouTube video was kind of eerie 🙂
As to the topic: most Americans feel that they have given up nothing and expect nothing from surveillance culture. There simply has been zero data to support that surveillance is even perceived by Americans as an issue, except in tiny patches of aware populations. Even criminal elements begin to ignore public camera systems after a while.
There are a few people asking the broader social questions, but there is no support for that work. I have worked with the EFF and other groups (see the amicus brief in my CV) dealing with the abuse of tools by law enforcement. Other people have looked at the ethical nature of some of the tools. I personally believe that the tools, being dual use, are nothing more than tools. It is the specific use, and the goal of that use, that can be reprehensible.
It is interesting that you mention opt-out versus opt-in. Opt-out as a structure for protection requires a fully engaged and committed individual with knowledge of the ramifications of their decision to be morally and ethically correct. However, opt-in is the better default condition when setting up regulatory choices. It is the less ambiguous choice and does not violate the unknowing, or unaware. Obviously business doesn’t like that second model at all.
I wish we had a choice about opting in or out, but that choice does not really exist, does it? And if we cannot master the tools (i.e., ‘technique’), then how can we assure ourselves that they serve society? Let me provide the following excerpt…
The Future does not Compute
http://www.netfuture.org/fdnc/ch25.html
An Inability to Master Technology
“Jacques Ellul says much the same thing when he points to the ‘Great Innovation’ that has occurred over the past decade or two. The conflict between technology and the broader values of society has ceased to exist. Or, rather, it has ceased to matter. No longer do we tackle the issues head on. No longer do we force people to adapt to their machines. We neither fight against technology nor consciously adapt ourselves to it. Everything happens naturally, ‘by force of circumstances,’
‘….because the proliferation of techniques, mediated by the media, by communications, by the universalization of images, by changed human discourse, has outflanked prior obstacles and integrated them progressively into the process. It has encircled points of resistance, which then tend to dissolve. It has done all this without any hostile reaction or refusal …. Insinuation or encirclement does not involve any program of necessary adaptation to new techniques. Everything takes place as in a show, offered freely to a happy crowd that has no problems. ‘ (quote from The Technological Bluff)
It is not that society and culture are managing to assimilate technology. Rather, technology is swallowing culture.”
The Technological Bluff, by Jacques Ellul
http://books.google.com/books?id=QXIDzfx19SkC&pg=PA18&lpg=PA18