Research note: Trans-convergence architecture

Apple CEO Tim Cook, and previously Steve Jobs, often talk about the idea of the iOS application eco-system. I think that is fundamentally flawed thinking. Thinking in terms of operating systems limits the level of innovation by constraining the product category to a single point in time. There is actually a better model: thinking in terms of a data-services eco-system. Satya Nadella at Microsoft has made headlines with his discussion of mobile-first and cloud-first. That too is fundamentally flawed, though likely an intermediary step. I think Apple and Microsoft both have a great grasp on the direction of technology, but there are some glaring blind spots in both strategies.

Think of the Bing service not as search but as a larger, more holistic data service. It is on your Xbox, your Windows Phone, and your computer. The data repository behind it is linked to search, email, and webpage data. Google provides the same data-intensive application eco-system, which includes groups and image hosting. The recent iOS 8 and OS X Yosemite updates integrate the consumer computational platforms of mobile, desktop, and data service very closely. This is the principle of integrated convergence of data beyond the device and across the data eco-system: the principle of trans-convergence.

The mobile-first and cloud-first view misses an entire thread of security: the resilience of data systems. The concept of an application eco-system misses the entire reality of a data-driven eco-system. It is like mistaking the lawn for the lawn mower. This mistake is common when trying to align reality to strategy, and the assumptions are based on history rather than the future. Cloud is a services delivery platform with limitations and hurdles that are outside of the cloud model itself. Cloud is also a much-abused term, but for now it is what we have.

Until the entire world is wired with high-bandwidth, low-latency networks, any cloud-first strategy will serve only urban areas, wealthy nations, and those with highly disposable incomes. In this particular area the United States itself lags behind other countries. As an example, Google has pushed a mapping solution that requires significant bandwidth and low latency across the cellular network. In the city, moving from point A to point B, that makes sense. In the hinterlands, where I might really need maps, that may be impossible. Unfortunately Microsoft has followed Apple and Google in dropping its cartography software (Streets and Trips) and is moving to a downloadable, just-in-time map solution (Bing). These are examples of moving towards cloud-enabled services without considering data resiliency.

This isn’t all bad. The emperors still have clothes on. The vast proportion of society has moved into centralized locations and the metropolis age of urbanized living is well underway. Snarky comments aside, this is a great example of the technology overtaking the reasonable consumer use case. It may be interesting, though, that lack of maps does not stop location-aware tracking and analysis. More importantly, the demographics likely support the move even if the use case itself does not. Looking at the various use cases we can start to build larger models or consider different architectures.

Government is an interesting use case for studying how technology will change over time. The government enterprise is often considered slow to adapt or adopt technology, and yet it also drives some rapid changes. The United States federal government internally has highly restrictive rules on the use of information technology, a principle FBI Director Comey has been pushing on the public as he decries the concept of cryptography on cell phones. Dictatorial and autocratic technology implementers are a significant risk to innovation and productivity. A consequence of the governmental bias is a rule base and law base of conflicting regulatory controls and infeasible technical implementations.

If you were to take all of the government-mandated information technology controls and the government-standardized security technical implementation guides and overlay them on the current technical solutions, you would find wide gaps and conflicting regimes. Parsing the security technical implementation guides to create a usable system is a series of trade-offs, usually based on experience. The guides and solutions are an amalgamation of best cases and evolving standards. Most of the standards and processes for securing information assets were derived from the traditional client-server architecture, itself a derivation of the data processing and mainframe days of computing. Heavy emphasis in the traditional client-server architecture is placed on securing individual assets that sit in the enterprise as silos. The clients are usually connected to the server front end through a shared transmission medium, and the servers usually have some kind of shared connectivity.

[Figure: Traditional client-server architecture]
Security within the traditional client-server model is layered and does not aggregate. The security model of the client-server hierarchy is systems-based security. Specifically, security is applied to a device or system (an instance of an operating system among what might be many within a server). This kind of monolithic solution results in a significant amount of overhead for the security controls. The controls as applied are generally imposed per hardware device or per operating system instance individually. Thus security is for the specific asset, not the information found on the asset. In some cases there is an access control mechanism for files and a trust solution that says user A can access asset B. That kind of granularity rapidly breaks down when the asset is moved between individual nodes and/or leaves a protected zone.
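To make that concrete, here is a minimal sketch of node-scoped access control. The node names, paths, and users are invented for illustration and do not reflect any particular product; the point is only that the control is keyed to the system that holds the asset, so a copy on another node is outside the rule's reach.

# Minimal sketch of per-system (node-scoped) access control.
# All names here are hypothetical; this is an illustration, not a real product's API.

acl = {
    ("fileserver-01", "/share/report.docx"): {"alice", "bob"},
}

def can_access(node, path, user):
    # The rule is anchored to the node that holds the asset,
    # not to the information itself.
    return user in acl.get((node, path), set())

print(can_access("fileserver-01", "/share/report.docx", "alice"))  # True on the original node
# Copy the same file to a laptop and the original rule simply has nothing to say about it;
# whoever holds the laptop holds the data.
print(can_access("laptop-07", "/share/report.docx", "alice"))      # False: no rule governs the copy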

The information asset is created or owned by somebody, but the ownership controls do not carry over once something is sent out by email or the physical media control itself has been lost. In the mainframe era this was not necessarily the case. A user could be given access to an information asset located in a container but never allowed to copy that asset, and permission to access could be revoked. Copy and paste broke all of our security controls. Control of the physical asset is the best control and is colloquially known as gates, guards, and guns. It is not a point of contention, and experience with several data thefts has proven physical control is important. Loss of physical control of an asset will likely lead to an assumption of breach.
Another point is that the traditional model of accessing resources has resulted in connectivity provided by numerous providers, and data redundancy is high. Each of the locations for data is duplicated. As an example, an email sent out to a distribution list will be duplicated in each user's mailbox. The consumer's experience is as silo'd as the solution, often duplicating passwords and login credentials across different systems. As a user issue that is interesting, but the reality is that user fatigue is an issue. This leads technology service providers to blame the user, and often denigrate the user for not following better practices.
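A toy sketch of that duplication, with invented mailboxes and messages, shows how one piece of information becomes several independently controlled copies the moment it is distributed.

# Toy illustration of per-mailbox duplication in the traditional model.
# Mailbox and message contents are made up; the pattern is what matters.

mailboxes = {"alice": [], "bob": [], "carol": []}

def send_to_list(message, recipients):
    for r in recipients:
        mailboxes[r].append(dict(message))  # every mailbox gets its own full copy

send_to_list({"subject": "Q3 plan", "body": "..."}, ["alice", "bob", "carol"])
total_copies = sum(len(m) for m in mailboxes.values())
print(total_copies)  # 3 copies of the same information, each now secured (or not) separately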
Connectivity mechanisms for the consumer can be role-based (work phone, home phone) or use-based (cellular, Wi-Fi, LAN, WAN). Because connectivity is not shared at the consumer level, a variety of layered technologies are required in the stovepipe that allows for access. In some cases the various systems will share a connective layer, but even that can be fractured, requiring significant architectural compromises. When information is shared between competitors or outside of particular enterprises, the connectivity between systems is almost always subject to external forces separate from either the sharing entity or the receiving entity.

[Figure: Emerging cloud infrastructure]

In the modern or emerging cloud architecture, hardware becomes a shared asset and often the security of the systems is a shared asset. Connectivity within the served systems is often a shared asset. The consumer has the appearance of a “cloud” structure, but actually each consumer is separated by their application stack and their own access mechanisms. In this emerging context a few technical trends are being actualized. The first is that the connectivity structure is now being standardized around TCP/IP. Regardless of whether it is a cellular, voice, data, or television signal, it is a digital signal. The consumer, though, is still seeing a service stack in front of them. If you want to use iTunes you go through iTunes. If you want Amazon Instant Video you have to go through Amazon. The shared and non-shared commerce models create a walled-garden approach to the cloud infrastructures.
Whether PaaS, SaaS, or IaaS, the pattern is shared connectivity and hardware for the service provider and clients of the service provider. To consumers the reality is that the systems look like an application stack. In this particular model data redundancy remains an issue, where the same data may be replicated dozens or hundreds of times. User credentials are not shared. With the recent iOS 8 change by Apple, even where there were shared credentials, a password for each application is now going to be required. This will create significant issues for the user base, and poor implementation will decrease uptake of the technology. Every time we push cognitive effort onto the consumer, the consumer is presented with a choice of poor security or dropping the service. This is a bad place to be for companies. It is going to be a driver towards a different model.
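In rough terms, and with entirely invented service and account names, the consumer-facing credential pattern ends up looking like this: one person, several walled gardens, several separate secrets.

# Sketch of credential duplication across service silos.
# Nothing here reflects any real provider's account system; the names are placeholders.

credentials = {
    "itunes":       {"user": "jane@example.com", "password": "pw-1"},
    "amazon_video": {"user": "jane@example.com", "password": "pw-2"},
    "email":        {"user": "jane@example.com", "password": "pw-3"},
}

distinct_secrets = {c["password"] for c in credentials.values()}
print(len(distinct_secrets))  # 3 secrets for one person; reuse or fatigue is the predictable response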
Applications are not shared in this model, even if common data repositories are required. On my phone or computer I run an instantiation of a program, and even if I am chatting with somebody using the same program, the reality is we both run our own application stack. That creates risk of attack on and modification of the locally running program. It creates a leakage or data repository access issue, depending on the application. Security is at the application layer and the access task, which means one bad actor within a group could have tragic or dramatic consequences for a group of users. Many of the recent exposures of information actually reflect the use of an application trust relationship to undermine the system security model.
Consumer roles and attributes are still redundant and duplicative in the cloud model. This leaves a gaping hole in possible infrastructure enhancement. The industry silos are applications, equipment, connectivity, security, and operating systems. The emerging cloud model enforces the concept of application eco-systems rather than services eco-systems.

[Figure: Trans-convergence infrastructure]

What if you didn’t have to have any silos? What if you could merge into a matrixed and quite holistic model? The question is: what comes after cloud as a service delivery model? With the merging of data repositories, the need for silos of operating systems is removed, and thus applications can be used on a purely subscription basis. The user sees a service or clicks on an icon to activate what they would call an application, but it is running somewhere and not necessarily on their computer. The device they call a computer is a display with minimal storage. The application they just activated may have one instantiation, and all users are acting upon that instantiation. Thus the data being exchanged is controlled at the application layer, and any subterfuge can be traced quickly.
Applications run on top of a data layer rather than an operating system layer in this model. In this model of converged architecture there is one copy of the data or item people are using. When you send an email out to a distribution list, the pointer is sent and the email itself remains in a controlled container. Everybody who receives it creates a change, adoption, and concatenation log entry on the email. The service layer of the email service is tied directly to the data layer. That keeps segmentation at the data layer possible, and controls on access are in the application itself. It is a form of baked-in security, following on many previous authors' ideas. The concept is to decrease the spawning of data in duplication and increase security through integration and authentication to the data. The idea, in a most general way, is to focus on information security rather than systems security.
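A minimal sketch of that pointer-and-container pattern, assuming a hypothetical data-layer store and invented function names rather than any existing service, might look like this:

# Sketch: one controlled copy of the email, recipients get pointers,
# access is logged against the data itself, and access can be revoked.
import uuid

store = {}  # message_id -> {"body": ..., "allowed": set(), "log": []}

def create_message(body, recipients):
    message_id = str(uuid.uuid4())
    store[message_id] = {"body": body, "allowed": set(recipients), "log": []}
    return message_id  # only the pointer is distributed to the list

def read_message(message_id, user):
    record = store[message_id]
    record["log"].append((user, "read"))       # the access log lives with the data
    if user not in record["allowed"]:
        return None                            # control is enforced at the data layer
    return record["body"]

def revoke(message_id, user):
    store[message_id]["allowed"].discard(user) # revocation is immediate for every pointer holder

msg = create_message("Q3 plan", ["alice", "bob"])
print(read_message(msg, "alice"))  # "Q3 plan"
revoke(msg, "bob")
print(read_message(msg, "bob"))    # None: the single copy no longer answers to bob

The point of the sketch is that access, logging, and revocation all attach to the single copy of the data, not to whatever device happens to hold a pointer.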

Resilience in a system that is converged at this level comes from duplicative services being spawned and controlled in the same application space. Several instantiations of an application are running within the environment, but they all use the same data sources and capabilities. This means that processing tasks can be moved between nodes fairly quickly, irrespective of geography. There are issues. A model of converged architectures that is beyond even the cloud strategies of today requires mobile high-speed connectivity. It also requires a standardization of cloud protocols, a significant step we are already seeing in the consumer and production market of cloud tools. Finally, it requires a commitment by the service provider that is liability conscious and doesn’t blame the user for the provider's failures. Liability is slowing cloud adoption currently. In a fully converged architecture and post-convergence model, liability is a killer of innovation. The lack of taking responsibility is the issue, not hiding from the liability tiger itself.
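As a rough sketch with made-up node names: several instantiations of the same application draw on one shared data source, so work can shift between nodes without the data itself moving.

# Sketch: multiple instantiations of one application share a single data layer,
# so a task can be picked up by any node, in any location, without copying the data.
from queue import Queue

shared_data = {"doc-42": "single authoritative copy"}
tasks = Queue()
tasks.put("doc-42")

def worker(node_name):
    # Any node can service the task because all nodes see the same data source.
    doc_id = tasks.get()
    return f"{node_name} processed {doc_id}: {shared_data[doc_id]}"

# If "node-east" fails, "node-west" picks up the same work against the same data.
print(worker("node-west"))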

