Infrastructure's journey to invisibility

Good technology is technology that users don't even notice - and so, in a sense, good technology is invisible. As cloud computing and the compartmentalised world of containers flourish, how should we track our IT infrastructure's route to a comfortable level of invisibility?


The ultimate goal for any truly successful technology to achieve is a state of invisibility. This is the point at which it becomes subsumed into systems, platforms, services or devices as an integral component so that users (and other machines) can simply assume it exists - and so get on with their tasks in hand.

Nobody stops to question the speed of their machine's microprocessor anymore, few of us struggle for storage - after all, the cost of plugging in an extra terabyte isn't exorbitant - and even operating systems are now largely able to auto-update and extend themselves when and where needed.

Cloud computing was of course always intended to provide a core level of ubiquitous foundational power and - to be fair - it does in many areas, but the modern litany of use cases and riotous market of competing interconnected virtualisation services hasn't always made everything easier. Nobody really talks about plug-and-play cloud. Well, not yet anyway.

Cloud implementation, augmentation & defenestration

But stumbling blocks aside, cloud could become more of a utility (like gas or electricity) than some sort of finely tuned computing ability that needs specialist implementation, augmentation and occasional defenestration. The question is, on its journey to invisibility, how can cloud computing use its composable and flexible DNA to become a more unseen part of our IT infrastructures?

In general, software developers like using cloud services because they can help get things done faster and they don't have to think as much about infrastructure before a project starts. According to the magical soothsayers at analyst house Gartner, spend on application infrastructure services and products is anticipated to grow at 26.6% per year to more than $68.9 billion by 2022.

So IT infrastructure might never be truly sexy, but it is perhaps gaining more respect and consideration.

But as with all things, there are cloud trade-offs, pitfalls and costs. While some things get done faster, the same old problems around capacity planning still come up. It's not hard to get a cloud instance planning decision wrong, at least for longer-term use, if not for shorter-term use as well.

This viewpoint is provided by Patrick McFadin in his role as vice president of developer relations at cloud Database-as-a-Service (DBaaS) company DataStax. He contends that some cloud services create 'walled gardens' that can keep an organisation hemmed in with one provider.

“As a technology, cloud serves a great purpose, but Cloud Services Providers (CSPs) have an incentive to encourage you to use more of their services and resources. The more you use, the more you pay and the more you are committed. While developers and enterprise IT teams are comfortable to commit to services that meet their needs, they should also be fully cognizant of the costs and risks that exist when they don't have an escape route,” argues McFadin.

Much of the current 'hope' in this sector of technology is placed upon containers. The current darling of the cloud architecture space, a container is a logical computing environment in which a guest application runs abstracted away from the underlying host system's hardware and software infrastructure resources.

Because containers can help software developers and their operations counterparts to decouple application components from each other, these tech teams can start to think about cloud infrastructure in more invisible terms. At the coalface, this means software can be built from an upper-level Application Programming Interface (API) perspective, with connectivity and functionality coming to the fore, rather than having to be built with a sensitive appreciation for the infrastructure underneath.
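As a rough sketch of what that looks like in practice (the service URL and endpoint below are hypothetical), the application code simply calls an API and never needs to know which machine, container or region answers:

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical translation service endpoint - in a containerised deployment
# this name is resolved by the platform, not hard-wired to a particular host.
TRANSLATION_API = "https://api.example.com/v1/translate"

def translate(text: str, target_language: str) -> str:
    """Call the translation service; the caller never sees the infrastructure."""
    response = requests.post(
        TRANSLATION_API,
        json={"text": text, "target": target_language},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["translation"]

if __name__ == "__main__":
    print(translate("hello world", "fr"))
```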

Kubernetes clarity & complexity

Kubernetes is clearly part of this story. Initially developed and then open sourced by Google, Kubernetes separates workloads from infrastructure and handles the orchestration of the containers running on that infrastructure.
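To illustrate that separation, here is a minimal sketch using the official Kubernetes Python client: the same few lines list a cluster's workloads whether it runs on a laptop or across a cloud provider's data centres, because the infrastructure detail stays behind the API.

```python
from kubernetes import client, config  # official client: pip install kubernetes

# Load credentials from the local kubeconfig file (~/.kube/config).
config.load_kube_config()

core = client.CoreV1Api()

# List every pod in the cluster - this code neither knows nor cares what
# hardware or cloud region the pods are actually scheduled on.
for pod in core.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> node {pod.spec.node_name}")
```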

Bruno Andrade, CEO at Kubernetes management and operations portal company Shipa.io, says that there's an immediate advantage here in terms of finding application issues. When workloads and infrastructure are separated, it's a whole lot easier to pinpoint the root causes of application failures, data mismatches and so on.

But, says Andrade, there's no such thing as a free lunch. “As a generic platform, Kubernetes standardises methods for packaging, running and monitoring workloads, reducing pains across these processes. However, Kubernetes complexity introduces its own set of pain points, including a lack of visibility that hides security risks and makes maintenance and troubleshooting that much more difficult,” he said.

Both McFadin and Andrade agree that combining cloud, containers and Kubernetes can alleviate the problem developers face around lock-in to a specific cloud provider.

We can now at least be happy about the fact that applications running in containers as part of Kubernetes implementations can be managed as a discrete set, allowing us to create virtual datacentres that are completely portable, requiring only compute, network and storage from cloud vendors.

Is that the infrastructure invisibility dream made real? The answer is most likely to be: possibly, maybe, somewhat, with occasional exceptions. Regardless, the greater weight of industry momentum here is still focused on creating as much service layer invisibility as possible, providing a super-functional underlying substrate upon which we can all compute.

The state of stateful & stateless data

There will be more headaches to come. Even though these technologies make the application side of the equation much easier to manage in the cloud, the application is not the only element in the process.

“From a practical perspective, we still need a means of looking at how applications create and manage data,” explains DataStax's McFadin. His reference here relates to the difference between stateful data workloads that have identity relating to their state in time and space vs. stateless information workloads that don't - and so can be used in isolation from other systems.

“For example, when we talk about storing data within a database in the cloud, we're really talking about creating a 'stateful' workload that requires some kind of storage and uses a database model to organise that data,” said McFadin.

But, he says, in the world of cloud, stateful equals painful. Data has to be stored over time, it can be difficult to move for compliance reasons and the cost goes up as more data is created. Kubernetes was not originally designed for stateful workloads, so this has added more complexity.
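A toy contrast makes the distinction plain. In the sketch below (Python's built-in sqlite3 stands in for any database), the stateless function can run in any container anywhere because each call stands alone; the stateful function must write to durable storage - and that storage is precisely the part that has to live somewhere, be moved for compliance and grow in cost over time.

```python
import sqlite3  # standard-library database, standing in for any stateful store

def convert_currency(amount: float, rate: float) -> float:
    """Stateless: the answer depends only on the inputs, so any replica can serve it."""
    return round(amount * rate, 2)

def record_payment(db_path: str, account: str, amount: float) -> None:
    """Stateful: the write must outlive this call, so storage has to persist somewhere."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS payments (account TEXT, amount REAL)")
        conn.execute("INSERT INTO payments VALUES (?, ?)", (account, amount))

if __name__ == "__main__":
    print(convert_currency(100.0, 0.85))             # same result wherever it runs
    record_payment("payments.db", "acct-42", 85.0)   # the database file is the 'state'
```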

We can take some solace from the fact that cloud is offering increasing levels of invisibility every day. How often has the average user fired up a smartphone language phrase translator, currency calculator or an online banking application and stopped to think about the compute muscle actually at work at the back end?

While the holidaymaking smartphone user might stay oblivious when services work effectively, they will naturally start to curse these tools when they go wrong. What we need to worry about in tech circles is keeping stateless container-based workloads intelligently orchestrated to give applications their invisible cloak of 'it just works' usefulness.