Escaping operational black holes with unified ‘full-fidelity’ observability

As tech leaders look to gain deep, granular management control across their IT estates, there is a reasonable (if not compelling) argument for questioning the form, focus and fidelity of our observability viewpoint. The alternative may be a journey down an operational black hole – a fairly suffocating experience for everyone involved.


No enterprise IT system implementation exists in a vacuum. By their very nature, organisations need to build, manage and manipulate a corporate software services layer that is characterised by its ability to compute, interconnect and deliver.

That last word is important. Delivery is the mechanism by which users get results at the upper tier but, more fundamentally, all enterprise software systems need to deliver observability in the first instance; otherwise, they risk existing in some sort of operational black hole, or disconnected vacuum.

Distributed complex black holes

A lot of what gets written today about observability is in the context of the challenges that DevOps and Site Reliability Engineering (SRE) teams face in cloud-native environments, which are highly distributed and complicated.

In those environments, identifying and resolving system issues is tough, and some IT industry vendors and commentators are calling that ‘looking inside’ process observability. But according to Mike Marks, vice president of product marketing at network visibility and end user experience management company Riverbed, that’s really an extension of Application Performance Monitoring (APM).
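To make the ‘looking inside’ idea concrete: observability in this sense typically means instrumenting application code to emit telemetry (traces, metrics, logs) that can answer questions after the fact, rather than polling a fixed set of health checks. Below is a minimal illustrative sketch using the open-source OpenTelemetry Python SDK; the service name, span names and attributes are hypothetical examples, not a description of Riverbed’s product or Marks’ approach.

    # Minimal instrumentation sketch with the OpenTelemetry Python SDK
    # (pip install opentelemetry-sdk). Names and attributes are hypothetical.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Wire spans to a console exporter for demonstration; a real deployment
    # would export to a collector or observability backend instead.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")  # hypothetical service name

    def process_order(order_id: str) -> None:
        # Each unit of work becomes a span carrying searchable attributes,
        # so an engineer can later ask "what happened to order 1234?"
        # without having predicted that question in advance.
        with tracer.start_as_current_span("process_order") as span:
            span.set_attribute("order.id", order_id)
            # ... business logic would run here ...

    process_order("1234")

The design point is that context travels with the telemetry, so teams can interrogate the system about failures nobody anticipated when the dashboards were built – the distinction commentators draw between observability and traditional monitoring.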

Looking at the modern observability challenge facing customers today, Marks suggests that cloud-native infrastructures aren’t the only highly distributed environments challenging IT’s ability to manage.
