Why CIOs need a complexity meter

With an existing software application and data services stack now being inter-married to new cloud services and a plethora of even newer low-code and no-code software functionalities, the C-suite IT team needs to formalise its measure of code complexity if it is to avoid technical debt and system disruptions in the future. Sourcery founder Tim Gilboy takes aim at IT technical debt.


Modern IT stacks are complex. Despite the increasing amount of Artificial Intelligence (AI)-based autonomous and abstracted automation being applied to the way enterprise IT is managed, the genome of the systems we are constructing beneath is increasingly complex.

Because our existing enterprise software application and data services stacks are now being inter-married with new cloud services and a plethora of API interconnections and functionalities, complexity at the code level itself is inevitably increasing.

With complexity comes computing power, obviously. But the greater entanglement of ever more software code also increases the risk of creating technical debt.

Not a financial term per se (although it creates and incurs a direct cost in time-hours and resources), technical debt is of course the ‘repayment’ the IT team needs to expend to debug and fix software that has been created over-quickly, without appropriate consideration for integration, or without enough foresight to plan for longer-term scale and so on.

To balance its books going forwards, the C-suite IT team needs to formalise its measure of code complexity if it is to avoid technical debt and system disruptions in the future.

Built, bought, bolstered & butchered

The challenge here comes down to whether an organisation can achieve codebase insight. Or, to put it more directly, it comes down to whether the business is able to read, understand, interpret, contextualise and rationalise the software code that it has built, bought, bolstered and very possibly butchered into some forked or skewed customisation designed to suit a particular business use case.

What our list of b-words comes down to is briskness: software teams are under pressure and often push code into live production too quickly, with more velocity than is prudent if code complexity is to be avoided. Tim Gilboy, founder of real-time software code refactoring company Sourcery, says that despite the clear link between software maintainability and development velocity, many teams do not proactively measure complexity.

Thankfully, there are two relatively straightforward metrics for complexity that can be used to identify where code may be becoming overly complex: Cyclomatic complexity and Cognitive complexity.

An eye on Cyclomatic complexity

Clarifying the difference between these two yardsticks, Gilboy explains that Cyclomatic complexity is a measure of the complexity of software in terms of the minimum number of tests needed for complete test coverage. In the context of this discussion, it gives us a good proxy for how hard code is to understand and, ultimately, to fix and manage.

It was originally proposed in 1976 by Thomas McCabe as a way to figure out which software would be difficult to test or maintain. It is calculated by looking at the number of nodes (processing tasks within a piece of software) and edges (paths that connect those processing tasks).
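
For a single, structured function the score works out as one plus the number of decision points, which follows from McCabe’s formula M = E − N + 2P (edges minus nodes, plus twice the number of connected components in the control-flow graph). A minimal, hypothetical Python sketch (the function and its names are illustrative, not drawn from Sourcery or any real codebase):

```python
# Hypothetical example: one base path plus two decision points
# gives a cyclomatic complexity of 3.
def shipping_cost(weight_kg: float, express: bool) -> float:
    cost = 5.0
    if weight_kg > 10:   # decision point 1
        cost += 2.5
    if express:          # decision point 2
        cost *= 2
    return cost
```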

“As Cyclomatic complexity was originally developed to benchmark the difficulty in testing and maintaining software, it is more tailored towards these ‘machine-level interpretability’ characteristics, instead of human readability. While minimal test coverage is important, readability is a significantly bigger factor for software development velocity - so we need a better way to understand where complexity will slow down development speed,” explained Gilboy.

The way Cyclomatic complexity is calculated can lead to multiple methods having identical Cyclomatic complexity scores, even if one is much easier for a human developer to understand.
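
A hypothetical illustration of that blind spot (both functions below are invented for this article): each contains three branch points, so they score identically on the McCabe measure, yet the nested version is clearly harder for a human to follow and would score worse on a readability-oriented metric.

```python
def discount_flat(age: int, is_member: bool, total: float) -> float:
    """Three early-return decisions; cyclomatic complexity 4, easy to read."""
    if total <= 0:
        return 0.0
    if age >= 65:
        return total * 0.8
    if is_member:
        return total * 0.9
    return total


def discount_nested(age: int, is_member: bool, total: float) -> float:
    """Also cyclomatic complexity 4, but the nesting obscures the logic."""
    if total > 0:
        if age >= 65:
            return total * 0.8
        else:
            if is_member:
                return total * 0.9
            else:
                return total
    else:
        return 0.0
```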

Gilboy speaks from experience gained developing his own firm’s AI-centric approach to software code refactoring and management. In doing so, he has spent extensive research hours understanding how others have advanced knowledge in this field for industry practitioners to adopt. As such, the Sourcery team also makes sure it considers Cognitive complexity.

Understanding Cognitive complexity

To counter the limitations of Cyclomatic complexity (and a handful of issues it has with modern software languages), Geneva, Switzerland-based organic code remediation company Sonar developed another complexity metric, known as Cognitive complexity.

“Cognitive complexity is built on a series of rules that penalise code structures that make it harder for a human to read (breaks in the linear structure, nesting etc.), but don’t penalise shorthands that help to make the code easier to understand. While it’s not perfect, these rules provide a metric that gives developers insight into the readability of code and therefore can estimate the impact that complex code will have on velocity,” explained Gilboy.
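
As a hedged sketch of how those rules play out (the increments annotated below follow my reading of Sonar’s published Cognitive complexity specification, applied to a made-up function): each break in linear flow adds one, every additional level of nesting adds one more, and a readable shorthand such as dict.get() adds nothing.

```python
def paid_item_totals(orders: list[dict]) -> dict:
    """Made-up example, annotated with approximate Cognitive complexity increments."""
    totals: dict[str, int] = {}
    for order in orders:                        # +1 (break in linear flow)
        if order.get("status") == "paid":       # +2 (+1 for the if, +1 for nesting)
            for item in order["items"]:         # +3 (+1 for the loop, +2 for nesting)
                sku = item["sku"]
                totals[sku] = totals.get(sku, 0) + item["qty"]   # shorthand: +0
    return totals                               # approximate total: 6
```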

When code becomes too complex it slows down future development, and developers need to refactor their code to get it back into a manageable state. For large projects this can be a significant undertaking, taking weeks or months to complete. However, once finished, the benefits to future development are almost immediately tangible and indeed visible.

Three core complexity controls

In an ideal world developers should be looking to proactively prevent overly complex code from becoming an issue in their codebase in the first place. Sourcery offers these three cornerstones of good practice to combat complexity at its core.

  • Robust review - A robust code review process enables team members to flag complex code and push for fixes before it’s merged, helping to prevent complexity from building up.
  • Automation advantage - Use automated code quality tools to set quality thresholds. There are a number of tools IT teams can use to automatically measure the complexity of code and, by setting a threshold, warn the team when quality falls too low (a minimal sketch follows this list).
  • Review & refactor - IT teams should periodically review and refactor existing code. Even taking a proactive approach, complexity will build up over time. By setting aside time to deliberately review and refactor code, teams can improve quality issues without needing to launch a major refactoring effort.

“It can be easy to focus on the short-term wins of continually adding in new functionality and building new features, but ignoring the build-up of technical debt can have major long term costs for your team. Focusing on proactively reducing complexity issues in your codebase can help your team continue to move quickly and not become bogged down by unmanageable code,” concluded Sourcery’s Gilboy.

It’s often tough to figure out how technology companies settle on their chosen trading name; it can just be some obscure reference to the founder’s favourite dog, sandwich, or brand of beer.

There’s no such confusion with Sourcery, for obvious reasons; the company’s technology stems from ‘magical’ controls designed to be used by code sorcerers to control their software source code and keep complexity out of the equation.

As any good sorcerer knows, double toil only leads to boil and trouble.