Making Computers Work Smarter, Not Just Faster

Can we make software run faster without adding extra hardware? Yes, but it's not easy – or cheap

Sitting behind me, fresh from a thorough dismantling, cleaning and restoration, is a Panasonic Senior Partner 'portable' XT PC from 1983. It runs an Intel 8088 processor at 4.77MHz and has a built-in 9-inch monochrome CGA screen. It borrows technology from the PC that IBM launched a couple of years earlier. And it's slow. Very, very slow.

A quarter of a century after IBM's first XT class machines were launched, an ingenious coder worked out how to play full-motion video – with sound – on just such a PC as this. You can watch the results here. The achievement is remarkable in many ways, some of them too nerdy to contemplate here, but the take-home message for non-coders is this: today's software is not making the most of the hardware it runs on.

Coding is always a compromise, especially when it's done for money. Programmers are expected to produce efficient code with maximum features in minimum time. Something has to give, and it does. Since time is money, and marketing thrives on features, what gives is efficiency. But half the time the programmers won't even know it.

The lowest level of computer programming language is assembly language, sometimes called assembler or assembly code. To an outsider it's arcane and impenetrable, which is why we have higher-level languages that let programmers write code in something approximating their native tongue. Then a piece of software called a compiler translates the high-level code into the machine instructions the computer can actually run – instructions whose human-readable form is assembly code.

Assembly code varies from one type of computer to another, so anyone compiling software has to take the target platform into account. These days that means software must be compiled for a variety of different processors, chipsets, GPUs and memory types. Or rather, it should be, for maximum efficiency, but that's often impractical. So a generic binary will be compiled that will run on all modern x86 chips, for example, but might not take full advantage of specific features, such as co-opting a particular type of GPU to help with the workload.

To make matters worse, compiling is not an exact science, so compilers vary in their efficiency. Imagine trying to translate a piece of literature from Mandarin Chinese into English, but taking into account the characteristics of the English speaker's larynx, throat, lungs, teeth and tongue, to create the most efficient spoken prose. What approach would you take? A literal translation first, then clean it up later? Optimise the Chinese first, then translate? Attempt to comprehend the meaning of the Chinese and then replicate it in English?

In many circumstances it's more efficient to write from scratch in the target language, and that's what was done for the XT video demo: it was written in assembly code. When it comes down to getting computers to do what you want, coding in assembler versus a higher-level language is the difference between persuasion and shouting: it's more difficult but more effective.


There are plenty of examples of assembly-coded software available, all of them fast and most of them small, because programming big projects in assembler requires serious brainpower and organisation. There are some larger projects, though. The Kolibri operating system uses about 0.1% of the resources of an OS such as Windows because it was written from scratch in assembly code. It's incredibly fast but requires a touch of genius to program, whereas anyone with an ounce of programming knowledge and an IDE can knock out a Windows dialogue box.

And that's always the trade-off. Assembly code is fast, but it costs brainpower to produce, so it's more expensive to write software this way. And anyway, who cares? If your software's slow, just throw another CPU core at it, or add some more RAM. It's not like processing speeds are slowing down, is it?

Whether they are or not, this approach might not make sense forever. Moore's Law might keep us ahead of the game for a while longer, but throwing extra hardware at inefficient code wastes energy and resources, and that waste may become more of an issue over time if processing power becomes subject to diminishing returns.

On a desktop or laptop machine, nobody really cares if the hourglass appears for an extra few seconds. But in a warehouse full of data servers, all throwing out massive amounts of heat and consuming megawatts of power, efficient coding suddenly starts to matter.

To come back to the Chinese-English analogy, the process could be considerably improved if the Chinese speaker knew a little English, to help optimise the translation. So should today's programmers learn assembly language to help them become more efficient at their work? Possibly.

If you're buying software for your business there's not a lot you can do to improve its efficiency, except to make sure the hardware you run it on is as closely matched as possible to the manufacturer's compiler target specifications.

But if you're running development projects in-house, or if you're a programmer yourself, you could do worse than learn some assembly language. It might not help you write more efficient code straight away, but it could help you identify compiled code that's sub-optimal. That in turn could help make your applications run faster without having to throw ever more powerful and expensive hardware at each new problem.


Freelance technology journalist Alex Cruickshank grew up in England and emigrated to New Zealand several years ago, where he runs his own writing business, Ministry of Prose.