
Rant: The computer lords it over all of us

Isaac Asimov's Foundation series of novels explored the influence of predictive mathematics on the societies of the future. He used the term 'psychohistory' to describe mathematical principles that could identify broad future outcomes, though not at the level of individuals. He also, in the short story 'Franchise', described a society where a computer identified the single person whose vote accurately represented everyone else's – and only that person was allowed to vote.

That's not likely to happen any time soon, but it's interesting to see how quickly algorithms are taking a defining role in society. And in some ways they are more powerful than Asimov's psychohistory, because they are often applied to individuals.

For example, dating apps use increasingly complex algorithms to make matches between people. Statistical analysis of personal data is used to calculate our most likely emotional/sexual matches. Or so they tell us, anyway: for all we know it's just a lucky dip.

But regardless of the underlying method, it has an effect. If an algorithm tells two people that they are each other's perfect match, there's a good chance they'll believe it, and will unconsciously make more effort to work through any initial problems in their relationship. So algorithmic dating apps aren't so much predicting compatibility as creating it. They are helping to shape the generations of the future by deciding who breeds with whom.

Social media networks can mine swathes of data to determine whether a user is pregnant before she's told anyone else – and perhaps before she even knows it herself. In fact, behavioural data mining can predict individuals' future actions with a reasonably high chance of success, whether they're about to announce an engagement or have an affair. It's not quite Minority Report, but it's not inconceivably far off.

Loan applications are decided by data-crunching machines; CVs/resumes are increasingly screened by computers in the initial stages; job suitability is sometimes assessed by algorithmic psychological evaluations gleaned from social media posts. In fact, all of us are increasingly governed by statistics extracted from Big Data.

That's because it works. Whatever we may think, whatever we may feel, our actions and behaviour are largely predictable. We do have autonomy, but it's framed by our environment. With a sufficiently large data set of other people in similar environments, it's possible to determine all sorts of things about an individual with a fair degree of certainty.
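To see the principle in miniature, here's a crude nearest-neighbour sketch – my own toy example with invented data, not any real system's method: if enough people who resemble you did something, the algorithm bets that you will too.

```python
# Toy sketch of behavioural prediction (all data invented for illustration):
# predict something about one person from the outcomes of similar people.
import numpy as np

# Each row describes a person: [age, hours online per day, purchases per month].
others = np.array([
    [25, 6.0, 12], [27, 5.5, 10], [26, 6.2, 11],
    [52, 1.5,  2], [49, 2.0,  3], [55, 1.0,  1],
], dtype=float)
# What we know about those people (1 = did the thing, 0 = didn't).
outcomes = np.array([1, 1, 1, 0, 0, 0])

def predict(person, k=3):
    """Average the outcomes of the k most similar people."""
    distances = np.linalg.norm(others - person, axis=1)
    nearest = np.argsort(distances)[:k]
    return outcomes[nearest].mean()  # a crude probability estimate

# Someone who looks like the first group gets a prediction near 1.0.
print(predict(np.array([26.0, 5.8, 11.0])))
```

Real systems use vastly more data and far more sophisticated models, but the bet is the same kind of bet: similar people, similar outcomes, a fair degree of certainty.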

But not absolute certainty. So there are two big problems here. First, what happens when the computer is wrong? Second, what happens once we believe the computer can't be wrong – and then it is?

One of the central tenets of computer use for many years was GIGO: garbage in, garbage out. Feed inaccurate data into even a bug-free program and you'll get a meaningless, wrong result. But many of today's algorithms are based on artificial neural networks, which adapt their weightings independently and exhibit their own learning behaviour. How do you bug-check those?
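Here's what that looks like in miniature – again a toy example of my own, not production code: a tiny network teaches itself XOR by adjusting its own weights. Once trained, its 'logic' is nothing but arrays of floating-point numbers produced by the data, with no human-written rule anywhere to inspect.

```python
# A toy two-layer neural network that learns XOR by adjusting its own weights.
import numpy as np

rng = np.random.default_rng(42)

# Training data: the XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random starting weights and biases for a 2-4-1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the network adjusts its own weightings from the data.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

# Predictions should settle near [0, 1, 1, 0] (results vary with the seed).
print(out.ravel().round(2))
# And this is the 'program' that produces them: just learned numbers.
print(W1)
print(W2)
```

Nothing in those weight matrices corresponds to a rule anyone wrote, so there is no statement to point at and debug.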

As with some recent computer-assisted mathematical proofs, the workings are too big to fit inside our heads. We can't check these systems, so we have to take it for granted that the computer is correct. Not even their makers can check them: they can verify the initial assumptions and the code, but these algorithms are as much a product of their data as of their code. And nobody can check the validity of the data – if a human could evaluate it all, there would be no need to build the algorithm in the first place. We just have to assume that it's right.

And that's what I'm ranting about: the acceptance of algorithms as oracles. The idea that humans must meekly accept the pronouncements of data-mining algorithms as true, accurate and good. No doubt they often are. But sometimes they aren't, and it's becoming increasingly difficult to challenge the output of algorithm engines.

The move towards acceptance of algorithmic predictions also focuses us on what is, as opposed to what might be. That reduces our freedom of action, through discouragement and the elimination of chance. If an algorithm told you that you had a 14% probability of succeeding in a job application, would you bother to apply? If you were given a 96% likelihood of dying from cancer, would you fight it?

Today, perhaps you would, because the estimate might be wrong. But once algorithms become gods and are perceived to be infallible, maybe you would simply accept your fate. Unpredictability would be reduced, as would spontaneity, creativity and probably optimism too. Belief changes behaviour, whether it's belief in a god or belief in the truth of numbers.

I had a discussion recently about leadership. Not at a corporate level, but at a social level: the people or entities who create the rules that society follows. Religion, royalty, governments... what would come next, we wondered?

It seems quite possible that the next leaders won't have physical form at all. They will be algorithmic entities: ineffable, infallible and omnipresent. Sounds familiar, doesn't it?

 

See also: Looking beyond Big Data: Are we approaching the death of hypocrisy?

Alex Cruickshank

Alex Cruickshank has been writing about technology and business since 1994. He has lived in various far-flung places around the world and is now based in Berlin.  

