Business Process Automation

Will AI-based automation see humanity reach its own 'peak horse' moment soon?

What do automation and Artificial Intelligence mean for the future of humanity? Will we all eventually live in a post-work society where AI guides how we live? This was the topic of discussion at a BetterFutures event in London this week.


Peak horse – peak human?

At their peak, the population of domesticated horses in the US was around 25 million. The rise of cars reduced this to around 4 million. This happened, argues Calum Chace, author of Surviving AI: The promise and peril of artificial intelligence, and The Economic Singularity: Artificial intelligence and the death of capitalism, because horses had nothing more to offer than their muscle, which was quickly surpassed by machines.

Humans face a similar problem. It’s only in the most doomsday scenario that AI decides the greater good is served by fewer humans, but humanity does face a future where machines can replace not only our muscle but also our brains.

Depending on which study you reference, estimates of job losses caused by automation range from a fairly low 5% to almost 50% of the workforce.

AI, Chace says, is already “our most powerful technology”. The ‘Big Bang’ of AI in all forms – Machine & Deep Learning – has happened, and was brought to the fore by the ‘Three Wise Men’ of Stephen Hawking, Elon Musk, and Mark Zuckerberg. But it’s not the idea of Terminators and Skynet that should worry us.

Automation is coming for both high- and low-skilled jobs – Chace labels AI ‘collar-blind’ in this regard – and we can’t rely on the notion of a ‘magic job drawer’. The idea that those replaced by machines will end up in entirely new types of work is unlikely – Chace labels proponents of this idea “reverse Luddites with their heads in the sand” – because in previous generations displaced workers mostly moved into new openings in existing jobs, with few entirely new fields of work instantly springing up as replacements.

Even if machines can’t do certain tasks now, the relentless improvement of technology means they soon will be able to. In recent years computers have become as good as humans at tasks such as image and speech recognition, and are now being considered for far more nuanced tasks such as lie detection.

Personal human relationships will be even more important in the future as we move to more automation, says Futurist Ray Hammond. “If your job involves you smiling at someone, your job is safe.”


Dystopia or utopia

Of the four potential scenarios he sees playing out, Chace says the idea that nothing will change is almost impossible, while the emergence of whole new types of work is also unlikely. The two most likely scenarios, he says, are either a total dystopia – fuelled in part by a “very destructive” panic amongst workers in 10 years unless we start making plans now – or a ‘Star Trek’-esque utopia, in which AI helps humanity reach a post-work, post-war, happy and healthy state.

However, the dangers of a utopia were also highlighted. Even if we do reach some sort of egalitarian society, there is a danger it could become two-tier: the non-working masses would have a decent way of life, but a static one with almost zero social mobility, while the AI owners would be far richer and separated from the rest of society.

Benedict Dellot, Associate Director, Economy, Enterprise and Manufacturing at the Royal Society of Arts, says that in the short to medium term we need not panic, but should start to look at these issues. Automation often targets certain tasks, not entire jobs, and so there shouldn’t be a huge reduction in jobs in the short to medium term. Many businesses, he says, are still a while off from really embracing automation for a number of reasons: customers might still want human interaction, regulation may be a barrier, and integrating automation into existing systems and businesses is often easier said than done.

Dellot also warned of the danger of “algorithmic overreach”, where AIs decide what is good for society on our behalf; something which could lead to a loss of self-determination and agency. Chace, meanwhile, says that humans are never able to decide what the ‘greater good’ is, but super-intelligent machines would be able to work that out for us and help us get there. He also argued Marxists have to get over their notions about the ‘nobility of work’ and accept that many people would be perfectly happy never to have jobs.


What to do

There’s an overarching feeling that this change can’t be stopped and that it’s the companies who hold the power. Funding for these technologies comes almost entirely from the private sector in pursuit of profit, and in an age where data is key, those same companies control the data. Dellot called for more funding from the public sector, especially in areas where there is little to no profit to be made, such as social care.

None of the speakers believe Universal Basic Income – the idea that everyone is given an unconditional monthly stipend – is the answer. Chace argued that UBI’s main drawback is the ‘basic’ part: the stipend tends to be either too low for people to live on or too costly for governments to pay long term. Dellot claimed UBI – something the RSA is in favour of – could be part of a solution but is simply “mopping up” the issue after it arrives, and needs to be part of a wider set of solutions. Taxing robots – an idea floated by Bill Gates earlier in the year – was also disregarded as a killer of innovation.

Tighter regulation was also downplayed as a solution. Aside from the fact that many policy-makers are guilty of short-term thinking and so often ignore AI as a future problem, politicians often have very different views on technology to the companies pushing it, which makes things difficult. Nick Forrester, Partner at Hymans Robertson, warned that the current stance of some politicians on encryption, for example, “doesn’t bode well for AI.”

Another policy issue is the problem of cooperation. Unless everyone universally agrees on how to govern the issue, there can be no order. One idea touted is that governments should approach AI in the same multilateral (certain heads of state excluded) way that they approach Climate Change, with a Paris-like agreement between nations.

While long-term answers to these difficult questions were hard to come by, in the short term there are things that could be done. Lifelong learning was highlighted – Aviva’s retraining of workers who said their jobs could be automated was cited as a good example – as was creating ethical frameworks for how to approach AI and its potential impact, and shifting taxes away from income and towards capital.




Dan Swinhoe

Dan is a journalist at CSO Online. Previously he was Senior Staff Writer at IDG Connect.


