Tech Cynic - Would you trust AI with your life?

Watch out for the long stable horse mackerel

The world of IT can seem cult-like at times. Not so much the actual technological part of that world, because creating hardware and software demands pragmatism and a focus on what is and isn't achievable (at least, so one would hope), but the marketing and analysis side, which is more often driven by desperate futuristic optimism than by any connection with reality.

By now, most observers know to take Elon Musk's predictions for the future of autonomous vehicles with more than a pinch of salt. Self-driving taxis next year, cars without steering wheels or pedals the year after - really? Maybe such vehicles will be available within that time-frame, but the chances of their being in widespread everyday use on normal roads, alongside human drivers, in that period are slim to zero. The AI just isn't good enough. It may never be good enough.

That in itself is a bold statement, but I think it's a fair one. I interact with machine-learning AI on a daily basis, due to living in a country in which I don't have a comfortable grasp of the local language. As a result, I find myself at the mercy of online language translation tools, of which I've tried many.

Once you get over the initial 'talking dog' amazement that they work at all, realisation quickly dawns that they don't actually work very well. Over the past couple of years, I've seen output that is wrong in nuance, tone, meaning and fact, sometimes diametrically opposed to the true meaning.

In terms of ability, no translation software I've tried has come close to mimicking a moderately competent bilingual human. My young daughters, who have been learning German for just 18 months, are already far superior to the vast machine-learning server farms owned by Google and others that have been trying to 'learn' it for years. Take one recent example: a school announcement email, written by a native German speaker, run through a translation tool.

There's a subtle error (I believe) in the source German, but any human would spot that the translated output is clearly nonsense. AI has no ability to determine what's right and what's wrong. It can only follow the rules it has built up through its learning process, relying on humans to tell it whether it's on the right track. Eventually, so the logic goes, enough feedback will correct all faults and the software will be perfect.
