The problem with AI is you can't get it to talk and share its secrets.
What is it good for, and which areas should algorithms stop meddling in? We need to bust this case wide open, but it's grown too big for humans to investigate on their own. They'd have to break the back of the problem using machine learning, neural networks and deep learning, then augment the findings with some human judgement. Don't worry about the bias involved when AI investigates itself: these algorithms don't have feelings. At least, that's what people say.
The word on the Net is that algorithms are great for tracking the vast cultural deserts of information but not the lively, diverse ecosystems of context. Data is inert; context is alive. Where there's life, there's nuance, because another set of codes is influencing behaviour, in the shape of DNA, instinct and species memory – ingrained impulses nobody fully understands. Yet AI has identified that humans respond to the rule of three, so this investigation picks a top trio of algorithm users to study: medicine (including pharmacology), the military and retail.
"The billions already invested in AI have mostly come from medical and military domains," says Mark Croxton, VP of customer support at Symphony RetailAI. The resulting algorithms, generally available as freeware, have been adopted by the retail sector.
Retail, the world's second oldest profession, has centuries of folk wisdom about human psychology. If 'retail is detail', as the industry white papers say, then AI can only be applied to retail's boring databases.
AI is good for picking and packing, bad for people.
WareBots are great for discrete problems, like automating warehouse picks, cleaning floors and busting image-driven fraud at checkouts. But few could walk the full length of the store, says Paul Boyle, CEO at Retail Insight.
"Human variables are a detail too far for robots to analyse," says Boyle. "Retail is affected by constant change and chaos at macro and micro level."
All shop customers cause the type of chaos that would bamboozle an algorithm. "They come at the wrong times, distract staff and disrupt the store. They move items between shelves and across departments, and then move the signage," Boyle says. This behaviour sends out so many complex messages that AI can capture only some of the signals, some of the time, with some of the data.
What if we got tough with this AI investigation and called in Detectives Machine Learning (ML) and Federated Learning (FL) to play Good Bot/Bad Bot with the data? Their current conclusion is that humans are running the retail business and AI is only good for assistance.
Medicine is the next sector heavily invested in AI. Subhankar Pal, AVP of technology and innovation at Capgemini Engineering, sees similarities between the mysteries of medicine and AI. In both cases, the punter doesn't get to see the workings and becomes very suspicious. He calls this 'The Black Box effect'.
One of AI's biggest hurdles is public trust and acceptance, and since it won't show us how it reaches its conclusions, nobody trusts it. The black boxes in question are artificial neural networks and deep learning: hidden layers of nodes process the input, but we can't see what they've learned. AI's programmers must remove these layers of obfuscation – and surely, if anyone can spot obfuscation, they can.
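Those hidden layers can be shown in a few lines. A toy forward pass – weights and inputs invented purely for illustration – produces intermediate numbers that are perfectly computable but carry no human-readable meaning. That is the black box:

```python
import math

def layer(inputs, weights, biases):
    """A dense layer: weighted sum per node, squashed through a sigmoid."""
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

shopper = [0.7, 0.1]  # two made-up input features
hidden = layer(shopper, [[1.5, -2.0], [-0.5, 0.8]], [0.1, -0.3])
output = layer(hidden, [[2.0, -1.0]], [0.0])
print(hidden)  # opaque intermediate values – what did the network 'learn'?
print(output)
```

The network happily turns inputs into an answer, but nothing in `hidden` tells a human *why* – which is exactly the obfuscation the programmers are being asked to strip away.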
"The computing power is unconstrained but emotional intelligence is in short supply," says Pal. The AI Doc would sail through exams but has a terrible bedside manner. Although that's often said about very bright medical students.
AI is great at crunching through data points about virus shapes, protein forms and drugs to create vaccines in record time. Now it needs to stop putting patients off their medicine.
Doc Bots are trying, says a study – and it's a work in progress, because when Doc Bots try to be humane, it goes wrong, says a team at the University of California. They are at that stage of learning empathy where they can do nothing right. The study found that patients want to be on first-name terms with their human doctors, but that machines are less effective at giving health advice when the Doc Bot knows the patient's name and medical history.
One of the study group's complaints about human doctors is that they rarely know the full medical history of the patient in front of them. Patients hate talking to the back of a terminal as the doctor catches up on their notes. A Doc Bot, on the other hand, can provide the continuity they crave, using the AI discipline of federated learning. However, the study of 295 patients found that an automated voice that knows all about their operations and bowel movements simultaneously pleases and horrifies them.
As a result, they're less likely to heed the Doc Bot's medical advice. Machines walk a fine line in serving as doctors, said study leader Shyam Sundar, an affiliate of Penn State's Institute for Computational and Data Sciences (ICDS).
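The federated learning behind that continuity can be sketched minimally: each site trains on its own private records, and only model weights – never the records themselves – travel to a server that averages them. The two 'clinics', their data and the one-weight model below are all invented for illustration:

```python
def local_step(weights, data, lr=0.1):
    """One gradient step of a toy 1-D linear model y = w*x on local data only."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def federated_average(weight_sets):
    """Server step: average each weight across all participating sites."""
    return [sum(ws) / len(weight_sets) for ws in zip(*weight_sets)]

# Two 'clinics' whose private data both follow y = 2x; records never leave home
clinic_a = [(1.0, 2.0), (2.0, 4.0)]
clinic_b = [(3.0, 6.0), (4.0, 8.0)]

weights = [0.0]
for _ in range(50):
    updates = [local_step(weights, clinic_a), local_step(weights, clinic_b)]
    weights = federated_average(updates)
print(round(weights[0], 2))  # converges towards 2.0 – learned without pooling data
```

The server never sees a single patient record, only the averaged weights – which is how a Doc Bot can know everyone's history without any one database holding it.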
The last barrier to an AI takeover is domain knowledge and even that is receding, says a study by The Korea Institute of Civil Engineering and Building Technology (KICT).
Bridge building is an essential skill in the logistics corps of any military force. However, estimating structural integrity from variables such as stay-cable tension force and damping ratio was done better by a machine, despite the superior 'domain knowledge' of experienced civil engineers.
A research team at KICT, led by Dr Seung-Seop Jin, examined the 'peak picking' method of estimating tension forces and damping ratios, comparing how humans and machines use the 'vibration method' to calibrate the materials involved.
Humans are vulnerable to operator bias, mistakes and fatigue. Yet the existing methods still require human input to set predefined amplitudes, frequency intervals and the training process.
So humans are better at this job? Not necessarily. Their 'domain knowledge' can be simulated.
"We can find suitable methods from other disciplines. For example, our heartbeat makes periodic peaks in electrocardiogram (ECG) signals and we can compute heart rate by counting the periodic peaks in real time," says Dr Seung-Seop Jin.
It turns out that, like the algorithm, the human doctor has transferable skills. The biomedical discipline of ECG interpretation can be used for analysing periodic peaks in engineering and logistics materials. "We can adopt one of the methods from this discipline to exploit the periodic characteristics of the modal frequencies," says Dr Jin.
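Dr Jin's borrowed heartbeat trick fits in a few lines: count the periodic peaks in a signal and convert the count into a rate, whether the trace is an ECG or a vibrating stay-cable. The signal, sample rate and threshold below are invented for illustration:

```python
import math

def count_peaks(signal, threshold):
    """Count local maxima that rise above the threshold."""
    peaks = 0
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] > signal[i - 1]
                and signal[i] >= signal[i + 1]):
            peaks += 1
    return peaks

def rate_per_minute(signal, sample_rate_hz, threshold=0.5):
    """Turn a peak count into a beats-per-minute style rate."""
    duration_s = len(signal) / sample_rate_hz
    return count_peaks(signal, threshold) * 60.0 / duration_s

# Ten seconds of a synthetic 1.2 Hz oscillation, sampled at 100 Hz
sr = 100
sig = [math.sin(2 * math.pi * 1.2 * t / sr) for t in range(10 * sr)]
print(rate_per_minute(sig, sr))  # 72.0 'beats' per minute (1.2 Hz x 60)
```

Swap the sine wave for a cable's vibration trace and the same counter reports its modal frequency – the transferable skill in executable form.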
So, if an algorithm or doctor can read ECG, it can use that skill in civil engineering too.
Well, good for them.
Where does that leave the rest of us? Before we form a Lunchtime Luddite gang and smash up the IT department, there is a message of hope from Ben Cox, Head of Responsible AI at H2O.ai.
The next step in the evolution of the enterprise is Explainable AI, says Cox. "It does not increase the costs or limit possibilities of machine learning, it simply enables companies to be more in touch with how trustable their model is, ultimately driving value and accuracy."
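Cox doesn't spell out a method, but one common explainable-AI technique illustrates the idea: permutation importance. Scramble one feature and measure how far the model's accuracy falls; a big drop means the model leans on that feature. The toy model and data below are invented, and a deterministic column reversal stands in for the usual random shuffle so the example is reproducible:

```python
def accuracy(model, rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(model, rows, labels, feature_idx):
    """Accuracy drop when one feature's column is scrambled (here: reversed)."""
    base = accuracy(model, rows, labels)
    scrambled = [row[feature_idx] for row in rows][::-1]
    perturbed = [row[:feature_idx] + (v,) + row[feature_idx + 1:]
                 for row, v in zip(rows, scrambled)]
    return base - accuracy(model, perturbed, labels)

# Toy model: predicts 1 when feature 0 exceeds 0.5; feature 1 is pure noise
model = lambda row: int(row[0] > 0.5)
rows = [(0.1, 9), (0.9, 2), (0.2, 7), (0.8, 1)]
labels = [0, 1, 0, 1]
print(permutation_importance(model, rows, labels, 0))  # 1.0 – model depends on it
print(permutation_importance(model, rows, labels, 1))  # 0.0 – noise, safely ignored
```

A company that runs this over every feature gets a plain-language answer to 'why did the model say that?' – exactly the trust Cox is selling, with no extra modelling cost.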