Experts on hype, past, present, future and ethics of AI

Artificial intelligence as a term dates back at least to the 1950s, when it was coined to denote a computer’s ability to learn; as a theoretical concept or artistic theme it goes back further. Today it is very much back in vogue, in part because established algorithms are being applied to automate increasingly popular projects (autonomous vehicles and the Internet of Things among them), and in part because new algorithms are being created and applied in novel ways.

But like any popular meme, AI has also been hijacked by the media and marketing communications industries, to the point that the term is sometimes used in a slapdash way. To get a grip on where AI stands today, I contacted a selection of experts in the field. The following is an edited version of their responses to my questions.


 

Q. AI has been around for decades. Why do you think the term is getting so much airplay and attention today?

 

Jurgi Camblong, CEO of Sophia Genetics, a Swiss specialist in data-driven medicine, simply says: “Because it’s happening! In healthcare, AI is already routine within hospitals, delivering concrete benefits to patients every day and saving lives.”

Sohrob Kazerounian, data scientist at security threat monitoring firm Vectra Networks, takes a broader view:

“Firstly, access to tools and systems that use AI is far greater than at any previous point in time. Future visions of AI in the 1960s were always impossibly far away and inaccessible to all but the economic and political elite. Today, however, what we would once have thought of as the basic substrates of any AI system – for example, the ability to perceive arbitrary speech and visual inputs at near-human levels, to monitor networks and detect cyber security threats, and to interact with humans through natural language, both understanding and responding to queries – are now readily available to the general population, often at the low end of consumer electronics pricing. That alone has transformed the landscape for AI perception and adoption.”

Andrew Joint, managing partner at technology law firm Kemp Little, sees a combination of factors:

“It feels like the technology has started to catch up with the years of science-fiction and future-gazing about its predicted use. The combination of vastly improved processor speeds, the rise of the Cloud and Big Data, and the development of the AI algorithms themselves makes the conditions seem ripe for AI to begin to flourish. We are now seeing everyday devices in both the home and office which use (admittedly weak) forms of AI. That everyday use is beginning to generate trust in the tools, and benefits that we can see and appreciate.”

Jason Maynard, director of data and analytics at service desk firm Zendesk, says AI is hot for good reason.

“With its promise of automating mundane tasks as well as offering creative insight, organisations in every sector from banking to healthcare are reaping the benefits. It’s a new and intuitive interface for the existing world. Chatbots and other AI platforms like virtual assistants continue to become more proficient at dealing with enquiries, and in some cases pre-empting customer enquiries with predictive analytics and proactive communication. The benefit of this type of technology is that – even in circumstances where customer service requests are complex – a growing history of accurate decisions will allow companies to put more confidence in the automated systems they provide for customers, saving time and money in the long run.”

Suman Nambiar, head of the AI practice at IT services group Mindtree, says:

“First, as the power of computers continues to grow in line with Moore’s Law, powerful processors are becoming cheaper and cheaper. Critically, this means that Deep Learning networks – a key element of the development of AI in computing – have become much easier to build and train.

“Second, the internet has undoubtedly changed the way we connect, interact and communicate today, as has the development of mobile technology. Combined, these result in the generation of a constant, inordinate flow of data. Simple or complex, Big Data has changed the way we gauge the impact of technology in this era of digital disruption.

“These two factors have made it possible to build and train neural networks on a scale never previously witnessed, thus enabling the current wave of AI to flourish. This neural network-based computing has been responsible for the shift away from trying to construct progressively more complex rule-based computer systems, to systems that are actually capable of learning, adapting and evolving independently. This means they are capable of resolving problems unassisted as a result of such learning abilities.

“Fundamentally, these developments have gifted today’s computer systems with the innate ability to learn as human brains do. Whether it’s learning a foreign language, or something as simple as crossing the road, our brains are not hard-wired to do these things by a defined set of step-by-step instructions. Neural networks seek to mimic this process, with processors merely replacing the neurons in the human brain.”
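To make Nambiar’s learning-versus-rules distinction concrete, here is a minimal sketch of a single artificial neuron learning the OR function from examples rather than from hard-wired instructions. It uses plain NumPy; the data, learning rate and iteration count are illustrative choices, not drawn from any of the interviewees:

```python
import numpy as np

# A single artificial neuron learning the OR function from examples,
# rather than being programmed with explicit step-by-step rules.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 1, 1, 1], dtype=float)                      # targets

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # connection weights (the "synapses")
b = 0.0                  # bias term
lr = 0.5                 # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    pred = sigmoid(X @ w + b)        # forward pass
    grad = pred - y                  # error signal
    w -= lr * X.T @ grad / len(X)    # adjust weights from the data
    b -= lr * grad.mean()

print(np.round(sigmoid(X @ w + b)))  # -> [0. 1. 1. 1.]
```

No rule for OR is ever written down; the weights simply drift towards values that reproduce the examples, which is the behaviour Nambiar describes.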

 

Q. What are the biggest myths you hear or read about AI?

 

“That AI is about replicating the human mind,” says Rob High, CTO of IBM Watson. “And there was once a time when scientists were trying to do just that. In reality, AI and cognitive systems like Watson augment human intelligence. There’s a critical difference between systems that enhance and scale human expertise and those that attempt to replicate human intelligence. AI can be best described as an augmented intelligence tool. It is about man + machine. The AI often depicted in movies, popularised by Hollywood and science-fiction writers, is out of sync with reality and gets confused with real concerns about making sure today’s algorithms are open and fair. The truth is less sensational and far more meaningful.”

Vectra’s Kazerounian says it’s the idea that AI is inaccessible or expensive.

“In today’s world of cloud computing, a user armed with a laptop and an internet connection can spin up a cluster of compute nodes with world-class hardware and build arbitrarily complex neural networks – all by simply using open source software and publicly available datasets. What was once the preserve of a select and exclusive group of academics, entrepreneurs and enterprises has now become easy enough to grasp for anyone with basic technical skills and the inclination to learn. These systems have also become much simpler to work with. While the calculus used to train a neural network to make predictions has been explored in great depth, modern systems do the heavy lifting for the developer. With the advent of high-level AI packages, knowledge of the underlying maths is hardly even necessary for the production of world-class AI.”
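As a rough illustration of the high-level packages Kazerounian mentions, the sketch below builds and trains a small neural-network classifier in a few lines of Keras, with the gradient calculus handled entirely by the library. The dataset and layer sizes are invented purely for demonstration:

```python
# A small neural-network classifier built with a high-level package;
# no manual calculus required. Data and layer sizes are illustrative.
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 4)              # toy feature vectors
y = (X.sum(axis=1) > 2.0).astype(int)    # toy binary labels

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)     # backpropagation handled for us
loss, acc = model.evaluate(X, y, verbose=0)
print(f"accuracy: {acc:.2f}")
```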

Kemp Little’s Joint adds:

“There is currently a large amount written about the future workplace and the removal of humans from the workforce, to be replaced by AI completely. What we are already seeing with AI in our own workplace is that it augments the human worker and changes the scope of the human’s role, but doesn’t necessarily replace it.”

Sophia Genetics’ Camblong says it’s a myth that AI will replace doctors:

“With AI, technical or back-office work can be fully and easily automated, giving back precious time that clinicians can spend with their patients [but] the human aspect of their profession is even more valued: that is, their intuition and capacities to listen, trust, deliver advice, empathise, to eventually decide on the best care path. Also, despite what we often hear, the only way to build something solid in AI is bottom up with the help of the end-users, and this is particularly true in healthcare.”

Mindtree’s Nambiar weighs in with a few more:

“A common misconception is that AI is a single technology. AI is an umbrella term for various algorithms and models which, when combined with large volumes of data, create systems with certain characteristics. Second, AI cannot yet be classed in the same bracket as human intelligence. Even those systems at the forefront of this technology, such as AlphaGo [from Google’s DeepMind], are only designed for, and have only reached the level of, performing specific, defined tasks. AI systems capable of absorbing information in a manner akin to that of the human brain are perhaps decades away.

“Another misbelief is that the future will be controlled by those who patent new algorithms. The notions of innovation and protecting IP are constantly changing in the world of AI. There is now a firm realisation that pooling both innovation and the algorithms powering today’s AI systems is mutually beneficial for everyone. DeepMind, for example, made continued publication of its research a condition of its acquisition by Google. However, it is still capable of protecting its first-mover advantage by virtue of the data it holds, allowing it to continue training its AI systems.

“And, finally, there is perhaps the most radical notion of all: the suggestion that human intelligence will eventually be distilled into one form of AI or another – a notion so radical that most people aren’t even contemplating investing in it. Yet the concept of ridding ourselves of our carbon-based forms and having our thoughts, memories and emotions [preserved] – quite literally everything that makes us human – has been considered very seriously by some. What we can say is that cryogenic freezing companies will very happily charge hundreds of thousands of pounds to freeze a human for revival at some point in the future, but it simply isn’t possible to put a date on when this will become a reality.”

 

Q. Do you think the term is being bandied about in a careless manner?

 

IBM Watson’s High:

“Yes, AI is overused and its definition often misconstrued. The true goal of AI is to augment intelligence and a lot of people do not make this distinction, nor are they aware of the underlying algorithms that AI employs, including deep learning and machine learning. An engine is just one component of a car. In the same way, machine learning and deep learning algorithms are important features but the real recipe comes when you take those algorithms and combine them with other forms of data and analytics to create an augmented intelligent system.”

Vectra’s Kazerounian says there’s certainly a lot of hype:

“We are also finding that more and more companies are referring to traditional mathematical and statistical modelling techniques as AI or under the umbrella of AI. Due in part to the hype, but also to a set of shifting goalposts, the definition of AI is evolving and broadening. With each new development, AI is redefined to cover a set of tasks that appear just beyond our capabilities. Simpler applications once believed to require true intelligence are quickly relegated to the subterranean netherworld of simple and mechanical behaviours.”
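Kazerounian’s point about rebranding is easy to demonstrate. The sketch below fits a plain logistic regression – a decades-old statistical technique – of the kind that nonetheless often appears in marketing material under the AI umbrella; the data is synthetic, purely for illustration:

```python
# A classic statistical model: logistic regression on synthetic data.
# Techniques like this are frequently marketed today as "AI".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                # two synthetic features
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # a simple linear rule

clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.0, 1.0], [-1.0, -1.0]]))  # -> [1 0]
```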

Sophia’s Camblong:

“I believe we should refocus the discussion on understanding the needs and feeding AI with high-quality raw data. If you think about healthcare, this has a direct impact on clinical decisions. For us, talking about AI has one meaning: saving patients’ lives.”

 

Q. What do you see as current opportunities for AI and what do you see in the future?

 

“There is an opportunity for cognitive systems to help people see beyond their own point of view and their own biases … and to pose questions that we would not otherwise think to ask,” says IBM Watson’s High. “Cognitive technologies also help people make better decisions. What search is to simple information retrieval, cognitive is to advanced decision-making.”

Vectra’s Kazerounian:

“In cyber security detection, AI can be used to automatically monitor network traffic, flag suspicious behaviour or network anomalies, and alert the security team to investigate. Data traffic has grown exponentially over the last decade, making it a near-impossible task for humans to monitor the vast volume of data in real time, 24/7. Future models of intelligent behaviour will evolve beyond the current single-activity, single-action model we see today to become more multi-skilled, using the notion of reward to educate systems when they successfully learn a new function – based on the idea that reward is how an agent learns to act in an environment in the first place. They will begin to incorporate the principles they observe, whilst learning to predict the movement that results from their own commands.”
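As a rough sketch of the anomaly-flagging idea Kazerounian describes (not Vectra’s actual product), the example below trains an unsupervised outlier detector on synthetic traffic features using scikit-learn’s IsolationForest; the feature choices (bytes, packets, duration) are invented for illustration:

```python
# Unsupervised anomaly detection over synthetic network-flow features.
# A flow that deviates strongly from the learned baseline is flagged.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Typical flows: ~500 bytes, ~40 packets, ~2 s duration
normal = rng.normal(loc=[500, 40, 2.0], scale=[50, 5, 0.3],
                    size=(1000, 3))
odd = np.array([[5000, 400, 30.0]])   # one wildly atypical flow

model = IsolationForest(random_state=0).fit(normal)
print(model.predict(odd))             # -1 flags the flow as anomalous
print(model.predict(normal[:3]))      # 1 means "looks normal"
```

A detector like this never needs labelled attacks; it simply learns what routine traffic looks like and surfaces deviations for a human analyst to investigate, which is the division of labour Kazerounian outlines.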

Kemp Little’s Joint provides a legal perspective:

“We can already see great AI benefits in relation to low-level, large-scale, repetitive review tasks. [This] allows us to offer better-value services to clients and allows our lawyers to focus on those tasks which AI cannot perform. As the range of legal activities that AI cannot replace is likely to shrink, the opportunity to develop technologies that can better replicate some aspects of legal services will exist for years to come.”

Zendesk’s Maynard:

“The overall volume of interactions is growing as consumers have more questions about the products and services that they are using, and expect more from the brands that they deal with. Today, brands are recognising that customer service is an integral part of the customer journey, and they’re using AI to better understand the consumer’s behaviour and needs. Consumers expect help in the context of what they’re doing in real-time and AI will enable this as we become embedded in new channels of communication. Businesses that transform their service models to provide proactive, adaptable, and targeted support will be rewarded with customer trust and loyalty.”

Mindtree’s Nambiar:

“The opportunities for this technology to grow and develop are endless. In service roles, for example, AI can be deployed for customer interactions, using natural language and conversation as an interface, whilst simultaneously automating menial and arduous processes – for example, replacing humans with chatbots for customer and employee self-service duties. Machine-led decision making is also coming on in leaps and bounds, with AI gradually being drafted in for predictive analysis, forecasting complex outcomes with greater and greater precision. And, finally, the merging of AI with IoT to create intelligent systems that not only interact with each other but can anticipate and deal with issues and outages to keep businesses running will certainly result in the streamlining of future IT processes.”

Sophia Genetics’ Camblong:

“The next step is for Sophia to expand our knowledge base and head towards a future of real-time epidemiology: an era when we can monitor treatments in near real time within patient cohorts, and when we will be able to say that one particular patient’s cancer is identical to that of 10,000 other patients who received treatment plan A and survived. To do so, in oncology for instance, we need access to data about cancer types, cancer stage, patients’ treatments and treatment outcomes. This will allow us to cluster patients and leverage previous diagnoses to inform the next ones, ensuring patients get even more personalised treatments.”

 
Q. What is holding back AI today?

 

IBM’s High believes more research is needed in two key areas.

“One is in the area of deep reasoning, especially in deductive and abductive reasoning. Today, in the area of speech and object recognition, for instance, AI recognises my speech and translates that into words or recognises the things I see, and translates that into objects or even recognises my intentions and maps that onto an action. However, it is far more interesting to think about how a cognitive system can begin to engage in a conversation and not just understand what we are asking, but also recognise the reasoning behind the question we are asking. Moving to abductive reasoning is one of the major advances we’re actively working on.

“Contextual awareness is another area for further research and development. This is the idea that AI is aware of the current situation or environment we are dealing with and acts in different ways depending on whether the situation is normal or critical. We want cognitive systems to have greater contextual awareness and act more intelligently as a result. For example, if two people are talking to each other, it is not simply recognising the words the individuals use, but how they vocalise those words, their body language and how they punctuate their speech, to truly understand the intent and the context.”
 
Vectra’s Kazerounian looks at state funding:

“Ironically, AI is perhaps most held back by the same social forces that have propelled it to its prominent position in today’s society. Many participants in the AI conversation have overpromised what it can do today. Another factor is anti-intellectualism and a decreasing commitment to public funding of basic research. It’s easy to forget that all the fundamental techniques that comprise today’s vision of AI – the early neural network, perceptron and backpropagation work, as well as modern Deep Learning models and methods, including long short-term memories (LSTMs), convolutional nets, etc. – were developed in basic research settings. In most cases, they were funded with public money, without any certainty as to their efficacy.”

Kemp Little’s Joint views this through a legal lens:

“As a lawyer, I can see that the lack of certainty in relation to the legal status and liability of AI, and its impact on areas of law such as intellectual property and data privacy, is causing some hesitation. Within the legal industry there is also a lack of analysis of the business cases for when AI use suits certain types of legal activities.”

 

Q. Do you think there should be a code of ethics for AI?

 

IBM Watson’s High certainly believes so.

“In order for cognitive systems like Watson to have a positive effect on society, they must be transparent and trustworthy. For example, if a business is using bots in its customer service operations, it has to be very clear and transparent that the customer is interacting with a bot, not a bot masquerading as a human.

“Our job as a technology company and a member of the global community is to ensure that we’re developing cognitive technology in the right way and for the right reasons. At IBM, we created a system of best practices that help guide the safe and ethical management of AI systems, including alignment with social norms and values; algorithmic responsibility; compliance with existing legislation and policy; assurance of the integrity of the data, algorithms and systems; and protection of privacy and personal information.”

Martin Veitch

Martin Veitch is Editorial Consultant for IDG Connect