
Why Natural Language Processing is the future of enterprise AI

Jon Oberlander is Professor of Epistemics at the University of Edinburgh. I spoke to him about current trends in enterprise AI, particularly Natural Language Processing (NLP).

 

AI used to be obscure but has become a widespread term, even amongst the general public. What's changed?

We've reached a kind of inflection point. In the past it used to be said that as soon as something worked it stopped being called AI and was simply taken for granted. That was the paradox of success in this field. But now it's the other way around: anything that works is called AI.

Today's driver of success is machine learning: Bayesian algorithms and deep learning are what's generating the interest in this field. It's still generally referred to as AI and machine learning, although deep learning is often the more precise term.

 

Where are we now with enterprise AI, and specifically NLP?

It's become ubiquitous for the consumer. Amazon and other companies are putting conversational agents such as Alexa in the home, so consumers have more familiarity with these systems than ever before. People are also likely to encounter NLP systems when they interact with call centres.


Also, there have been enormous jumps in the quality of machine translation in the past year, because development teams have switched from traditional approaches to neural network methods. This is something we've been involved with at Edinburgh.
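To make that shift concrete, here is a minimal sketch of running a pretrained neural translation model. It assumes the open-source Hugging Face transformers library and the Helsinki-NLP/opus-mt-en-de model are available; it is an illustration of the neural approach in general, not the specific systems developed at Edinburgh.

```python
# A minimal neural machine translation sketch, assuming the "transformers"
# library and the Helsinki-NLP/opus-mt-en-de model can be downloaded.
from transformers import pipeline

# Load a pretrained English-to-German neural translation model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

result = translator("The quality of machine translation has improved enormously.")
print(result[0]["translation_text"])
```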

 

Are there any downsides to the neural network approach?

Translation output is now more fluent, but different kinds of errors can occur, notably the loss of fine-grained detail. That loss can be critical, so the next challenge is to ensure that all relevant information is preserved in the output. There's been a huge jump in the quality of the technology being employed, and deep learning is part of that.

 

What's the biggest barrier to successful use of NLP in the enterprise?

One of the biggest is the shortage of the right type of data. We have big data, but it's not necessarily the right data. In most cases, to improve results, that data has to be properly classified or annotated before it's processed.

But there have been improvements here too. For example, applying AI to machine vision used to require humans to draw boxes around items of interest so they could be identified. Now we have eye-tracking tools that accelerate this process dramatically.

It's not always enough to use this annotated, tagged or supervised data, because then we potentially miss out on valuable insights from unsupervised (raw) data. So ideally we have some tagged and some raw data, doing things in a semi-supervised way. Still, some neural networks are so powerful that they can work with unsupervised data and produce good results.
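As a rough illustration of that semi-supervised idea, the sketch below uses scikit-learn's SelfTrainingClassifier on synthetic data, with only a small fraction of examples tagged and the rest treated as raw, unlabelled data; the dataset and proportions are invented for illustration.

```python
# A minimal semi-supervised sketch: a little tagged data plus a large pool of
# raw (unlabelled) data, marked with -1, trained with self-training.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=1000, random_state=0)

# Pretend only about 10% of the examples were ever annotated by humans.
rng = np.random.RandomState(0)
unlabelled = rng.rand(len(y)) > 0.1
y_semi = y.copy()
y_semi[unlabelled] = -1  # -1 marks raw, untagged examples

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_semi)  # learns from both the tagged and the raw examples

print((model.predict(X) == y).mean())  # rough accuracy against the full labels
```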


Where else in the enterprise is this type of deep learning being used?

We see it in all kinds of classification tasks, for example in calculating insurance premiums. It's used for face recognition in media industries, Facebook being a prime example. It's used for prediction of customer behaviour, such as calculation of credit risk or managing inventory based on probable customer volumes.

Retail has been changed the most. Amazon was the first to make a big thing of recommendation systems, crunching data to find out who is similar to whom. NLP is also being used in call centres, where speech technology is playing a significant role. People were wary of it initially, but now many companies rely on it.
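A minimal sketch of "who is similar to whom": cosine similarity over a toy purchase matrix, assuming scikit-learn is installed. The customers, products and numbers are invented for illustration.

```python
# Find, for each customer, the most similar other customer by purchase history.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

customers = ["alice", "bob", "carol"]
# Rows: customers, columns: products (1 = purchased).
purchases = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

similarity = cosine_similarity(purchases)

for i, name in enumerate(customers):
    scores = similarity[i].copy()
    scores[i] = -1  # ignore self-similarity
    print(name, "is most similar to", customers[int(scores.argmax())])
```

Real recommendation systems work on far larger, sparser matrices with learned embeddings, but the underlying similarity idea is the same.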

 

How about the underlying technology?

Specialisation in chip design has been especially important in driving the widespread uptake of NLP. Special-purpose chips such as GPUs and FPGAs have enabled massive gains, assisted by much better algorithms. We also have far more data now. These three strands are all vital. What we're doing now was simply impossible 20 years ago.

This is also helping to push deep learning to the edge, to the periphery. Instead of offloading all the work to the cloud, you do it in situ. That's important for machine vision in self-driving cars, for example: it's not safe to offload that work to a remote server, so it has to be done at the edge. We're moving towards very low-power computing environments where this is practical.
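As one hedged example of preparing a model to run in situ rather than in the cloud, the sketch below shrinks a toy PyTorch model with dynamic quantisation and exports a self-contained TorchScript file suitable for an on-device runtime; the model, file name and layer sizes are stand-ins.

```python
# A minimal edge-deployment sketch: quantise a tiny model and export it as
# TorchScript so it can be loaded without Python on a low-power device.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
model.eval()

# Quantise the linear layers to 8-bit integers to cut size and power use.
quantised = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Trace to TorchScript for a self-contained, deployable artefact.
example_input = torch.randn(1, 32)
scripted = torch.jit.trace(quantised, example_input)
scripted.save("edge_model.pt")
```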

 

What about the issue of data bias?

We've already seen that biased data is biasing AI, such as with some online ads tied to search queries. That's a problem because with some algorithms it can be hard or even impossible to see how conclusions are drawn, making it difficult to identify bias.

This is why hybrid systems are becoming more useful. Hybrid systems combine neural networks with human-generated rules that can be very efficient at classifying symbolic data. This gives us so-called "explainable AI", which is becoming a major trend. For example, it means that a rejection of insurance cover can be explained.

In fact hybrid systems can go further. You can potentially identify the underlying bias and change it. The decisions made by the system are transparent. The EU is pushing hard for this explainability and transparency. It also helps organisations comply with legal and audit requirements, so it's likely to be a big part of future developments.
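The sketch below is a deliberately simplified illustration of that hybrid idea: a statistical risk score combined with human-written rules, so a rejection can be reported together with its reasons. The rules, threshold and applicant fields are invented for illustration.

```python
# A toy hybrid decision system: a learned-style score plus explicit rules,
# returning a decision together with human-readable reasons.
def risk_score(applicant):
    # Stand-in for a learned model (e.g. a neural network) returning a 0-1 score.
    return min(1.0, applicant["claims_last_5y"] * 0.2 + applicant["age"] / 200)

RULES = [
    (lambda a: a["claims_last_5y"] >= 3, "three or more claims in the last five years"),
    (lambda a: a["licence_years"] < 1, "licence held for less than one year"),
]

def decide(applicant, threshold=0.6):
    reasons = [text for rule, text in RULES if rule(applicant)]
    if reasons or risk_score(applicant) > threshold:
        return "rejected", reasons or ["model risk score above threshold"]
    return "accepted", []

decision, reasons = decide({"age": 40, "claims_last_5y": 4, "licence_years": 10})
print(decision, reasons)  # rejected ['three or more claims in the last five years']
```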

 

Where next for NLP in the enterprise?

An increasingly important area is recruitment. Using AI on semi-structured documents such as CVs/resumes will become normal. Natural Language Processing will help identify the right candidates.
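A minimal sketch of applying NLP to a semi-structured CV, assuming spaCy and its small English model (en_core_web_sm) are installed; the CV text, and the idea of using named entities as simple screening signals, are illustrative only.

```python
# Extract named entities (people, organisations, dates) from a toy CV.
import spacy

nlp = spacy.load("en_core_web_sm")

cv_text = (
    "Jane Doe, Edinburgh. Software engineer at ExampleCorp since 2015. "
    "MSc in Artificial Intelligence, University of Edinburgh."
)

doc = nlp(cv_text)
for ent in doc.ents:
    print(ent.text, ent.label_)
```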

Speech recognition is going to become more widespread, from automatic transcripts of meetings to searchable audio content. As a society we have vast archives of content going back years or decades, but it's not well indexed. That is likely to change.
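As a hedged example, the sketch below transcribes a meeting recording with the open-source openai-whisper package and then runs the crudest possible keyword search over the transcript; the package choice, the file "meeting.wav" and the query are assumptions for illustration.

```python
# Transcribe an audio file and make the text searchable by simple keyword match.
import whisper

model = whisper.load_model("base")
result = model.transcribe("meeting.wav")
transcript = result["text"]

query = "budget"
for sentence in transcript.split("."):
    if query.lower() in sentence.lower():
        print(sentence.strip())
```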

 

What are the social implications of these rapid developments?

There will be a major shift in employment. Education is going to be vital because this is likely to affect people who can't necessarily re-train easily.


At the moment we're seeing this happening in the legal field, but it's also going to affect logistics and manufacturing in a big way. We're going to have to start thinking about there being less work to go around. That may require implementing policies such as a Universal Basic Income, which are not always popular. This is a difficult issue in terms of politics. Education must be part of the solution.

Alex Cruickshank

Alex Cruickshank has been writing about technology and business since 1994. He has lived in various far-flung places around the world and is now based in Berlin.  
