Demystifying the black box: IBM on how to get started with AI

IBM's Rob Thomas talks through how to get started with AI, and some common pitfalls to avoid.

According to many analyst forecasts, Artificial Intelligence (AI) technologies represent one of the biggest economic opportunities in history. AI has tremendous potential to transform a business, enhancing a firm's capacity for prediction and driving revenue gains through the automation and optimisation of core business processes. However, while adoption is increasing rapidly in the business sphere, most companies either haven't employed AI at all or are still in the preliminary or pilot stages of implementing the technology.

According to the Wall Street Journal's 'State of AI Adoption' survey, 47% of organisations have employed at least one AI capability within their standard business processes, and 30% of organisations are conducting AI pilots. However, only 21% report using AI across multiple business functions, while 58% report that less than one-tenth of their digital budgets go towards AI. This is despite the fact that only 1% of respondents who have employed AI capabilities report experiencing no value, or negative value, from their AI implementations.

To cap this off, there seems to be a fundamental misunderstanding of AI amongst business decision-makers in general. According to research from MIT Sloan Management Review and Boston Consulting Group, 50% of surveyed organisations were either 'Passives' or 'Experimenters', meaning they either had little understanding of AI or lacked any kind of 'deep' understanding. A further 30% ('Investigators') displayed knowledge of AI and its applications, but had not deployed the technology beyond the pilot stage. Overall, only 20% of surveyed organisations, classified as 'Pioneers', both understood and had adopted AI technologies.

Noting this, it's fair to say that AI still needs a fair bit of demystification from a strategic perspective, especially in terms of how and when to implement it. IBM, an organisation that is regularly part of these AI conversations through its Watson offering, has looked to address this with its AI Ladder report, which aims to guide organisations through their AI journey depending on their level of maturity. The report outlines a method for assessing where an organisation sits in terms of its maturity, and how to get started on the next step.

There are four rungs to the ladder: 'Collect' (collecting all types of data), 'Organise' (organising data into a business-ready foundation), 'Analyse' (building and scaling AI), and 'Infuse' (operationalising AI throughout the business). This is all underpinned by a 'Modernise' layer, which stipulates that firms move to a more agile data architecture, preparing it for an AI-driven, multi-cloud approach.

To speak more about how organisations should be thinking about AI today, we spoke to Robert Thomas, General Manager, IBM Data and Watson AI. As the author of IBM's AI Ladder, Thomas is well positioned to explain where the confusion around AI comes from and how to cut through it. He also discusses how organisations can work to solve 'the data problem', as well as why he thinks DataOps is an essential ethos to adopt if firms want to get the most out of their AI deployments.

 

You've talked previously about how AI is often seen as a 'magic box' that will fix all business issues. Why is that mentality troublesome?

Well, looking at the big picture, AI is the largest economic opportunity that we'll ever see, with $16 trillion added to global GDP between now and 2030, yet enterprise adoption is still relatively low. When you look at that, the question becomes: 'if it's the biggest economic opportunity ever, why is adoption so low?'

My view is that this comes down to three things. First, it comes down to data, which continues to be a challenge for organisations that are trying to put AI into production. Secondly, it's about skills, meaning organisations don't necessarily have all of the skills that they need to get AI into production and keep it there. 

The third factor is around trust, meaning that there is a general question mark around whether we can trust AI that's in production and making decisions. This is a particularly relevant point when talking about 'demystifying the black box'. If there is a perception that there is this magic box making decisions, there is no way that people are going to be comfortable with that. 

Our strategy is focused on those three areas. For the first two, you need to bring a level of sanity to data and how you make progress with it, and you can build up your skills using things like automation for skills augmentation. Where trust is concerned, we built a framework around trusted AI, involving understanding the decisions that AI is making, how it's making those decisions, and the lifecycle of that decision making. If we can get over those hurdles, adoption will take off. 

 

In relation to that idea of understanding AI, is there a general misunderstanding amongst organisations around what actually constitutes AI technology?

I believe that there is, and I think that's true of anything new, not just AI. Where Watson is concerned, we tend to describe it as a set of tools for companies that want to build their own AI, sort of like a toolbox with everything you might need to build your own AI. Most of that work is open source that we've augmented in some way, so that's also a bit of an alleviating factor. 

Thinking about this, AI starts to become demystified for customers as they understand that they can build models, put them into production, and manage their lifecycle themselves. It's not about robots, or driverless cars - it's a toolbox and you can use the pieces that you want. 

In this sense, it's not about a black box you don't understand. It's about tools that allow you to build AI, or a set of applications that solve a specific business problem. 

 

That issue of identifying business problems to craft AI solutions seems to be particularly important. For organisations just starting to dabble with AI, is that always going to be the first step?

The first step in my mind is thinking about this concept of an AI ladder. You really need to start by focusing on data collection, and once you have that in place you can move up the ladder to organising that data, then to data analysis, and then to how you would infuse AI into your business. So, start by forging an understanding of where you are in terms of AI implementation maturity versus where you want to be.

The second step would be: don't make an AI project purely a technical project, because most of those don't work out. It's about the business coming together with the technology team to figure out the right outcomes and what is going to drive the business forward. I strongly encourage that, with every AI project, there should be a line-of-business sponsor or business analyst alongside the technical team discussing implementation.

Once you have those two things in place, it's then about doing hundreds of projects. In other words, try a whole bunch of things. A single project is 4-6 people over 4-6 weeks, so we're not talking about hundreds of millions of dollars' worth of investment. Doing a lot of projects will allow you to figure out what works and when to iterate. Once you do that, you'll find a path. You just need to get started. 

 

Talking about those initial stages, organisations often have an issue with cleansing their data. What sort of advice would you give organisations regarding collecting and cleansing the data, getting it ready for AI? 

This is where I think there is a bit of a misperception on AI. The general consensus is that AI is for very high-end use cases, like robots, self-driving cars, that kind of thing. The reality is, modern AI can be used for very basic tasks like making your data more useful. In fact, we have used AI to do data matching, metadata creation, and automated cataloguing as well. 

So, this whole idea of data matching and organising can actually be done using AI. You don't have to put in a whole load of manual effort to get ready for AI implementation. You can use AI to solve that data problem, which is something that doesn't get talked about as much. It's obviously not as exciting or flashy, but since every company tends to deal with the data problem, we encourage people to use AI to solve it. Then you can move on to the other use-cases.
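Thomas doesn't describe how Watson does this under the hood, but to make the idea of 'using AI to solve the data problem' more concrete, the sketch below shows the simplest form of automated data matching: flagging records that probably describe the same customer. The record fields, weights, and threshold are hypothetical, and the fuzzy matching uses only the Python standard library rather than anything IBM-specific.

```python
# Purely illustrative sketch of automated data matching, not IBM's implementation.
# Field names, weights, and the 0.8 threshold are hypothetical choices.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Normalised string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def find_likely_duplicates(records, threshold=0.8):
    """Return pairs of record ids that probably refer to the same entity."""
    matches = []
    for left, right in combinations(records, 2):
        name_score = similarity(left["name"], right["name"])
        addr_score = similarity(left["address"], right["address"])
        combined = 0.6 * name_score + 0.4 * addr_score  # name weighted more heavily
        if combined >= threshold:
            matches.append((left["id"], right["id"], round(combined, 2)))
    return matches

if __name__ == "__main__":
    sample = [
        {"id": 1, "name": "ACME Ltd.", "address": "12 High Street, London"},
        {"id": 2, "name": "Acme Limited", "address": "12 High St, London"},
        {"id": 3, "name": "Globex Corp", "address": "5 Market Square, Leeds"},
    ]
    # Prints the pairs judged likely duplicates (the two ACME entries here).
    print(find_likely_duplicates(sample))
```

A production system would use far richer features and learned models, but the principle is the same: let the machine propose the matches and catalogue entries, and keep humans for review rather than for the bulk of the matching work.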

 

So that involves building an AI algorithm to sort out the data before you actually explore investment en masse? 

Exactly, and that's the observation that we had about 18 months ago. We realised that we can use the AI we're building at IBM research to solve the data problem and that's going to help clients achieve a level of AI maturity faster. That part is not well-understood and so we're trying to communicate that as much as we can. 

As an example, a big client of ours in Europe is ING, and that's been precisely their use case. We helped them build a multi-country data lake and a global data catalogue, using AI to do both of those things and to do the data organisation. It's also positioned them to be GDPR compliant while enabling them to provide self-service analytics, making data available to more stakeholders across multiple countries. So this is happening now, it's not a future thing.

 

Once the models have been built and you're getting a bit of traction with the 'analyse' step of the process, a big hurdle then becomes scaling, especially when considering things like GDPR. How can organisations effectively scale their AI deployments while ensuring compliance with privacy regulations? 

I think a good way to answer that is to talk about the AI tools. When it comes to AI tools, there are four key components. You need a catalogue for how you're preparing your data, a place to build models, a place to run models, and the fourth thing is you need a way to manage the lifecycle of your models. 

That means - regardless of where the model is built, whose tool it is, or whether it is open source - how are you going to manage the lifecycle of your models and how they're making their decisions, so that you have the capacity to scale and deploy AI broadly.

Managing that lifecycle means understanding data provenance (where it came from), and how the AI is making decisions. The thing about AI is that it is alive; it morphs while it's in production, so you need to look at things like model drift, anomaly detection, and how it changes over time. Without a capability like that, it's impossible to get to scalable AI, because it will quickly grow beyond your understanding of how things are happening, so we believe it's a foundational capability for scaling.
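Thomas doesn't go into the mechanics here, but one common way to watch for the model drift he describes is to compare the distribution of the data a model sees in production against the data it was trained on. The sketch below is a minimal, illustrative example using the Population Stability Index, a standard drift metric; the threshold and the synthetic data are assumptions for demonstration, not anything drawn from IBM's tooling.

```python
# Illustrative drift check: compare a production feature's distribution to its
# training baseline with the Population Stability Index (PSI). The threshold
# and synthetic data below are assumptions for demonstration only.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between two samples of one feature; larger values mean more drift."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    # Clip production values into the baseline range so nothing falls outside the bins.
    current = np.clip(current, edges[0], edges[-1])
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    # A small floor avoids division by zero / log of zero for empty bins.
    base_pct = np.clip(base_counts / base_counts.sum(), 1e-6, None)
    curr_pct = np.clip(curr_counts / curr_counts.sum(), 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_feature = rng.normal(0.0, 1.0, 10_000)   # what the model saw at training time
    production_feature = rng.normal(0.8, 1.3, 2_000)  # what it sees now (shifted and wider)
    psi = population_stability_index(training_feature, production_feature)
    # A common rule of thumb treats PSI above roughly 0.2 as a signal to investigate.
    print(f"PSI = {psi:.3f}", "-> investigate drift" if psi > 0.2 else "-> looks stable")
```

In a real deployment this kind of check would run continuously per feature and per model, feeding alerts and retraining decisions rather than a one-off print.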

 

You've also mentioned that adopting a DataOps mentality is important here as well. What are the merits of DataOps where the facilitation of AI deployments and strategies is concerned?

If an organisation does its data preparation, builds its AI and gets it into production, that AI is going to create more data, so the whole thing is a virtuous loop. The concept behind DataOps is ensuring a consistent data pipeline where you're constantly feeding and enhancing models while also constantly cleaning and organising your data. DataOps is the practice around making sure that your data is always ready for AI and feeding the different models that you have in production.

The reason that you see DataOps taking off now is that organisations are finally getting models into production. As organisations increasingly adopt AI and get it into production, you're going to have a massive increase in the number of models, putting an even bigger strain on DataOps, and data lifecycle management. That's just getting started in my mind. 
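The interview doesn't specify what such a pipeline looks like in practice, but one small, hypothetical example of the DataOps idea is a 'data readiness' gate that runs before a batch of data is allowed to feed a model: check the schema and basic completeness, and quarantine the batch if it fails. The column names and thresholds below are invented for illustration.

```python
# Hypothetical "data readiness" gate of the kind a DataOps pipeline might run
# before a data batch feeds model training or scoring. Column names and
# thresholds are invented for illustration.
MAX_NULL_RATE = 0.05          # reject the batch if more than 5% of values are missing
REQUIRED_COLUMNS = ("customer_id", "signup_date", "monthly_spend")

def batch_is_ready(rows: list[dict]) -> bool:
    """Basic schema and completeness checks on a batch of records."""
    if not rows:
        return False
    # Schema check: every record must carry the columns the model expects.
    if any(col not in row for row in rows for col in REQUIRED_COLUMNS):
        return False
    # Completeness check: the overall rate of empty/None values must stay low.
    total = len(rows) * len(REQUIRED_COLUMNS)
    missing = sum(1 for row in rows for col in REQUIRED_COLUMNS if row[col] in (None, ""))
    return missing / total <= MAX_NULL_RATE

if __name__ == "__main__":
    batch = [
        {"customer_id": 1, "signup_date": "2019-04-02", "monthly_spend": 42.0},
        {"customer_id": 2, "signup_date": None, "monthly_spend": 13.5},
    ]
    # In a real pipeline this decision would block or alert on the downstream
    # training/scoring job rather than just printing.
    print("feed the model" if batch_is_ready(batch) else "quarantine the batch")
```

The point of the sketch is the loop Thomas describes: the same checks run every time new data arrives, so the models in production are only ever fed data that has already been cleaned and organised.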

 

In that sense do you see DataOps as a key competitive advantage at this point in time? 

I would rephrase it slightly in saying DataOps is existential. You may not differentiate by using DataOps, but without it, you may not survive because your AI is only as good as your data. If you don't have a practice for how you're curating and managing your data then, even if you're doing AI, it's not going to be very good or very relevant. 

In that sense, DataOps is fundamental. It could perhaps be seen as a differentiator if you do it better than anyone else, but I think it's more like table stakes. It's what you must have as a basis for everything that you do from an AI perspective.

 

Another thing that organisations might struggle with is the question of when to actually apply AI, in terms of the implementations and use-cases that would prove beneficial to the business, versus those where it's sort of shoehorned in. How do you manage that question of when to apply it?

When thinking about AI, it's really all about iteration and culture. The difference between AI and ERP, for example, is ERP is about committing to a project. You define the destination and build a project plan to get there, with little to no deviation. That's not AI. 

AI is about iterating on a bunch of different things and progressing with the things that work, while moving the things that don't work to the side. It's a very different cultural phenomenon. When I talk to chief data officers and even CEOs, I tell them that they must be willing to accept that if you're going down the AI path, a number of the things that you do aren't going to work. This is about figuring out which ones do work and doubling down on those.

It's very different from traditional IT projects, which are mostly pass/fail and involve a hard commitment. AI is about skills, data, and how these things come together. You can't guarantee success, but you can guarantee iteration, and eventually you'll find the right path. You just need to have the right culture to encourage that. It's a big change.

 

What extent of business transformation is likely to be spurred by the right kind of AI implementation?

Ultimately AI is about business transformation, making better predictions, and automating processes.

If you put a business lens on this, the processes that will be most impacted by AI are customer service, HR, financial planning and budgeting, supply chain, and finally IT itself. Where IT is concerned, we can use AI to make the whole role of the CIO and IT much more productive in terms of automating how you manage systems and code. 

So, when I think about core business processes, those are the five that I think will be most impacted. If you change how you're doing HR, IT, finance, supply chain, and customer service, it gets to a point where you're getting true business transformation out of AI.