Is the cloud the key to democratizing AI?

At the peak of the Japanese harvest, Makoto Koike's mother spends around eight hours a day sorting cucumbers from the family farm into different categories – a dull, time-consuming task that her son decided to automate. Although Makoto wasn’t a machine learning expert, he started experimenting with TensorFlow, Google's popular open-source machine learning framework, and developed a deep learning model that could sort cucumbers by size, shape and other attributes. The system isn’t perfect – its accuracy rate is around 75% – but it’s a sign of how AI could soon transform even the smallest family-run business.
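For readers curious what such a model looks like in practice, the sketch below shows the kind of small image classifier one might assemble with TensorFlow's Keras API to grade cucumber photos. The class count, image size and layer sizes are illustrative assumptions, not the details of Makoto's actual system.

```python
# Illustrative sketch only: a small convolutional classifier for sorting
# cucumber images into grades, built with TensorFlow/Keras. The number of
# classes, image size and architecture are assumptions for demonstration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 9          # assumption: nine cucumber grades
IMG_SIZE = (80, 80)      # assumption: small fixed-size photo crops

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=IMG_SIZE + (3,)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_images: array of shape (N, 80, 80, 3); train_labels: (N,) integer grades
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```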

Giants like Google, Amazon, Microsoft, Apple and Facebook are, of course, well aware of this transformative power. Deep learning underpins Amazon's recommendation system, Google's search and translation tools and Microsoft's Cortana personal assistant, as well as many other widely used applications and services. Most Fortune 500 companies also have dedicated AI teams in place. But the big beasts’ interest in AI has drained the pool of data scientists, leaving many small and medium-sized enterprises in the same boat as Makoto: eager to explore how AI can improve their business, but short on expertise.

Even enterprises that can afford to hire top AI experts still need to prepare huge datasets and spend considerable sums on computing power to analyze them and teach their neural networks to recognize particular patterns or objects. The big cloud providers are conscious of these issues, however – and they believe they've found a way to help people overcome them.

Machine learning as a service, or cloud AI, is now a major component of cloud platforms like Amazon Web Services (AWS), Microsoft Azure, Google Cloud and IBM Cloud. Essentially, these companies are offering to do the grunt work involved in adding AI to business applications by providing their customers with access to pre-trained deep learning models – for image recognition, say – as well as tools that simplify the process of building, training and deploying customized models on the cloud.

“There are tools for data scientists who know how to code; there are tools for software developers that may not know how to properly tune algorithms, but who can build apps if you give them an API to code against; and finally there are tools for clickers, who basically relate through GUIs, which covers the vast majority of people in the world,” says Chris Nicholson, co-founder and CEO of Skymind, a provider of deep learning tools for the enterprise.

Microsoft Azure ML Studio, Amazon SageMaker and Google Cloud ML Engine are broadly similar platforms that sit toward the data scientist end of the spectrum, helping deep learning experts to train, tune and deploy their models at scale. The likes of Amazon Rekognition and Google Cloud Translation, meanwhile, are APIs built around pre-trained models: you simply feed in your data – images or videos you want analyzed for common objects, or text you want translated – and wait for the API to serve up the results.
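Part of the appeal of the pre-trained tier is how little code it takes to use. The snippet below is a hedged illustration of calling Amazon Rekognition from Python via the boto3 SDK to label an image stored in S3; the bucket and file names are placeholders.

```python
# A hedged sketch of the "pre-trained API" tier: send an image stored in S3
# to Amazon Rekognition and read back the labels it detects.
# The bucket name and object key below are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photos/warehouse.jpg"}},
    MaxLabels=10,
    MinConfidence=75,
)

for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```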

The issue with the latter approach is that deep learning is typically used to solve a specific business problem, which pre-trained models may not be able to address. In other words, it’s no good having an API that can recognize different breeds of kitten if you want it to identify different types of cucumber.

“They’re kind of saying ‘hey, we found a bunch of data, we trained a model on it, now you can use it to make predictions about images’,” says Nicholson. “But that's a false solution in the sense that it’s still just as hard and just as necessary to train a model on your own data if you want to customize that solution.”

In a bid to bridge the gap between highly customized neural networks and the more basic ‘one size fits all’ pre-trained models, Google recently launched Cloud AutoML, a system that uses customer data to automatically build a custom deep learning model. Cloud AutoML Vision, which allows users to create custom machine learning models for image recognition via a drag-and-drop interface, is the first release from the new service.
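Once a custom model has been trained through the AutoML interface, it can be queried programmatically in much the same way as Google's pre-trained APIs. The sketch below assumes the v1beta1-era google-cloud-automl Python client and uses placeholder project and model IDs; the exact client surface may differ from what is shown here.

```python
# Hedged sketch: sending an image to a custom model trained with Cloud AutoML
# Vision and printing the predicted labels. Assumes the v1beta1-era
# google-cloud-automl client library; project and model IDs are placeholders.
from google.cloud import automl_v1beta1 as automl

project_id = "my-project"      # placeholder
model_id = "ICN1234567890"     # placeholder: an AutoML Vision model ID

prediction_client = automl.PredictionServiceClient()
model_path = "projects/{}/locations/us-central1/models/{}".format(
    project_id, model_id)

with open("cucumber.jpg", "rb") as f:
    content = f.read()

payload = {"image": {"image_bytes": content}}
response = prediction_client.predict(model_path, payload)

for result in response.payload:
    print(result.display_name, result.classification.score)
```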

“We believe Cloud AutoML will make AI experts even more productive, advance new fields in AI and help less-skilled engineers build powerful AI systems they previously only dreamed of,” said Fei-Fei Li, Chief Scientist, Google Cloud AI, and Jia Li, Head of R&D, Google Cloud AI, in a blog post describing the new service.

Several companies have been testing Cloud AutoML for the past few months. For example, Disney has used the tool to develop a way for customers to search its merchandise for particular Disney characters, even if the product is not tagged with the character’s name. Companies still need to prepare their own data for the AutoML service, however, which could be problematic for some enterprises.

“There’s a lot of data that's specific to an organization, like how they process invoices or how they do customer lead checking,” says Nicholson. “It will continue to be a challenge for organizations to collect that data, because many organizations don't really have a handle on their data in the first place.”

Google's cloud business lies a distant third behind AWS and Microsoft Azure, so it's no surprise that the company is trying to leverage its AI expertise to win more customers. But given the computational resources needed to train and deploy deep learning models, all the major cloud vendors stand to make considerable sums from renting chips for this purpose.

As enterprise interest in AI increases, machine learning tools will no doubt come to be seen as an essential part of any cloud computing service. In fact, IDC predicts that by 2021, 75% of commercial enterprise apps will use AI. “As such, companies need to offer these kinds of capabilities, just like they need to offer container capabilities or monitoring services,” says Dave Schubmehl, research director for IDC's Cognitive/Artificial Intelligence Systems and Content Analytics research.

But before enterprises rush to deploy deep learning tools, they should stop to consider whether there’s a clear business case for using them. “People are using it [deep learning] in a very specific way to solve a very specific problem,” says Nicholson, “and people who can't define the specific problem they want to solve are much less likely to succeed.”

Duncan Jefferies

Duncan Jefferies is a London-based freelance journalist who writes about technology, digital culture and sustainability.
