Artificial Intelligence (AI), with its ability to perform different strains of Machine Learning (ML) functions, is arriving at the point where it will need to make the biggest ‘decision’ of its life so far.
That decision is all about how AI (and the ML functions it drives) should come to its predictive decisions, i.e. how extended and complex should the software engineering behind the decision-engines be and, crucially, how much ML automation should be applied introspectively back upon itself to make the learning power the learning?
Monmayuri Ray, Machine Learning Operations (MLOps) and DevOps solutions architect at GitLab, explains that ML represents a shift in the way we predict and make decisions.
An economic model for AI
Viewed through an economic and business-focused lens (rather than from a software engineering or systems analysis viewpoint), she says this shift amounts to a drop in the cost of prediction, one variable in a kind of AI economic model alongside data, the power of analytics engines and a portion of human judgment.
But however clearly we can paint that picture, there is still a shortfall in workable, affordable, applicable ML intelligence.
The issue here comes down to implementation at the coal face of the IT stack and, although many organisations have grasped the concept, there is still a gap in ML adoption for solving real-world problems. Tech-focused behemoths like Google and Tesla might be running ML systems, but everyday enterprises have yet to apply these innovations on any grand scale.
“However, there’s still a lot to learn and incorporate from the Googles and Teslas of this world because, with ML models, the economics of buying intelligence also changes. Basically, we have to realise that we pay the same price for a good and bad ML model,” said Ray.
She explains the scenario with coffee. When we buy coffee beans, we pay on an increasing scale, from freeze-dried granules through regular and organic to gourmet brands.
“But which coffee do we buy if all the products are the same price? This is the economic dilemma of ML models: since we pay the same price for a good or a bad model, feedback and accuracy become key elements that cannot be compromised and the fundamental basis for the adoption and upscaling of better machine learning in general,” she said.
Bing fails to zing
Ray and her GitLab associates note this whole argument is well illustrated in Bing’s failure to compete with Google Search on any substantial level. Thought to command less than 3% of the global search market, Microsoft’s Bing differs from Google operationally due to the respective feedback loops and longer time required to retrain Bing recommendations, which are essentially driven by ML functions.
“In order to crank up the accuracy of the model and to use ML as a general purpose service, there are several frameworks one can use. One of which is to automate repetitive tasks involved in machine learning to close the feedback loop time and drop the cost of prediction and decision making,” concluded Ray.
This is the point where the penny could be about to drop: if we can use automation more prevalently in the learning and testing elements inside ML models and their augmentations and extensions, then we can use ML to drive ML.
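As a rough illustration of what “closing the feedback loop” might look like in practice, the sketch below automates a retrain-evaluate-promote cycle with a simple quality gate. It is purely illustrative: the synthetic data stands in for fresh production data, and the 0.85 accuracy threshold, file path and function names are assumptions rather than anything Ray or GitLab prescribe.

```python
# Purely illustrative sketch of an automated retrain-evaluate-promote loop.
# A quality gate, not a human, decides whether a freshly retrained model
# replaces the current one; synthetic data stands in for production data.
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def load_latest_data():
    # Stand-in for pulling the newest labelled examples from production.
    return make_classification(n_samples=2000, n_features=20, random_state=0)


def retrain_and_maybe_promote(model_path="model.joblib", threshold=0.85):
    X, y = load_latest_data()
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    candidate = GradientBoostingClassifier().fit(X_train, y_train)
    score = accuracy_score(y_test, candidate.predict(X_test))

    # Quality gate: only promote the candidate if it clears the bar,
    # so a bad retrain never silently replaces a good model.
    if score >= threshold:
        joblib.dump(candidate, model_path)
    return score


if __name__ == "__main__":
    print(f"candidate accuracy: {retrain_and_maybe_promote():.3f}")
```

Run on a schedule or triggered by new data arriving, a loop like this is the repetitive work Ray suggests automating.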
But it won’t be easy. Enterprise software developers are dealing with a Cambrian explosion of hardware backends in the form of chips, clouds, edge devices and so on. Each of these hardware backends - think Nvidia or Intel - has its own specific libraries and frameworks. The issue is compounded by the fact that, today, developers have to manually optimise and fine-tune ML models for each of these backends.
Luis Ceze is CEO of OctoML. He works alongside co-founder and chief product officer Jason Knight. Both are vocal on the subject of how enterprises can now use Machine Learning (ML) to create better ML.
Billions of calculations, just one algorithm
The OctoML team argues that the goal of any CIO is to get their AI innovations deployed quickly and cost-effectively. But the reality is that 90 percent of AI and ML applications never make it to market. The issue today is that ML deployment requires specialised engineering and is compute-heavy, which translates into high cloud costs or makes models unfit for edge deployment.
“With ML getting more and more popular (in areas including computer vision, NLU/NLP, content generation etc.) the underlying compute power required to do ML is significant. In many cases (consider a big model like GPT-3) developers are doing billions and billions of calculations to get one answer from an algorithm,” said Ceze and Knight.
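To put some very rough numbers on that claim: a common back-of-envelope rule is around two floating-point operations per model parameter per generated token for a dense transformer at inference time. The arithmetic below uses that rule with ballpark assumptions (the 175-billion-parameter GPT-3 scale and a 100-token answer), not figures from OctoML.

```python
# Back-of-envelope arithmetic only; the 2-FLOPs-per-parameter-per-token rule
# and the 100-token answer length are rough assumptions, not measurements.
params = 175e9                    # GPT-3 scale: ~175 billion parameters
flops_per_token = 2 * params      # rough cost of generating one token
tokens = 100                      # a short answer of ~100 tokens

total_flops = flops_per_token * tokens
print(f"~{total_flops:.1e} floating-point operations for one short answer")
# prints: ~3.5e+13 floating-point operations for one short answer
```

Tens of trillions of operations for a single answer is why “billions and billions of calculations” is, if anything, an understatement.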
Built on Apache TVM, OctoML is designed to take the pain out of getting ML models to production by automatically maximising model performance on any hardware and across common ML frameworks like PyTorch, TensorFlow and ONNX serialised models. Once again, it’s all about automating the deployment of the ML model itself.
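The general flavour of that automation, as exposed by the open-source Apache TVM stack OctoML builds on, looks roughly like the sketch below: import a trained model once, then let the compiler handle backend-specific optimisation for each hardware target. This is a simplified sketch rather than OctoML’s product; exact APIs vary between TVM releases, and the ONNX file path, input name and image-style input shape are placeholder assumptions.

```python
# Simplified sketch of compiling one model for different hardware targets
# with Apache TVM. The model path, input name and shape are placeholders,
# and API details differ between TVM releases.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")              # a trained model exported to ONNX
shape_dict = {"input": (1, 3, 224, 224)}          # assumed image-style input

mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# The same model definition, compiled for different backends: the compiler,
# not the developer, carries the backend-specific optimisation work.
for target in ["llvm", "cuda"]:                   # e.g. generic CPU and Nvidia GPU
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)
    lib.export_library(f"model_{target}.so")
```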
This discussion reflects a widespread trend. Analyst house Gartner recommends that leaders responsible for infrastructure operations and cloud management should, “Revise monitoring strategies by integrating AIOps platforms as part of the DevOps toolchain to enable automated pattern discovery.”
Testing software, with testing software
Back at GitLab, the company thinks it can see some light at the end of the tunnel. GitLab’s DevSecOps Survey 2021 showed greater use of ML-driven software bots at the testing phase of building software.
In 2020, just 16% of survey respondents said they had bots auto-testing their code or an AI/ML tool in place to carry out testing procedures. This year, the figure was just over 41%. All told, 25% of respondents use bots to test their code, 16% use AI/ML to review code before a human sees it and 34% are exploring the idea of AI/ML but haven’t done anything about it yet.
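In practice, “bots auto-testing code” for ML work can be as mundane as model checks running in the pipeline like any other unit test. The snippet below is a generic, hypothetical example of such a gate, not a GitLab feature: the model artefact path, evaluation dataset and 0.9 accuracy floor are all placeholders.

```python
# Hypothetical pytest-style check a CI bot could run on every merge request.
# The model artefact, evaluation data and 0.9 threshold are placeholders.
import joblib
import pandas as pd
from sklearn.metrics import accuracy_score

MODEL_PATH = "artifacts/model.joblib"
EVAL_DATA = "data/holdout.csv"


def test_model_meets_accuracy_floor():
    model = joblib.load(MODEL_PATH)
    df = pd.read_csv(EVAL_DATA)
    X, y = df.drop(columns=["label"]), df["label"]

    # Fail the pipeline (and block the merge) if accuracy regresses below
    # the agreed floor, with no human review needed to catch it first.
    assert accuracy_score(y, model.predict(X)) >= 0.9
```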
If enterprise ML models are more prevalently tested automatically, using ML itself to drive the bots that perform the tests, then the decision engines manifesting themselves in the upper-tier AI systems being created may be developed faster, at lower cost and with greater smarts.