A look at the new breed of AI chips

A $30m injection of cash into the UK’s Bristol-based AI chip startup Graphcore in July this year was another clear indication of how AI chip development is disrupting the traditional CPU players. Earlier in the same month, leading AI chipmaker NVIDIA struck a deal with Chinese web services firm Baidu with a promise to “accelerate AI”, while Harry Shum, Microsoft’s executive vice president of AI and Research, recently talked about how Microsoft is developing its own AI chip for HoloLens. Throw in Google’s TPU, Apple’s hints at an AI chip for its smartphones and a host of startups, including Austin, Texas-based Mythic, and you have a processor bunfight brewing.

It may prove to be a defining period for the technology industry, and for traditional processing giant Intel in particular. Moore’s Law has stood Intel in good stead over the years, multiplying transistor counts to boost performance and making it the dominant player in datacentre servers. There is, of course, still plenty of life in that particular beast, but the accelerating drive towards a data-driven world is piling pressure on existing technologies and chip designs. As a result, Intel’s dominance is under threat, and although the playing field may not have been entirely levelled, there’s enough of a shift for new entrants to take a shot.

As you would expect, Intel has not been idle. It launched its own Nervana AI chip last year and in March acquired Israeli chipmaker Mobileye for $15bn, but it is having to play catch-up. Other businesses, such as NVIDIA, have stolen a march, recognising as early as 2006 that a new approach was needed. According to Marc Hamilton, vice president of solutions architecture and engineering at NVIDIA, the company’s work 11 years ago in developing CUDA, a parallel computing platform and application programming interface, has been the rock on which the business is now building its AI-targeted products.

“We spend around $3bn a year on R&D and the vast majority of that benefits AI,” says Hamilton. “To be a success in AI you need to do things differently. You can’t rely on existing commodity processors.”
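
To make that contrast concrete, here is a minimal, hypothetical sketch of what a parallel computing platform such as CUDA gives developers in practice: the same matrix multiply dispatched to thousands of GPU cores with a one-line device change. PyTorch, which builds on CUDA, is used purely as an illustration and is not mentioned by Hamilton.

```python
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

c_cpu = a @ b  # runs on the commodity CPU

if torch.cuda.is_available():
    # CUDA handles kernel launch, scheduling and GPU memory behind this call,
    # so the developer never writes the parallel code by hand.
    c_gpu = (a.cuda() @ b.cuda()).cpu()
```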

So, what does a $3bn R&D fund buy you? How many AI-related patents does the business hold? Hamilton didn’t talk numbers, but pointed to NVIDIA’s Pascal architecture products and its work to develop improved ‘inferencing’ speeds. Its TensorRT inference engine, he says, is ten times faster than anything else at the inference step of a deep learning process, something which is already attracting early users of machine learning technologies.

“Facebook is using deep learning for identifying pictures, but now it wants to look at video content too, which is much more demanding,” adds Hamilton. “You need more inferencing capability more quickly, and this is where you need a hardware and software approach to the network. Typically, people have used some software on either side of the process, to ‘train’ and then ‘run’ the deep learning network, but they are very different processes with different optimisations.”
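
Hamilton’s point about ‘train’ and ‘run’ being different processes can be made concrete with a short, hypothetical sketch (assuming PyTorch): training needs gradient tracking, extra memory and weight updates, while inference is a forward pass only, which is what dedicated inference hardware and software are optimised to accelerate.

```python
import torch
import torch.nn as nn

# A toy classifier standing in for a deep learning network.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training step: forward pass, backward pass and weight update,
# all of which require gradients and additional memory.
model.train()
inputs = torch.randn(32, 128)            # a dummy batch of 32 examples
targets = torch.randint(0, 10, (32,))
loss = loss_fn(model(inputs), targets)
optimiser.zero_grad()
loss.backward()
optimiser.step()

# Inference ("run") step: a forward pass only, with no gradients tracked.
# This is the stage that inference engines specialise in speeding up.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 128)).argmax(dim=1)
```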

As well as the big social platforms and cloud companies, such as Facebook and Baidu in China, Hamilton believes this approach should help with other demanding use cases, such as healthcare and genome sequencing. Through its partnership with Baidu, NVIDIA is also looking at driverless cars and mapping, enabling Chinese developers to access its GPUs via Baidu’s public cloud. The aim is to accelerate development using the NVIDIA infrastructure, a clever tactic, of course, and one that should feed NVIDIA’s hardware and software development.

 

New chips on the block

Clearly AI is developing along a number of key lines, but ultimately it is the collaboration between datacentre and device that will enable applications to truly perform. With on-device hardware in smartphones, drones and vehicles improving, and with the ability to reach huge banks of ‘thinking’ datacentres via the cloud, we should start to see considerable strides forward in machine learning products and services.

More accurate data analytics and decision-making are regarded as a key goal, but as Kenneth Heafield, a lecturer in machine translation at the University of Edinburgh’s School of Informatics, warns, we shouldn’t jump from “more accurate” to “more intelligent”. These AI processors, after all, are about speed at reduced power consumption, and at the moment they are limited in what they can achieve, due in large part to factors outside the chipmakers’ control.

While Heafield admits that the applications are already broad – “speech recognition, spam detection, recommendation systems and targeted ads, to name a few” – serving today’s workloads is not the primary objective of some of the newcomers. Graphcore, for one, sees its role as pushing the boundaries and helping to redefine the processor for tomorrow, not just for today.

“Inventing a new processor architecture and rendering it as a leading-edge chip is difficult and expensive. Building all the software on top to make it useful and easy to use is similarly difficult and even more expensive, so unless it’s going to be useful for 20 years or more, don’t bother,” says Nigel Toon, Graphcore’s co-founder and chief executive. “Our IPU has to outperform GPUs and CPUs at all the tasks, but perhaps more importantly, it has to provide a flexible platform for the discoveries yet to come.”

Toon adds that one of Graphcore’s aims is to “democratise machine intelligence” by doing more with less, making everything more affordable. Cost is an issue: Toon agrees that the cost of the hardware currently needed for research is prohibitive and effectively locks out all but the best-funded corporations and universities, limiting the capacity to innovate.

“Computational power and cost are still bottlenecks for AI development,” says Toon. Whether Graphcore’s IPU technology can change this remains to be seen but there is clearly something in it. The funding alone speaks volumes.

“Although there has been great progress in the last few years, with GPUs speeding up training time, they only work efficiently with certain use cases, so research has been distorted in a narrow direction,” adds Toon. “New platforms like ours, which will speed up training across a broad range of different machine learning approaches, will enable people to explore new models or re-explore areas that did show promise but which have been left behind. Many of the machine learning innovators we talk to want to try new approaches which are just not possible with GPUs, but will be possible with IPUs.”

Certainly, Graphcore is a serious challenger, one of many. If the real aim is to accelerate the development of AI and realise its true potential, then the disruption is essential, if only to give the likes of Intel, NVIDIA and Google a few acquisition targets somewhere down the fast-developing AI road.

Marc Ambasna-Jones

Marc Ambasna-Jones is a UK-based freelance writer and media consultant and has been writing about business and technology since 1989.
