Reducing energy use in neural networks

With a growing number of technologies focused on reducing energy use in neural network systems, how can IT professionals best use such systems to reduce energy use across IT, IoT and IIoT deployments?


Although advanced neural networks continue to dramatically improve the capabilities of artificial intelligence systems, they are associated with substantial energy use. In an effort to address this problem a growing number of organisations are focused on the creation of technologies designed to reduce energy use in the training and operation of such systems.

Winning ticket

The recent record-breaking predictive performance achieved by deep neural networks (DNNs) has prompted growing demand to bring DNN-powered intelligence into numerous applications and devices. However, training a state-of-the-art DNN model often requires considerable energy and carries significant financial and environmental costs. For example, a recent report shows that training a single DNN can cost over $10,000 and emit as much carbon as five cars over the course of their lifetimes, limiting the rapid development of DNN innovations and raising a variety of environmental concerns.

In an attempt to remedy this situation, several organisations are developing more energy-efficient approaches. One of the most interesting recent initiatives is a joint Rice University and Texas A&M University project that has developed a novel energy-efficient method for training DNNs, known as ‘EB Train,’ based on so-called ‘Early-Bird’ (EB) tickets. As Dr. Zhangyang ‘Atlas’ Wang, Assistant Professor of Electrical and Computer Engineering at The University of Texas at Austin (until recently, Assistant Professor of Computer Science and Engineering at Texas A&M University), explains, the Early-Bird Ticket algorithm builds on an important recent finding called the lottery ticket hypothesis, which holds that a dense and randomly initialised deep neural network (DNN) contains a small but critical subnetwork, known as a ‘winning ticket,’ that can be ‘trained alone to achieve a comparable accuracy to the former in a similar number of iterations.’
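The core intuition can be illustrated with a short, simplified sketch. The Python/PyTorch snippet below is not the project's actual implementation; it assumes channel pruning driven by BatchNorm scaling factors and hypothetical parameters (train_one_epoch, prune_ratio, eps). The idea it illustrates is that the pruning mask stabilises early in training, so full-network training can stop at that point and only the surviving ‘early-bird ticket’ subnetwork need be trained further.

```python
import torch
import torch.nn as nn

def channel_mask(model: nn.Module, prune_ratio: float) -> torch.Tensor:
    """Binary mask over BatchNorm scaling factors: keep the largest (1 - prune_ratio) channels."""
    scales = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    k = int(len(scales) * prune_ratio)
    threshold = scales.sort().values[k] if k > 0 else scales.min() - 1
    return (scales > threshold).float()

def mask_distance(m1: torch.Tensor, m2: torch.Tensor) -> float:
    """Fraction of channels whose keep/prune decision differs between two masks."""
    return (m1 != m2).float().mean().item()

def find_early_bird(model, train_one_epoch, prune_ratio=0.5, eps=0.1, max_epochs=30):
    """Train the dense network only until successive pruning masks stop changing,
    then return the stabilised mask (the 'early-bird ticket') and the epoch found."""
    prev_mask = None
    for epoch in range(max_epochs):
        train_one_epoch(model)                      # hypothetical user-supplied epoch step
        mask = channel_mask(model, prune_ratio)
        if prev_mask is not None and mask_distance(mask, prev_mask) < eps:
            return mask, epoch                      # mask has stabilised; stop early
        prev_mask = mask
    return prev_mask, max_epochs
```

In practice the published method tracks mask distances over a window of epochs and across several pruning ratios, but the energy saving comes from the same source shown here: most of the expensive dense-network training is skipped once the ticket has emerged.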

Figure 4, Drawing Early-Bird Tickets: Towards More Efficient Training of Deep Networks (image: Texas A&M)
