Due to the ever-increasing number of parameters in deep learning models, training them has become a daunting task. As a concrete statistic, Silver and Hassabis reported in 2017 that training AlphaGo Zero, a neural network for playing the board game Go, took 40 days on 4 tensor processing units. This corresponds to an energy consumption of about 1 MWh, roughly a fourth of the yearly electricity consumption of an average Belgian household. For a board game.
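The 1 MWh figure can be reproduced with a back-of-the-envelope estimate, assuming a power draw of roughly 250 W per tensor processing unit (an assumed value, not taken from the original report):

\[
4 \;\text{TPUs} \times 40 \;\text{days} \times 24 \;\tfrac{\text{h}}{\text{day}} \times 0.25 \;\text{kW} \;=\; 960 \;\text{kWh} \;\approx\; 1 \;\text{MWh}.
\]

With an average Belgian household consuming on the order of 4 MWh of electricity per year, a single such training run indeed amounts to about a fourth of a household's annual usage.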
With the Fit for 55 pledge, the European Union has committed to cutting its net greenhouse gas emissions by at least 55% by 2030, relative to 1990 levels. Since machine learning contributes significantly, and increasingly, to energy demand, meeting this pledge requires major advances in the energy efficiency of deep learning, among other measures. Following the reporting framework presented by Henderson et al. (2020), we commit to fairly reporting the energy efficiency of our novel scheme. TOC4Deep targets cutting the energy consumption of deep learning by 50% through a proportional reduction in the number of operations performed by its algorithms. TOC4Deep thus directly supports the 13th Sustainable Development Goal, “Climate Action”, and will contribute to the EU's reduction pledges.
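As an illustration of what such reporting could look like in practice, the sketch below wraps a training run in an emissions tracker. It uses the codecarbon library as a stand-in measurement tool (Henderson et al. provide their own tracker, whose API is not reproduced here), and the train_model function and project name are hypothetical placeholders rather than part of TOC4Deep.

```python
# Minimal sketch of per-run energy/carbon reporting, assuming the
# codecarbon library as a stand-in measurement tool. `train_model`
# is a hypothetical placeholder for an actual training loop.
from codecarbon import EmissionsTracker


def train_model():
    """Placeholder for a deep learning training run."""
    pass


if __name__ == "__main__":
    tracker = EmissionsTracker(project_name="toc4deep-demo")  # hypothetical name
    tracker.start()
    try:
        train_model()
    finally:
        emissions_kg = tracker.stop()  # estimated kg CO2-eq for this run
    print(f"Estimated emissions for this run: {emissions_kg:.4f} kg CO2-eq")
```

Reporting such per-run figures alongside accuracy results would make the targeted 50% reduction in operations, and hence in energy, directly verifiable.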
TOC4Deep’s more efficient algorithms also democratize access to deep learning, as they enable users to run deep learning methods on less powerful, less expensive computers. This supports the 10th Sustainable Development Goal, “Reduced Inequalities”, by putting powerful algorithms within reach of those without access to supercomputing resources.