HOLIGRAIL: HOLIistic approaches to GReener model Architectures for Inference and Learning

Accelerators for artificial intelligence algorithms currently consume far more power than they should, particularly during the training phase. The many aspects of this problem are too often considered in isolation. Building on the complementary expertise of the partners, and on integration into the rich community built by the PEPR on foundations of frugal AI, we will instead pursue a holistic, global understanding of all these issues in established and upcoming AI algorithms. To this end, we will combine more compact and efficient number representations, hardware-aware training algorithms that promote structured sparsity, coding compactness, and tensor transformations, with their adaptation to efficient hardware mechanisms and compiler optimizations. Our ambition is to deliver breakthroughs in efficiency when running inference and training algorithms on specialized hardware. The results are intended to be integrated into development solutions for embedded systems, in particular the DeepGreen/AIDGE national platform for deploying deep learning in embedded systems.
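To make the idea of "more compact and efficient number representations" concrete, here is a minimal sketch of symmetric 8-bit quantization, one of the compression techniques the project targets. This is illustrative only, not the project's actual method; all function names are ours.

```python
def quantize_int8(weights):
    # Symmetric per-tensor int8 quantization: store weights as integers
    # in [-127, 127] plus a single float scale. Illustrative sketch only;
    # deployed schemes (per-channel, asymmetric, ...) are more involved.
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the integer codes.
    return [v * scale for v in q]

weights = [0.42, -1.337, 0.0, 0.9]
q, s = quantize_int8(weights)
restored = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

With 8-bit codes replacing 32-bit floats, storage shrinks by roughly 4x, and the round-trip error stays bounded by half a quantization step (scale / 2).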

Keywords: deep neural network compression, number representations, arithmetic operators and kernels, entropic compression, quantization, compiler optimization, pruning, tensor methods, distillation techniques, error evaluation, low-precision training
