Performance study in the computation of distributed generative adversarial networks

Principal investigators
Arsany Guirguis, Ph.D. student, DCL lab, École Polytechnique Fédérale de Lausanne
Erwan Le Merrer, Ph.D., WIDE research team, Inria

Abstract
The objective of this collaboration is to investigate the efficient distributed computation of generative adversarial networks (GANs) over a set of client devices. In particular, we will study the implementation of GANs in the federated learning setup, where a server acts as a central point for model synchronization (a minimal sketch of this synchronization loop is given after the list below).
In this model, client data can remain on the clients' devices, which increases privacy compared to approaches that require collecting all the data in a single location. Several angles are of interest in this collaboration:
(i) The fault tolerance aspect has not been studied in a constructive way, which leaves room for algorithms enabling robust learning.
(ii) The training of GANs is both data-intensive and compute-intensive; the computation-communication trade-off that arises in this approach is yet to be understood, so as to enable an accurate comparison with centralized or fully decentralized approaches (such as gossip-based ones). A rough per-round communication estimate is sketched after this list.
(iii) The intrinsic differences between GANs and regular deep learning models, when it comes to distributed computation, are to be fully characterized in order to make efficient proposals.
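
To make the setup concrete, the following is a minimal sketch of server-coordinated GAN training in the federated learning setting, using FedAvg-style weight averaging as the synchronization step. It is illustrative only: the model sizes, optimizers, and the names Generator, Discriminator, local_step, and fed_avg are assumptions for the sketch, not the project's actual design.

```python
# Sketch of one federated round of GAN training (FedAvg-style aggregation).
# All architectures and hyperparameters below are illustrative assumptions.
import copy
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=16, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, in_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, x):
        return self.net(x)

def local_step(gen, disc, data, z_dim=16):
    """One adversarial update on a client's private batch (the data never leaves the device)."""
    bce = nn.BCEWithLogitsLoss()
    opt_d = torch.optim.SGD(disc.parameters(), lr=0.01)
    opt_g = torch.optim.SGD(gen.parameters(), lr=0.01)
    n = data.size(0)
    # Discriminator update: separate real samples from generated ones.
    fake = gen(torch.randn(n, z_dim)).detach()
    loss_d = bce(disc(data), torch.ones(n, 1)) + bce(disc(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator update: try to fool the discriminator.
    loss_g = bce(disc(gen(torch.randn(n, z_dim))), torch.ones(n, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

def fed_avg(global_model, client_models):
    """Server step: average client weights into the global model (FedAvg)."""
    avg = copy.deepcopy(client_models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key] for m in client_models]).mean(dim=0)
    global_model.load_state_dict(avg)

# One synchronization round over 3 simulated clients.
server_g, server_d = Generator(), Discriminator()
clients = [(copy.deepcopy(server_g), copy.deepcopy(server_d),
            torch.randn(8, 32)) for _ in range(3)]     # each client holds private data
for g, d, data in clients:                             # local training on-device
    local_step(g, d, data)
fed_avg(server_g, [g for g, _, _ in clients])          # server aggregates generators
fed_avg(server_d, [d for _, d, _ in clients])          # ... and discriminators
```

Note that, unlike regular federated deep learning, two coupled models (generator and discriminator) are synchronized here, which is one source of the GAN-specific costs and failure modes mentioned in points (i) and (iii).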
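Regarding point (ii), a hypothetical back-of-the-envelope estimate of the per-round communication volume can be obtained by counting exchanged parameters; the parameter counts below are illustrative assumptions, not measurements.

```python
# Hypothetical per-round communication estimate for server-based synchronization.
def round_traffic_bytes(n_params: int, n_clients: int, bytes_per_param: int = 4) -> int:
    """Each round, every client downloads and then uploads the full model."""
    return 2 * n_clients * n_params * bytes_per_param

# e.g., a 5M-parameter GAN synchronized across 100 clients:
print(round_traffic_bytes(5_000_000, 100) / 1e9, "GB per round")  # -> 4.0 GB per round
```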

Website: under construction
Keywords: Machine learning, federated learning, generative adversarial networks, distributed computing, fault tolerance
