Intrinsic disentanglement: an invariance view for deep generative models

Abstract

Deep generative models such as Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs) are important tools to capture and investigate the properties of complex empirical data. However, the complexity of their inner elements makes their functioning challenging to interpret and modify; in this respect, these architectures behave as black-box models. In order to better understand the function of such networks, we analyze the modularity of these systems by quantifying the disentanglement of their intrinsic parameters. This concept relates to a notion of invariance to transformations of internal variables of the generative model, recently introduced in the field of causality. Our experiments on the generation of human faces with VAEs support that modularity between weights distributed over layers of the generator architecture is achieved to some degree, and can be used to better understand the functioning of these architectures. Finally, we show that modularity can be enhanced during optimization.
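The abstract's core idea, probing modularity by intervening on internal variables of a generator and checking how localized the effect on the output is, can be illustrated with a minimal sketch. This is not the paper's implementation; the toy two-layer generator, the choice of intervened unit, and the influence measure are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's code): intervene on one hidden unit of
# a toy feed-forward generator and measure the per-dimension influence on the
# output. A modular unit would affect only a small subset of output dimensions.

rng = np.random.default_rng(0)

# Toy two-layer generator: latent z -> hidden h -> output x
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(16, 8))

def generate(z, intervention=None):
    """Run the generator; optionally clamp one hidden unit to a fixed value."""
    h = np.tanh(W1 @ z)
    if intervention is not None:
        unit, value = intervention
        h = h.copy()
        h[unit] = value  # counterfactual intervention on an internal variable
    return W2 @ h

z = rng.normal(size=4)
x_ref = generate(z)                            # unperturbed output
x_int = generate(z, intervention=(3, 2.0))     # output after intervening on unit 3

# Influence of the intervention on each output dimension
influence = np.abs(x_int - x_ref)
print(influence.round(3))
```

In the actual architectures studied (VAE generators for face images), the intervened variables would be channels or weights inside convolutional layers, and the influence would be assessed on generated images rather than a 16-dimensional vector.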

Publication
Workshop on Theoretical Foundations and Applications of Deep Generative Models at ICML
Remy Sun
Research scientist

I am a research scientist (ISFP) in the MAASAI team at Inria Sophia Antipolis, working on the injection of knowledge into neural networks.