Rémy Sun

PhD student at Sorbonne University

Sorbonne University

Thales Land and Air Systems


My research interests center on semi-supervised learning, multitask learning, and incremental learning. Most recently, I have studied how mixing data augmentations can be leveraged to improve classifier performance. I am currently completing an industry-funded PhD (CIFRE with Thales Land and Air Systems) on these topics in Sorbonne University's Machine Learning and Information Access (MLIA) team, under the supervision of Matthieu Cord, Nicolas Thome, Clément Masson, and Gilles Hénaff.
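To give a rough idea of what "mixing data augmentations" means, here is a minimal sketch in the spirit of MixUp: two labeled samples are combined into one via a convex combination of their inputs and (one-hot) labels. The function name, defaults, and NumPy formulation are illustrative assumptions, not the implementation used in the thesis work:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix two labeled samples: convex combination of inputs and one-hot labels.

    The mixing ratio lam is drawn from a Beta(alpha, alpha) distribution,
    as in the original MixUp formulation.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2   # blended input image
    y = lam * y1 + (1 - lam) * y2   # correspondingly blended soft label
    return x, y
```

Variants such as CutMix or MixMo differ in *how* the inputs are combined (e.g. pasting patches, or feeding mixed inputs to separate subnetworks), but share this blend-inputs-and-labels structure.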


  • Machine Learning
  • Deep Learning
  • Computer Vision
  • Semi-Supervised Learning
  • Multitask Learning
  • Mixing Data Augmentations
  • Explainability


  • PhD in Applied Mathematics, Ongoing

    Sorbonne University

  • Magistère in Computer Science, 2019

    ENS Rennes

  • MSc in Applied Mathematics "Mathématiques, Vision, Apprentissage", 2019

    ENS Paris-Saclay

  • Semester abroad at EPFL, 2017


  • BSc in Computer Science, 2016

    ENS Rennes



PhD Student / Research Engineer under a CIFRE scholarship

Thales Land and Air Systems and Sorbonne University

Nov 2019 – Present Elancourt and Paris, France
PhD on the use of mixing data augmentations for image classification.

Internship: Classification of long term EEGs

Conservatoire National des Arts et Métiers

Apr 2019 – Sep 2019 Paris, France
4.5-month internship on deep learning techniques applied to EEG classification tasks, under the supervision of Nicolas Thome.

Internship: Causal analysis of generative neural networks

Empirical Inference Department - Max Planck Institute for Intelligent Systems

Jan 2018 – Jun 2018 Tübingen, Germany
5.5-month internship on the Independence of Cause and Mechanism postulate in generative neural networks, under the supervision of Michel Besserve.

Internship: Detecting domain shifts from classifier outputs

Computer Vision and Machine Learning Group - Institute of Science and Technology Austria

Aug 2017 – Dec 2017 Klosterneuburg, Austria
3-month internship on detecting domain shifts impacting classifier performance, under the supervision of Christoph Lampert.

Internship: Deep learning for meaningful latent representations of peptide sequences

Dyliss Project - IRISA

May 2016 – Jul 2016 Rennes, France
8-week exploratory internship on deep learning applied to peptide sequences, under the supervision of François Coste.

Recent Publications

Adapting Multi-Input Multi-Output schemes to Vision Transformers

Multi-input multi-output models have proven capable of providing ensembling for free in convolutional neural networks by training …

Towards efficient feature sharing in MIMO architectures

Multi-input multi-output architectures propose to train multiple subnetworks within one base network and then average the subnetwork …

MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks

Recent strategies achieved ensembling “for free” by fitting concurrently diverse subnetworks inside a single base network. …

Swapping Semantic Contents for Mixing Images

Deep architectures have proven capable of solving many tasks provided a sufficient amount of labeled data. In fact, the amount of …

Counterfactuals uncover the modular structure of deep generative models

Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data. …

KS(conf): A Light-Weight Test if a ConvNet Operates Outside of Its Specifications

Good classification performance can only be expected if systems run under the specific conditions, in particular data distributions, …