Abstract
Machine learning models have achieved stunning successes in the IID setting. Beyond this setting, however, existing models still suffer from two grand challenges: they are brittle under covariate shift and inefficient at knowledge transfer. In this talk, I will introduce three approaches to tackling these challenges: self-supervised learning, causal representation learning, and test-time training. More specifically, I will share our recent findings on (i) incorporating prior knowledge of negative examples into representation learning, (ii) promoting causal invariance and structure by making use of data from multiple domains, and (iii) exploiting extra information beyond model parameters for effective test-time adaptation. I will show how these techniques enable deep neural networks to generalize more robustly and adapt more efficiently to new environments in motion and vision contexts. Finally, I will discuss the implications of these results for the design, training, and deployment of deep models for domain generalization and adaptation. Comments and feedback are more than welcome.
Paper Links
Social NCE: Contrastive Learning of Socially-Aware Motion Representations, ICCV’21
TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive? NeurIPS’21
Collaborative Sampling in Generative Adversarial Networks, AAAI’20
Bio
Yuejiang Liu is a PhD student at EPFL, advised by Alexandre Alahi. His research interests center on representation learning and its applications to autonomous agents. He is particularly excited about unsupervised learning for robust generalization and efficient adaptation.
The Zoom link will be sent the day before the lecture.