Abstract:
Self-supervised learning (SSL) consists of training neural networks without any human annotations. Typically, neural networks require large amounts of annotated data, which has limited their application in fields where obtaining such annotations is expensive or difficult. Moreover, manual annotations are tailored to a specific task and reflect the annotators' own biases, which can result in noisy and unreliable training signals. Training systems without annotations could therefore lead to better, more general, and more robust representations. In this talk, I will present several contributions to the fast-growing field of SSL made during my PhD. I will conclude by discussing open questions and challenges for the future of SSL.
The Zoom link will be sent the day before the lecture.