Self-Supervised Learning

{Self-Prediction; Contrastive Learning}

Self-supervised representation learning aims to obtain robust representations of samples from raw data without expensive labels or annotations. Early methods in this field focused on defining pre-training objectives built around a surrogate (pretext) task on a domain with ample weak supervision labels, typically generated automatically from the raw data itself, as in the sketch below.
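
A minimal sketch of one such pretext task, rotation prediction, where the "labels" are derived from the data itself rather than annotated by hand. The `encoder` and its `feature_dim` are hypothetical placeholders for whatever backbone is being pre-trained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_rotation_batch(images):
    """Create pseudo-labels from the data itself: each image is rotated by
    0/90/180/270 degrees and the rotation index becomes the label."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

class RotationPretextModel(nn.Module):
    def __init__(self, encoder, feature_dim):
        super().__init__()
        self.encoder = encoder                 # backbone whose features we want to reuse later
        self.head = nn.Linear(feature_dim, 4)  # predicts the rotation index

    def forward(self, x):
        return self.head(self.encoder(x))

# Usage with a toy (hypothetical) encoder:
# encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
# model = RotationPretextModel(encoder, feature_dim=128)
# x = torch.randn(8, 3, 32, 32)
# inputs, targets = make_rotation_batch(x)
# loss = F.cross_entropy(model(inputs), targets)
```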

Encoders trained to solve such tasks are expected to learn general features that can transfer to downstream tasks that do require expensive annotations, such as image classification.
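
One common way to test this transfer is a linear probe: freeze the pre-trained encoder and fit only a small linear classifier on the labeled downstream data. The sketch below assumes a generic `encoder` producing `feature_dim`-dimensional features; all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_probe(encoder, feature_dim, num_classes, labeled_x, labeled_y, steps=100):
    """Fit a linear classifier on top of frozen pre-trained features."""
    for p in encoder.parameters():
        p.requires_grad = False               # encoder stays fixed
    probe = nn.Linear(feature_dim, num_classes)
    opt = torch.optim.SGD(probe.parameters(), lr=0.1)
    for _ in range(steps):
        with torch.no_grad():
            feats = encoder(labeled_x)        # reuse the pre-trained representation
        loss = F.cross_entropy(probe(feats), labeled_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return probe

# Usage with toy data and a toy (hypothetical) encoder:
# encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
# x, y = torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))
# probe = linear_probe(encoder, feature_dim=128, num_classes=10, labeled_x=x, labeled_y=y)
```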