A Framework For Contrastive Self-Supervised Learning And Designing A New Approach
PyTorch Lightning: https://github.com/PyTorchLightning/pytorch-lightning
Contrastive learning has revolutionized the field of self-supervised image representation learning and has recently been adapted to the video domain.
One of the greatest advantages of contrastive learning is that it allows us to flexibly define powerful loss objectives as long as we can find a reasonable way to formulate positive and negative samples to contrast.
However, existing approaches rely heavily on short-range spatio-temporal salience to form clip-level contrastive signals, which limits their use of global context.
In this paper, we propose a new video-level contrastive learning method that uses video segments to formulate positive pairs.
Our formulation captures the global context of a video and is therefore robust to temporal content changes.
We also incorporate a temporal order regularization term to enforce the inherent sequential structure of videos.
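As a rough illustration of the two ingredients above, the sketch below samples one short clip from each segment of a video, so that positives span the whole video rather than a single short window, and adds a simple order-prediction term. The sampling scheme, function names, and loss form are assumptions for illustration, not the exact VCLR formulation.

```python
import torch

def sample_segment_clips(video_frames, num_segments=3, clip_len=8):
    # video_frames: (T, C, H, W); assumes T >= num_segments * clip_len
    T = video_frames.shape[0]
    seg_len = T // num_segments
    clips = []
    for s in range(num_segments):
        offset = torch.randint(0, max(seg_len - clip_len, 1), (1,)).item()
        start = s * seg_len + offset
        clips.append(video_frames[start:start + clip_len])
    # each clip comes from a different segment, so the set of positives
    # covers the whole video rather than one short window
    return torch.stack(clips)              # (num_segments, clip_len, C, H, W)

def temporal_order_loss(order_logits, true_order):
    # order_logits: (B, num_permutations); true_order: (B,) permutation ids.
    # A plain cross-entropy over segment orderings stands in for the
    # temporal order regularization term mentioned above.
    return torch.nn.functional.cross_entropy(order_logits, true_order)
```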
YADIM = AMDIM + CPC
Encoder
AMDIM's Encoder.
ResNet-50 also works.
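A minimal sketch of such an encoder, assuming a torchvision ResNet-50 backbone truncated before the pooling and classification layers so it returns the last spatial feature map:

```python
import torch.nn as nn
from torchvision.models import resnet50

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet50()               # randomly initialised ResNet-50
        # drop avgpool and fc so the output is the last spatial feature map
        self.features = nn.Sequential(*list(backbone.children())[:-2])

    def forward(self, x):                   # x: (B, 3, H, W)
        return self.features(x)             # (B, 2048, H/32, W/32)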
Representation Extraction
Encode multiple versions of an image and use the last feature map to make a comparison.
No projection head or other complicated comparison strategy.
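A hedged sketch of that extraction step, assuming two augmented views and global average pooling of the last feature map into a single vector per image; the pooling choice is an assumption, since AMDIM itself compares local feature positions:

```python
import torch

def extract_representations(encoder, view_a, view_b):
    # two augmented views of the same batch of images
    feat_a = encoder(view_a)                # (B, C, h, w) last feature map
    feat_b = encoder(view_b)
    # pool to one vector per image; no projection head is applied
    z_a = feat_a.mean(dim=[2, 3])           # (B, C)
    z_b = feat_b.mean(dim=[2, 3])
    return z_a, z_b
```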
Similarity Metric
Stick to dot product.
NCE loss
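Putting the last two points together, a minimal InfoNCE-style sketch that scores pairs with a plain dot product; the temperature value and in-batch negatives are assumptions, not the exact loss used here:

```python
import torch
import torch.nn.functional as F

def nce_loss(z_a, z_b, temperature=0.1):
    # z_a, z_b: (B, D) representations of two views of the same images.
    # Matching rows are positives; every other row in the batch is a negative.
    logits = z_a @ z_b.t() / temperature    # (B, B) dot-product similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)
```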
Extensive experiments show that our video-level contrastive learning framework (VCLR) outperforms the previous state of the art on five video datasets for downstream action classification, action localization, and video retrieval.