01_Self-training
In this semi-supervised formulation, a model is trained on labeled data and used to predict pseudo-labels for the unlabeled data. The model is then trained on both ground-truth labels and pseudo-labels simultaneously.
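Below is a minimal sketch of this pseudo-labeling loop in PyTorch on synthetic data. The model architecture, confidence threshold, and pseudo-label loss weight are illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal self-training / pseudo-labeling sketch (assumed hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic data: 100 labeled and 400 unlabeled 20-dim samples, 3 classes.
x_labeled = torch.randn(100, 20)
y_labeled = torch.randint(0, 3, (100,))
x_unlabeled = torch.randn(400, 20)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Step 1: supervised training on the labeled set only.
for _ in range(200):
    optimizer.zero_grad()
    F.cross_entropy(model(x_labeled), y_labeled).backward()
    optimizer.step()

# Step 2: predict pseudo-labels for the unlabeled set, keeping only
# confident predictions (0.8 threshold is an assumption).
model.eval()
with torch.no_grad():
    probs = F.softmax(model(x_unlabeled), dim=1)
    conf, pseudo_labels = probs.max(dim=1)
    mask = conf > 0.8

# Step 3: retrain on ground-truth labels and pseudo-labels simultaneously,
# down-weighting the pseudo-labeled term (alpha is an assumption).
model.train()
alpha = 0.5
for _ in range(200):
    optimizer.zero_grad()
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)
    if mask.any():
        pseudo_loss = F.cross_entropy(model(x_unlabeled[mask]), pseudo_labels[mask])
    else:
        pseudo_loss = torch.tensor(0.0)
    (sup_loss + alpha * pseudo_loss).backward()
    optimizer.step()
```

In practice, Steps 2 and 3 are typically iterated, regenerating pseudo-labels with the improved model each round.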
0) Overview:
Dataset:
Metrics:
1) Papers:
Survey
Self-training
[Pseudo-label] D. H. Lee, “Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks”, in ICML Workshop on Challenges in Representation Learning, 2013.
[Noisy Student] Q. Xie, M. T. Luong, E. Hovy, Q. V. Le, “Self-training with Noisy Student improves ImageNet classification”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10687-10698, 2020.
[Curriculum Labeling] P. Cascante-Bonilla, F. Tan, Y. Qi, V. Ordonez, “Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning”, in AAAI, 2021.