Generative Models
Sub-tasks:
Autoencoders
Generative Adversarial Networks - GAN
Geoffrey E. Hinton and Richard S. Zemel, "Autoencoders, minimum description length, and Helmholtz free energy", in NeurIPS, 1994.
Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol, "Extracting and composing robust features with denoising autoencoders", in ICML, 2008.
Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol, "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", in JMLR, 2010.
Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros, "Context encoders: Feature learning by inpainting", in CVPR, 2016.
Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever, "Generative pre-training from pixels", in ICML, 2020.
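The denoising-autoencoder papers above (Vincent et al., 2008 and 2010) train a network to reconstruct the clean input from a corrupted copy of it. A minimal NumPy sketch of that criterion, using a hypothetical one-hidden-layer model on toy low-rank data (all sizes, noise levels, and rates are illustrative choices, not values from the papers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples that lie near a 2-D subspace of R^8.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 8))

# One-hidden-layer autoencoder (8 -> 4 -> 8): tanh encoder, linear decoder.
W_enc = rng.normal(scale=0.1, size=(8, 4))
W_dec = rng.normal(scale=0.1, size=(4, 8))
lr = 0.01  # constant factors of the gradient are folded into the rate

def mse(a, b):
    return float(np.mean((a - b) ** 2))

loss_first = mse(np.tanh(X @ W_enc) @ W_dec, X)

for step in range(500):
    # Denoising criterion: corrupt the input, reconstruct the CLEAN target.
    X_noisy = X + rng.normal(scale=0.3, size=X.shape)
    H = np.tanh(X_noisy @ W_enc)      # encoder activations
    X_hat = H @ W_dec                 # linear decoder
    err = X_hat - X                   # error against the clean input
    # Backpropagate through the two layers.
    grad_dec = H.T @ err / len(X)
    grad_H = err @ W_dec.T * (1 - H ** 2)   # tanh' = 1 - tanh^2
    grad_enc = X_noisy.T @ grad_H / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss_last = mse(np.tanh(X @ W_enc) @ W_dec, X)
print(loss_first, loss_last)
```

Evaluating reconstruction on the uncorrupted data before and after training shows the loss dropping, even though the model only ever saw noisy inputs.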
Finetuning Pretrained Transformers into Variational Autoencoders, 2021 [Paper]
Rethinking Coarse-to-Fine Approach in Single Image Deblurring [Paper]
T. Cemgil, S. Ghaisas, K. Dvijotham, S. Gowal, and P. Kohli, "The Autoencoding Variational Autoencoder", in Advances in Neural Information Processing Systems 33 (NeurIPS), 2020.
[VRAE] O. Fabius and J. R. van Amersfoort, "Variational Recurrent Auto-Encoders", in ICLR Workshop Track, 2015. [Code]
[NCP-VAE] J. Aneja, A. Schwing, J. Kautz, and A. Vahdat, "NCP-VAE: Variational autoencoders with noise contrastive priors", under review at ICLR 2021.
Coupled VAE: Improved Accuracy and Robustness of a Variational Autoencoder.
InfoVAE: Information Maximizing Variational Autoencoders
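The VAE papers above all rest on the same two ingredients: the reparameterization trick, which keeps sampled latents differentiable with respect to the encoder outputs, and a closed-form KL term against the standard-normal prior. A short NumPy sketch, with the encoder outputs simulated rather than produced by a real network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated encoder outputs for a batch of 4 inputs, latent dimension 2
# (a real encoder network would produce these from the data).
mu = rng.normal(size=(4, 2))
logvar = rng.normal(scale=0.5, size=(4, 2))

# Reparameterization trick: z = mu + sigma * eps. The randomness is
# isolated in eps, so z stays differentiable w.r.t. (mu, logvar).
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * logvar) * eps

# Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims,
# one value per sample; this is the regularizer in the ELBO.
kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=1)

print(z.shape, kl)
```

The KL term is nonnegative for any `(mu, logvar)` and vanishes only when the posterior matches the prior, which is what pulls the aggregate latent distribution toward N(0, I).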
[AAE] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey, "Adversarial Autoencoders", in arXiv preprint arXiv:1511.05644, 2015.
[ALAE] S. Pidhorskyi, D. Adjeroh, G. Doretto, "Adversarial Latent Autoencoders", in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14104-14113, 2020. [Code]
[MAE] K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. Girshick, "Masked Autoencoders Are Scalable Vision Learners", in arXiv preprint arXiv:2111.06377, 2021.
[MAEv2] C. Feichtenhofer, H. Fan, Y. Li, and K. He, "Masked Autoencoders As Spatiotemporal Learners", arXiv preprint arXiv:2205.09113, 2022.
T. Park, J. Y. Zhu, O. Wang, J. Lu, E. Shechtman, A. Efros, and R. Zhang, "Swapping Autoencoder for Deep Image Manipulation", Advances in Neural Information Processing Systems 33 (NeurIPS), 2020.
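The masked-autoencoder papers above (MAE and its spatiotemporal follow-up) feed the encoder only a small random subset of patches and reconstruct the rest. A sketch of the random patch masking step, assuming a ViT-style 14x14 grid of flattened patches and MAE's default 75% mask ratio (the function name and shapes here are illustrative, not the papers' code):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_masking(patches, mask_ratio=0.75, rng=rng):
    """Keep a random subset of patches for the encoder.

    Returns the visible patches, their (sorted) indices, and a boolean
    mask where True marks patches hidden from the encoder.
    """
    n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    ids = rng.permutation(n)            # random shuffle of patch indices
    keep_ids = np.sort(ids[:n_keep])    # first n_keep survive
    mask = np.ones(n, dtype=bool)       # True = masked out
    mask[keep_ids] = False
    return patches[keep_ids], keep_ids, mask

# 196 patches (14x14 grid), each flattened to a 768-dim vector.
patches = rng.normal(size=(196, 768))
visible, keep_ids, mask = random_masking(patches)
print(visible.shape, int(mask.sum()))
```

With 196 patches and a 75% ratio, only 49 patches reach the encoder and 147 are left for the lightweight decoder to reconstruct, which is where MAE's training-cost savings come from.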
Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation [Link] [Code] [Colab]
ViTGAN: Training GANs with Vision Transformers, 2021 [Paper]
Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer, CVPR 2021 [Paper]
Online Multi-Granularity Distillation for GAN Compression [Paper]
SofGAN: A Portrait Image Generator with Dynamic Styling [Paper]
Adaptable GAN Encoders for Image Reconstruction via Multi-type Latent Vectors with Two-scale Attentions [Paper]
Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data [Paper]
Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis [Paper]
Ensembling Off-the-shelf Models for GAN Training [Paper] [Code] [QuickRead] [Video]
[SR-GAN] C. Ledig et al., "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network", in CVPR, 2017.
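The GAN entries above all build on the same adversarial objective: the discriminator is trained with binary cross-entropy to separate real from generated samples, while the generator (in the widely used non-saturating form) is trained to make the discriminator score its samples as real. A NumPy sketch of the two losses, with the discriminator outputs simulated rather than produced by real networks:

```python
import numpy as np

rng = np.random.default_rng(3)

def bce(probs, targets):
    """Binary cross-entropy, the core of the original GAN objective."""
    eps = 1e-7
    p = np.clip(probs, eps, 1 - eps)  # guard against log(0)
    return float(-np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p)))

# Simulated discriminator outputs D(x) in (0, 1) for a batch of 8:
d_real = rng.uniform(0.6, 0.9, size=8)   # on real samples
d_fake = rng.uniform(0.1, 0.4, size=8)   # on generated samples

# Discriminator loss: push D(x_real) -> 1 and D(G(z)) -> 0.
d_loss = bce(d_real, np.ones(8)) + bce(d_fake, np.zeros(8))

# Generator loss (non-saturating form): push D(G(z)) -> 1.
g_loss = bce(d_fake, np.ones(8))

print(d_loss, g_loss)
```

In a real training loop these two losses are minimized in alternation, each with respect to its own network's parameters; the non-saturating generator loss is preferred over the original minimax form because it gives stronger gradients early in training, when the discriminator easily rejects generated samples.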
Medical Image
References:
What are Diffusion Models? [Link]
[SSGAN] Semi-supervised learning GAN [Link]
Keras GAN [Link]
Benchmark VAE: https://github.com/clementchadebec/benchmark_VAE
GAN Inversion:
https://sertiscorp.medium.com/gan-inversion-a-brief-walkthrough-part-i-bc2ee1b73253
https://sertiscorp.medium.com/gan-inversion-a-brief-walkthrough-part-ii-e192513b89ae
https://sertiscorp.medium.com/gan-inversion-a-brief-walkthrough-part-iii-da87c28f9e62
https://sertiscorp.medium.com/gan-inversion-a-brief-walkthrough-part-iv-a72bc713e3c8
https://www.paperdigest.org/2020/04/recent-papers-on-generative-adversarial-network/
https://github.com/ageron/handson-ml2/blob/master/17_autoencoders_and_gans.ipynb
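The GAN-inversion walkthroughs linked above revolve around recovering a latent code whose generated image matches a given target. A toy sketch of the optimization-based family of methods, with a hypothetical fixed linear map standing in for a pretrained generator (a real inversion would backpropagate through the generator network and use a perceptual loss):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in "generator": a fixed linear map from a 4-D latent to a 16-D "image".
G = rng.normal(size=(4, 16))
z_true = rng.normal(size=4)
x_target = z_true @ G            # the image we want to invert

# Optimization-based inversion: gradient descent on z alone, generator frozen,
# minimizing the squared reconstruction error ||G(z) - x_target||^2.
z = np.zeros(4)
lr = 0.005
for _ in range(3000):
    err = z @ G - x_target
    z -= lr * 2 * (G @ err)      # gradient of the squared error w.r.t. z

recon_err = float(np.sum((z @ G - x_target) ** 2))
print(recon_err)
```

The same structure carries over to real GAN inversion: only the latent (and sometimes noise inputs) are optimized, the generator weights stay fixed, and encoder-based methods simply amortize this optimization into a learned initial guess.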