[SRCNN]
[FSRCNN]
[LapSRN]
[CARN]
[IMDN]
[RLFN]
[FMEN]
[SwinIR]
[Swin2SR]
[AsConvSR]
- SRCNN [13] is the first work to apply deep learning to SISR.
- FSRCNN [14] significantly accelerates the SISR network by taking the original low-resolution image as input without bicubic pre-interpolation, using smaller convolution kernels, and performing upsampling with a deconvolution layer at the final stage of the network (a minimal sketch of this post-upsampling design follows this list).
- LapSRN [27] progressively reconstructs the sub-band residuals of high-resolution images using the Laplacian pyramid.
- CARN [2] further improves efficiency through a cascading residual network design with group convolutions.
- IMDN [21] proposes the information multi-distillation block with a contrast-aware channel attention (CCA) layer based on the information distillation mechanism, while RFDN [32] refines the IMDN architecture by proposing the residual feature distillation block (RFDB); a sketch of the distillation-plus-CCA idea appears after this list.
- RLFN [25] redesigns the RFDB, discarding the feature distillation branches and adding more channels to compensate, which achieves higher inference speed and better performance with fewer parameters.
- FMEN [15] expands the optimization space during training with re-parameterizable building blocks [12] that add no extra inference time (see the re-parameterization sketch after this list).
- SwinIR [29] proposes an efficient transformer-based SR model that fully exploits the Swin Transformer structure and outperforms pure convolutional networks with fewer parameters and FLOPs.
- Swin2SR [7] further improves the network structure by introducing SwinV2 attention and proposes an auxiliary loss and a high-frequency loss for compressed images.
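- AsConvSR proposes a fast and lightweight super-resolution network built around assembled convolutions.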
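As a concrete illustration of the post-upsampling design mentioned in the FSRCNN item, the minimal PyTorch sketch below runs every convolution at low resolution and upsamples only once at the end with a transposed convolution. The class name `PostUpsampleSR`, the layer count, and the channel widths are illustrative assumptions, not the architecture of [14].

```python
import torch
import torch.nn as nn

class PostUpsampleSR(nn.Module):
    """Toy post-upsampling SR network (FSRCNN-style idea, not the paper's exact layout):
    all convolutions operate on the low-resolution input, and a single transposed
    convolution performs the upsampling at the very end."""

    def __init__(self, scale: int = 4, channels: int = 3, feats: int = 56):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, feats, kernel_size=5, padding=2),  # feature extraction
            nn.PReLU(feats),
            nn.Conv2d(feats, feats, kernel_size=3, padding=1),     # small-kernel mapping
            nn.PReLU(feats),
        )
        # Learned upsampling replaces the bicubic pre-interpolation used by SRCNN.
        self.upsample = nn.ConvTranspose2d(
            feats, channels, kernel_size=2 * scale, stride=scale, padding=scale // 2
        )

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        return self.upsample(self.body(lr))

lr = torch.randn(1, 3, 32, 32)            # low-resolution input
print(PostUpsampleSR(scale=4)(lr).shape)  # torch.Size([1, 3, 128, 128])
```

Because the heavy computation happens at low resolution, the cost scales with the LR size rather than the HR size, which is the main source of FSRCNN's speedup over the bicubic-preprocessed SRCNN pipeline.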
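The information distillation mechanism and the contrast-aware channel attention (CCA) layer from the IMDN/RFDN item can be sketched as follows. This is a condensed, assumed variant: the class names `IMDBlock` and `CCALayer`, the split ratio, and the number of distillation steps are illustrative, and the published IMDN block uses more steps and different hyper-parameters.

```python
import torch
import torch.nn as nn

class CCALayer(nn.Module):
    """Contrast-aware channel attention: the channel descriptor is the per-channel
    standard deviation plus mean (a 'contrast' statistic) instead of plain average pooling."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        contrast = x.std(dim=(2, 3), keepdim=True) + x.mean(dim=(2, 3), keepdim=True)
        return x * self.fc(contrast)

class IMDBlock(nn.Module):
    """Toy information multi-distillation block: at each step a slice of the channels is
    'distilled' (kept) and the rest is refined further; the distilled slices are
    concatenated, fused by a 1x1 conv, reweighted by CCA, and added back to the input."""

    def __init__(self, channels: int = 64, distill_ratio: float = 0.25):
        super().__init__()
        self.d = int(channels * distill_ratio)   # channels kept ("distilled") per step
        self.r = channels - self.d               # channels passed on for refinement
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(self.r, channels, 3, padding=1)
        self.conv3 = nn.Conv2d(self.r, self.d, 3, padding=1)
        self.act = nn.LeakyReLU(0.05, inplace=True)
        self.fuse = nn.Conv2d(3 * self.d, channels, 1)   # aggregate the distilled slices
        self.cca = CCALayer(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d1, r1 = torch.split(self.act(self.conv1(x)), [self.d, self.r], dim=1)
        d2, r2 = torch.split(self.act(self.conv2(r1)), [self.d, self.r], dim=1)
        d3 = self.act(self.conv3(r2))
        fused = self.fuse(torch.cat([d1, d2, d3], dim=1))
        return self.cca(fused) + x   # residual connection around the block

feat = torch.randn(1, 64, 48, 48)
print(IMDBlock(64)(feat).shape)      # torch.Size([1, 64, 48, 48])
```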
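The re-parameterization idea behind the FMEN item is that a block trained with several parallel branches can be folded into a single convolution for inference, so the enlarged optimization space adds no test-time cost. The sketch below is a generic RepVGG-style example with 3x3, 1x1, and identity branches; FMEN's actual building blocks differ, and `RepBlock` and `reparameterize` are hypothetical names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepBlock(nn.Module):
    """Training-time block with parallel 3x3, 1x1 and identity branches that can be
    folded into a single, mathematically equivalent 3x3 convolution for inference."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv3(x) + self.conv1(x) + x   # multi-branch form used during training

    @torch.no_grad()
    def reparameterize(self) -> nn.Conv2d:
        """Merge the three branches into one 3x3 conv with identical outputs."""
        c = self.conv3.out_channels
        # Pad the 1x1 kernel to 3x3 so it can be added to the 3x3 kernel.
        kernel = self.conv3.weight + F.pad(self.conv1.weight, [1, 1, 1, 1])
        # The identity branch equals a 3x3 kernel with a 1 at the centre of channel i.
        identity = torch.zeros_like(self.conv3.weight)
        for i in range(c):
            identity[i, i, 1, 1] = 1.0
        merged = nn.Conv2d(c, c, 3, padding=1)
        merged.weight.copy_(kernel + identity)
        merged.bias.copy_(self.conv3.bias + self.conv1.bias)
        return merged

# The folded convolution reproduces the multi-branch output up to float error:
x = torch.randn(1, 16, 8, 8)
block = RepBlock(16)
print(torch.allclose(block(x), block.reparameterize()(x), atol=1e-5))   # True
```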
C. Dong, C. C. Loy, K. He, and X. Tang, “Image Super-Resolution Using Deep Convolutional Networks,” In IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 295-307, 2016.
X. Chen, X. Wang, J. Zhou, Y. Qiao, and C. Dong, “Activating More Pixels In Image Super-Resolution Transformer,” In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22367-22377, 2023.
W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-Time Single Image And Video Super-Resolution Using An Efficient Sub-Pixel Convolutional Neural Network,” In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1874-1883, 2016.
M. V. Conde, E. Zamfir, R. Timofte, D. Motilla, C. Liu, Z. Zhang, Y. Peng, Y. Lin, J. Guo, X. Zou, Y. Chen, Y. Liu, J. Hao, Y. Yan, Y. Zhang, G. Li, and L. Sun, “Efficient Deep Models For Real-Time 4K Image Super-Resolution: NTIRE 2023 Benchmark And Report,” In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 1495-1521, 2023.
J. Guo, X. Zou, Y. Chen, Y. Liu, J. Hao, J. Liu, and Y. Yan, “AsConvSR: Fast And Lightweight Super-Resolution Network With Assembled Convolutions,” In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 1582-1592, 2023.
L. Sun, J. Dong, J. Tang, and J. Pan, “Spatially-Adaptive Feature Modulation For Efficient Image Super-Resolution,” arXiv preprint arXiv:2302.13800, 2023.
[Bicubic++] B. B. Bilecen and M. Ayazoglu, “Bicubic++: Slim, Slimmer, Slimmest. Designing An Industry-Grade Super-Resolution Network,” In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 1623-1632, 2023.