TY - JOUR
T1 - Self-supervised cycle-consistent learning for scale-arbitrary real-world single image super-resolution
AU - Chen, Honggang
AU - He, Xiaohai
AU - Yang, Hong
AU - Wu, Yuanyuan
AU - Qing, Linbo
AU - Sheriff, Ray E.
PY - 2023/2/1
Y1 - 2023/2/1
N2 - Both conventional machine learning-based and current deep neural network-based single image super-resolution (SISR) methods are generally trained and validated on synthetic datasets, in which low-resolution (LR) inputs are artificially produced by degrading high-resolution (HR) images with a hand-crafted degradation model (e.g., bicubic downsampling). One of the main reasons for this is that it is challenging to build a realistic dataset of real-world LR-HR image pairs. However, a domain gap exists between synthetic and real-world data because the degradations in real scenarios are more complicated, which limits the practical performance of SISR models trained on synthetic data. To address these problems, we propose a Self-supervised Cycle-consistent Learning-based Scale-Arbitrary Super-Resolution framework (SCL-SASR) for real-world images. Inspired by Maximum a Posteriori estimation, SCL-SASR consists of a Scale-Arbitrary Super-Resolution Network (SASRN) and an inverse Scale-Arbitrary Resolution-Degradation Network (SARDN). SARDN and SASRN constrain each other through bidirectional cycle-consistency constraints as well as image priors, enabling SASRN to adapt well to the image-specific degradation. Meanwhile, given the lack of targeted training images and the complexity of realistic degradations, SCL-SASR is optimized online using only the LR input prior to SR reconstruction. Benefiting from its flexible architecture and self-supervised learning scheme, SCL-SASR can easily super-resolve new images with arbitrary integer or non-integer scaling factors. Experiments on real-world images demonstrate the high flexibility and good applicability of SCL-SASR, which achieves better reconstruction performance than state-of-the-art self-supervised learning-based SISR methods as well as several SISR models trained on external datasets.
AB - Both conventional machine learning-based and current deep neural network-based single image super-resolution (SISR) methods are generally trained and validated on synthetic datasets, in which low-resolution (LR) inputs are artificially produced by degrading high-resolution (HR) images with a hand-crafted degradation model (e.g., bicubic downsampling). One of the main reasons for this is that it is challenging to build a realistic dataset of real-world LR-HR image pairs. However, a domain gap exists between synthetic and real-world data because the degradations in real scenarios are more complicated, which limits the practical performance of SISR models trained on synthetic data. To address these problems, we propose a Self-supervised Cycle-consistent Learning-based Scale-Arbitrary Super-Resolution framework (SCL-SASR) for real-world images. Inspired by Maximum a Posteriori estimation, SCL-SASR consists of a Scale-Arbitrary Super-Resolution Network (SASRN) and an inverse Scale-Arbitrary Resolution-Degradation Network (SARDN). SARDN and SASRN constrain each other through bidirectional cycle-consistency constraints as well as image priors, enabling SASRN to adapt well to the image-specific degradation. Meanwhile, given the lack of targeted training images and the complexity of realistic degradations, SCL-SASR is optimized online using only the LR input prior to SR reconstruction. Benefiting from its flexible architecture and self-supervised learning scheme, SCL-SASR can easily super-resolve new images with arbitrary integer or non-integer scaling factors. Experiments on real-world images demonstrate the high flexibility and good applicability of SCL-SASR, which achieves better reconstruction performance than state-of-the-art self-supervised learning-based SISR methods as well as several SISR models trained on external datasets.
KW - Real-world image
KW - Super-resolution
KW - Resolution-degradation
KW - Self-supervised cycle-consistent learning
KW - Arbitrary scaling factors
KW - Convolutional Neural Networks
UR - https://doi.org/10.1016/j.eswa.2022.118657
UR - http://www.scopus.com/inward/record.url?scp=85137265856&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137265856&partnerID=8YFLogxK
U2 - 10.1016/j.eswa.2022.118657
DO - 10.1016/j.eswa.2022.118657
M3 - Article (journal)
SN - 0957-4174
VL - 212
SP - 118657
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 118657
ER -