Multi-Categories Images Super-Resolution via Contrastive Learning

Conference: ICETIS 2022 - 7th International Conference on Electronic Technology and Information Science
01/21/2022 - 01/23/2022 at Harbin, China

Proceedings: ICETIS 2022

Pages: 5
Language: English
Type: PDF

Authors:
Zhu, Dongming; Huang, Zhuoyu; Yao, Shaowen (National Pilot school of software, Yunnan University, Kunming, Yunnan, China)

Abstract:
For a long time, single image super-resolution (SISR) has been studied extensively as a downstream task in the field of computer vision. Among existing approaches, convolutional neural network (CNN) based methods have achieved impressive performance. Such methods accomplish the super-resolution task by learning the mapping between high-resolution (HR) and low-resolution (LR) images. However, CNNs are usually trained on a single category of images, where images can be divided into cartoon, text, and natural images according to their texture features. As a result, SR performance degrades noticeably when the category of the input image differs from that of the training images. The usual solutions to this problem are: (a) train a single model on multiple categories of images, or (b) train a separate model for each category and select one as needed. The first approach requires a sufficiently large model capacity, and the resulting performance is a compromise across the categories. The second approach is closer to real application scenarios, but it requires designing and training a different model for each image category, which on the one hand consumes substantial computing resources and on the other hand lacks flexibility. In this paper, we present a multi-category image super-resolution network via contrastive learning (MCSR), which adaptively adjusts model parameters to the category of the input image, further improving performance on multi-category images without introducing additional resource consumption. We train and evaluate our method on public datasets, and it achieves some improvement compared with traditional methods.