Jointly Optimized Regressors for Image Super-resolution

Dengxin Dai, Radu Timofte, and Luc Van Gool


Learning regressors that map low-resolution patches to high-resolution patches has shown promising results for image super-resolution. We observe that some regressors handle certain cases better, while others excel at different ones. In this paper, we jointly learn a collection of regressors which collectively yield the smallest super-resolving error over all training data. After training, each training sample is associated with a label indicating its 'best' regressor, i.e. the one yielding the smallest error. During testing, our method relies on the concept of 'adaptive selection' to pick the most appropriate regressor for each input patch. We assume that similar patches can be super-resolved by the same regressor, and use a fast, approximate kNN search to transfer the labels of training patches to test patches. The method is conceptually simple and computationally efficient, yet very effective. Experiments on four datasets show that our method outperforms competing methods.
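The adaptive-selection step at test time can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses exact nearest-neighbour search instead of the fast approximate kNN from the paper, and assumes each learned regressor is a linear projection matrix from LR-patch features to HR patches. The function name and the majority-vote label transfer are illustrative assumptions.

```python
import numpy as np

def select_and_apply(test_patch, train_feats, train_labels, regressors, k=5):
    """Pick the 'best' regressor for a test patch via kNN label transfer,
    then apply it. All arguments beyond the paper's description are
    assumptions for this sketch."""
    # Squared distances to all training patch features
    # (exact search here; the paper uses approximate kNN for speed).
    d = np.sum((train_feats - test_patch) ** 2, axis=1)
    nn = np.argsort(d)[:k]
    # Transfer labels: take the regressor most often voted 'best'
    # among the k nearest training patches.
    votes = np.bincount(train_labels[nn], minlength=len(regressors))
    j = int(np.argmax(votes))
    # Apply the selected regressor (assumed linear: HR = W_j @ LR).
    return regressors[j] @ test_patch
```

In practice the kNN search would be replaced by a precomputed approximate index over the training features, so selection adds little cost per patch.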

Related Project: Is Image Super-resolution Helpful for Other Vision Tasks?


Average PSNR on Set5, Set14, BD100, and SuperTex136
Examples with scaling factor x3


We collected a new dataset, SuperTex136, which contains 136 textures covering a wide range of materials (e.g. metal, plastic, glass, water) and geometrical properties (e.g. stochastic, lined, structured). SuperTex136 was compiled specifically to evaluate the ability of super-resolution methods to recover textures.

The JOR code is available for use, along with comparison code (from the original authors) for other recent methods, including ANR, A+, and SRCNN. The four datasets used in our paper are also included in the code package, which makes it quite large (~300 MB).

The training part is also included, so you can train JOR on your own images. For 5 million patches, training takes a few minutes to tens of minutes, depending on your parameter settings.

For ease of use, two trained models (x3 and x4) with 32 learned regressors are provided for download if you don't want to re-train the model.

Please cite our paper if you use the code or data in your scientific publications.

Dengxin Dai, Radu Timofte, and Luc Van Gool. Jointly Optimized Regressors for Image Super-resolution. In Eurographics 2015.

This page has been edited by Dengxin Dai. All rights reserved.
