I've seen a number of neural-network approaches to super-resolution, like waifu2x, but I haven't seen something general purpose that's better than bicubic/Fourier/nearest-neighbor interpolation.
(Author here.) My biggest insight from this project is that super-resolution with neural networks benefits significantly from being domain specific. If you train on broader datasets, it does pretty well but has to make compromises. Many recent papers compare results in terms of pixel similarity (PSNR/SSIM), and by those metrics quality drops because high-frequency detail is penalized, even though the output may look better perceptually. Reference: http://arxiv.org/abs/1609.04802
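To make the PSNR point concrete, here's a minimal sketch (my own illustration, not the author's code) of why mean-squared-error metrics can favor a blurry reconstruction over a sharper one whose fine detail is slightly misplaced; the images, the `0.9` blend factor, and the one-pixel shift are all made up for the demo:

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio between two images (higher = more similar)."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32)).astype(np.float64)

# Hypothetical "blurry" estimate: pulled toward the mean, so it loses contrast
# but stays close to the reference pixel-by-pixel.
blurry = ref * 0.9 + ref.mean() * 0.1
# Hypothetical "sharp but misaligned" estimate: identical detail, shifted one pixel.
shifted = np.roll(ref, 1, axis=1)

# The low-detail estimate wins on PSNR despite being perceptually softer.
print(psnr(ref, blurry) > psnr(ref, shifted))  # True
```

This is the compromise in miniature: a network trained to maximize PSNR on a broad dataset is pushed toward "safe" averaged-out textures rather than plausible high-frequency detail.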
On GitHub there's a demo comparison below each GIF, but on the site you can also submit your own image to try it out (click the title or the restart button). It takes about 60s currently; it's running on CPU since the GPUs are busy training ;-)
> super-resolution with neural networks benefits significantly from being domain specific. If you train on broader datasets, it does pretty well but has to make compromises.
To what extent could the need for this trade-off be overcome with a larger network?
It would be nice if the author ran such a comparison.