
In the example they’re upscaling 1080p content to 4K. Am I missing something, or is that not particularly impressive? Isn’t it just pixel doubling?



I think in this case, they're attempting to keep the "inked" look where lines start and stop. Pixel doubling would result in aliasing (or, rather, a "pixelated" look), and bilinear filtering results in a "blurred" effect. The goal is to give the appearance that the anime was produced in 4K.
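
For a concrete sense of those two baselines, both are one-liners with Pillow (the filenames here are just placeholders, and neither call is what the project actually does):

    # Nearest-neighbour resampling is the "pixel doubling" case,
    # bilinear is the "blurred" case.
    from PIL import Image

    src = Image.open("frame_1080p.png")            # 1920x1080 source frame
    size_4k = (src.width * 2, src.height * 2)      # 3840x2160

    pixel_doubled = src.resize(size_4k, Image.Resampling.NEAREST)   # blocky
    bilinear      = src.resize(size_4k, Image.Resampling.BILINEAR)  # soft

    pixel_doubled.save("frame_4k_nearest.png")
    bilinear.save("frame_4k_bilinear.png")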


No, if they did that it would look "pixelated".

They seem to have built an edge-optimized image upscaler. It prevents the edges from becoming soft during upsampling.

You can clearly see the difference in their comparison pictures (of which they have a metric ton).
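
To be clear, this is not their algorithm, but the general idea of "upscale without letting the line art go soft" can be sketched as edge-selective sharpening: upscale normally, then blend in a sharpened version only where the luminance gradient is strong. The filenames and the gradient scale below are made up:

    import numpy as np
    from PIL import Image, ImageFilter

    src = Image.open("frame_1080p.png").convert("RGB")
    up = src.resize((src.width * 2, src.height * 2), Image.Resampling.BICUBIC)
    sharp = up.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))

    # Edge mask from the luminance gradient: ~1 on line art, ~0 in flat colour.
    lum = np.asarray(up.convert("L"), dtype=np.float32)
    gy, gx = np.gradient(lum)
    edges = np.clip(np.hypot(gx, gy) / 64.0, 0.0, 1.0)[..., None]

    # Keep the plain upscale in flat regions, the sharpened one on edges.
    out = np.asarray(up, dtype=np.float32) * (1 - edges) + \
          np.asarray(sharp, dtype=np.float32) * edges
    Image.fromarray(out.astype(np.uint8)).save("frame_4k_edges.png")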


I'm pretty sure this is what Nvidia DLSS does, only this seems to work much better than DLSS.


Per the name (Deep Learning Super Sampling), DLSS uses a trained neural network to achieve high-quality upsampling. The neural network is trained on representative output of the game at the internal framebuffer resolution and at the target output resolution (with SSAA and such).

The upsampling algorithm in the OP is not based on machine learning, but it is also fairly domain-specific and of limited general applicability.
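
In other words, the training data for something like DLSS is pairs of the same frame at the two resolutions. Roughly this kind of thing, with a 4K reference frame standing in for the supersampled ground truth (the filenames and the 2x factor are illustrative only):

    from PIL import Image

    def make_pair(path_4k: str):
        target = Image.open(path_4k).convert("RGB")            # e.g. 3840x2160
        low = target.resize((target.width // 2, target.height // 2),
                            Image.Resampling.BICUBIC)          # fake 1080p input
        return low, target

    # A network is trained to map `low` back to `target`, then applied to
    # real low-resolution frames at inference time.
    lr, hr = make_pair("frame_4k_reference.png")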


Probably. Everything seems to work better than Nvidia DLSS, though. AMD apparently managed to beat it using a pretty standard content-aware sharpening algorithm.
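
Presumably that refers to AMD's contrast-adaptive sharpening (FidelityFX CAS). A rough NumPy-only sketch of the idea, not AMD's actual shader, is a 3x3 sharpen whose strength backs off where local contrast is already high, to avoid ringing; all constants here are invented:

    import numpy as np
    from PIL import Image

    def cas_like(img: np.ndarray, strength: float = 0.5) -> np.ndarray:
        f = img.astype(np.float32) / 255.0
        pads = np.pad(f, ((1, 1), (1, 1), (0, 0)), mode="edge")
        # Local min/max over the 3x3 neighbourhood via shifted views.
        stack = np.stack([pads[y:y + f.shape[0], x:x + f.shape[1]]
                          for y in range(3) for x in range(3)])
        lo, hi = stack.min(axis=0), stack.max(axis=0)
        # Sharpen less where local contrast (hi - lo) is already high.
        amount = strength * (1.0 - np.clip(hi - lo, 0.0, 1.0))
        # Cross-shaped neighbour average as the blur term.
        blur = (pads[:-2, 1:-1] + pads[2:, 1:-1] +
                pads[1:-1, :-2] + pads[1:-1, 2:]) / 4.0
        out = f + amount * (f - blur)
        return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)

    frame = np.asarray(Image.open("frame_upscaled.png").convert("RGB"))
    Image.fromarray(cas_like(frame)).save("frame_sharpened.png")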



