
What an amazing idea :)

They reproject the input images and run the low-res network multiple times. Then they use an approach similar to NeRF to merge the network outputs from those reprojected images into a super-resolution result.

So in a way, this is quite similar to how modern Pixel phones can take a burst of frames and merge them into a final image that has a higher resolution than the sensor. Except that they run useful AI processing in between and then do the super-resolution merge on the results.




Also similar to temporal anti-aliasing: https://en.wikipedia.org/wiki/Temporal_anti-aliasing


Perhaps similar in some ways to how big cats' eyes have a reflective layer behind the retina (the tapetum lucidum) that bounces light back through it for a second pass, capturing more light. I'm sure I heard that on a nature documentary ...


Very interesting. I am curious how people arrive at the train of thought that leads to a successful idea like this. So many great algorithms are based on small twists.


It is interesting indeed. One wonders whether the researchers behind this particular bit of work made it mandatory to go for walks at lunch and think about how their own vision chunked and filtered the information it was receiving. Interesting that they "perturb" the image to introduce some noise. I'll need to read it over again.


Nature is such a good source of inspiration; the "perturb" approach reminded me of [fixational eye movement][1], but maybe that's only a clear link in retrospect.

[1]: https://en.wikipedia.org/wiki/Fixation_(visual)


This seems like it could have been inspired by how human vision works.




