I suspect that this trend might be slowing down. Starting with the Pixel 2, Google has really pushed the idea that image processing in your camera can make up for a lack of better hardware, using it as an excuse not to improve the camera much over the original Pixel's. Anecdotally, I even think my friend's original Pixel takes much better photos than my Pixel 2 (but maybe he's just a better photographer!).



Some of the image processing on the Pixel 2 & 3 is done in a dedicated ASIC, the Pixel Visual Core:

https://en.wikichip.org/wiki/google/pixel_visual_core

Apple probably has a similar block built directly into their SoCs. Computational photography will keep demanding more and better HW to handle the compute, so I wouldn't bet on camera improvements stalling out any time soon.
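
To give a sense of the workload that gets offloaded: below is a rough numpy sketch of the core of a burst pipeline (align a few noisy frames to a reference, then average them to cut noise). It's only an illustration of the per-pixel compute involved, not Google's actual HDR+ algorithm, and the function names are made up.

    import numpy as np

    def estimate_shift(ref_gray, gray):
        """Whole-image (dy, dx) shift estimated via phase correlation."""
        cross = np.fft.fft2(ref_gray) * np.conj(np.fft.fft2(gray))
        cross /= np.abs(cross) + 1e-9          # keep only the phase
        corr = np.abs(np.fft.ifft2(cross))
        return np.unravel_index(np.argmax(corr), corr.shape)

    def merge_burst(frames):
        """Align each uint8 HxWx3 frame to the first one and average the stack."""
        ref = frames[0].astype(np.float32)
        ref_gray = ref.mean(axis=2)
        acc = ref.copy()
        for frame in frames[1:]:
            f = frame.astype(np.float32)
            dy, dx = estimate_shift(ref_gray, f.mean(axis=2))
            # np.roll wraps at the edges; a real pipeline warps per tile instead.
            acc += np.roll(f, (dy, dx), axis=(0, 1))
        return np.clip(acc / len(frames), 0, 255).astype(np.uint8)

Real pipelines do this per tile, in the raw domain, with deghosting on top, which is exactly the kind of repetitive per-pixel work a fixed-function block is good at.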

As an example of the runway left, look at the Light L16 (whose tech the Nokia 9 PureView is rumored to use). It takes medium-format-DSLR-quality photos whose depth of field you can adjust in post-processing. That will filter down to Android & Apple phones eventually.
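
To make "adjust depth of field in post" concrete: once you have a per-pixel depth map (which the L16 gets by combining its many camera modules), you can blur each pixel in proportion to its distance from a chosen focus plane. A toy sketch below, assuming a scipy Gaussian blur and a depth map normalized to 0..1; this is not Light's actual algorithm, just the general idea.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def refocus(image, depth, focus_depth, aperture=8.0, max_sigma=6.0):
        """image: HxWx3 float, depth: HxW float in [0, 1].
        Larger aperture means a shallower simulated depth of field."""
        # Precompute a handful of blur levels, then pick one per pixel.
        sigmas = np.linspace(0.0, max_sigma, 8)
        levels = [gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas]
        # Blur strength grows with distance from the focus plane.
        blur = np.clip(np.abs(depth - focus_depth) * aperture, 0.0, max_sigma)
        idx = np.round(blur / max_sigma * (len(levels) - 1)).astype(int)
        out = np.empty_like(image)
        for i, level in enumerate(levels):
            out[idx == i] = level[idx == i]
        return out

The blur itself is cheap; the expensive part on a phone is computing a clean depth map in the first place, which is again where extra silicon helps.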



