
I'm mostly curious about which compression algorithm produces artifacts like these, and I assume somebody here on HN recognizes them. But we're 225 comments in (so far) and not a single answer.

They're absolutely nothing like block-based JPEG, that's for sure. When I inspect the images Google Photos serves me in my browser, it is serving up JPEG (from the response headers). But is this an artifact that shows up in AVIF or WebP? I wonder if mobile clients are getting a different encoding.

The only objective thing I notice is that the artifact lines tend to track a line of constant brightness, so you see them appearing perpendicular to gradients of light and shade. Each artifact line is black/dark on one side and white/light on the other -- and the white edge seems to face the direction of darkening, while the black faces the direction of lightening.
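To make that observation concrete: a line of constant brightness (an isophote) is by definition perpendicular to the brightness gradient, which matches where these artifact lines sit. Here's a toy numpy check on a synthetic radial field (purely illustrative, nothing to do with Google's actual pipeline):

```python
import numpy as np

# Synthetic radial brightness field: the isophotes (lines of
# constant brightness) are circles around the center.
h, w = 65, 65
y, x = np.mgrid[0:h, 0:w].astype(np.float64)
cy, cx = h // 2, w // 2
img = np.hypot(x - cx, y - cy)

gy, gx = np.gradient(img)   # np.gradient returns (d/dy, d/dx)

# At the point (cy, cx + 10) the gradient points along +x,
# and the circular isophote's tangent points along +y.
g = np.array([gx[cy, cx + 10], gy[cy, cx + 10]])
t = np.array([0.0, 1.0])    # isophote tangent at that point
print(np.dot(g, t))         # ~0: the isophote runs perpendicular
                            # to the light/shade gradient
```

So artifact lines that "track constant brightness" and edges that "appear perpendicular to gradients" are two descriptions of the same geometry.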

Surely someone here who works with modern image codecs can hypothesize which part of encoding/decoding is bugging out here?




Looks to my eye like some 8-bit underflow artifacts that I've seen. And the way the line follows the gradient reminds me of when I've broken a gradient-domain Poisson solver to reconstruct an image from gradients. Maybe they're decorrelating the image data via gradients to compress better and then compressing those along with a sparse set of primal pixels?

(See, e.g., http://graphics.cs.cmu.edu/courses/15-463/2019_fall/lectures... for a nice overview of gradient-domain image processing.)
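To illustrate the hypothesis (a sketch of the general idea only, not anything Google is confirmed to do): decorrelate an image into horizontal differences, store the differences coarsely quantized, and integrate back with a cumulative sum. The quantization error accumulates along the integration direction, producing streak-like drift instead of the flat blockiness you'd get from quantizing pixels directly:

```python
import numpy as np

# Smooth synthetic image, float values roughly in [0, 255]
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
img = 100 + 80 * np.sin(x / 10.0) * np.cos(y / 12.0)

# "Decorrelate" into horizontal forward differences, keeping the
# first column as primal pixels
dx = np.diff(img, axis=1)                # shape (h, w-1)

# Simulate coarse storage of the gradient data (step of 4)
dx_q = np.round(dx / 4.0) * 4.0          # per-step error is at most 2

# Reconstruct by integrating the quantized differences back out
# from the primal first column
recon = np.concatenate(
    [img[:, :1], img[:, :1] + np.cumsum(dx_q, axis=1)], axis=1
)

err = recon - img
print("max abs reconstruction error:", np.abs(err).max())
```

The point is that an error made in one difference is carried by the cumulative sum across the rest of the row, so the damage smears out along paths in the image rather than staying confined to a block -- which would fit line-shaped artifacts that follow image structure.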


I’ve spent a ton of time playing with codecs and colorspaces while developing PhotoStructure.

These artifacts aren’t like any I’ve seen before. Normally you’d see this sort of posterization and clamping around global highlights and shadows, especially if an incorrect colorspace profile was applied, but here the affected border surrounds a localized area, possibly due to a buggy pseudo-HDR implementation (what you see when you move the “pop” adjustment slider, which increases localized contrast ranges). Google+ images had a mild pop adjustment applied automatically.
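A minimal numpy sketch of the kind of localized-contrast "pop" adjustment described here (my guess at the general technique, not Google's actual code): boost each pixel's deviation from its local mean, then clamp back into 8-bit range. On a hard border this clips to pure black/white halos on either side, while flat regions are untouched:

```python
import numpy as np

def box_blur(img, r):
    """Mean over a (2r+1) x (2r+1) window, with edge padding."""
    k = 2 * r + 1
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def pop(img, amount=2.0, radius=8):
    """Amplify deviation from the local mean, clamp to [0, 255]."""
    local_mean = box_blur(img.astype(np.float64), radius)
    return np.clip(local_mean + amount * (img - local_mean), 0, 255)

# Hard step edge: dark left half, bright right half
img = np.full((64, 64), 60.0)
img[:, 32:] = 200.0
out = pop(img, amount=2.0, radius=8)

# Flat regions keep their values; the border clips to 0/255 halos
print(out[0, 0], out[0, -1], out.min(), out.max())
```

The clipped dark/light pair hugging a border is exactly the sort of thing a buggy or over-aggressive local-contrast pass could leave baked into the stored image.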


Clearly Google was storing their old photos in a damp basement, and they got water damage.

But seriously, I am also very curious. I wonder if they are playing with an in-house compression algorithm.


From the photo of the dog, it seems to me a face-recognition or eye-recognition algorithm ran on the image, but the algorithm also left its changes baked into the photo as it processed it, instead of keeping all of that on the server side.



