
There's definitely value in providing this functionality for photographs taken in the present.

But I think the real value -- and this is definitely in Google's favor -- is providing this functionality for photos you have taken in the past.

I have probably 30K+ photos in Google Photos that capture moments from the past 15 years. There are quite a lot of them where I've taken multiple shots of the same scene in quick succession, and it would be fairly straightforward for Google to detect such groupings and apply the technique to produce synthesized pictures that are better than the originals. It already does something similar for photo collages and "best in a series of rapid shots." They surface without my having to do anything.
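
A minimal sketch of how such burst groupings could be detected from EXIF capture times alone, assuming Pillow and JPEGs that carry DateTimeOriginal; the 3-second gap and folder layout are arbitrary assumptions, not how Google Photos actually does it:

    # Group photos into "bursts" by EXIF capture time (Pillow).
    # Assumptions: *.jpg files with EXIF timestamps; the 3-second gap is arbitrary.
    from datetime import datetime
    from pathlib import Path
    from PIL import Image

    def capture_time(path):
        exif = Image.open(path).getexif()
        # DateTimeOriginal (0x9003) lives in the Exif sub-IFD (pointer 0x8769);
        # fall back to the plain DateTime tag (0x0132) if it is missing.
        raw = exif.get_ifd(0x8769).get(0x9003) or exif.get(0x0132)
        return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S") if raw else None

    def burst_groups(folder, max_gap_seconds=3):
        shots = sorted(
            (t, p) for p in Path(folder).glob("*.jpg") if (t := capture_time(p))
        )
        groups, current = [], []
        for t, p in shots:
            if current and (t - current[-1][0]).total_seconds() > max_gap_seconds:
                groups.append(current)
                current = []
            current.append((t, p))
        if current:
            groups.append(current)
        return [g for g in groups if len(g) > 1]  # keep only multi-shot bursts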




That's exactly why I've been keeping all "duplicates" in my photo collections.

They do take up a lot of space, and just today I asked in photo.stackexchange for backup compression techniques that can exploit inter-image similarities: https://photo.stackexchange.com/questions/132609/backup-comp...


Suggestion: stack the images vertically or horizontally. Frequency-spectrum compression schemes like JPEG will see the similarity in the fine details.
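
For concreteness, a minimal Pillow sketch of the stacking idea (filenames are hypothetical, and whether the codec actually exploits the shared structure is exactly the open question here):

    # Paste near-duplicate shots side by side into one strip before compressing,
    # so a single encoder pass sees all of them. Filenames are hypothetical.
    from PIL import Image

    shots = [Image.open(name) for name in ("shot1.jpg", "shot2.jpg", "shot3.jpg")]
    strip = Image.new(
        "RGB", (sum(im.width for im in shots), max(im.height for im in shots))
    )

    x = 0
    for im in shots:
        strip.paste(im, (x, 0))
        x += im.width

    # Re-encoding as JPEG is lossy; for an archival experiment you would rather
    # hand the strip to a JPEG XL / lossless encoder and compare total sizes.
    strip.save("strip.jpg", quality=90)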


I got really good compression using this technique with JPEG XL. I'm sure there's a good reason why it works so well, but it's been a long time and I don't remember why.


>in the fine details

Could it be that JPEG also exploits repetition at the wavelength of the width of a single picture, so to speak? E.g. with 4 pictures side by side that all have the same black dot in the center, can all 4 dots be encoded with a single sine wave (simplifying a lot here...) that has peaks at each dot?


The tiled/stacked approach others mention is good, and probably the best one. You could also convert to an uncompressed format (even just uncompressed PNG) or something simple like RLE, then 7zip the files together, since 7zip is the only archive format that does inter-file (as opposed to intra-file) compression, as far as I am aware.

Unfortunately, lossless video compression won't help here, as lossless codecs compress each frame individually.
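
A sketch of that second approach, assuming the 7z CLI is installed and using uncompressed TIFF as the "raw pixels" stand-in (filenames and dictionary size are arbitrary):

    # Re-save the burst uncompressed so the raw pixels are visible to the archiver,
    # then let 7-Zip's solid LZMA2 archive (large dictionary) find the redundancy
    # across files. Assumes the `7z` binary is on PATH; filenames are hypothetical.
    import subprocess
    from PIL import Image

    names = ["shot1.jpg", "shot2.jpg", "shot3.jpg"]
    tiffs = []
    for name in names:
        out = name.replace(".jpg", ".tiff")
        Image.open(name).save(out, compression=None)  # uncompressed TIFF
        tiffs.append(out)

    subprocess.run(["7z", "a", "-mx=9", "-md=256m", "burst.7z", *tiffs], check=True)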


Inter-file compression has been solved ever since tar|gz.


Not so. Gzip's window is very small - 32 KB in the original gzip, IIRC - which meant even identical copies of a 33 KB file would not help each other.

IIRC it was bzip2 that bumped that up to roughly 1 MB, and there are now compressors with larger windows - but files have also grown, so it's not a solved problem for compression utilities.
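
A quick way to see the window-size effect with nothing but the Python standard library; exact byte counts will vary, but zlib (32 KB window) cannot reference the first copy while coding the second, whereas lzma's much larger dictionary can:

    # Two identical 33 KB random blobs back to back.
    import lzma
    import os
    import zlib

    blob = os.urandom(33 * 1024)
    doubled = blob + blob

    print("zlib:", len(zlib.compress(doubled, 9)))        # ~ size of both copies
    print("lzma:", len(lzma.compress(doubled, preset=9)))  # ~ size of one copy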

It is solved for backup - bup, restic, and a few others will do that across a backup set with no "window size" limit.

... And all of that is only true for lossless compression, which typical image and video formats are not.


Not even remotely an efficient scheme for images or video.


That's for lossless compression; I think there are special opportunities for multi-image lossy compression.


Most duplicates are from the same vantage point; these are not. I.e., you don't need to keep them all.


Those have been used for denouncing and super resolution for 30 years now - they are not useless. And storage is cheap, just keep them all.


That was supposed to be denoising, not denouncing - DYAC. Just noticed, too late to edit now.


Stupid question: would a block-based deduplicating file system solve this?


Every picture is a picture from the past though


Oh yeah, what about this old Kodak I found in my grandpa's attic that prints pictures showing how people are going to die?


But how did you know it wasn't a coincidence that the picture depicted a similar scene in the past?


Philosophically, yes. But some photo-editing techniques rely on data that is not backfillable and must be recorded at capture time. And even in cases where there is no functional impediment to applying it against historical photos, sometimes there is product gatekeeping to contend with.


Here's a picture of me in the future.


John Titor, is that you?


No, it's Mitch Hedberg.


I had an ant farm. They didn't grow shit!


Where you get that camera at??


Not the pictures where you age people artificially


Every existing picture is.


If it hasn't been taken/made/captured yet, it isn't a picture. It's just the potential for one.


Every state machine is bound to cycle at some point, even if it is the size of the universe.


This is not true; it's trivial to design a state machine that won't cycle.


Sorry, forgot to add that it should be reversible, like the laws of physics.
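
For what it's worth, the finite-state half of that claim is the standard pigeonhole argument; here it is as a LaTeX-flavored sketch (it says nothing about whether the universe actually is a finite reversible machine):

    % Sketch: on a finite state set, a reversible machine must cycle.
    Let $S$ be a finite state set and $f \colon S \to S$ the transition map.
    Reversibility means $f$ is injective, hence (since $S$ is finite) a bijection.
    For any $s \in S$, the iterates $s, f(s), f^2(s), \dots$ cannot all be
    distinct, so $f^i(s) = f^j(s)$ for some $i < j$. Applying $f^{-1}$ $i$ times
    gives $f^{\,j-i}(s) = s$: every state lies on a cycle. Drop either finiteness
    or reversibility and the conclusion can fail.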


I would really like to see the proof that it's impossible to design a reversible state machine that won't cycle. But even if you do prove that, you would also have to prove that, if the laws of physics are reversible, the universe is reversible.

The current best theory and understanding of the evolution of the universe is that it will reach maximum entropy (heat death). There is no cycling when this happens. Can you cite what theory or new discovery you have come across that somehow challenges the heat death hypothesis?


Also, not all laws of physics are time-reversible - e.g. the second law of thermodynamics.


> ..fairly straightforward for Google to detect such groupings and apply the technique to produce synthesized pictures that are better than the originals.

Wouldn't an operation like this require some kind of fine-tuning? Or do diffusion models have a way of using images as context, the way one would provide context to an LLM?


I think simpler algorithms (e.g. image histograms) can get you a long way. Regardless of the mechanism, Google Photos already has the capability to detect similar images, which it uses to generate animated GIFs.
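
A minimal sketch of the histogram idea, assuming Pillow; the intersection metric and the 0.85 threshold are arbitrary illustrative choices, not what Google Photos does:

    # Score two images by normalized histogram intersection; near-duplicate
    # shots of the same scene score close to 1.0. Filenames are hypothetical.
    from PIL import Image

    def norm_hist(path, size=(64, 64)):
        im = Image.open(path).convert("RGB").resize(size)
        counts = im.histogram()  # 256 bins per channel, concatenated
        total = sum(counts)
        return [c / total for c in counts]

    def similarity(path_a, path_b):
        return sum(min(a, b) for a, b in zip(norm_hist(path_a), norm_hist(path_b)))

    if similarity("shot1.jpg", "shot2.jpg") > 0.85:
        print("likely the same scene")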



