Many moons ago, I wondered what it would look like if you iteratively generated every possible image. It doesn't sound very useful at all (and it certainly isn't) but I learned C++ and SDL that way.
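For anyone curious, a toy sketch of that kind of brute-force enumeration (in Python here rather than C++/SDL; the grid size and bit depth are arbitrary illustrative choices):

```python
# A toy sketch of the brute-force idea: treat the image as a base-256 counter
# and enumerate every combination. The 2x2 grid and 8-bit grayscale depth are
# arbitrary illustrative choices, not what the original program used.
from itertools import product

WIDTH, HEIGHT, LEVELS = 2, 2, 256

def every_image():
    """Yield every possible image as a flat tuple of pixel values, in counting order."""
    yield from product(range(LEVELS), repeat=WIDTH * HEIGHT)

# Print only the first few "images"; the full loop would run 256**4 times.
for i, pixels in enumerate(every_image()):
    if i >= 3:
        break
    print(i, pixels)
```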
I've thought about this as well; it's one of those weird things where you realize that with enough pixel count and shading options, every image you can think of exists within the total set of generated images.
"enough" hides a surprising amount of work. Mobilise the entire planet's computers for a decade, assuming a generous Moore's law increase as well, and you might count through seventeen 8-bit grayscale pixels in that decade.
Annihilate the Sun and you won't get enough energy to drive a simple counter through thirty 8-bit grayscale pixels.
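Rough numbers, if anyone wants to sanity-check the scale. The global-compute figure and the Landauer-limit treatment of the Sun's mass-energy below are loose assumptions for illustration, not measurements:

```python
import math

def num_images(n_pixels):
    """Distinct images representable with n 8-bit grayscale pixels."""
    return 256 ** n_pixels

print(f"17 pixels: ~10^{math.log10(num_images(17)):.0f} images")  # ~10^41
print(f"30 pixels: ~10^{math.log10(num_images(30)):.0f} images")  # ~10^72

# Loose assumption: the planet sustaining ~1e33 simple increments per second,
# averaged over a decade (a very generous Moore's-law extrapolation).
decade_of_counting = 1e33 * 10 * 3.15e7
print(f"decade of global counting: ~10^{math.log10(decade_of_counting):.0f}")  # ~10^41

# Landauer limit: minimum energy per bit operation at the cosmic microwave
# background temperature (~2.7 K), set against the Sun's entire mass-energy.
k_B, T = 1.38e-23, 2.7
sun_energy = 2e30 * (3e8) ** 2                    # E = m * c^2, in joules
max_bit_ops = sun_energy / (k_B * T * math.log(2))
print(f"Sun's mass-energy at the Landauer limit: ~10^{math.log10(max_bit_ops):.0f} bit flips")  # ~10^70
```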
I think about it from the perspective of ideas: once we figure something out (how to fly, electricity), we know it. How do you brute-force your way to that information through random/massive computation? Specifically, to find the next things we don't yet know, like jumping dimensions or FTL, etc.
First, the article tries to assess information capacity in the Shannon style while completely disregarding that the signal is (a) two-dimensional and (b) highly redundant; that is, it ignores the traits that make it an image.
Second, the article takes too many liberties in mixing photorealistic and pixel-art images. The latter is really an art, since there is no formally defined ("machine") transform between the two types. And last but not least, the two types have significantly different information-density profiles.
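On the redundancy point, a quick hand-wavy illustration is to compare how a lossless compressor fares on a structured image versus uniform noise (stdlib only; the 64x64 grayscale "images" are toy data, not a measurement of real information-density profiles):

```python
import os
import zlib

SIZE = 64  # toy 64x64 8-bit grayscale "images" stored as raw bytes

# Structured image: a smooth horizontal gradient (highly redundant).
gradient = bytes(x * 255 // (SIZE - 1) for y in range(SIZE) for x in range(SIZE))

# Unstructured image: uniform random noise (essentially incompressible).
noise = os.urandom(SIZE * SIZE)

for name, data in [("gradient", gradient), ("noise", noise)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed to {ratio:.0%} of the original size")
```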
I think the point of the article is that there aren't 2^(32x32x3x8) meaningful images that can be encoded - there are far fewer than that, because so many of the possible arrangements of the data are just meaningless noise, or millions of versions of the same thing with slightly altered color values, etc.
The article is saying that neural networks are cool, because part of what they do is finding which images actually contain meaningful visual information.
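For a sense of the scale being pruned down, even the full 2^(32x32x3x8) count for a 32x32 RGB image is absurdly large:

```python
import math

bits = 32 * 32 * 3 * 8                      # 24,576 bits in a 32x32 RGB image
digits = int(bits * math.log10(2)) + 1      # decimal digits of 2**bits
print(f"2**{bits} has {digits} decimal digits")  # 7,399 digits
```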
But they don't: they often conflate noise with actual features, as long as the noise has some statistical bias - and given enough random generations, some of it will. Try it yourself: generate random images using a uniform distribution (not Gaussian) and run a SOTA classifier on them. Eventually you'll hit some minor false positives.
It errs in the same way we often see objects in clouds or constellations.
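A rough sketch of that experiment, assuming torch and torchvision are available; the model, batch size, and 0.5 "confident" threshold are arbitrary choices, and the occasional confident misclassification tends to show up only after many batches:

```python
import torch
from torchvision import models

# Pretrained ImageNet classifier (downloads weights on first use).
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
categories = weights.meta["categories"]

# Uniform noise in [0, 1], then normalized with the ImageNet statistics the
# model expects.
noise = torch.rand(64, 3, 224, 224)
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
batch = (noise - mean) / std

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

confidence, labels = probs.max(dim=1)
for c, label in zip(confidence, labels):
    if c > 0.5:  # arbitrary "looks confident" threshold
        print(f"{categories[label]}: {c:.2f}")
```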
I got the point quickly enough (from the title and the initial table of snapshots?), but the author lost me as a reader by saying too little with too many words before they could elaborate on the point.
I sort of came at the combinations of small images from a slightly different angle as a teenager: I imagined that if there were a movie of your life (and everyone else's, in fact every possible life you could live), then you could produce every still frame of that movie, provided you restricted the bounds of the image to something small but understandable. I was very quickly disappointed at how big the numbers get :)
Wonderful, it looks like people are talking over Zoom. Kidding. An image can contain a lot more information if it is designed by a professional artist; an image has a message for everyone.
Would this be a good way to avoid censorship? Encode an image by mixing it with a private key image (maybe XOR the pixels, or something like that), post it online, and anyone with the key image can decode it.
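A minimal sketch of that idea, assuming Pillow and numpy and hypothetical file names; XOR with a key image is essentially a one-time pad over pixel values, so encoding and decoding are the same operation:

```python
import numpy as np
from PIL import Image

def xor_images(image_path, key_path, out_path):
    """XOR two same-sized images pixel by pixel and save the result losslessly."""
    a = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.uint8)
    b = np.asarray(Image.open(key_path).convert("RGB"), dtype=np.uint8)
    assert a.shape == b.shape, "key image must match the secret image's dimensions"
    Image.fromarray(a ^ b).save(out_path)  # use a lossless format such as PNG

# Hypothetical file names; decoding is the same call with the key applied again:
# xor_images("secret.png", "key.png", "encoded.png")
# xor_images("encoded.png", "key.png", "decoded.png")
```

The usual one-time-pad caveats apply, though: a natural photo used as the key has statistical structure that the XOR leaks, so a key of uniformly random bytes hides the image far better, and reusing the same key across posts leaks the differences between them.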
This is probably terrible code by anybody's standards but maybe someone wants to take a look: https://github.com/svenstaro/infinerator