This is not as suggestive as one might imagine given the title or the discussion here so far.
It's not "Turing machine but the tape is limited to 1024 cells" - i.e. you don't get 1024 units of memory and a conditional branch instruction. You can't branch at all.
In order to simulate the Turing machine, you need to run the Rule 110 automaton, which necessarily implies expanding linearly into that 1024-cell space - i.e. your simulated program really isn't going to get very far.
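To make the "expanding linearly" point concrete, here is a minimal sketch (my own illustration, not JPEG XL code) of Rule 110 on a fixed-width row, with cells outside the window treated as 0. Any structure that reaches the edge is simply cut off - the "1024 cells" wall:

```python
# Rule 110: (left, center, right) -> next state; binary 01101110
RULE110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row):
    """One step on a bounded row; out-of-range neighbours read as 0."""
    n = len(row)
    return [RULE110[(row[i - 1] if i > 0 else 0,
                     row[i],
                     row[i + 1] if i < n - 1 else 0)]
            for i in range(n)]

# A single live cell at the right edge grows leftward over time,
# using up the fixed window as it goes:
row = [0] * 15 + [1]
for _ in range(5):
    row = step(row)
print(sum(row))  # live cells after 5 steps
```

Running it shows the pattern spreading left from the seed cell; in a real 1024-cell window the same expansion is what eats the available space.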
FTA: "To help with parallel decoding, JPEG XL decoders work on slices of images 1024 by 1024 pixels in size. So JPEG XL itself isn't Turing-complete; a version of it without the 1024×1024 pixel limitation would be."
Even without that limitation, it wouldn't really be Turing complete since images need to have a finite dimension, which makes the halting problem trivial (everything halts when all finitely many pixels are decoded).
Then again, no real computer is Turing complete because they all have finite memory so they're strictly speaking only big finite state machines (though we do typically consider them Turing complete anyway).
I think the main point here is that the context/prediction model of JPEG XL's modular mode is expressive enough to consider it a kind of mini-programming language which is 'Turing-complete' (with some asterisks like requiring infinite images).
It's not a security risk though: the decode computation per pixel is still bounded, so there is no way to make the decoder go into an infinite loop or anything like that. In that respect it's (intentionally) not Turing complete, unlike some other image formats like PostScript and SVG that do have actual programming languages in them that can cause a decoder/interpreter to go into an infinite loop (safe handling of such files does require sandboxing and time-outs).
> Then again, no real computer is Turing complete because they all have finite memory so they're strictly speaking only big finite state machines (though we do typically consider them Turing complete anyway).
Memory is only bounded in advance if it uses absolute addresses like RAM, e.g. "load 0x12345" refers to the same piece of memory regardless of where in memory that instruction appears (ignoring complications from paging, etc.). Computers can also have external storage that's addressed relatively, e.g. a tape which can move forwards or backwards.
I like to think of Turing machines as having a "tape factory" at both ends of the tape, which appends new cells whenever the read/write head gets close. We can do the same thing with a real computer, by pausing the computer and splicing new sections onto the tape as needed; or by extending the rails and racks of a tape robot.
Of course, our factories are ultimately limited by raw materials, even once production expands into space. The key is that relatively-addressed memory is indefinitely expandable, whilst absolutely-addressed memory has a predefined upper limit.
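The "tape factory" picture can be sketched directly (a hypothetical illustration, not tied to any real machine): a relatively-addressed tape that grows whenever the head reaches either end, so no absolute address limit is ever baked in.

```python
class Tape:
    """A Turing-machine tape with a 'factory' at each end."""

    def __init__(self, blank=0):
        self.cells = [blank]
        self.pos = 0
        self.blank = blank

    def move(self, direction):
        """direction is -1 (left) or +1 (right)."""
        self.pos += direction
        if self.pos < 0:                    # splice a new cell on the left
            self.cells.insert(0, self.blank)
            self.pos = 0
        elif self.pos >= len(self.cells):   # ...or on the right
            self.cells.append(self.blank)

    def read(self):
        return self.cells[self.pos]

    def write(self, symbol):
        self.cells[self.pos] = symbol

tape = Tape()
for _ in range(3):
    tape.write(1)
    tape.move(+1)
print(tape.cells)  # the tape grew to hold every cell the head visited
```

The program never mentions an address space size; only physical raw materials (here, the machine's RAM) bound how far the list can grow.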
That is an interesting point. But the halting problem is one of those things that is only a problem with literally unbounded memory. If you have a known number of bits of state available, just run the machine for 2^bits steps. Either the machine will have halted by then, or it will run forever. Now 2^bits may be astronomically huge, but there's still a big gap between that and undecidable!
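The argument can be written out as a tiny decision procedure (my own sketch): with finitely many possible states, halting is decidable, because by pigeonhole any run longer than the number of states must revisit a state and therefore loop forever.

```python
def halts(step, initial_state):
    """step(state) -> next state, or None when the machine halts.

    Decides halting for a machine with finitely many states by
    running until it halts or a state repeats (which must happen
    within 2^bits steps by the pigeonhole principle).
    """
    seen = set()
    state = initial_state
    while state is not None:
        if state in seen:
            return False  # revisited a state: it loops forever
        seen.add(state)
        state = step(state)
    return True

# A 4-bit "machine" that increments and halts at 0b1111:
print(halts(lambda s: None if s == 15 else s + 1, 0))  # True
# One that counts mod 16 and never halts:
print(halts(lambda s: (s + 1) % 16, 0))  # False
```

For a real computer "state" would be the entire memory contents, so `seen` is hopelessly large in practice; the point is only that the question is decidable, not that the procedure is feasible.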
> That is an interesting point. But the halting problem is one of those things that is only problem on a literally unbounded memory.
What do you mean "but"?
If you say "trust me, there's a tape factory that will kick in once you run low on space in a zillion years and it will add on another terabyte", then you do have "literally unbounded memory".
> I think the main point here is that the context/prediction model of JPEG XL's modular mode is expressive enough to consider it a kind of mini-programming language which is 'Turing-complete' (with some asterisks like requiring infinite images).
Yes, this is exactly the point I intended. I have added a link to Gwern Branwen's https://www.gwern.net/Turing-complete for context. (I considered adding a note on how no physical computer or interpreter running on one is really Turing-complete before submitting the story but decided against it, because the context should convey the asterisks on "Turing-complete".)
I have also quoted your paragraph on security on the page. Something I did not consider was that people may link to my page to support security FUD against JPEG XL, a format I like and look forward to adopting. I hope this is sufficient mitigation.
> Then again, no real computer is Turing complete because they all have finite memory so they're strictly speaking only big finite state machines
Interestingly, cloud computing has made this limitation less obvious: one can imagine a program running on the cloud that bills the cloud provider when it runs out of disk space, and the cloud provider buys more hard drives for its servers as needed. There is still obviously a limit somewhere, but it's much farther away, because it's bounded by factors such as how many hard drives we can build.
If the universe is expanding indefinitely, then it has unlimited potential to store information, for example in the positions of matter inside the universe.
For example, imagine a spacecraft travelling away from Earth with a mirror facing the Earth. From Earth you can send a stream of bits toward it with a laser, and the stream will come back. As the craft gets farther and farther away, you can increase the number of bits in transit in the stream, making it a way to store a potentially unbounded number of bits.
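A back-of-the-envelope version of this scheme (my own numbers, purely illustrative): the beam holds `bit_rate * round_trip_time` bits, and the round trip is `2 * distance / c`, so the capacity grows linearly with the craft's distance.

```python
C = 299_792_458        # speed of light, m/s
bit_rate = 1e9         # assumed 1 Gbit/s laser link
distance = 1.5e11      # ~1 AU in metres

round_trip = 2 * distance / C          # seconds of light in flight
bits_in_transit = bit_rate * round_trip
print(f"{bits_in_transit:.2e} bits in transit")
```

At roughly 1 AU that is on the order of 10^12 bits (about a terabit) held "in the beam"; doubling the distance doubles the storage, which is what makes the thought experiment tempting.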
Of course this is not a really good example, because I'm sure there are lots of reasons why this scheme would not work indefinitely, such as chaos theory preventing us from targeting the spacecraft with the ever-growing precision required.
But still, I would love to see an actual argument for whether or not the information we can store in the universe is bounded.
Even if you could somehow recycle energy forever, more and more of your initial energy supply will be locked up in the laser beam. By E=mc², building an ever-growing laser beam is equivalent to building an ever-growing tape. Even if it's more efficient by a very big constant factor, that's still a constant factor.
You would need a magic box that supplies watts out of nowhere to keep this scheme working. But if you had one of those, it might be easier to just convert some of the output into matter to feed a very slow tape-building machine.
To expand on this: the practical difference is between "can be executed hundreds of times over in a timespan imperceptible to humans" and "will not terminate in a trillion times the age of the universe."
A trillion times the age of the universe? How quaint; you're using durations that humans can, in principle, conceive of.
---
Edit: I'm being ridiculous; the below assumes a Turing machine with infinite tape, with the restriction that it must halt eventually, simulated by a PC. That's not real.
Absent an infinite Turing tape, the number of states a computer can have is very finite: 2^(bits of memory). Assuming 16GiB of RAM (2^37 bits) and no hard drive, that's 2^(2^37) states. Assuming a state transition happens every clock cycle (which it basically would be for a naïve counter – though obviously if you were designing a program to run as slowly as possible, you wouldn't do it this way), and assuming a massively-overclocked 10GHz chip (2^33 Hz or thereabouts), it'd run for at most 2^(2^37 - 33) seconds.
The CMB will probably be undetectable in about 2^62 seconds, so it'd run for 2^(2^37 - 95) times that length of time. Vastly, vastly shorter than my estimate below!!
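For reference, the exponent arithmetic can be checked mechanically; note 16 GiB is 2^34 bytes, i.e. 2^37 bits, so all the interesting quantities live in the exponent:

```python
# Everything in log2 terms: 2^bits states, one transition per cycle.
bits = 16 * 2**30 * 8      # 16 GiB of RAM expressed in bits
log2_states = bits         # a machine with n bits has 2^n states
log2_hz = 33               # ~10 GHz clock, i.e. ~2^33 cycles/second
log2_cmb = 62              # ~2^62 s until the CMB fades

log2_seconds = log2_states - log2_hz   # exponent of max run time in seconds
print(log2_seconds)                    # run time is 2^(this many) seconds
print(log2_seconds - log2_cmb)         # ...or 2^(this many) CMB lifetimes
```

The printed exponents are themselves 12-digit numbers, which is the real takeaway: the run-time bound is finite but far beyond any physically meaningful duration.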
Graham's number is g_64. Don't take g_1000; rather, take g_g_g_…_g_64 nested Graham's number of levels deep, then repeat that construction Graham's number of times - and that's probably not even an upper bound; yet the complete description of that number fits in your computer's memory.
Whatever the limit of a computer program's execution time, it's definitely large enough that it doesn't really need a unit; it's practically irrelevant whether you measure it in yocto-Planck-times or yotta-heat-deaths, or even just count the digits.
Turing completeness is a property of a language, not an implementation. There are programming languages where a bound on memory is not part of the spec. Those are Turing complete even if there can be no perfect implementation of them.
Generative art is interesting in itself, so plenty of nice images can be encoded. I've checked the JPEG XL Discord channel, and there's a small group of people creating these images for fun.
You'd need to exploit an implementation bug, or have the user streaming the image or something (so that some squares appear before others, as the output of your timing attack).
Edit: see comments by dTal and Dylan16807