Is there any good, systematic discussion of the trade-off between computational complexity and compression? My intuition is that the lower the entropy of a signal, the more complex the encoders and decoders may need to be to approach it, while giving up a little compression could, in some cases, let much dumber codecs suffice. Is this idea explored anywhere?
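
To make the second half of that intuition concrete, here is a minimal sketch of the kind of trade-off I mean (this is only an illustration, not the theoretical question itself). It uses Python's standard-library `zlib` on made-up toy data; higher compression levels spend more CPU effort for diminishing gains in ratio:

```python
import time
import zlib

# Toy data: highly redundant, so it is very compressible.
data = b"the quick brown fox jumps over the lazy dog " * 2000

# Compare zlib at increasing effort levels: higher levels search harder
# for matches (more work), but the ratio improvement tails off quickly.
for level in (1, 3, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    print(f"level={level}  ratio={ratio:5.2f}  time={elapsed * 1000:6.2f} ms")
```

Empirically this shows the practical side of the trade-off (more encoder work buys marginally better compression), but what I'm after is a systematic or theoretical treatment of it.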