
Even if our minds were computational, we could detect if the universe were not.



Not really. You are still in the paradigm of computation.

Detecting anything is signal processing.

And if I am wrong about this, all the better! It is exactly the right kind of wrongness we need for science to progress.


Let's say I have a computer that can only store 10 bits' worth of information but generates 100 bits of information. Then, obviously, the information cannot entirely originate in the computer.


Sure it can. Compression. Kolmogorov complexity.

I don't need to 'store' the infinite set of natural numbers; that would require infinite space (memory).

But I can describe the infinite set in a single line of Python: nat = lambda x=0: (x, lambda: nat(x + 1)). Each call yields one natural plus a thunk for the rest, so nothing is ever stored.

It's just a space-time tradeoff.
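Concretely, a toy sketch: a rule a few characters long unfolds into 100 bits, so the output can dwarf the storage whenever it is compressible.

    # A ~10-character rule that generates 100 bits of output.
    # The bits are never stored; they unfold from the short description.
    print("01" * 50)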


If the sequence isn't compressible, then you can eliminate that possibility.


You can never determine this with 100% certainty; that is what we have falsification for.

The incompressibility of a sequence is a falsifiable claim.

All it takes is a single demonstration of successful compression.
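Here is what such a demonstration looks like in miniature, with zlib standing in for whichever compressor you like:

    import zlib

    stream = b"01" * 50                    # the stream claimed incompressible
    compressed = zlib.compress(stream)

    assert zlib.decompress(compressed) == stream  # lossless round trip
    # If the compressed form is shorter, the incompressibility claim falls.
    print(len(stream), "->", len(compressed))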


For a limited runtime, you can.


So you were unable to compress the stream given the allocated time.

Perhaps you could've compressed it had you kept running for just a second longer?


I mean run all the shorter bitstrings until time runs out or they terminate.


How does “time run out” in practice other than you putting an upper bound on your computation?

Obviously, the code will not halt until it compresses the stream successfully.


Enumerate all bitstrings of length 10.

Run each for N steps.

If none terminates with the target bitstring of length 100 as output, then we've eliminated the computation hypothesis for 10 bits and N steps of runtime (see the sketch below).

If we continue this approach until we've exhausted all the available storage and time, then we've eliminated the computation hypothesis altogether for our scenario.
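A rough Python sketch of that procedure, where run is a hypothetical stand-in for executing a program on the fixed reference machine for at most N steps (returning its output if it halts in time, else None):

    from itertools import product

    def rules_out_computation(target, bits, steps, run):
        # Try every program of length `bits` on the fixed reference machine.
        for p in product("01", repeat=bits):
            if run("".join(p), steps) == target:
                return False  # a short program produced the target
        return True           # hypothesis eliminated for (bits, steps)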


You understand that "Number of bitstrings of length 10" is a function of the cardinality of your alphabet, right?

A binary alphabet has 2^10 such strings. A decimal alphabet has 10^10 such strings.

So before you can make the assertion you've made, you first have to answer two questions:

1. How many symbols does your alphabet contain?

2. How did you choose that number?

Down that path you arrive at the Linear speedup theorem. https://en.wikipedia.org/wiki/Linear_speedup_theorem


A bit is base 2, as far as I know.

Yes, there is always a Turing machine that can generate the target with any performance characteristics.

So the key is to keep the reference Turing machine fixed.


You can't keep the reference Turing machine fixed, in light of the linear speedup theorem.

Hence the argument for compression.

A Turing machine with a better language is faster.

It compresses time.


Yes, I agree there is always a faster Turing machine for a particular target. By that sort of argument, every bitstring has a compression of zero bits: just pick a TM that outputs the target bitstring given the null string. And if the machine starts with the target already on its output tape, it takes zero time as well. Thus every bitstring can be generated with zero space and time, on some TM.

So that's why, when talking about compression in general, we keep the reference TM fixed.

When testing, there is the possibility that with a single test we have the wrong reference machine. But as the number of tests grows, that probability approaches zero. So with enough testing we can eliminate the linear speedup loophole, at least with high probability.
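A quick illustration with zlib held fixed as the reference (a sketch; exact counts vary run to run):

    import os
    import zlib

    # With the reference compressor held fixed, almost no random string
    # compresses, so the pick-a-machine-per-target loophole disappears.
    failures = 0
    for _ in range(1000):
        s = os.urandom(100)                  # 100 random bytes
        if len(zlib.compress(s)) >= len(s):  # no size reduction
            failures += 1
    print(failures, "of 1000 random strings did not compress")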

As a practical application of this sort of thing, check out the 'normalized information distance'. It is based on algorithmic mutual information, which in theory is not approximable, as you argue. However, in practice it works surprisingly well, even without prior knowledge of the domain.
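For instance, the normalized compression distance swaps a real compressor in for the uncomputable quantity; a minimal sketch with zlib:

    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        # Normalized compression distance: a computable approximation of
        # the normalized information distance, with zlib standing in for
        # Kolmogorov complexity.
        cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

Roughly, similar inputs land near 0 and unrelated ones near 1, with no domain knowledge baked in.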



