Either the structure of the universe is computational, or the structure of the instrument we use to understand the universe is computational: our minds.
It is difficult to distinguish the two cases in practice.
I don't think it follows that either the universe or our minds are computational. For one, we don't have a satisfactory definition of "mind", so that question has to remain open until we do.
And the universe could be "computational" in a completely unfamiliar way. It might even be humanly unimaginable.
As a wild speculation, consider the extreme case: the universe actually functions as an indivisible whole.
We use math to make predictions about repeating causal patterns, and those predictions invariably hold only within limits of time and precision. If the universe were a true thing-in-itself on its own terms, it could be "computational", but our models of it would always be incomplete and imperfect.
We would never be able to understand it fully, because we would never be able to build a complete and accurate representation of its computational mechanisms.
Note how you insist on a definition for “mind”, but if we are going to keep raising the bar on missing definitions, let’s just go for the sacred cow. What is a definition anyway?
Could you give me a definition for “definition”?
At some point you have to accept that your inability to define every single term in your vocabulary doesn’t hinder your ability to use those terms in practice.
All we have in the way of creating and communicating knowledge is symbol manipulation. Language.
Turing machines are language recognisers, and I am comfortable using them as practically sufficient (for the purposes of conversation) models of minds.
To be sure you see my point: defining “definition” is recursive, and that puts you on computer science turf.
I am a computer. Mathematics is a language; or, more precisely, it is a grammar. Our mathematical models of the universe are computational, and since mathematics can only prove things up to isomorphism, that is as far as colourless, formal, linguistic reductionism will take us.
As for Kantian transcendentalism: look no further than Jean-Yves Girard’s work on Linear Logic, the Geometry of Interaction, and transcendental syntax. It is all computer science intersecting with quantum physics.
You are a biological system. A massive, noisy, dynamical system of interactions we can never hope to model in full fidelity. You exist because of unstable equilibria and tiny little proteins pushing against entropy.
> All we have in the way of creating and communicating knowledge is symbol manipulation. Language.
We can communicate with more than abstract symbolic language. A dog growls and bares its teeth. A thunderstorm communicates its approach.
An inspiring critique of, and response to, confident realist materialism founded on a certain anthropocentric presumption of Cartesian dualism. It exemplified skill and experience, along with its own powerful confidence, deserving great commendation. The education born from witnessing such an exchange has a high aesthetic value to me.
I am pleased that it pleased you, but allow me to correct a misconception arising from the parallax between your view of me and my view of me.
Quantum Entanglement is computation, which makes me a monist and a materialist/physicalist. Computers reify our formal languages by manipulating matter.
I intended for my phrase “confident realist materialism founded on a certain anthropocentric presumption of Cartesian dualism,” in its entirety, to refer to the object of the critique. I apologize if my carelessness caused you to interpret the second half of the phrase (the part after the word “materialism”) as a mischaracterizing signifier of you, the subject and author of the critique. Regardless, I thank you for providing an organized account of your philosophy and its fundamental mechanics.
Let's say I have a computer that can only store 10 bits worth of information, but generates 100 bits of information. Then obviously the information cannot entirely originate in the computer.
If we run every 10-bit program for N steps and none terminates on the target bitstring of length 100, then we've eliminated the computation hypothesis for 10 bits and runtime N.
If we continue this approach and exhaust all the available storage and runtime budgets, then we eliminate the computation hypothesis altogether for our scenario.
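To make the exhaustive test concrete, here's a minimal Python sketch. The machine model is a toy assumption of mine (a linear feedback shift register expanding a 10-bit program into 100 output bits), chosen only so the sketch actually runs; any fixed machine with 10 bits of state would serve the argument.

```python
from itertools import product

TARGET = "0" * 100  # any fixed 100-bit target string

def run_toy_machine(program: str, out_len: int = 100) -> str:
    """Toy stand-in for a machine with 10 bits of storage: expand the
    10-bit program into out_len output bits with a linear feedback
    shift register. Purely an assumption to make the sketch runnable."""
    state = [int(b) for b in program]
    out = []
    for _ in range(out_len):
        out.append(state[-1])
        feedback = state[-1] ^ state[2]  # arbitrary fixed taps
        state = [feedback] + state[:-1]
    return "".join(map(str, out))

# Exhaustively run all 2**10 = 1024 ten-bit programs.
hits = [p for p in ("".join(bits) for bits in product("01", repeat=10))
        if run_toy_machine(p) == TARGET]
print(f"programs reaching the target: {len(hits)}")
```

Since at most 2**10 = 1024 of the 2**100 possible targets are reachable, for almost every choice of target the search comes up empty and the hypothesis is eliminated for this machine and budget.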
Yes, I agree there is always a faster Turing machine for a particular target. With this sort of argument, all bitstrings have a compression of zero bits: you just pick a TM that outputs the target bitstring on the null input. And if it starts with the target already on the output tape, there is also zero time. Thus every bitstring can be generated with zero space and time, given some TM.
So that's why, when talking about compression in general, we keep the reference TM fixed.
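As a rough illustration of what fixing the reference buys you, here's a sketch using zlib as the fixed reference machine: once the compressor is pinned down, compressed length becomes a crude but meaningful stand-in for description length, and random data is no longer "free".

```python
import os
import zlib

def c(data: bytes) -> int:
    """Description length under one fixed reference: zlib at level 9."""
    return len(zlib.compress(data, 9))

structured = b"ab" * 500       # 1000 highly regular bytes
random_ish = os.urandom(1000)  # 1000 bytes, incompressible w.h.p.

print(c(structured))  # small: the regularity is exploitable
print(c(random_ish))  # roughly 1000 bytes or more: nothing to exploit
```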
When testing, there is the possibility that with a single test we happen to have the wrong reference machine. But as the number of tests grows, that probability approaches zero. So with enough testing we can eliminate the linear speedup theorem loophole, at least with high probability.
As a practical application of this sort of thing, check out the 'normalized information distance'. It is based on algorithmic mutual information, which in theory is not approximable, as you argue. In practice, however, it works surprisingly well, even without prior knowledge of the domain.
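The practical variant is the normalized compression distance of Cilibrasi and Vitanyi, which swaps the uncomputable Kolmogorov complexity for the output length of a fixed real compressor. A minimal sketch with zlib; the sample strings are just made-up inputs:

```python
import zlib

def c(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

text_a = b"the quick brown fox jumps over the lazy dog " * 20
text_b = b"the quick brown fox leaps over the lazy cat " * 20
noise  = bytes(range(256)) * 4

print(ncd(text_a, text_b))  # small: the two texts share most structure
print(ncd(text_a, noise))   # near 1: almost nothing in common
```

For longer inputs you'd want a stronger compressor (bzip2 or lzma), since zlib's 32 KB window limits how much shared structure it can see.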
> It is difficult to distinguish the two cases in practice.
https://en.wikipedia.org/wiki/Mind_projection_fallacy