So it's a bit like a program, with sequences also selecting a different Turing machine (which determines how that subsequence is interpreted)?
Because the Turing machine is selected entirely by the sequence (the protein folding caused by the laws of physics is determined entirely by the sequence), the number of possible results (the number of different shapes that could result) can be no greater than the number of different sequences. That is, the information in the phenome seems to be limited by the information in the genome.
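To make the counting argument concrete (just a toy sketch in Python - the 'fold' function here is a stand-in, nothing like real folding): if the mapping from sequence to shape is deterministic, running every possible sequence through it can never yield more distinct shapes than there are sequences.

    from itertools import product
    import hashlib

    def fold(seq):
        # Stand-in for "the laws of physics applied to this sequence":
        # any fixed, deterministic rule works for the counting argument.
        return hashlib.sha256(seq.encode()).hexdigest()[:6]

    sequences = [''.join(p) for p in product('ACGT', repeat=5)]  # all 4^5 = 1024 sequences
    shapes = {fold(s) for s in sequences}
    print(len(sequences), len(shapes))  # len(shapes) can never exceed len(sequences)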
If you think of it as a two-part message, with the first part encoding a model and the second part configuring it, then the DNA can be seen as the configuration, and the laws of physics as the model (which isn't actually coded anywhere the way the DNA is - we'd have to write that part ourselves.)
This model is constant over all life, so DNA from all species (plants and animals) shares the same "model" (the laws of physics that cause protein folding etc.)
Another example of a two-part message is one where the first part is a programming language, and the second part is a program written in that language. For a high-level language (especially with libraries), it's obvious that a very short program might do an awful lot; but the true information content is not that program alone, but the total including the language and libraries it uses.
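A rough way to see the "total including the language" point (a sketch only - the numbers are just whatever happens to be installed on your machine, and the dictionary path is an assumption, never actually run): compare the length of a tiny program's text with the size of the interpreter it silently depends on.

    import os, sys

    # A one-line "program" that leans on a dictionary file and the whole Python runtime.
    program = "print(sorted(open('/usr/share/dict/words').read().split())[:10])"
    print("program text:", len(program), "characters")

    # The "language" half of the message: just the interpreter binary, before
    # counting any of the standard library sitting on disk.
    print("interpreter:", os.path.getsize(sys.executable), "bytes")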
However, and this is my point, I don't believe that the laws of physics have been constructed so conveniently that they provide as much assistance as a high-level language with libraries. At most, nature may have stumbled onto hacks in physics (like surface tension, interfaces and gradients) and exploited them. Actually, given how long it took to get life started, perhaps it had to find a whole bunch of clever hacks (randomly recombining for billions of years over a whole planet) before it came up with a workable model (that is, the model that DNA configures.)
hmmm... we might be able to estimate the information content of the 'model' by how many tries it took to come across it.
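Back-of-envelope version (the trial count below is pure invention, just to show the arithmetic): if hitting on a workable model took roughly N independent random tries, then specifying which try succeeded costs about log2(N) bits, which gives a crude estimate of the model's information content.

    import math

    n_tries = 1e40  # purely illustrative guess, not a measured number
    bits = math.log2(n_tries)
    print(f"~{bits:.0f} bits to single out the winning try")  # ~133 bits for 1e40 tries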
"Finite" would probably have been a better choice.
I have a micro SD card, smaller than my little-finger nail, that holds 4 GB - eight times as many bits as the human genome (using the article's figure of 4 billion bits). And that's pretty much the lowest capacity you can buy. Yet that amount of information is limited/finite: the number of possible states that memory can hold is limited/finite.
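(Working it through: 4 GB = 4 × 8 × 10^9 = 32 billion bits, and 32 billion / 4 billion = 8.)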