I already said it might be no more than conditional correlations. But that does not rule out an emergent combination of conditionals that could implement some undiscovered algorithm of sentience.
If our brains are no more than computers and our own consciousness is software, then there exists some algorithm, or combination of algorithms, that gives rise to sentience. If these models arrived at this special algorithm during their training, much as the optimization of evolution arrived at it, then we may have created something sentient.
But the fact that our own sentience is a mystery means that there's not a whole lot we can say mechanically about these LLMs other than to talk about their behavior and whether it's convincing.
> If our brains are no more than computers and our own consciousness is software, then there exists some algorithm, or combination of algorithms, that gives rise to sentience. If these models arrived at this special algorithm during their training, much as the optimization of evolution arrived at it, then we may have created something sentient.
Joscha Bach gives a pretty good explanation of the algorithm we follow through the lens of Control Theory, and Stephen Wolfram has a pretty amazing Theory of Everything that explains how such an algorithm could be arrived at.