> But then again, it's like trying to achieve perpetual motion in physics. One can't get more intelligence from a system than one puts in the system.
Not necessarily the same thing, as you're still putting in more processing power and checking more possible paths. It's kind of like simulated annealing: sure, the system is dumb, but as long as checking whether you have a correct answer is cheap, it still narrows down the search space a lot.
Yeah, I get that. We assume there's X amount of intelligence in the LLM and try different paths to tap into that potential. The more paths we simulate, the closer we get to the LLM's intelligence asymptote. But then that's it: we can't go any further.
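A minimal sketch of the generate-and-check loop described above, assuming a hypothetical `sample_candidate` generator and a cheap `is_correct` verifier (a toy guess-the-square-root task stands in for a real LLM call):

```python
import random

def sample_candidate(problem):
    # Hypothetical stand-in for one stochastic LLM completion.
    # Here it just guesses an integer; a real system would sample the model.
    return random.randint(0, 100)

def is_correct(problem, answer):
    # Cheap verifier: for many tasks (math with a known check, unit tests,
    # type checkers) verifying an answer is far cheaper than generating one.
    return answer * answer == problem

def best_of_n(problem, n=1000):
    # The "dumb but broad" search: draw many candidates and keep the first
    # one that passes the cheap check. More samples means more compute,
    # not more model intelligence, but the verifier prunes the search space.
    for _ in range(n):
        candidate = sample_candidate(problem)
        if is_correct(problem, candidate):
            return candidate
    return None  # search budget exhausted

print(best_of_n(49))  # usually prints 7, given enough samples
```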