> The papers I've seen on the latter problem (years ago) start by assuming that the halting problem has been solved, essentially, by giving the agent non-deterministic computational powers.
Of course this is not required. An AI system can simply decline to implement optimizations that it can't prove are correct, per the above link. Alternatively, if "crashing" is the concern, it could register a fault handler that reverts to its previous code.
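To make that concrete, here is a rough sketch of such a loop (hypothetical names; a test suite stands in for a real correctness proof, and a try/except stands in for the fault handler):

    def verify(candidate, tests):
        # Stand-in for a proof of correctness: here, just a test suite.
        try:
            return all(candidate(x) == expected for x, expected in tests)
        except Exception:
            return False

    def self_improve(current, propose_rewrite, tests, rounds=10):
        for _ in range(rounds):
            candidate = propose_rewrite(current)
            # Safeguard 1: skip any optimization we can't show is correct.
            if not verify(candidate, tests):
                continue
            previous, current = current, candidate
            try:
                current(tests[0][0])   # smoke-test the adopted version
            except Exception:
                current = previous     # Safeguard 2: revert to the previous code
        return current

Nothing here requires solving the halting problem; rewrites that can't be verified are simply never adopted.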
The notion that these would be any kind of impediment is completely bizarre to me. As a lowly human I already know how to handle these problems, and we're talking here about a superintelligence more capable than either of us.
Some FOOM scenarios are clearly unphysical, but recursive self-improvement being impossible or infeasible is not among the reasons they fail.
How advanced do you think an AI would have to be before we could say to GPTx "here is your source code - write us a much more powerful AI than yourself"? How far off would you say that is?
It's impossible to say with certainty. I suspect at least one or two further generalization tricks are needed, but that's only speculation. Those generalizations might be simple or somewhat more complex, and so might take anywhere from a year or two to decades to discover. I can only say that they will almost certainly be discovered within my lifetime (say within 40 years). I suspect it will be much sooner than that.
https://en.wikipedia.org/wiki/G%C3%B6del_machine#cite_note-G...