> No, you're capable of learning things. You can't do brain surgery on yourself
What principle do you have for defining self-improvement the way that you do? Do you regard all software updates as "not real improvement"?
> All real things have limitations.
Uh, yep, but that doesn't mean it will be as limited as we are. To spell it out: yes, real things have limitations, but limitations vary between real things. There's no dichotomy between "imaginary things are flawless" and "all real things are exactly equally flawed".
> What principle do you have for defining self-improvement the way that you do? Do you regard all software updates as "not real improvement"?
Software updates can't cause your computer to "exponentially self-improve", which is the AGI scenario. And giving the AI new software tools doesn't seem like a distinctive advantage, since humans could use those same tools; it's not an improvement to the AI "itself".
That leaves whatever the AGI equivalent of brain surgery or new bodies is, but then, how does it know the replacement is an "improvement", or that the result would even still be "them"?
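For concreteness, here's a minimal toy sketch of the distinction being drawn. Everything in it is an assumption made up for illustration: "capability" as an abstract number, the `gain` and `rate` parameters, and both function names. It just shows why fixed-increment updates give linear growth while the "self-improvement feeds back into itself" scenario compounds exponentially.

```python
# Toy model (illustrative only): "capability" is an abstract number,
# and both growth rules below are assumptions, not claims about real AI.

def fixed_updates(capability: float, gain: float, steps: int) -> float:
    """Each update adds a fixed increment, like ordinary software patches.
    Growth is linear: roughly capability + steps * gain."""
    for _ in range(steps):
        capability += gain
    return capability

def recursive_self_improvement(capability: float, rate: float, steps: int) -> float:
    """Each step's improvement is proportional to current capability,
    the assumption behind the 'exponential self-improvement' scenario.
    Growth is exponential: capability * (1 + rate) ** steps."""
    for _ in range(steps):
        capability += rate * capability
    return capability

print(fixed_updates(1.0, 0.1, 50))               # ~6.0
print(recursive_self_improvement(1.0, 0.1, 50))  # ~117.4
```

The disagreement in this thread is over whether anything real instantiates the second rule, not over the arithmetic.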
> To spell it out: yes, real things have limitations, but limitations vary between real things.
I think we can assume an AGI can have the same properties as currently existing real things (humans, LLMs, software programs), but I object to assuming it can have any arbitrary combination of those things' properties. And no currently existing real thing has the property of "exponential self-improvement".
Just because you can imagine something and stipulate that it has magic powers doesn't mean those magic powers can actually exist in real life.
Are you capable of "self-improvement"? (In this AGI sense; not meant as an insult.)