I think the rationale for using int for array indices, lengths, and so on is to avoid shooting yourself in the foot with wrap-around arithmetic (again, my thesis being that when 99% of people use integers, they expect them to behave like integers and not wrap).
Take, for example, code that finds consecutive differences in a list, something like:
for i := 0; i < len(list) - 1; i++ {
print(list[i+1] - list[i])
}
This code looks fine and reasonable, and we even avoided the off-by-one error at the end. However, if len(list) is a uint and we run this on an empty list, len(list) - 1 wraps around to the maximum unsigned value (2^32 - 1 or 2^64 - 1, depending on the word size), the loop condition is always true, and the program explodes with an out-of-range access.
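To make the failure mode concrete, here is a minimal, self-contained sketch (hypothetically giving the length and index an unsigned type); on an empty list the unsigned subtraction wraps to the maximum value, the loop condition holds, and the first access panics:

package main

func main() {
	var list []int // empty list
	n := uint(len(list))
	// n - 1 wraps around to 2^64 - 1 on a 64-bit machine
	for i := uint(0); i < n-1; i++ {
		_ = list[i+1] - list[i] // panics: index out of range
	}
}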
So this is the rationale for using signed integers rather than unsigned integers. I think the language semantics should go further and use true (arbitrary-precision) integers pretty much everywhere. Note that the compiler could still easily see that the index variable in the loop above is bounded, and use a machine word under the hood. But the programmer only ever has to think about well-behaved integers.
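As a rough illustration of what "true integer" semantics feel like, math/big already gives wrap-free integers today; the idea is that plain int would behave like this, with the compiler quietly using a machine word whenever it can prove the value fits (a sketch of the semantics, not of how a compiler would implement it):

package main

import (
	"fmt"
	"math/big"
)

func main() {
	x := big.NewInt(1)
	x.Lsh(x, 80)   // 2^80: far beyond any machine word, still exact
	fmt.Println(x) // 1208925819614629174706176
}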
The original thread was about converting int into an arbitrary-precision type: "so that 'int' can become a true integer". One would imagine that, to preserve the ability to convert between pointers and non-pointer numbers, they would just leave uintptr alone and thereby sidestep your concerns, whereas int's role as the "native type" for slice indexes and the like is a more likely reason not to mess with "int" itself.
Okay, that's the source of the confusion. I was not replying to that sentiment, which is a nuanced discussion of what an int should be. I was replying to the_clarence's categorical dismissal of variable-width integer types.