What I'm saying is that a lot of energy has gone into "Assuming an attacker has gotten us into an undefined state, let's try to prevent them from pushing us into a chosen redefined state." Maybe instead we can create an environment where we don't end up in undefined states at all, or at least where there are bounds to how undefined they can be.
For example, I'm exploring ending use-after-free bugs by just not freeing memory. This sounds ridiculous until you realize that on 64-bit, leaking virtual memory (and therefore never recycling pointers) is actually not an insane idea, particularly for browsers, which get to kill processes outright whenever they feel like it. Also, there are lots of UaF bugs in there.
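Roughly, a minimal sketch of the kind of thing I mean (the page-granular allocator and the xmalloc/xfree names are just illustrations, not a real design): "free" revokes access to the pages but never hands the address range back out, so a dangling pointer faults deterministically instead of landing on recycled memory.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Pull every allocation straight from mmap, rounded up to whole pages. */
    static void *xmalloc(size_t n)
    {
        size_t len = (n + 4095) & ~(size_t)4095;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }

    /* "Free" by revoking access, never by recycling the address range.
     * The virtual range is leaked on purpose; that's the whole point. */
    static void xfree(void *p, size_t n)
    {
        size_t len = (n + 4095) & ~(size_t)4095;
        mprotect(p, len, PROT_NONE);
    }

    int main(void)
    {
        char *s = xmalloc(32);
        if (!s)
            return 1;
        strcpy(s, "live");
        puts(s);
        xfree(s, 32);
        /* puts(s); would now crash outright rather than reading whatever
         * the allocator happened to hand out next. */
        return 0;
    }

(You can still give the physical pages back with madvise(MADV_DONTNEED); the only thing you never reuse is the address range itself.)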
When you indict things like ASLR as being little more than bait for exploit developers, and later suggest that part of the solution might be a hack involving having free() create zombie addresses, you give the impression of having said "exploit mitigations aren't working, unless they're my exploit mitigations".
(I also don't think yours is a good plan, but I'll wait for you to publish more details before criticizing it further).
Very specifically, I'm interested in exploit mitigations that eliminate undefined states, rather than ones that just hope an attacker doesn't know enough to redefine them. It's much easier to show that "zombie pointers" (fine, we've got lots of space in 64-bit land) will never allow an attacker to exploit a UaF than it is to show that memory is randomized well enough.
At the end of the day, hard bounds checking (however slow it might be) also falls into this category of "approaches that do not try to survive falling into undefined states". I'm not saying ASLR et al. aren't useful, just that we should put more energy into staying within well-defined states.
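To be concrete about the distinction (a toy sketch; the struct and names are invented): a hard bounds check refuses to perform the out-of-range access at all, rather than layering something on top and hoping the resulting corruption can't be steered.

    #include <stdio.h>
    #include <stdlib.h>

    struct buf {
        size_t len;
        unsigned char data[64];
    };

    /* The check fails closed: out-of-range indexes never touch memory. */
    static unsigned char buf_get(const struct buf *b, size_t i)
    {
        if (i >= b->len)
            abort();
        return b->data[i];
    }

    int main(void)
    {
        struct buf b = { .len = 4, .data = "abcd" };
        printf("%c\n", buf_get(&b, 2));  /* fine */
        buf_get(&b, 9);                  /* aborts instead of reading garbage */
        return 0;
    }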
That's ultimately what "better" languages promise, after all. I'm curious whether there are approaches that don't require rewrites, and very interested in actually measuring what does and doesn't absolutely suppress vulnerabilities, and at what performance cost. We're not doing enough of that.
You might be right, you might not be, in the general case. In the specific case, getaddrinfo is not performance-sensitive (to say the least), and that entire block could be specially compiled to run 10x slower, or run inside of a trivially available sandbox.
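For instance, just as a sketch of the cheapest version of that (this is plain process isolation, not glibc's code or a full sandbox): fork a throwaway child, let it make the getaddrinfo() call, and pass only a fixed-size result back over a pipe, so a parser bug corrupts a disposable process rather than the caller.

    #include <arpa/inet.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0)
            return 1;

        pid_t pid = fork();
        if (pid == 0) {                        /* child: the untrusted parse */
            close(fds[0]);
            struct addrinfo hints = { .ai_family = AF_INET,
                                      .ai_socktype = SOCK_STREAM };
            struct addrinfo *res = NULL;
            char text[INET_ADDRSTRLEN] = "";
            if (getaddrinfo("example.com", "80", &hints, &res) == 0 && res) {
                struct sockaddr_in *sa = (struct sockaddr_in *)res->ai_addr;
                inet_ntop(AF_INET, &sa->sin_addr, text, sizeof text);
                freeaddrinfo(res);
            }
            write(fds[1], text, sizeof text);  /* fixed-size, easy to validate */
            _exit(0);
        }

        close(fds[1]);                         /* parent: read a dumb string back */
        char text[INET_ADDRSTRLEN] = "";
        read(fds[0], text, sizeof text);
        close(fds[0]);
        waitpid(pid, NULL, 0);
        printf("resolved: %s\n", text[0] ? text : "(lookup failed)");
        return 0;
    }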