It seems like a counterargument to your counterargument could be to treat memory safety as one example of how languages can help eliminate whole classes of bugs. There are other such things too. We could regard memory safety as a great success in this area that programmers can easily appreciate.
Then a follow-up question would be what indications we have that other classes of bugs can or can't also be eliminated with help from languages. (Of course there's also potentially a lot of space between "it's inherently impossible to write programs that are incorrect or unsafe in a specific way", which is something that things like memory safety and static typing can help with, and "it's possible for a programmer to do extra work to prove that a specific program is correct or safe in a specific way", which has been a common workflow in formal methods.)
I would be interested in your view of this; after memory safety and perhaps type safety, what kinds of bugs do you think could be eliminated by future language improvements, and what kinds of bugs do you think can't be eliminated that way?
I don't have a clear take on this so much as a "food for thought" take. I think it's obviously good to kill bug classes. But I also think programs built in memory-safe languages are so insecure that nobody should feel comfortable running them; omnia programmae purgamenta sunt.
I remember arguing on Reddit about the value of seriously type-safe languages, and the notion that they could eradicate things like SQLI bugs (ironically: the place where we still find SQLI bugs? Modern type-safe languages, where people are writing raw SQL queries because the libraries haven't caught up yet). I'm both not sold on this and also stuck on the fact that there's already a mostly-decisive fix for the problem that doesn't involve the language but rather just library design.
Setting aside browser-style clientside vulnerabilities, which really are largely a consequence of memory-unsafe implementation languages, what are the other major bug classes you'd really want to eliminate? Scope down to web application vulnerabilities and it's SSRF, deserialization, and filesystem writes. With the possible exception of deserialization, which is an own-goal if there ever was one, these vulnerabilities are a consequence of library and systems design, not of languages. You fix SSRF by running queries through a proxy, for instance.
Also, the Latin plural of "programma" should probably be "programmata" (which also works for agreeing with the neuter plurals "omnia" and "purgamenta", whereas "programmae" would be a feminine plural which wouldn't agree with the neuter adjectives). Most often Latin uses the original Greek plurals for Greek loanwords.
I posted the sticker I made with this on it and Ben Nagy took me to task for my 10th grade Latin skill, and I felt like I held up okay! I remember there being a reason programmae worked, but I forget what it was; if I know me, I'll spend 20 minutes later today scouring Twitter for the conversation and then posting a 9-paragraph comment about it here. :)
Even if programmae is plausible, it would have to be "omnes" rather than "omnia" to agree with the feminine plural. (In retrospect I agree that it's going to "purgamenta" either way because that's a noun, not an adjective.)
> where people are writing raw SQL queries because the libraries haven't caught up yet
SQL is a pretty good way of representing queries; I don't see how a library could be constructed to do better. But then, as an SQL guy, I have a hammer, so Omens Clavusum Est.
> We could regard memory safety as a great success in this area that programmers can easily appreciate.
No one wants the overhead on current hardware, but C can be made memory safe with bounds-checked pointers (although not particularly type safe without introducing a lot of code incompatibilities).
It's important to distinguish actual language advantages from indulging the common fetish of wouldn't-it-all-be-better-if-we-rewrote-it.
There has been fairly little rigorous study of which programming factors lead to more reliable software (especially in the kinds of environments where most of the software we use is created and used, as opposed to, say, aerospace).
Wouldn't it be sad if enormous amounts of effort were spent rewriting everything in a new language, only to later discover that other properties of that language, such as higher levels of abstraction, or "package ecosystems" that lead to importing millions of lines of impossible-to-audit code to get trivial features, result in lower reliability than something silly like using C on hardware with fast bounds/type-checked pointers?
Well, it depends. That overhead may be quite acceptable in many places; certainly not "no one". However, you've not quantified what it might be. IIRC someone turned on int overflow/underflow checking and it only cost 3% extra time. I'd guess that's down to speculative execution and branch prediction.
I don't like any overhead in time or hardware either, but measure first.