Do the build processes mentioned in this email make heavy use of numpy, matplotlib, scipy, pickle, etc.?
NumPy is very exciting and all, but it's a subcommunity of the Python community, not 90% of its usage. NumPy users are probably not, in general, trying to spawn twenty five thousand processes in sequence to accomplish some task. The people who are complaining about fractions of a millisecond of startup time are not inverting massive matrices.
Most people who import numpy aren't inverting massive matrices either. These libraries provide tons of functionality that you want to use in a short script. For people accustomed to these libraries, the code is quick to write reliably and easy to read, though not quick to run. Everything is great except the startup time (and the speed once you have more data).
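To put rough numbers on that, here's a minimal sketch (exact figures will vary by machine, Python version, and NumPy version) that times a bare interpreter launch against one that imports numpy; `python3 -X importtime -c "import numpy"` can break the cost down further per module:

    import subprocess
    import time

    def launch_seconds(body):
        # Wall-clock time for a full interpreter launch running the given -c body.
        start = time.perf_counter()
        subprocess.run(["python3", "-c", body], check=True)
        return time.perf_counter() - start

    bare = launch_seconds("pass")
    with_numpy = launch_seconds("import numpy")
    print(f"bare startup:       {bare * 1000:.0f} ms")
    print(f"startup plus numpy: {with_numpy * 1000:.0f} ms")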
This group of people is so narrowly defined that it is practically irrelevant. "People using NumPy for system scripting who are intensely sensitive to Python startup time" is not a large enough group to be arguing Python-wide policy around.
"People using Python for system scripting who are intensely sensitive to python startup time" is at least large enough to be worth talking about (since speeding up startup time will mostly only help, modulo any possible resources spent to accomplish it), though I'd notice that it hasn't prevented Python from becoming very popular. And plenty of them will find that Go could meet their needs, in a hypothetical universe in which switching languages was free. (That is, I'm not particularly advocating it. It's a last resort for sure.)
(Also this argument is predicated on the false assumption that Go has nothing like those things. They aren't as mature by any means, of course, and I generally consider them a bad idea [2], but they do exist.)
Google, Facebook, Netflix, Uber, Amazon, and Microsoft are all using NumPy in their data science pipelines, spinning up and tearing down Docker containers for ML-as-a-service. I'm pretty sure they care about the startup time of both Go and Python.
Again, read the article for what the topic of conversation is. If you're "spinning up an entire Docker container", Python's startup time is going to disappear into the multiple seconds that already takes. You are not spinning up several hundred Docker containers per second, on a sustained basis for hours at a time, on a single piece of hardware, constrained only by Python startup time. Even if you were spinning up that many containers that quickly, startup would be a vanishing fraction of the problem, and the optimization is already obvious (don't do that; do more per container).
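For a rough sense of the scales involved, the same wall-clock measurement works for a container launch. This is a sketch only, assuming Docker is installed and a Python image (python:3.12-slim here, chosen purely for illustration) is already pulled locally; the seconds spent on `docker run` dwarf the milliseconds of interpreter startup:

    import subprocess
    import time

    def wall_seconds(cmd):
        # Wall-clock time for a whole command, output suppressed.
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        return time.perf_counter() - start

    container = wall_seconds(["docker", "run", "--rm", "python:3.12-slim", "python", "-c", "pass"])
    bare = wall_seconds(["python3", "-c", "pass"])
    print(f"docker run + python: {container:.2f} s")
    print(f"bare python:         {bare * 1000:.0f} ms")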
You are conflating systems scripting, which would be managing the Docker containers themselves, with what the Docker containers are doing, which is very likely starting up just one Python instance to "do the thing". I don't imagine there are very many systems scripts out there in the world being started dozens of times per second that use NumPy. For anything that did, again, the obvious optimization would be "don't do that".