Google, Facebook, Netflix, Uber, Amazon, and Microsoft all use NumPy in their data science pipelines, spinning up and tearing down Docker containers for ML-as-a-service. I'm pretty sure they care about the startup time of both Go and Python.
Again, read the article for what the topic of conversation is. If you're "spinning up an entire Docker container", Python's startup time is going to disappear into the multiple seconds that already takes. You are not spinning up several hundred Docker containers per second, on a sustained basis for hours at a time, on a single piece of hardware, constrained only by Python startup time. Even if you were spinning up containers that quickly, Python would be a vanishing fraction of the problem, and the optimization is already obvious: don't do that; do more per container.
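To put rough numbers on that claim, here's a small sketch that measures the wall-clock cost of launching a bare CPython interpreter. It's an assumption-laden micro-benchmark (numbers vary wildly by machine and by what gets imported), but on typical hardware a bare interpreter start is tens of milliseconds, while a cold container start is commonly hundreds of milliseconds to seconds:

```python
# Rough sketch: measure CPython interpreter startup time.
# Numbers are machine-dependent; this only illustrates the order of magnitude.
import subprocess
import sys
import time


def startup_seconds(argv, runs=5):
    """Median wall-clock time to launch a subprocess that immediately exits."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(argv, check=True)
        samples.append(time.perf_counter() - t0)
    return sorted(samples)[len(samples) // 2]


bare = startup_seconds([sys.executable, "-c", "pass"])
print(f"bare interpreter startup: {bare * 1000:.1f} ms")
```

If you want to see where the time goes once imports like NumPy enter the picture, CPython's `-X importtime` flag breaks down per-module import cost.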
You are conflating systems scripting, which is what would be managing the Docker containers, with what the containers themselves would be doing, which is very likely starting up just one Python instance to "do the thing". I don't imagine there are many systems scripts in the world that are started dozens of times per second and use NumPy. For anything that did, again, the obvious optimization would be "don't do that".