It's not so much about programmers needing to know the exact numbers.
It's more about programmers needing to know what an L1 cache is, the idea that some operations are far faster than others, and so on.
I know a lot of web-dev-type people who have no idea how the CPU works, what a register is, or what paging and virtual memory are. When you're treating compute resources as free and abundant (as web devs like to do these days), of course you don't care. I just wish those devs did care, because their fancy dev machines blind them to the fact that their theoretically simple web app groans on anything other than an i7 with 8 GB of RAM. Their fast internet connections and local servers also seem to make them forget why it's bad that a first load requires megabytes of JS. Sometimes I'd rather browse with my cheap tablet, and that nonsense seriously blows.
There's a lot of gluttony in development these days. I wish every developer were required to take a basic OS or assembly course to see just how much is happening between writing JS and having it actually execute. To see what it really means to program a computer and not a web browser.
I also wish more devs would compare the performance of MS Word with the performance of Google Docs and apply a little critical thinking. On my cheap Surface 3 (4 GB RAM, i3), Word loads instantly, sips power, and does everything I'd ever want locally. Docs takes forever to load, destroys the battery, and is slow as molasses in January in both Edge and Chrome.
Yes, every programmer does need to know these numbers and why the numbers are what they are.
Sadly such (willful?) blindness is far from unique to web devs (though much of it seems to originate with web dev these days).
I have personally encountered people who dismiss the whole "let's stuff everything in /usr" issue by claiming that everyone (or at least everyone they care about) is using lights-out management anyway.
I used to think that. But I was wrong. These relative ratios continue up the stack. Blowing them off is just excuse-making that gives us bloated and slow software.
You may be a programmer in a language that doesn't let you influence L1 cache performance, for instance, but you'd better understand the mechanisms involved and how they apply to your language and computational model five layers up.
Exactly. It's the same myth as premature optimization. Those people tend to ignore good data design (shrink the structs, share fields, order your fields, prefetch arrays, avoid pointer chasing, tries instead of trees, hashes over trees, ...) and instead think of cost in terms of lines or ops.
Which has been horribly wrong for the last 15 years.
Given that simple bit or integer arithmetic can be 50x faster than accessing a bloated field in a bloated struct, they'll never be able to write performant software, nor understand why more code and more lines can sometimes be faster.
Thanks for the read. I definitely agree, and that's where I was going near the end of my article/rant. People don't care if something is sponsored as long as it's what they want to read about. If you're trying to force something down a user's throat, they're not going to respond well to it. It seems like common sense, but apparently we're in the minority on the subject.
I agree with shaggy. I don't know how you would expect to get a VPN accessible only by your account's droplets for $5. A private network means an internal network: it's just using the internal infrastructure of the data center to communicate with other droplets in the same data center, which is more performant than going over the external network. Companies usually advertise a fully private account infrastructure as a "Private Cloud" or something of that nature.
"Private Network" is often synonymous with "Internal Network". They aren't lying; it's private in the sense that it uses private IP assignment within their own infrastructure. Thus, private.