I used to think that, but I was wrong. These relative ratios (L1 vs. RAM vs. disk latency) hold all the way up the stack, and blowing them off is just excuse-making that gives us bloated, slow software.

You may be a programmer in a language that doesn't permit you to influence L1 cache performance, for instance, but you'd better understand the mechanisms involved and how they apply to your language and computational model five layers up.
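
To make "the mechanisms involved" concrete, here's a minimal C sketch (array size and stride are illustrative): it sums the same array twice, once sequentially and once one element per 64-byte cache line. Both loops do the same number of additions, but the strided one streams the whole array from memory 16 times instead of once, so it typically runs several times slower.

  /* Same N additions either way; the strided walk touches one int per
     64-byte cache line per pass, so it re-streams the array 16 times. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define N (1 << 24)   /* 16M ints = 64 MB, far bigger than any cache */

  int main(void) {
      int *a = malloc((size_t)N * sizeof *a);
      if (!a) return 1;
      for (size_t i = 0; i < N; i++) a[i] = 1;

      long sum = 0;
      clock_t t0 = clock();
      for (size_t i = 0; i < N; i++)       /* sequential: one stream */
          sum += a[i];
      clock_t t1 = clock();
      for (size_t j = 0; j < 16; j++)      /* strided: one int per line */
          for (size_t i = j; i < N; i += 16)
              sum += a[i];
      clock_t t2 = clock();

      printf("sequential %.3fs, strided %.3fs (sum=%ld)\n",
             (double)(t1 - t0) / CLOCKS_PER_SEC,
             (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
      free(a);
      return 0;
  }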




Exactly. It's the same myth as "premature optimization". Those people tend to ignore good data design (shrink your structs, share fields, order your fields, prefetch arrays, avoid pointer chasing, prefer tries to trees and hashes to trees, ...) and instead think of cost in terms of lines or operations, which has been horribly wrong for the last 15 years.
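
On the "order your fields" point, a minimal sketch (field names are made up): the same four fields declared carelessly vs. largest-first. On a typical 64-bit ABI the careless order pads out to 32 bytes while the reordered one is 24, i.e. a third fewer bytes per record competing for cache lines.

  /* The compiler pads each field to its alignment, so declaration
     order alone changes the struct's size. */
  #include <stdio.h>
  #include <stdint.h>

  struct padded {        /* careless order */
      uint8_t  flag;     /* 1 byte + 7 bytes padding  */
      uint64_t id;       /* 8 bytes                   */
      uint16_t kind;     /* 2 bytes + 6 bytes padding */
      uint64_t count;    /* 8 bytes -> 32 bytes total */
  };

  struct reordered {     /* largest fields first */
      uint64_t id;
      uint64_t count;
      uint16_t kind;
      uint8_t  flag;     /* 5 bytes tail padding -> 24 bytes total */
  };

  int main(void) {
      printf("padded: %zu bytes, reordered: %zu bytes\n",
             sizeof(struct padded), sizeof(struct reordered));
      return 0;
  }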

Given that simple bit or integer arithmetic can be 50x faster than accessing a field in a bloated struct, they'll never be able to write performant software, let alone understand why more code and more lines can sometimes be faster.
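
A sketch of that point (numbers and layout are illustrative, not measured): the same per-item flag stored as one bit in a packed word vs. behind a pointer to a heap-allocated struct. Counting the bitset streams about 125 KB of contiguous data through a shift and a mask; the pointer version does a dependent load per item, most of them cache misses.

  /* One bit per item vs. one heap allocation per item. */
  #include <stdio.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <stdbool.h>

  #define N 1000000

  struct item { int64_t id; double weight; bool flag; };  /* padded */

  int main(void) {
      uint64_t *bits = calloc((N + 63) / 64, sizeof *bits);
      struct item **items = malloc(N * sizeof *items);
      if (!bits || !items) return 1;
      for (size_t i = 0; i < N; i++) {
          items[i] = calloc(1, sizeof **items);  /* scattered on the heap */
          if (i % 3 == 0) {
              items[i]->flag = true;
              bits[i >> 6] |= 1ull << (i & 63);
          }
      }

      size_t a = 0, b = 0;
      for (size_t i = 0; i < N; i++)             /* shift + mask, mostly L1 hits */
          a += (bits[i >> 6] >> (i & 63)) & 1u;
      for (size_t i = 0; i < N; i++)             /* pointer chase per item */
          b += items[i]->flag;
      printf("bitset: %zu, pointers: %zu\n", a, b);

      for (size_t i = 0; i < N; i++) free(items[i]);
      free(items);
      free(bits);
      return 0;
  }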



