
Unsurprisingly, we've now reached the perennial "is premature optimization actually premature" of it all :)

Would it have been better for the person who originally wrote that just-iterate-the-list implementation to have been thinking about data structures and algorithms that would perform better? Opinions on this vary, but I tend to come down on the side of: Optimize for human productivity (for both the writer and the many future readers) first, then profile, then optimize any bottlenecks.

When I come across a performance bottleneck that's easy to fix with a better data structure or algorithm, my assumption is that the person who wrote it consciously chose a simple implementation to start, deferring optimization until profiling revealed the actual bottlenecks.

But I also understand the perspective of "just do simple performance enhancements up front and you won't have to spend so much time profiling to find bottlenecks down the line". I think both philosophies are valid. (But from time to time I do come across unnecessarily complicated implementations in code paths that have absolutely no performance implications, and wish people hadn't done that.)




> Optimize for human productivity (for both the writer and the many future readers) first, then profile, then optimize any bottlenecks.

I don't agree. The problem with this approach is that some optimisations require changes to how data flows through your system. This sort of refactoring is much more difficult to do after the fact, because it changes what happens at the abstraction / system boundaries.
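As a hypothetical illustration (the `db.query_one` / `db.query_all` interface here is made up): a per-item lookup bakes an N-round-trip data flow into every caller, and batching it later changes the boundary itself, not just one function body:

    # Hypothetical sketch: the interface's shape dictates the data flow.

    # v1: per-item lookup. Every caller inherits N round trips.
    def get_user(user_id, db):
        return db.query_one("SELECT * FROM users WHERE id = ?", user_id)

    def enrich_orders_v1(orders, db):
        return [(order, get_user(order["user_id"], db)) for order in orders]

    # v2: batched lookup. One round trip, but the signature changed, so every
    # call site (and anything layered on top of it) must change with it.
    def get_users(user_ids, db):
        ids = list(user_ids)
        placeholders = ",".join("?" * len(ids))
        rows = db.query_all(
            "SELECT * FROM users WHERE id IN (%s)" % placeholders, ids)
        return {row["id"]: row for row in rows}

    def enrich_orders_v2(orders, db):
        users = get_users({order["user_id"] for order in orders}, db)
        return [(order, users[order["user_id"]]) for order in orders]

Retrofitting v2 once v1 has spread through a codebase is exactly the kind of refactoring that's painful after the fact.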

Personally, my approach is something like this: Optimise first for velocity, usually writing as little code as possible to get something usable on the screen. Let the code be ugly. Then show people and iterate, as you feel out what a better version of the thing you made might look like - both internally (in code) and externally (what you show to humans or to other software systems). Then rewrite it piece by piece in a way that's actually maintainable and fast (based on your requirements).


I did mention that people disagree on this :)

I've moved both ways along the continuum between these perspectives at different times. I don't think there is a single correct answer. I'm at a different place than you on it currently, but who knows where I'll be in a year.


Totally fair :) I have the same relationship with static typing. Right now I couldn't imagine doing serious work in a dynamically typed language, but who knows what I'll think in a year, either. From where I'm standing now, it could be ghastly.


Actually, I think you're misunderstanding me. I'm not saying you should profile and optimize all the code you write; I'm saying that a basic understanding of algorithms, data structures, and complexity analysis lets you write better code without any extra tools. I didn't profile this report to find out why it took 30+ minutes to run. I just happened across some code, read it, saw that it was essentially two nested loops matching items by name across two huge lists (50-80k elements each), changed it to use a dictionary instead of the inner loop, and that was that.
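For the shape of that change, a minimal before/after sketch (the field names and record layout here are hypothetical; the actual code isn't shown):

    def match_nested(orders, products):
        # O(n * m): for each order, scan the entire product list by name.
        results = []
        for order in orders:                    # ~50-80k items
            for product in products:            # ~50-80k items
                if product["name"] == order["product_name"]:
                    results.append((order, product))
                    break
        return results

    def match_with_dict(orders, products):
        # O(n + m): build a name -> product dictionary once, then do
        # constant-time lookups instead of the inner loop.
        by_name = {product["name"]: product for product in products}
        results = []
        for order in orders:
            product = by_name.get(order["product_name"])
            if product is not None:
                results.append((order, product))
        return results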

It's a trivial change; it wouldn't have taken any longer to write it that way the first time around. There is no excuse for this; it's just a dev who doesn't understand what they're doing.

That's my point. Understanding these fundamentals allows you to avoid these types of pitfalls and understand when it's okay to write something inefficient and when it isn't.



