If I ever see Knuth's quote "premature optimization is the root of all evil" in response to a question again, I think I'll puke. Not only is it hard for outsiders to know what's premature and what isn't, but sometimes it's nice to make a habit of doing things the faster way when you have two choices that are otherwise indistinguishable. For example I try to use ++x instead of x++, even though 99% of the time it makes no difference.
In my opinion, any optimization done before taking the naive approach is usually premature optimization.
That might sound a little extreme, but in the past 5 years I've run into exactly 1 problem that was solved by busting out the profiler and optimizing. In that same time, I can't count on all my digits the number of features that didn't ship, estimates that were overshot, deadlines that were slipped, etc etc. I've even been part of a team that ran out of runway while popping open jsPerf to choose between !! and Boolean(). Our app was fast as hell -- too bad no one will ever get to use it.
If you're expending cycles choosing between ++x and x++ and you're not ahead of schedule, please stop.
That was my point, I'm not expending cycles choosing between ++x and x++. I've just chosen a different default than most of the code I've seen, and you still need to realize when the default doesn't do what you want - but that's usually obvious.
Sorry to hear about your unsuccessful projects, that's a bummer. I hope that premature optimization wasn't a major part of the blame for any of them.
This vastly differs depending on what software you write. I have done lots of necessary performance optimizations. Of course, there is a balance. Saying you should never optimize first is as bad as optimizing everything, in my opinion. It should require a case-by-case "study" of whether it is worth it. This kind of judgement comes with experience.
Writing performant code by using the correct idioms, data structures, algorithms, etc, from the start is just common sense rather than 'premature optimisation'.
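For instance -- the container names here are just my own illustration -- reaching for a hash set instead of a linear scan for membership tests costs nothing extra to write up front:

    #include <algorithm>
    #include <string>
    #include <unordered_set>
    #include <vector>

    // Linear scan: O(n) per lookup. Fine for a handful of items, a liability at scale.
    bool seen_linear(const std::vector<std::string>& seen, const std::string& id) {
        return std::find(seen.begin(), seen.end(), id) != seen.end();
    }

    // Hash set: O(1) average per lookup, and no harder to write from the start.
    bool seen_hashed(const std::unordered_set<std::string>& seen, const std::string& id) {
        return seen.count(id) != 0;
    }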
Writing unreadable, micro-optimised code in the name of performance without even attempting to profile it first is another matter.
My personal rule (as a not-very-good hobbyist) is that if I have to refactor everything in order to accommodate an optimisation in a path that's already 'fast enough', or introduce some arcane hackery that only makes sense with comments into otherwise clean code, then it must be backed up with realistic benchmarks (and show a significant improvement).
I was with you until you got to the ++x/x++ example. It is a bad example because they generate the same machine code except in situations where they are semantically different.
For simple types that's true, and that's part of the 99% I was talking about. Where it makes a difference is class types, C++ iterators being a prime example. Maybe the optimizer can make the two cases generate identical code, but why take the chance?
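A minimal sketch of the difference, using a made-up Counter type rather than a real iterator: the canonical post-increment has to materialize a copy of the old value, and whether that copy gets optimized away depends on the type and the optimizer.

    struct Counter {
        int value = 0;

        // Pre-increment: bump in place and return *this. No temporary needed.
        Counter& operator++() {
            ++value;
            return *this;
        }

        // Post-increment: save a copy of the old state, bump, return the copy.
        // For a fat iterator, that copy is the part you're hoping gets elided.
        Counter operator++(int) {
            Counter old = *this;
            ++value;
            return old;
        }
    };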
I chose that example because it didn't need a full paragraph to explain, but you're right that there are probably better ones. Edit: maybe a Python example? ''.join([a, b, c]) is faster than a + b + c, but again 99% of the time you won't be using it in performance critical code. But it's a useful idiom to know.
From what I remember, for such small examples, a + b + c is significantly faster. (The CPython folks might have done some optimization work on this a few years back?)
It’s only if you plan to concatenate a whole lot of strings that join is a clear winner.
> My rule of thumb is if you haven't profiled it, it's premature to try and optimize it.
The context was specifically about questions on StackOverflow. You don't know if the person asking the question has profiled it or not, and the assumption is often that they haven't. Probably true more often than not, but very condescending and unhelpful to the person who has.
Especially considering that people usually do not quote the part that follows: "Yet we should not pass up our opportunities in that critical 3%".
I remember seeing a blog post from Andrei Alexandrescu that I can't seem to dig up, but this SO post seems to be a nice summary [1]. In short, in 99.9999999% of usages post increment is probably better.
Thanks. I guess I'm applying this rule only when the result of the increment isn't being used directly, so there is no dependency on the operation. When the result is used, the semantic difference between ++x and x++ obviously determines the choice.
Just look at the liveness of the expression x+1. Pre-increment means x+1 has to be alive before whatever uses ++x, whereas post-increment can delay the increment until the next use of x.
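In code terms (just a toy snippet of my own to restate the point):

    void example() {
        int x = 0;
        int a = ++x;  // a needs x + 1, so the increment sits on a's dependency chain
        int b = x++;  // b only needs the old x; the increment can be scheduled after b is produced
    }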
I'd rather take "correctness and safety before performance", and only if the code doesn't meet the performance criteria of a user story, analyse the next steps with the help of a profiler.
99% of the time I don't need to worry about the profiler.
Anything one line is prone to being misinterpreted. IMO, people who're still learning shouldn't be bombarded with a million different things, so one-liners work for them. People who're experienced realize the intent of that one-liner and know enough to tell when it applies.
That hasn't really been a thing since the PDP-11, which had an auto-increment option on the register used for pointer offsets. That's why that feature is in C. It really mattered in 1978.
Interesting, so the C creators used a feature in their language that was hardware-specific. I thought (from having read K&R) that one goal of C was to be hardware-independent. Maybe this was an exception, or maybe that auto-increment was common to many computer designs at that time.
Wow, B. A blast from the past, sort of. I had read a good book about BCPL (the ancestor to B) many years ago. IIRC, it was by Martin Richards, inventor of BCPL. Pretty interesting book and language. BCPL and B were both typeless languages, or languages with just one type, the machine word (16 or 32 bits, don't remember). Still I found that many algorithms and programs were expressed rather compactly in BCPL - or so it seemed to me at the time. Was quite junior then, and without exposure to more advanced programming languages - only knew BASIC and Pascal, probably; even C, I only learned a bit later.
Also just saw some other interesting stuff from the BCPL article above:
[
BCPL is the language in which the original hello world program was written.[3] The first MUD was also written in BCPL (MUD1).
Several operating systems were written partially or wholly in BCPL (for example, TRIPOS and the earliest versions of AmigaDOS).
BCPL was also the initial language used in the seminal Xerox PARC Alto project, the first modern personal computer; among other projects, the Bravo document preparation system was written in BCPL.
]
Interestingly, the code you quoted is not called at all. I guess the linking phase eventually removes it as dead code.
That considered, both pre- and post-increment generate identical code, even with VS2017.
This matches my experience with pretty much any compiler in the last 15 years or so -- there's no difference between ++i and i++ unless, of course, it's used in an expression where it changes the actual meaning of the code.
"it++" case. Note that iterator function is not called.
foo PROC
mov rdx, QWORD PTR [rcx]
xor eax, eax
mov rcx, QWORD PTR [rcx+8]
cmp rdx, rcx
je SHORT $LN70@foo
mov r8, rcx
sub r8, rdx
sar r8, 5
npad 8
$LL4@foo:
add rax, r8
add rdx, 32 ; 00000020H
cmp rdx, rcx
jne SHORT $LL4@foo
$LN70@foo:
ret 0
foo ENDP
Here's the code generated for the "++it" case. The iterator function is not called here either.
foo PROC
mov rdx, QWORD PTR [rcx]
xor eax, eax
mov rcx, QWORD PTR [rcx+8]
cmp rdx, rcx
je SHORT $LN68@foo
mov r8, rcx
sub r8, rdx
sar r8, 5
npad 8
$LL4@foo:
add rax, r8
add rdx, 32 ; 00000020H
cmp rdx, rcx
jne SHORT $LL4@foo
$LN68@foo:
ret 0
foo ENDP
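If anyone wants to reproduce this kind of comparison, here's a minimal sketch of the sort of pair of functions involved -- the names, element type, and loop body are placeholders of my own, not the exact source behind the listings above. Compile with optimizations on (e.g. /O2) and diff the output.

    #include <cstddef>
    #include <vector>

    struct Elem { char bytes[32]; };  // some non-trivial element type

    std::size_t sum_post(const std::vector<Elem>& v) {
        std::size_t n = 0;
        for (auto it = v.begin(); it != v.end(); it++)   // post-increment
            n += v.size();
        return n;
    }

    std::size_t sum_pre(const std::vector<Elem>& v) {
        std::size_t n = 0;
        for (auto it = v.begin(); it != v.end(); ++it)   // pre-increment
            n += v.size();
        return n;
    }

On an optimizing build the two loops should come out identical, which is the point being made here.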
Yes, but if used as their own statement and not as part of some bigger expression (a parameter to a function, etc.), any decent compiler will compile them the same way.
Yes, and thus a perfect example of premature optimization: semantics should define default usage, not some possible micro-optimization. This is exactly the kind of case Knuth is talking about; going around doing ++x because you think it's faster, when the standard idiom is x++, is premature optimization.