
Thanks Bruce/Brendan. Interesting info to know.

Another general perf question: how do you go about exploring perf problems? I find I can narrow a performance difference down to "we spent more time in function foo" fairly easily. Understanding why is the hard part (was it an intentional change? Should we have been in there at all? Did some data structure change this time around?).

All in all, I feel like a detective, without a rigorous method to apply when looking at problems. Just wondering if anybody else feels similarly, or if clarity comes with more knowledge and experience.




I've got no silver bullet for that. Recently I found a bunch of functions that were running slower on Windows. I eventually proved that their code had not changed and they were being run the same number of times as before. I had to use perf on Linux to prove that the i-cache miss rate was elevated due to changes elsewhere, making the identical code run slower.
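For reference, a minimal sketch of that kind of measurement, assuming a Linux box with perf installed; the exact cache event names vary by CPU (check `perf list`), and ./myapp is a placeholder for your binary:

  # Compare i-cache behavior against a baseline build.
  # If misses-per-instruction climbs while the instruction count
  # holds steady, the code didn't change -- its cache footprint did.
  perf stat -e instructions,L1-icache-load-misses ./myapp

Running the same command on the old and new builds and comparing the ratios is usually enough to confirm or rule out this kind of "identical code, slower execution" regression.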

Having a good sense of how long something "should" take is certainly important, but investigating small performance regressions is always hard.

That said, there are usually plenty of big performance problems, and those are easier to find, often easier to understand, and more important to fix. Hey, maybe we shouldn't have a 50 MB global variable in each server instance -- whaddya think?

Being a detective is good. Apply the scientific method: form hypotheses, test and reject them, rinse, repeat. That's certainly way better than the oft-applied "random guess based on an Internet search" technique.

Brendan's book is a good read.



