I can't help but boggle at the suggestion that in this day and age there are programming environments where doing this is less work than running a real profiler. I mean, come on, profilers that can't handle recursion, or that can't show anything finer than function-level granularity?
Well, it means compiling with profiling enabled, running the program, visualizing the output, and then figuring out the hotspots. That takes some amount of work. (Or you run it under valgrind, in which case it runs so painfully slowly that before you've gotten to the slow part, you've gone home...)
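For reference, that workflow is roughly the classic gprof one, sketched below (assuming gcc on Linux; "myprog" is a placeholder name):

    # the recompile step: build with profiling instrumentation
    gcc -pg -O2 -o myprog myprog.c
    # run normally; the instrumented binary writes gmon.out on exit
    ./myprog
    # turn gmon.out into a flat profile plus call graph
    gprof ./myprog gmon.out > profile.txt

    # or the valgrind route: no recompile, but the program runs far slower
    valgrind --tool=callgrind ./myprog
    callgrind_annotate callgrind.out.*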
Compared to that, firing up the debugger and hitting break is pretty easy. If the hotspot is like core meltdown hot, it's probably faster.
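Concretely, the debugger version is just something like this (a sketch; the pid is a placeholder, and it assumes gdb can attach to the process):

    # attach to the already-running process; attaching stops it wherever it is
    gdb -p 12345
    (gdb) thread apply all bt   # where is it right now?
    (gdb) continue              # let it run, then hit Ctrl-C and look again
    # non-interactive one-liner version of the same idea:
    gdb -p 12345 -batch -ex 'thread apply all bt'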
Plenty of tools don't require recompiling anything for profiling. Running the program once under a profiler seems like it should be less work than running and interrupting it 10 times (heck, a lot of tools won't even require restarting the program if it already happens to be running). Visualizing the output by function, or by something more fine-grained, should be a matter of a single command. And if it's easier to manually find the hotspots by looking at a bunch of stack traces than by looking at the profiler output, there's something badly wrong with said output.
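On Linux, for instance, perf does roughly what's described above (a sketch; the pid, the 10-second window, and the function name are placeholders):

    # sample an already-running process for ~10 seconds, recording call stacks
    perf record -g -p 12345 -- sleep 10
    # one command for the per-function breakdown...
    perf report
    # ...and one more to drill down below function granularity
    perf annotate some_hot_function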
Now, I'm sure there are special circumstances where this method is the right thing. But that's totally different from it apparently being suggested as best practice to newbies.