Adding runtime benchmarks to the Rust compiler benchmark suite (kobzol.github.io)
120 points by lukastyrychtr 9 months ago | 28 comments



I know a lot of places tend to use things like AWS spot instances for their CI runners, but they obviously provide inconsistent performance. As others have noted, you can always measure performance through other metrics like hardware counters, not just the absolute runtime duration.

I can recommend getting your company a dedicated instance from OVH or Hetzner, since they are dirt cheap compared to cloud offerings. Set up some simple runner containers with properly constrained CPU and memory, similar to your production environment, hook them up to your GitLab or GitHub, and you are good to go. You don't really need high availability for development things like CI runners.


> but they obviously provide inconsistent performance

No. Unless you're using a "burstable" (overcommitted) instance family, this shouldn't be the case. The performance will be consistent, but you may get preempted before you finish your work. I would keep those concepts separate.


That's true; however, a lot of spot usage ends up being heavily diversified across instance types in order to avoid momentary supply issues and optimize cost.

Across different families, CPU performance can vary by a decent amount.


Sure, but you should still keep the concepts disjoint. "I took whatever I could get" isn't the same as "spot instances are inconsistent".


I've wondered about something like that. Is it possible to get the number of CPU operations some test takes? I get that with modern CPUs it won't be as exact as when I was counting 8080 operations, but it would surely tell you something.

Or, at higher levels of abstraction, things like how many Python bytecodes are executed.


Yesn't, you can, e.g., by using QEMU[0]. This, however, has the obvious downside of not accounting for pipelining and caches, nor for which instructions are encoded in microcode vs. implemented directly in hardware. All of those are very important, but they are CPU-specific (microarchitecture, cache).

[0]: For example: https://www.qemu.org/docs/master/devel/replay.html#instructi... ; more can be found by searching the docs.


Iai [1] uses Cachegrind [2] to count instructions.

[1] - https://github.com/bheisler/iai [2] - https://valgrind.org/docs/manual/cg-manual.html
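
For reference, an iai benchmark is just a plain function plus a macro; this is roughly the shape of the crate's documented usage (details approximate):

    use iai::black_box;

    fn fibonacci(n: u64) -> u64 {
        match n {
            0 | 1 => 1,
            n => fibonacci(n - 1) + fibonacci(n - 2),
        }
    }

    // Each benchmark is an ordinary function; black_box keeps the input
    // from being constant-folded away.
    fn bench_fib_short() -> u64 {
        fibonacci(black_box(10))
    }

    fn bench_fib_long() -> u64 {
        fibonacci(black_box(30))
    }

    iai::main!(bench_fib_short, bench_fib_long);

cargo bench then runs each function under Cachegrind and reports instruction counts and cache accesses instead of wall time, which is what makes the numbers usable on a noisy shared runner.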


On Intel CPUs, yes you can, using perf. See https://www.brendangregg.com/perf.html ; to get started, try the "perf stat" command.
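
If you'd rather read those counters from inside a Rust program than wrap it in perf stat, the perf-event crate does roughly this; a sketch from memory of its basic usage, so treat the exact API as an assumption:

    use perf_event::Builder;

    fn main() -> std::io::Result<()> {
        // As I recall, the default Builder counts retired instructions.
        let mut counter = Builder::new().build()?;

        counter.enable()?;
        let v: Vec<u64> = (0u64..10_000).map(|x| x * x).collect(); // code under test
        counter.disable()?;

        println!("{} instructions retired ({} elements)", counter.read()?, v.len());
        Ok(())
    }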


CI/CD runners are a great example of something that doesn’t require a dedicated 24/7 server. Spot instances are great for this purpose unless you’re looking for ultra-stable benchmarking performance.

If you do need precise performance measurements for customer experience, it’s usually better to run the code on the same hardware you’d be using in production anyway. For client workloads this generally involves getting the actual hardware your customers would be using: Laptops, Macs, and other things you’re not getting from cloud providers.

Also, the performance differences between cloud instances aren't so significant that they warrant going out of your way to set up something dedicated. If you need more performance, just get larger spot instances.


It’d be neat if benchlib was published as a crate. Criterion could use some competitive pressure.


Only tangentially related to the post, but I don't see it mentioned there: what do people use to run benchmarks on CI? If I understand correctly, standard OSS GH Actions/Azure Pipelines runners aren't going to be uniform enough to provide useful benchmark results. What does the rust project use? What do other projects use?


> what do people use to run benchmarks on CI?

Typically, you purchase/rent a server that does nothing but sequentially run queued benchmarks (the size/performance of this server doesn't really matter, as long as the performance is consistent) and then sends the report somewhere for hosting and processing (rough sketch below). Of course, this could be triggered by something running in CI, and the CI job could wait for the results, if benchmarking is an important part of your workflow. Or, if your CI setup allows it, you tag one of the nodes as a "benchmarking" node which only runs jobs tagged as "benchmark", but I don't think a lot of the hosted setups allow this; I've mostly seen it in self-hosted CI setups.

But CI and benchmarks really shouldn't be run on the same host.
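
As a concrete illustration of that setup, the runner can be as dumb as a loop that picks up queued jobs one at a time, runs the suite, and ships the report somewhere. A minimal sketch; the spool directory and the run-benchmarks.sh script are made-up names:

    use std::{fs, process::Command, thread, time::Duration};

    fn main() -> std::io::Result<()> {
        // Hypothetical spool directory: CI drops one file per requested run,
        // containing the commit hash to benchmark.
        let queue = "/var/lib/bench-queue";
        loop {
            for entry in fs::read_dir(queue)? {
                let job = entry?.path();
                let commit = fs::read_to_string(&job)?.trim().to_string();

                // Run the whole suite sequentially; nothing else runs on this
                // box, so results stay comparable between runs.
                let out = Command::new("./run-benchmarks.sh") // hypothetical
                    .arg(&commit)
                    .output()?;

                // "Send the report somewhere": a local file here, but this is
                // where you'd upload it or post a PR comment.
                fs::write(format!("/var/lib/bench-reports/{commit}.txt"), &out.stdout)?;
                fs::remove_file(&job)?;
            }
            thread::sleep(Duration::from_secs(30));
        }
    }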

> What does the rust project use?

It's not clear exactly where the Rust benchmark "perf-runner" is hosted, but here are the specifications of the machine at least: https://github.com/rust-lang/rustc-perf/blob/414230abc695bd7...

> What do other projects use?

Essentially what I described above, a dedicated machine that runs benchmarks. The Rust project seems to do it via GitHub comments (as I understand https://github.com/rust-lang/rustc-perf/tree/master/collecto...), others have API servers that respond to HTTP requests made from CI/chat, and others have remote GUIs that trigger the runs. I don't think there is a single solution that everyone (or even most) is using.


Do I really need dedicated hardware? How bad is a VPS? I mean, it makes sense, but has anyone measured how big the variance is on a VPS?


Dedicated hardware doesn't need to be expensive! Hetzner has dedicated servers for like 40 EUR/month, Vultr has it for 30 EUR/month.

VPSes kind of don't make sense because of noisy neighbors, and since that fluctuates a lot as neighbors come and go, I don't think there is a single measurement you can take that applies everywhere.

For example, you could rent a VPS at AWS and start measuring variance, which looks fine for two months until suddenly it doesn't, because that day you got a noisy neighbor. Then you try a VPS at Google Cloud and that's noisy from day one.

You really don't know until you allocate the VPS and leave it running, but that day could always come, and benchmarking results are something you really need to be able to trust.


Is there something to be said for practicing how you play? If your real world builds are going to be on VPS’s with noisy neighbors (or indeed local machines with noisy users), I’d prefer a system that was built to optimize for that to one that works fantastically when there is 0 contention but falls on its face otherwise.


Different things for different purposes. Measuring how real software behaves under real production workloads in variable environments is useful, but inherently high-variance. It doesn't let you track <1% changes commit-by-commit.

Field work vs. lab work.


Rust uses a dedicated consistent server that runs exclusively benchmark loads, so that nothing else is interfering with the benchmark results.


A solution is mentioned in the article, but perhaps obliquely:

> while I also wanted to measure hardware counters

As I understand it, hardware counters would remain consistent in the face of the normal noisy CI runner.

The article talks about using Cachegrind (via the iai crate) and Linux perf events.

I use iai in one of my projects to run performance diffs for each commit.


> As I understand it, hardware counters would remain consistent in the face of the normal noisy CI runner.

With cloud CI runners you'd still have issues with hardware differences, e.g. different CPUs counting slightly differently. Even memcpy behavior is hardware-dependent! And if you're measuring multi-threaded programs, concurrent algorithms may be sensitive to timing. There are also microcode updates for the latest CPU vulnerabilities. And that's just instruction counts. Other metrics such as cycle counts, cache misses or wall-time are far more sensitive.

To make sure we're not slowly accumulating <1% regressions hidden in the noise, and to be able to attribute regressions to a specific commit, we need really low noise levels.

So for reliable, comparable benchmarks, dedicated hardware is needed.


> With cloud CI runners you'd still have issues with hardware differences

For my project it really is the diff of each commit, which means that I start from a parent commit that isn't part of the PR and re-measure that, then measure each new commit. This should avoid picking up changes in hardware as well as things like Rust versions (if those aren't locked in via rustup).

The rest of your points are valid of course, but this was a good compromise for my OSS project where I don’t wish to spend extra money.


The thing is that tools like Cachegrind are supposed to be used as complements to time-based profilers, not as replacements for them.

If you're getting ±20% differences for each time-based benchmark, it might just be noisy neighbors, but it could also be some other problem that actually manifests for users too.


> used as complements to time-based profilers, not to replace them

Sure. I also use hyperfine to run a bigger test, the way a user would experience the system, and cross-reference that with the instruction counts. I use these hardware metrics on a free CI runner, and hyperfine locally.


I've looked into this before and there are very few tools for this. The only vaguely generic one I've found is Codespeed: https://github.com/tobami/codespeed

However it's not very good. Seems like most people just write their own custom performance monitoring tooling.

As for how you actually run it, you can get fairly low-noise runtimes by running on a dedicated machine on Linux. You have to do some tricks like pinning your program to dedicated CPU cores and making sure nothing else can run on them (rough sketch below). You can get under 1% variance that way, but in general I found you can't really get low enough variance on wall time to be useful in most cases, so instruction count is a better metric.
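
The in-process half of the pinning trick is tiny; a sketch assuming the core_affinity crate (the isolcpus/taskset/frequency-governor part is OS configuration, not code):

    fn main() {
        // Requires the core_affinity crate.
        let cores = core_affinity::get_core_ids().expect("could not list cores");

        // Pick one fixed core (here the last one) and pin this thread to it,
        // so the scheduler doesn't migrate the benchmark mid-run.
        let core = cores.last().cloned().expect("no cores available");
        assert!(core_affinity::set_for_current(core), "failed to pin thread");

        // ... run the benchmark workload on this thread ...
    }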

I think you could do better than instruction count, though, but it would be a research project: take all the low-noise performance metrics you can measure (instruction count, branch misses, etc.), measure a load of wall times for different programs and different systems (core count, RAM size, etc.), and feed it into some kind of ML system; that should give you a decent model for a low-noise wall-time estimate.

Good tips here:

https://llvm.org/docs/Benchmarking.html

https://easyperf.net/blog/2019/08/02/Perf-measurement-enviro...


Surely it’s possible to build some benchmark to demonstrate the difference right? Otherwise, what’s the point of making that improvement in the first place?

I think what you're saying, though, is that having benchmarks/micro-benchmarks that are cheap to run is valuable, and for those, instruction counts may be the only way to measure a 5% improvement (you'd have to run the test for a whole lot longer to prove that a 5% instruction count improvement is a real 1% wall clock improvement and not just noise). Even Criterion gets really iffy about small improvements, and it tries to build a statistical model.


> Surely it’s possible to build some benchmark to demonstrate the difference right? Otherwise, what’s the point of making that improvement in the first place?

No, sometimes the improvement you made is something like 0.5%. It's very very difficult to show that that is actually faster by real wall clock measurements so you have to use a more stable proxy.

What's the point of a 0.5% improvement? Well, not much on its own. But you don't do just one; you do 20, and cumulatively your code is 10% faster.

I really recommend Nicholas Nethercote's blog posts. A good lesson in micro-optimisation (and some macro-optimisation).


> It's very very difficult to show that that is actually faster by real wall clock measurements so you have to use a more stable proxy.

That's what I'm saying though. You don't actually need a stable proxy. You should be able to quantify the wall clock improvement, but it requires a very long measurement time. For example, a 0.5% improvement amounts to a benchmark that takes 1 day completing about 7 minutes earlier. The reason you use a stable proxy is that the benchmark can finish more quickly, shortening the feedback loop. But relying too much on the proxy can also be harmful, because you can decrease the instruction count and slow down wall clock time (or vice versa). Wall clock performance is more complex: branch prediction, data dependencies, and cache performance also really matter.

So if you want to be really diligent with your benchmarks (and you should be when micro-optimizing to this degree), you should validate your assumptions by confirming the impact with wall clock time, as that's "the thing" you're actually optimizing, not cycle counts for cycle counts' sake (the same goes for power or memory usage, if those are what you're optimizing). Never forget that a proxy measurement can stop being a good measurement once it becomes the target rather than the thing you actually want to measure.


At my workplace we use self-hosted GitLab and GitLab CI. The CI allows you to allocate dedicated server instances to specific CI tasks. We run an e2e test battery on CI, and it's quite resource-heavy compared to normal tests, so we have some dedicated instances for this. I'd imagine the same strategy would work for benchmarks, but I'm not sure whether cloud instances fit the bill. I think that the CI also allows you to bring your own hardware although I don't have experience taking it that far.


> I'd imagine the same strategy would work for benchmarks, but I'm not sure whether cloud instances fit the bill. I think that the CI also allows you to bring your own hardware although I don't have experience taking it that far.

Typically, you use the solution in between a cloud-hosted VPS and your own hardware: dedicated servers :)




