> Pure spinlocks are almost always a bad idea

True, but they might still be appropriate for very tiny critical sections (a handful of instructions) protected by finely distributed locks, where the probability of both contention and preemption is very low.

Also, some applications have the luxury to exclusively dedicate one thread per core.
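
A minimal sketch of such a lock, assuming C11 atomics and that each thread is pinned to its own core; the names are illustrative:

    /* Test-and-test-and-set spinlock: only sensible when the critical
     * section is a handful of instructions and preemption while holding
     * the lock is very unlikely. */
    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct { atomic_bool locked; } spinlock_t;

    static void spin_lock(spinlock_t *l) {
        for (;;) {
            /* One atomic exchange to try to take the lock. */
            if (!atomic_exchange_explicit(&l->locked, true, memory_order_acquire))
                return;
            /* Spin on plain loads so the cache line stays shared instead
             * of bouncing between cores on every failed attempt. */
            while (atomic_load_explicit(&l->locked, memory_order_relaxed))
                ; /* busy-wait */
        }
    }

    static void spin_unlock(spinlock_t *l) {
        atomic_store_explicit(&l->locked, false, memory_order_release);
    }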




> Also, some applications have the luxury to exclusively dedicate one thread per core.

If the latency requirements and CPU architecture allow, be nice and put the spinning CPU into a lower-power state. That saves power and thermal headroom, which lets the other cores doing useful work run at higher frequencies.

Like x86 PAUSE or MONITOR/MWAIT instructions.
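
For instance, the busy-wait loop above can hint the core that it is spinning. A sketch assuming GCC/Clang on x86, where _mm_pause() compiles to PAUSE:

    #include <immintrin.h>   /* _mm_pause() */
    #include <stdatomic.h>
    #include <stdbool.h>

    /* Wait for the lock word to clear while telling the core it is in a
     * spin-wait, which reduces power draw and avoids a pipeline flush when
     * the load finally sees the release. No syscall, no context switch. */
    static void spin_wait(atomic_bool *locked) {
        while (atomic_load_explicit(locked, memory_order_relaxed))
            _mm_pause();
    }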


Someone wrote a comment and then quickly deleted it. Since I already typed a reply, I'll include it here anyways:

---

> Does this mean all spinlock operations are pinned to the low-power core?

That wouldn't make sense. Think rather of a case where a core is allocated so it can operate on some data immediately once the spinlock is released.

> If yes, how is this usually supported by OS/language runtimes?

That really depends, as MONITOR and MWAIT require ring 0. That means executing those instructions in userland always causes an exception. Yeah, I develop kernel drivers... :-)

PAUSE [0] works at any ring level, so it works fine in usermode code. It's encoded just as "REP NOP".
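
To make that concrete: with GCC/Clang inline asm (an assumption, not anything from the linked reference), the two spellings assemble to the same bytes (F3 90), so either works in usermode:

    /* Illustrative only: PAUSE and REP NOP share the encoding F3 90, so
     * both forms emit the same instruction and run at any ring level. */
    static inline void cpu_pause(void) {
        __asm__ __volatile__("pause");    /* same bytes as "rep; nop" */
    }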

> And as soon as the spinlock operations succeed, the work will be moved out of the low-power core, which would involve the usual context switch costs. So I'm assuming this strategy would need a lot of workload-specific benchmarking before you decide to use it.

If you're doing this, you're actively avoiding context switches; that's the whole point of this kind of spinning. The kernel is not involved for usermode code.

[0]: https://c9x.me/x86/html/file_module_x86_id_232.html



