Not just databases - we ran into the same issues when we needed a high-performance caching HTTP reverse proxy for a research project. We were just going to drop in Varnish, which is mmap-based, but performance sucked and we had to write our own.

Note that Varnish dates to 2006, in the days of hard disk drives, SCSI, and 2-core server CPUs. Mmap might well have been as good as, or even better than, explicit read()/write() I/O back then - a lot of the issues discussed in this paper (TLB shootdown overhead, single flush thread) get much worse as the core count increases.




Varnish's design wasn't very fast even for 2006-era hardware. It _was_ fast compared to Squid, though (which was the only real competitor at the time), and, most importantly, much more flexible for the origin-server case. But it came from a culture of “the FreeBSD kernel is so awesome that the best thing userspace can do is to offload as many decisions as humanly possible to the kernel”, which caused, well, suboptimal performance.

AFAIK the persistent backend was dropped pretty early on (eventually replaced with a more traditional read()/write()-based one as part of Varnish Plus), and the general recommendation became just to use malloc and hope you didn't swap.


Varnish has a file-system-backed cache that depends on the page cache to keep it fast (sketched below).

What did you do differently in your custom one that was faster than Varnish?
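
To make the page-cache dependence above concrete, here's a minimal illustration - not Varnish source, and the file name and offsets are hypothetical - of what an mmap-backed cache read path looks like: serving an object is just a memcpy out of the mapping, and the first touch of each page faults it in through the kernel page cache, so a "hit" is only fast if those pages are already resident.

    /*
     * Illustration only (not Varnish source; file name and offsets are
     * hypothetical): an mmap()-backed cache file. Serving an object is a
     * memcpy out of the mapping; the first touch of each page faults it in
     * through the kernel page cache, so a "hit" is only fast if those pages
     * are already resident.
     */
    #define _POSIX_C_SOURCE 200809L
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "cache.bin";          /* hypothetical cache file */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
        if (st.st_size == 0) { fprintf(stderr, "empty cache file\n"); return 1; }

        char *base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); return 1; }

        /* "Serve" a small object from offset 0: if its pages aren't in the
         * page cache, this memcpy stalls on page faults and disk reads. */
        char obj[4096];
        size_t len = st.st_size < (off_t)sizeof(obj) ? (size_t)st.st_size
                                                     : sizeof(obj);
        memcpy(obj, base, len);

        munmap(base, (size_t)st.st_size);
        close(fd);
        return 0;
    }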


Simple multithreaded read/write. On a 20-core 40-thread machine with a couple of fast NVMe drives it was way faster.
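
For anyone curious what "simple multithreaded read/write" can look like, here's a minimal sketch - not the commenter's actual code; the thread count, block size, and file name are made-up values. Each worker thread serves cache reads from a shared file with pread(), which carries its own offset, so threads share one file descriptor without locking and without the page-fault and TLB-shootdown traffic an mmap'd cache generates.

    /*
     * Sketch only (hypothetical parameters): N worker threads reading
     * random blocks from one shared cache file with pread().
     */
    #define _POSIX_C_SOURCE 200809L
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define NTHREADS   8             /* hypothetical */
    #define BLOCK_SIZE (64 * 1024)   /* hypothetical read granularity */
    #define NREQS      1024          /* simulated requests per thread */

    struct worker_arg {
        int      fd;                 /* one shared descriptor */
        off_t    nblocks;
        unsigned seed;
    };

    static void *worker(void *p)
    {
        struct worker_arg *a = p;
        char *buf = malloc(BLOCK_SIZE);

        for (int i = 0; buf && i < NREQS; i++) {
            /* Cheap LCG to pick a pseudo-random block, mimicking lookups
             * of cached objects scattered across the file. */
            a->seed = a->seed * 1103515245u + 12345u;
            off_t off = (off_t)(a->seed % a->nblocks) * BLOCK_SIZE;

            /* pread() takes an explicit offset, so threads never share a
             * file position and need no locking around the read itself. */
            if (pread(a->fd, buf, BLOCK_SIZE, off) < 0) {
                perror("pread");
                break;
            }
        }
        free(buf);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "cache.bin"; /* hypothetical */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        off_t size = lseek(fd, 0, SEEK_END);
        if (size < BLOCK_SIZE) { fprintf(stderr, "file too small\n"); return 1; }

        pthread_t tid[NTHREADS];
        struct worker_arg args[NTHREADS];
        for (int t = 0; t < NTHREADS; t++) {
            args[t] = (struct worker_arg){ fd, size / BLOCK_SIZE, (unsigned)t + 1 };
            pthread_create(&tid[t], NULL, worker, &args[t]);
        }
        for (int t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);

        close(fd);
        return 0;
    }

On fast NVMe, many threads issuing independent preads like this keep the drives' queues full, which is where the speedup over a single mmap'd address space tends to come from.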



