I've spent the last month or so working on a write-behind cache in both Rust and Zig, with RAM + disk tiers, similar to what an OS kernel does with memory page swapping, but at the application level and key-value oriented.
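For anyone unfamiliar with the pattern: write-behind means writes land in the RAM tier immediately and are flushed to the disk tier later, in batches. A minimal Rust sketch of that idea (the struct and method names are my own invention, not the commenter's actual code, and the "disk" here is just an in-memory stand-in):

```rust
use std::collections::{HashMap, VecDeque};

/// Hypothetical write-behind key-value cache: puts go to RAM only,
/// and a deferred flush step persists dirty keys to the disk tier.
struct WriteBehindCache {
    ram: HashMap<String, String>,
    dirty: VecDeque<String>,       // keys awaiting a flush to disk
    disk: HashMap<String, String>, // stand-in for the on-disk store
}

impl WriteBehindCache {
    fn new() -> Self {
        Self { ram: HashMap::new(), dirty: VecDeque::new(), disk: HashMap::new() }
    }

    /// Writes hit RAM only; the disk write is deferred.
    fn put(&mut self, key: &str, value: &str) {
        self.ram.insert(key.to_string(), value.to_string());
        self.dirty.push_back(key.to_string());
    }

    /// Reads check RAM first, then fall back to the disk tier.
    fn get(&self, key: &str) -> Option<&String> {
        self.ram.get(key).or_else(|| self.disk.get(key))
    }

    /// Flush up to `batch` dirty keys (in a real system this would
    /// run on a background thread or timer).
    fn flush(&mut self, batch: usize) {
        for _ in 0..batch {
            match self.dirty.pop_front() {
                Some(key) => {
                    if let Some(v) = self.ram.get(&key) {
                        self.disk.insert(key, v.clone());
                    }
                }
                None => break,
            }
        }
    }
}

fn main() {
    let mut cache = WriteBehindCache::new();
    cache.put("k1", "v1");
    assert!(cache.disk.get("k1").is_none()); // not on disk yet
    cache.flush(16);
    assert_eq!(cache.disk.get("k1").map(|s| s.as_str()), Some("v1"));
    println!("write landed in RAM first, reached disk on flush");
}
```

The tradeoff, of course, is that anything still in the dirty queue is lost on a crash, which is why this pattern suits caches rather than systems of record.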
My experience trying out cache algorithms is that they are all very generic: cache benchmarks are typically based on random distributions or on web-request datasets (Twitter, CDNs, ...) that may not match your use case. Mine is about caching data stream transformations with specific ORDER BYs. Hits may arrive very far apart in time, and LFU ended up working better for me. Your choice of eviction limit also matters a lot, whether you cap by number of items or by RAM use (my case). So don't go running to the latest "benchmark-proven" cache algorithm paper without weighing your specific needs.
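To make the two points above concrete, here is a toy LFU cache in Rust that caps by total value bytes rather than item count. It's a sketch under my own assumptions (the names and the linear-scan eviction are hypothetical, not the commenter's implementation, and a real LFU would use a frequency-bucket structure and some aging):

```rust
use std::collections::HashMap;

/// Toy LFU cache capped by total value size in bytes, not item count.
struct LfuCache {
    entries: HashMap<String, (Vec<u8>, u64)>, // key -> (value, hit count)
    used_bytes: usize,
    max_bytes: usize,
}

impl LfuCache {
    fn new(max_bytes: usize) -> Self {
        Self { entries: HashMap::new(), used_bytes: 0, max_bytes }
    }

    fn get(&mut self, key: &str) -> Option<&[u8]> {
        self.entries.get_mut(key).map(|(v, freq)| {
            *freq += 1; // pure LFU: count every hit, no aging
            v.as_slice()
        })
    }

    fn put(&mut self, key: String, value: Vec<u8>) {
        // Evict least-frequently-used entries until the new value fits
        // under the RAM budget (O(n) scan; fine for a sketch).
        while self.used_bytes + value.len() > self.max_bytes {
            let victim = self
                .entries
                .iter()
                .min_by_key(|(_, (_, freq))| *freq)
                .map(|(k, _)| k.clone());
            match victim {
                Some(k) => {
                    if let Some((v, _)) = self.entries.remove(&k) {
                        self.used_bytes -= v.len();
                    }
                }
                None => break, // empty cache: value larger than the budget
            }
        }
        if let Some((old, _)) = self.entries.remove(&key) {
            self.used_bytes -= old.len(); // overwrite resets the hit count
        }
        self.used_bytes += value.len();
        self.entries.insert(key, (value, 0));
    }
}

fn main() {
    let mut cache = LfuCache::new(8);
    cache.put("a".into(), vec![0; 4]);
    cache.put("b".into(), vec![0; 4]);
    cache.get("a"); // "a" now has one hit, "b" has none
    cache.put("c".into(), vec![0; 4]); // evicts "b", the LFU entry
    assert!(cache.get("a").is_some());
    assert!(cache.get("b").is_none());
    println!("evicted by frequency, capped by bytes");
}
```

Note how the byte cap changes behavior versus an item cap: one large value can evict many small, frequently-hit ones, which is exactly the kind of policy detail generic benchmarks won't surface for you.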
Do you mind sharing more details on the time span of the workloads in your benchmark? Are you using LRU with no aging? Drop me an email if you would like to chat more: juncheny@cs.cmu.edu