Beating plain LRU isn't very interesting, but they also evaluated a bunch of other algorithms (e.g. ARC) and concluded that their algorithm performed better than those as well.
I know the ARC paper discusses that other algorithms are often better if properly tuned, although ARC is usually consistently decent in a variety of situations without tuning. It would be awesome to have a new algorithm in that space.
ARC is good overall, but when evaluating on such a large dataset (over 5000 traces), we find that ARC's adaptive algorithm is not robust: sometimes the recency queue ends up too small, and sometimes too large. If we think about it closely, why does "moving one unit of space from the recency queue to the frequency queue upon a hit in the frequency ghost" make sense? Should a hit near the head of the ghost LRU be distinguished from a hit at its tail? The implicit parameters may not be reasonable in some cases. But overall, ARC is a good algorithm, just not perfect yet. :)
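For readers who haven't seen it, here is a minimal sketch of ARC's adaptation rule as described in the ARC paper (the function name and integer-valued step are my own simplifications). Note that the step size depends only on the relative sizes of the two ghost lists B1 and B2, not on *where* in the ghost list the hit lands, which is exactly the implicit choice being questioned above:

```python
def adapt(p, c, hit_in_b1, len_b1, len_b2):
    """Update ARC's target size p for the recency list T1 (0 <= p <= c).

    A hit in the recency ghost B1 grows p (give more space to recency);
    a hit in the frequency ghost B2 shrinks p (give more space to
    frequency). The step is 1, scaled up when the other ghost list is
    larger -- independent of the hit's position within the ghost list.
    """
    if hit_in_b1:
        # delta = 1 if |B1| >= |B2|, else |B2|/|B1| (integer-rounded here)
        delta = 1 if len_b1 >= len_b2 else max(len_b2 // len_b1, 1)
        return min(p + delta, c)
    else:
        # delta = 1 if |B2| >= |B1|, else |B1|/|B2| (integer-rounded here)
        delta = 1 if len_b2 >= len_b1 else max(len_b1 // len_b2, 1)
        return max(p - delta, 0)
```

Whether the hit was on a block evicted long ago (tail of the ghost) or just now (head of the ghost), the adjustment is identical, so the adaptation can overshoot or react sluggishly depending on the trace.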