
You could do this, although LRU has the nice property that it is at most a constant factor slower than an optimal (prescient) algorithm that has less memory available [1] (e.g. given twice as much memory as the optimal algorithm, LRU incurs at most roughly twice as many misses). Interestingly, FIFO has the same property, which suggests that even LRU's attempt to keep frequently used items in cache can only have a limited effect (so in some cases FIFO might be faster simply due to lower overhead).
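
For a concrete feel of that bound, here's a quick simulation sketch (my own code, not from the paper; the random trace, the parameter choices, and helper names like lru_misses/opt_misses are just illustrative) pitting LRU with cache size 2k against Belady's optimal offline policy with size k:

    # Sketch: LRU with cache size 2k vs. Belady's optimal offline policy
    # (OPT) with size k. The Sleator-Tarjan bound says LRU(2k) incurs at
    # most about 2x the misses of OPT(k) on any trace.
    import random
    from collections import OrderedDict

    def lru_misses(trace, size):
        cache, misses = OrderedDict(), 0
        for page in trace:
            if page in cache:
                cache.move_to_end(page)        # mark as most recently used
            else:
                misses += 1
                if len(cache) >= size:
                    cache.popitem(last=False)  # evict least recently used
                cache[page] = True
        return misses

    def opt_misses(trace, size):
        # Belady's algorithm: on a miss, evict the cached page whose next
        # use lies farthest in the future (never used again = infinity).
        cache, misses = set(), 0
        for i, page in enumerate(trace):
            if page in cache:
                continue
            misses += 1
            if len(cache) >= size:
                def next_use(p):
                    for j in range(i + 1, len(trace)):
                        if trace[j] == p:
                            return j
                    return float("inf")
                cache.remove(max(cache, key=next_use))
            cache.add(page)
        return misses

    k = 5
    trace = [random.randrange(20) for _ in range(2000)]
    print("OPT(k)  misses:", opt_misses(trace, k))
    print("LRU(2k) misses:", lru_misses(trace, 2 * k))  # <= ~2x OPT(k)

On a random trace like this the gap is usually far from the worst case; the factor-of-two guarantee is about adversarial access patterns.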

LFU, on the other hand, admits access patterns that force it to be arbitrarily slower than LRU (it has no constant competitive ratio at all). From the paper referenced above:

>A counterexample for LFU is a sequence of k + 1 accesses to each of N - 1 pages, followed by 2k alternating accesses to two new pages, k to each, with this pattern repeated indefinitely (so that all but the initial N - 1 pages have k accesses). [here N is the size of the cache]
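
To see that blow-up concretely, here's a small sketch (again my own code, not the paper's; I assume LFU counts every access to a page whether or not it is resident, which matches the paging model) that generates the trace above and counts misses for LFU and LRU with a cache of N pages:

    # Sketch of the LFU counterexample: N is the cache size; each round
    # touches two brand-new pages whose counts never catch up with the
    # initial N-1 "hot" pages, so LFU evicts one of the pair on every
    # alternating access.
    from collections import OrderedDict, defaultdict

    def make_trace(N, k, rounds):
        trace = [p for p in range(N - 1) for _ in range(k + 1)]  # warm-up
        fresh = N - 1
        for _ in range(rounds):
            a, b = fresh, fresh + 1       # two pages never seen before
            fresh += 2
            trace += [a, b] * k           # 2k alternating accesses
        return trace

    def lfu_misses(trace, size):
        cache, count, misses = set(), defaultdict(int), 0
        for page in trace:
            count[page] += 1
            if page not in cache:
                misses += 1
                if len(cache) >= size:
                    # evict the least frequently used cached page
                    cache.remove(min(cache, key=lambda p: count[p]))
                cache.add(page)
        return misses

    def lru_misses(trace, size):          # same helper as in the sketch above
        cache, misses = OrderedDict(), 0
        for page in trace:
            if page in cache:
                cache.move_to_end(page)
            else:
                misses += 1
                if len(cache) >= size:
                    cache.popitem(last=False)
                cache[page] = True
        return misses

    N, k, rounds = 10, 100, 50
    trace = make_trace(N, k, rounds)
    print("LFU misses:", lfu_misses(trace, N))  # ~2k misses per round
    print("LRU misses:", lru_misses(trace, N))  # ~2 misses per round

The alternating phase costs LFU about 2k misses per round (the two fresh pages keep evicting each other) versus 2 for LRU, so the ratio grows without bound as k does.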

[1]: https://www.cs.cmu.edu/~sleator/papers/amortized-efficiency....
