
ARC is good overall, but when evaluating on such a large dataset (over 5000 traces), we find that its adaptive algorithm is not robust: sometimes the recency queue ends up too small, and sometimes too large. If we look closely, why does "move one slot from the recency queue to the frequency queue upon a hit in the frequency ghost list" make sense? Should a hit near the head of the ghost LRU be treated differently from one near the tail? The implicit parameters may not be reasonable in some cases. But overall, ARC is a good algorithm, just not perfect yet. :)
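
For reference, here is a minimal sketch of the adaptation step being questioned, following the published ARC algorithm (Megiddo & Modha, FAST '03); the variable names are illustrative, and this is not the implementation that was evaluated above. On a ghost hit, the target size p of the recency queue moves by a step that depends only on the relative lengths of the two ghost lists, not on where in the ghost list the hit landed:

  # Minimal sketch of ARC's adaptation rule (Megiddo & Modha, FAST '03).
  # p is the target size of the recency queue T1, c is the cache size,
  # b1_len / b2_len are the lengths of the recency / frequency ghost lists.
  def adapt(p, c, b1_len, b2_len, hit_in_recency_ghost):
      if hit_in_recency_ghost:
          # Hit in B1: grow the recency target.
          delta = max(b2_len / b1_len, 1) if b1_len else 1
          return min(p + delta, c)
      else:
          # Hit in B2: shrink the recency target. The step size ignores
          # whether the hit was near the head or the tail of the ghost list.
          delta = max(b1_len / b2_len, 1) if b2_len else 1
          return max(p - delta, 0)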


