If you iterate over the list about as often as you remove an element from the middle (within an order of magnitude either way), an array will still be faster despite not having constant-time removal, no matter the size of the list.
To restate, if you have a workload where you iterate over your collection ~500 times, and remove an element from the middle ~1000 times, an array will usually outperform a linked list on a modern computer, no matter the size of the list.
It is not until you do removes/adds far more often, and far from the end of the collection, that the linked list will perform better overall.
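A quick way to see this on your own machine is a rough micro-benchmark. The sketch below is mine, not from the thread (the class name and the sizes are arbitrary, and a real measurement should use a harness such as JMH); it removes from the middle of a java.util.ArrayList and a java.util.LinkedList by index:

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Rough sketch only; use JMH or similar for real measurements.
public class MiddleRemoval {
    static long time(List<Integer> list, int removals) {
        long start = System.nanoTime();
        for (int i = 0; i < removals && !list.isEmpty(); i++) {
            // remove(int index): ArrayList shifts a contiguous block of memory,
            // LinkedList first walks ~n/2 nodes scattered across the heap.
            list.remove(list.size() / 2);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 100_000, removals = 1_000;
        List<Integer> array = new ArrayList<>();
        List<Integer> linked = new LinkedList<>();
        for (int i = 0; i < n; i++) { array.add(i); linked.add(i); }
        System.out.println("ArrayList:  " + time(array, removals) + " ns");
        System.out.println("LinkedList: " + time(linked, removals) + " ns");
    }
}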
That's why I mentioned the LRU implementation. It never iterates over the list; it only adds elements at the front, deletes from the back, and moves elements from the middle to the front. References to the nodes are stored in a map, so no iteration is needed to find an element.
You use a hash map, and make the linked list a linked list of hash entries. So, if you're using chaining (as opposed to the many variations on open addressing):
class Entry<K,V> {
    Entry<K,V> younger; // LRU doubly linked list
    Entry<K,V> older;   // LRU doubly linked list
    Entry<K,V> next;    // for hash map bucket chaining
    V value;
    K key;
}
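For illustration, here is a hypothetical sketch of the O(1) list surgery on that intrusive Entry (the LruList class and its method names are mine, not from the comment). A cache hit unlinks the entry and splices it back in at the young end; eviction pops the old end:

// Uses the Entry<K,V> class defined above. 'youngest' is the
// most-recently-used end of the list, 'oldest' the eviction end.
class LruList<K,V> {
    Entry<K,V> youngest;
    Entry<K,V> oldest;

    void unlink(Entry<K,V> e) {
        if (e.older != null) e.older.younger = e.younger; else oldest = e.younger;
        if (e.younger != null) e.younger.older = e.older; else youngest = e.older;
        e.older = e.younger = null;
    }

    void pushFront(Entry<K,V> e) {
        e.younger = null;
        e.older = youngest;
        if (youngest != null) youngest.younger = e; else oldest = e;
        youngest = e;
    }

    // Cache hit: move to front in O(1), no list traversal.
    void touch(Entry<K,V> e) { unlink(e); pushFront(e); }

    // Cache full: remove and return the least-recently-used entry.
    Entry<K,V> evict() {
        Entry<K,V> victim = oldest;
        if (victim != null) unlink(victim);
        return victim;
    }
}

A get() would then look the key up in the hash map and, if found, call touch() on the entry before returning its value.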
The naive solution using two separate data structures would look like:
class MapEntry<K,V> {
    MapEntry<K,V> next; // for hash map bucket chaining
    DoublyLinkedListEntry<MapEntry<K,V>> lruEntry; // node in the LRU list
    K key;
    V value;
}
class DoublyLinkedListEntry<T> {
    DoublyLinkedListEntry<T> prev;
    DoublyLinkedListEntry<T> next;
    T value;
}
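Note what the naive layout costs: two allocations per cached item instead of one, and an extra pointer hop on every hit (MapEntry to DoublyLinkedListEntry and back) that likely lands on a different cache line. Fusing the two entries into one, as above, halves the allocations and keeps the hot fields together.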
The commenter mentions that in the last sentence: there is some separate data structure that also holds references to nodes in the middle of the linked list.