Finger trees are an example of a truly elegant functional data structure that can be used for just about everything. The problem is they don't match the memory model of real hardware, so for all their elegance and theoretical performance, in reality they're too slow to be useful.
It's true that they have a high constant factor, but I think it's too harsh to say they are not useful.
One could also say that linked-lists don't match the memory model of real hardware.
It's okay to lean on the hardware a bit, provided you're careful: most programmers don't have to squeeze maximal performance out of the hardware, just good enough performance. The extra code you need to make arrays work may not be worth the speed boost.
Also, if you're working in an HLL, investigate your implementation details. In Common Lisp, for example, a lot of work has been put into making LLs as fast as possible: If you create a list with QUOTE (such lists are immutable), then thanks to an optimization called CDR-coding, you'll get a data layout very similar to an array under the hood.
OTOH, it is important to understand why LL perf isn't actually what the textbooks claim, and to consider using a different structure in contexts where performance matters. Just don't go too far the other way, either: LLs aren't dead just yet.
I believe (intrusive, as they usually are) LLs are so prevalent in C applications mainly because they are the simplest-ever list structure where insertion and deletion of items is just a no-brainer to implement in a couple lines of C macros. They are also inherently self-describing (ie. you don't need to pass around anything except one pointer).
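For illustration, a minimal version of the kind of thing meant here - a singly linked intrusive list where each item embeds its own next pointer, so a single pointer is all you ever pass around (names are mine, not lifted from any particular codebase):

    struct item {
        int payload;
        struct item *next;   /* the intrusive link, embedded in the item itself */
    };

    /* Insertion and removal at the head really are one-liners. */
    #define LIST_PUSH(head, it)  do { (it)->next = (head); (head) = (it); } while (0)
    #define LIST_POP(head)       do { (head) = (head)->next; } while (0)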
And this is totally fine in many places. However, when the list is frequently (ie. performance is relevant) used, then it's likely that a LL is not the right choice. And usually one notices that in these cases no LL is used. For example, one might use a LL for a list of drivers, while eg. directory entries will be kept in a hash-table, tree or trie.
So do real arrays on real hardware (pagetable trie).
Small arrays are contiguous so memcpy is fast.
For large arrays, if you create a dummy file (or use a memfd) you can mmap a region, and then copy it by mmapping the same region with the appropriate flags. You can resize (grow) it almost as easily.
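Roughly like this on Linux (a sketch only - error handling omitted, and the "copy" here is a second copy-on-write view of the same pages rather than a byte-for-byte copy; note that whether later writes through the shared view show up in the private view is unspecified, so take the private mapping at the moment you want the snapshot):

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t len = (size_t)1 << 30;             /* 1 GiB backing region */
        int fd = memfd_create("bigarray", 0);     /* anonymous file, no /dev/zero needed */
        ftruncate(fd, (off_t)len);

        /* The "original" array. */
        char *a = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        /* A cheap "copy": map the same offset again, copy-on-write. */
        char *b = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);

        /* Growing is just ftruncate() to a larger size plus a bigger mapping. */
        (void)a; (void)b;
        return 0;
    }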
They're so much faster than linked lists that there's no (performance) reason to use linked lists on real hardware.
The Linux kernel uses intrusive linked lists extensively. Container-based linked lists store a copy of the data item in each node. Intrusive linked lists instead force the data structures in the list to contain the list pointers inside the actual data structure, and the list operations manipulate those list-specific pointers directly.
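A simplified illustration (not the kernel's list.h verbatim): the link fields live inside the structure itself, so allocating one foo also allocates its membership in every list it can ever appear on, and you get back from a link to its containing object with pointer arithmetic:

    #include <stddef.h>

    struct link { struct link *next, *prev; };

    struct foo {
        int data;
        struct link by_name;   /* hypothetical: threads foo onto a by-name list */
        struct link lru;       /* hypothetical: threads the same foo onto an LRU list */
    };

    /* Recover the containing foo from an embedded link. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))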
I am not sure if anyone has evaluated using alternatives, but my understanding of why has generally been memory efficiency. If you allocate a Foo, you also allocate everything you need for Foo to be on all of the lists it may appear on. I couldn't find any confirmation (with actual numbers) for this intuition, but this is what I could find:
I'm quite aware of how inappropriate linked lists are for most applications because of their poor cache locality. I thought the Linux kernel may be a case where linked lists are actually better, but I can't find any numbers or even arguments why they would be.
Linux uses linked lists because it is simple to code. That linked lists are slow doesn't matter very much because there are lots of slow parts in Linux that are a better use of attention.
Doom moved to vectors (arrays) because linked lists are slow, and because there weren't enough other slow parts that needed attention.
Linked lists often get used in low-level code and embedded systems in situations where you might need multiple statically allocated entries to be turned into a list of unknown length. There certainly are times when this comes in useful, but it is also a memory-constrained space with an emphasis on determinism, where malloc and new can be bad ideas.
There is a time and a place for them, but if you need speed I agree... there are better solutions.
An OS kernel has no business maintaining, and even less business accessing in bulk, large data structures that are performance-critical enough to worry about coherent memory access. It is also an infrastructure-deprived environment in which swapping around pointers to implement a linked list with correct locking is relatively easy to get right but allocating vectors is out of the question.
> An OS kernel has no business maintaining, and even less business accessing in bulk, large data structures that are performance-critical enough to worry about coherent memory access
File systems tend not to have large data structures. They are relatively modest data structures which manage bulk access to large blocks of data. A subtle distinction, but important in this context.
The data structures under discussion here are those used to track individual files. Yes, there can be squillions of them, but as data structures, they are, in fact, rather modest.
We can evaluate the Linux kernel developers' priorities by benchmarking[1] and assuming they aren't stupid, because if they are stupid, then their opinion doesn't matter, and if they're not and they're not making things faster, then it's because they have other priorities.
That being said, there are a few[2] notable[3] moves away from linked lists that were ostensibly for performance reasons.
You provided a reason for why the kernel does a certain thing (easier to code; performance in those places doesn't matter). I was asking if this was your understanding based on inference (your understanding of the performance trade offs in general combined with the fact things are done a certain way) or from fact (claims made directly by kernel developers, or experiments).
You can't necessarily replace intrusive linked lists with arrays. They have more operations. You can traverse so far in one list then switch to traversing in another.
One list is traversed when committing blocks, so a linked list was never necessary - just a commit-list (vector of pointers).
Another list is traversed when dequeueing the next lock, so again: a linked list isn't necessary, just a dequeue (which might not have to be serialised).
Another list is traversed when finding the next blocked reader, but again serialisation wasn't required here.
However, when you have the right data structure, you'll find you won't need it.
Benchmarking is important because searching for the right data structure is time consuming (expensive for the programmer) and it's usually not necessary.
When you need that kind of behaviour, you can't replace it with something flat (without doing binary searches or similar at each crossing). Sometimes intrusive lists are the right thing.
I built something once that used intrusive skiplists because it needed to expire elements using one ordering and search them by another. It would have been much less efficient if I'd broken it up into multiple flat representations.
(Actually, it was flat, but explaining that aspect of it is quite difficult).
> memory efficiency. If you allocate a Foo, you also allocate everything you need for Foo to be on all of the lists it may appear on.
I can't speak for the kernel developers, but by the same line of reasoning it may be more significant that there are no error paths.
When you allocate an 'object', further changes of state (eg. addition to or removal from other lists) in its lifetime can be done without the possibility of running out of memory or address space. This can be a huge benefit to overall program structure in certain types of application.
In contrast, addition to or removal from a vector or flat list can ultimately require reallocation, which reintroduces failure paths.
My example would probably be the work of Phil Bagwell.
IMHO the biggest issue with linked lists is that they do not support parallelism; they are inherently sequential.
If you want queues, and you can give up sequential, then you can get huge gains[1] in performance. Specific use cases will have better specific solutions.
The page table on x86/amd64 is a trie. OSDEV[1] has a pretty good page on it.
All you need to remember is that this is what is inside the operating system and hardware layers, so you're already paying for it. What I propose is taking advantage of it. If you can arrange things correctly (see my other comment describing this more fully[2]) you might find things that seem expensive (when dealing only with von Neumann memory) are suddenly very cheap when you accept you're programming a piece of real hardware.
I'm interested too. I've been thinking about the idea, and it seems like you'd have to be sure to re-use the hell out of your file descriptor on /dev/zero (otherwise, you could easily run out of file descriptors). It also seems like you're trading cache misses for system calls if you have to re-map pages a lot. Maybe it's a clear win, but I'd like to understand it better.
No need for /dev/zero: Linux has memfd[1] and OSX has vm_remap[2]. You only need one file descriptor per heap because Linux lets you poke holes with fallocate[3].
I'll define objects with a header that looks roughly like this:
    struct header {
        off_t phys;
        size_t len;
        int localrefs;
        short flags;
        char mstrategy, type;
    };
phys is the offset in the heap file descriptor. len is the number of units, and type is an index into an array of unit sizes.
mstrategy is used to select the bucket size (allocated range is a power of two, so 1<<(mstrategy&31)) and the heap number.
localrefs is an optimisation which I'll get to.
If I want to allocate an array of type t, size n, I can use a BSR[4] to identify the bucket that it needs, and see if there's anything on the free list. If there isn't, I can see if I can split a larger bucket into two parts (this is effectively Knuth's buddy allocator).
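Roughly, the bucket-selection step looks like this (my own naming, just to illustrate the BSR part - find the smallest power-of-two bucket that holds n units):

    #include <stddef.h>

    static unsigned bucket_for(size_t n, size_t unit_size) {
        unsigned long long bytes = (unsigned long long)n * unit_size;
        if (bytes <= 1)
            return 0;
        return 64u - (unsigned)__builtin_clzll(bytes - 1);   /* ceil(log2(bytes)) */
    }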
I know that objects with (mstrategy&31) < BSR(page size) need to be moved just like with a classical allocator when appending or prepending, but when (mstrategy&31) >= BSR(page size) I can take a virtual address for a bigger region, then either mmap the memfd (using phys) or vm_remap the region into the bigger one. Instead of copying the contents of the pages, the operating system will simply copy the page table (which is 3 levels deep[5], hence the log log log, although using 1G pages means log log with a lower coefficient). This is a tremendous win for problems that need to deal with several large arrays.
Now localrefs gives me a further optimisation: In-process, I can track the reference count of objects, and if my functions always consume their arguments I know inside the grow/append/prepend routine if this is the only holder of the object. If it is, I can potentially reuse this virtual address immediately, saving 3-4 syscalls.
When it's time to deallocate, I can put small objects on my free list, and garbage collect any big objects by calling fallocate() on the physical address to poke a hole (freeing system memory). OSX doesn't need fallocate() because mach has vm_unmap.
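The hole-punching step is a single call on Linux (a sketch; heap_fd is the memfd, and nbytes is the object's size in bytes, which I'm assuming can be derived from len and the unit-size table):

    #define _GNU_SOURCE
    #include <fcntl.h>

    static void release_object(int heap_fd, off_t phys, off_t nbytes) {
        /* Free the backing pages without shrinking the heap file. */
        fallocate(heap_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, phys, nbytes);
    }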
> It also seems like you're trading cache misses for system calls if you have to re-map pages a lot
Cache misses aren't the dominant force here.
The real trade off is illustrated with benchmarking: memcpy one page, versus 8 bytes (page table entry). How many pages do you need to copy before it is faster to pay the fixed (<100ns) cost of the system call and just copy the page table entries?
Memory streams at a rate of around 10GB/sec, but the TLB flush is ~100ns and the memory latency is only around 10ns, so it's easy to see how quick the gains add up when you're using 1GB pages.
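Back-of-envelope with those numbers: at 10GB/sec a single 4KiB page takes roughly 400ns to stream, so copying even one page already costs more than the syscall plus TLB flush, and one remapped 2MiB or 1GiB entry stands in for roughly 200 microseconds or 100 milliseconds of copying respectively.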
There are many strategies to deal with that efficiently. Eg. negative indices (having memory before index 0), append-and-reverse.
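The negative-indices option can be as simple as keeping headroom in front of element 0, so prepend is normally just a pointer decrement and a store (illustrative sketch, my own naming, grow path omitted):

    #include <stddef.h>

    typedef struct {
        int   *base;    /* start of the allocation */
        int   *begin;   /* current element 0 */
        int   *end;     /* one past the last element */
        size_t cap;     /* total capacity, in elements */
    } headroom_vec;

    static void prepend(headroom_vec *v, int x) {
        if (v->begin == v->base) {
            /* out of headroom: grow and recentre, or fall back to
             * append-and-reverse (omitted) */
            return;
        }
        *--v->begin = x;
    }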
You can also map memory before an existing mapping, but it's a bit system-and-platform dependent. (But an entirely feasible thing to do).
Naively you can also grow-and-shift or copy ... the first one is actually incredibly fast due to perfect locality, and also very simple to implement. It's still, technically, O(n^2) when building a list, so it's usually a better idea to go for append-and-reverse (which works outside the actual list implementation). Normally that's not a problem at all, and often no reversal has to be materialized.
If you know the format you want, and read infrequently, just implement it as an append and read it backwards.
If you have frequent reads, you can also consider writing your data to a file (or a memfd), and then using mmap to construct the memory layout you want. You will then see that prepend is worst-case log^3 (the depth of the pagetable, assuming you have to keep moving the whole thing) time, which is much better than finger trees, and if you prepend often you can make it constant time (by choosing a high start virtual address: remember, there are 64 bits of address but only 48 of them are actually used, so you can have 64k big objects without much work), which makes it faster than linked lists.
Another idea: use VMX extensions and collaborate between each layer to trade a lower constant mean and coefficient to parameterise max to log^k. Gains here might require very big data sets though.
Deques support fast prepending. You can implement them on top of shared memory, although as the canonical implementation is block-based and uses indirection for access, the dynamic resizing capability is less useful.
Even ignoring real hardware, finger trees are only efficient in an amortized sense where more specialized purely functional data structures offer better real-time guarantees.
Firstly, let me say you're most likely correct. I do find finger trees useful in functional languages as well, and for creating immutable structures.
I want to add some general comments for anyone thinking about using a finger tree or anything else.
Give any potential data structure a try with real data, not contrived stuff, and benchmark on the actual target hardware and in a real code path if possible. Further, try it with real-world access patterns like adding, removing, ordering, etc. at different times and at different sizes - the results can be interesting. Even if something does well for one use-case, it may perform awfully for the other use-cases you need it for, so sometimes it's a compromise. Data structure performance is quite often not the performance of a single operation, but a function of several operations. This makes it even more important to test with real data when possible and to access it in different ways.
Moreover, too many times I see positive or negative benchmarks where someone is filling a data structure with exactly the same integer, or the same instance of an object, or something like that. In practice, the underlying data can introduce other considerations based on size, pointer chasing, memory allocations (especially if the data structure can dynamically resize) and so on, in just about every language. Factor in hardware, and the results can change even further. Also ensure your benchmarking itself isn't flawed - we've been there and discussed it ad nauseam and yet I continue to see bad benchmarking in the wild (particularly on the JVM or CLR).
As a related example, I've found that even the famed linked list can suck in the real world. I've always been a "choose the right data structure for the job" kind of guy, and yet sometimes it's more deceptive than it seems. You might say, oh, my access patterns are exactly a linked list, and yet a simple array with lots of helper cruft code will smoke it in practice. Likewise, I've seen shocking results replacing something with a skip list or another data structure that seems like it should perform worse, but depending on the data, size, hardware, and implementation, I get interesting results. I've seen plenty of stupid things, like someone who picked a hashtable for fast lookups, only to iterate through it in some other code path for some other use-case accessing the same data, completely murdering overall application performance due to selecting based on a single criterion. If you're using a language where you can create your own allocators, it can become even more interesting than you think - i.e. there are gains or losses to be had.
The moral is simple: "just try it." This comes up a lot in game programming, for example, and I've seen it have huge effects in embedded programming as well. I can't even begin to explain how much faster I've made things just by changing a data structure because of real-world conditions vs. lab conditions. Don't just assume that picking what the textbook says in the Big O table is what will actually dictate performance, positive or negative. There's a reason most good data structure books also contain explanations of behavior and performance, though even books don't and can't explain the real-world nuances of working with non-contrived data.
Sure, if you need to do hundreds of millions of insertions per second. In the vast majority of real-world use cases, most of your time is going to be spent elsewhere. Spending a microsecond waiting on memory lookups for a cold finger tree isn't going to do much.
Drop to using less convenient array-backed structures in the few places you need really fast operations, and use slightly slower (constant-wise) data structures where it's more convenient.
Not if you're writing UI code, networking code, most IO-related code, etc. A microsecond is very small compared to the time penalty you are likely to incur doing any sort of syscall. For the vast majority of code, it's negligible.
For tight, fast, repetitive code it is far too slow.
As a rule, pointer-based data structures tend to be slower than array-based data structures (though, as with any general rule, there are exceptions). This is a simple consequence of how caching and prefetching works in modern CPUs. And this performance difference can easily amount to several (!) orders of magnitude.
However, there are a few caveats:
1. A garbage collector with a bump allocator can reduce these overheads significantly. So can a custom implementation (e.g. emulating pointers as offsets within an array; see the sketch after this list).
2. The overhead often does not matter for performance due to the 90/10 law or because the application is not performance-critical to begin with.
3. Pointer-based data structures may support a richer set of operations than array-based data structures (or have significantly more efficient implementations for some operations).
4. Pointer-based data structures can have better worst-case or per-operation costs where array-based data structures rely on amortized or average performance to pull ahead. This can matter in soft real-time situations or when dealing with user-supplied data.
5. Certain types of data structures do not have good array-based implementations to begin with and some algorithms demand pointer-based implementations. (Think of spatial queries, for example.)
6. Performance cost is not always dominated by operations on the container, but can also be dominated by cost of operations on the items within the container. For example, there are algorithms where the most expensive operation (by far) is comparison between elements (e.g. for a priority queue implementing an expensive heuristic); in this case, you want to minimize the number of comparisons more than the cache behavior of the data structure itself (note that this may still result in you picking the array-based implementation).
7. Pointer-based data structures allow for the sharing of substructures, potentially reducing the memory footprint of an application. Binary decision diagrams are a popular example.
8. Some array-based data structures require a much larger working set than their corresponding pointer-based implementations and take a performance nosedive when the size of the working set exceeds the size of the cache. Especially when you have several such data structures in use at once. (This is where relying on microbenchmarks for performance comparison can be dangerous.)
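To make caveat 1 concrete, here's a sketch of emulating pointers as offsets within an array (illustrative names): the nodes live in one contiguous pool and link to each other by 32-bit indices, so traversal stays within a single allocation.

    #include <stdint.h>

    #define NIL UINT32_MAX

    struct node {
        int      value;
        uint32_t next;       /* index of the next node in the pool, or NIL */
    };

    struct pool {
        struct node *nodes;  /* one contiguous allocation */
        uint32_t     used, cap;
    };

    static uint32_t pool_push(struct pool *p, int value, uint32_t next) {
        /* assumes used < cap; a real version would grow the array */
        p->nodes[p->used] = (struct node){ .value = value, .next = next };
        return p->used++;
    }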
Red-black trees are pretty much terrible for performance as well. I spend most of my work days trying to eradicate std::map and std::set for performance reasons.
Academics are always concerned about bounds, but practitioners are always concerned about cache misses.
I generally agree with this, and most of the time it's hard to justify using a red-black tree if you care about performance. You might still use one if you have other concerns, or if performance isn't one of them.
I also concur that cache misses are vital and sometimes don't get enough attention, since people hand-wave them away as micro-benchmarking, which often isn't true for critical code paths. I'll add that, again, this is why testing on your target hardware is important. You may think you know, but often you don't. I've seen too much stupid stuff where someone wrote code that performs great on their development desktop and awful on a server or a game console because of different CPU architecture and cache design or sizes.
Maybe someone can dig it up, but I've seen some of the sub-variants of red-black trees have dramatically better performance than I thought possible (or worse, e.g. left-leaning), and the results can vary across runtimes. In addition to cache misses, there can be other considerations for wanting to use a red-black tree, like concurrent or parallel programming. In these cases, some of the cousin data structures like AVL trees can swing you for or against a red-black tree or other backing structure. I had this come up recently when selecting an interval tree that needed to be accessed between threads, for instance.
Selecting a data structure is harder than most people make it out to be. There are just a lot of considerations at work, so you need to balance use-cases, performance (against those cases and raw), access patterns, allocations, and so on to come to a decision. Sometimes if you add up the total cost of doing something like replacing a red-black tree with other methods that are more cache-friendly or have some other better characteristic, the overall performance cost can end up being worse in aggregate. So the answer is the same as always, "it depends", and the follow-up, "try it."
The parent is definitely right that the academic concerns and the practical concerns of those of us in the trenches are different. That's why arguing with pure textbook evidence is stupid and you should challenge anyone who does it. Textbooks are a starting point for making and justifying a conclusion, not a shortcut.
When you don't need persistence, a search tree data structure with fatter nodes (e.g., B-trees) is almost guaranteed to perform better on real hardware. But when you do need persistence, red-black trees become competitive again.
I would expect B-trees to outperform binary trees even in a persistent data structure. The necessity of binary trees comes when you need stable iterators/references.
Theory academics are primarily concerned with bounds, but systems academics are very much concerned with cache misses. I've done work with colleagues where we were very concerned with both.