The use of direct, indirect, double-indirect, etc. pointers to data blocks shows how antiquated some file system designs are. When files that are prone to fragmentation regularly reach gigabyte sizes, extents are a far more efficient way to track their data blocks.
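For a sense of scale, here's a back-of-the-envelope sketch (assuming classic ext2/ext3-style parameters of 4 KB blocks and 4-byte block pointers, and ext4-style extents that each cover up to 32,768 contiguous blocks) comparing the mapping overhead for a 1 GiB file:

```python
# Rough comparison of block-mapping overhead for a 1 GiB file.
# Assumed parameters: 4 KB blocks, 4-byte block pointers (ext2/ext3-style),
# and extents covering up to 32,768 contiguous blocks each (ext4-style).

BLOCK_SIZE = 4096                    # bytes per data block
PTRS_PER_BLOCK = BLOCK_SIZE // 4     # 4-byte pointers per indirect block
FILE_SIZE = 1 << 30                  # 1 GiB

blocks = FILE_SIZE // BLOCK_SIZE     # 262,144 data blocks to map

# Indirect-pointer scheme: one pointer per data block, plus the
# indirect and double-indirect blocks needed to hold those pointers.
indirect_blocks = -(-blocks // PTRS_PER_BLOCK)           # 256 single-indirect blocks
double_indirect = -(-indirect_blocks // PTRS_PER_BLOCK)  # 1 double-indirect block
print(f"pointer entries: {blocks:,}, metadata blocks: {indirect_blocks + double_indirect}")

# Extent scheme: one (start, length) record per contiguous run.
# A fully contiguous 1 GiB file needs only 262,144 / 32,768 = 8 extents.
MAX_EXTENT_BLOCKS = 32768
extents = -(-blocks // MAX_EXTENT_BLOCKS)
print(f"extent records: {extents}")
```

That's 262,144 individual pointers (plus 257 metadata blocks to hold them) versus 8 extent records for the same file, which is the whole argument in two numbers.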
It's always annoyed me that I can query hundreds of millions of records in a SQL database instantly, but searching metadata on my filesystem is still comparatively slow. Are there any filesystems out there that take a more database-like approach to this (while still appearing and integrating like a traditional filesystem - I'm not after object storage, nor do I want to rely on a decoupled indexing process)?
I built a system designed to be a file system replacement. It can efficiently manage the metadata for hundreds of millions of files without a separate indexing system that can fall out of sync with the actual data. You can attach dozens of metadata tags to each file and query for every file that has a certain tag or other attribute (size, type, datetime stamp, etc.). It works a lot like a database in that results are returned almost instantly.
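To sketch the core idea (this is a minimal illustration using SQLite, not my actual implementation - the schema and names here are just for demonstration):

```python
# Minimal sketch of tag-indexed file metadata -- illustrative only,
# not the real system's schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE files (
        id    INTEGER PRIMARY KEY,
        path  TEXT UNIQUE,
        size  INTEGER,
        type  TEXT,
        mtime INTEGER
    );
    CREATE TABLE tags (
        file_id INTEGER REFERENCES files(id),
        tag     TEXT
    );
    -- The indexes are what make tag/attribute queries near-instant,
    -- no matter how many millions of rows the tables hold.
    CREATE INDEX idx_tags ON tags(tag, file_id);
    CREATE INDEX idx_size ON files(size);
""")

db.execute("INSERT INTO files VALUES (1, '/photos/cat.jpg', 204800, 'jpeg', 1700000000)")
db.execute("INSERT INTO tags VALUES (1, 'vacation')")

# "Every file tagged 'vacation' larger than 100 KB" becomes a single
# indexed join, not a walk over a directory tree.
rows = db.execute("""
    SELECT f.path, f.size FROM files f
    JOIN tags t ON t.file_id = f.id
    WHERE t.tag = 'vacation' AND f.size > 100000
""").fetchall()
print(rows)
```

The key difference from a bolted-on indexer is that the metadata store *is* the source of truth, so there's nothing to drift out of sync.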