If the algorithm calls for a balanced tree, you'd end up re-implementing it across a bunch of (linked) hash tables or over an array treated like a heap. Either explicitly, or, if you don't know what you're doing, implicitly. In either case you would gain no performance advantage.
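To make the "array treated like a heap" idea concrete, here is a minimal sketch (my own illustration, not from the original comment) of a binary tree stored implicitly in a flat array, where parent/child links are just index arithmetic rather than pointers:

```python
# Implicit binary tree in a flat list, heap-style layout:
# for a node at index i, its children live at 2i+1 and 2i+2.

def parent(i):
    return (i - 1) // 2

def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2

# The complete binary tree
#         10
#        /  \
#       5    20
#      / \
#     3   7
# stored as a plain list, level by level:
tree = [10, 5, 20, 3, 7]

assert tree[left(0)] == 5 and tree[right(0)] == 20
assert tree[parent(4)] == 5  # node 7's parent is 5
```

Keeping a *balanced* tree in this layout still means doing all the rebalancing work yourself on top of the array, which is the re-implementation cost the comment is pointing at.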
If you know what you are doing, you may well realize a performance advantage.
For example, LevelDB (which is conceptually based on Google BigTable's design) looks a lot like a balanced tree in its access patterns, but is much faster than many alternatives. And it is faster in part because sequential data is stored sequentially, in key order, instead of in a data structure that produces random access patterns.
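A toy sketch of that principle (an illustration of sorted, contiguous storage in the spirit of an SSTable, not LevelDB's actual code): one binary search finds the start of a range, and everything after that is a purely sequential read.

```python
# Hypothetical sketch: keys stored sorted and contiguously, so a range
# scan is one binary search followed by a sequential pass, with no
# pointer-chasing through scattered tree nodes.
import bisect

table = sorted([("a", 1), ("c", 3), ("m", 13), ("q", 17), ("z", 26)])
keys = [k for k, _ in table]

def range_scan(lo, hi):
    """Return all (key, value) pairs with lo <= key <= hi."""
    start = bisect.bisect_left(keys, lo)  # one binary search to locate the start
    out = []
    for k, v in table[start:]:            # then read forward sequentially
        if k > hi:
            break
        out.append((k, v))
    return out

assert range_scan("b", "r") == [("c", 3), ("m", 13), ("q", 17)]
```

On real hardware the sequential pass is the win: disks and caches reward reading adjacent data in order, which a pointer-based balanced tree rarely does.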