Though I think Clojure has a linear data structure that's O(log32(n)) for all the common ops. (This is the one with all the memeing because someone called it "practically O(1)".)
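A minimal sketch of what that looks like in practice (the numbers in the comments are back-of-envelope, assuming the branching factor of 32):

    ;; assoc on a persistent vector returns a new vector that shares
    ;; structure with the old one; a lookup or update only walks a tree
    ;; of depth ~log32(n), which is where the "practically O(1)" line comes from.
    (def v (vec (range 100000)))      ; 100k-element persistent vector
    (def v2 (assoc v 50000 :changed)) ; path-copies roughly log32(100000) ≈ 4 nodes
    (nth v 50000)   ;=> 50000         ; the original vector is untouched
    (nth v2 50000)  ;=> :changed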
Right, and updates are also O(32 log32(n)) rather than O(2 log2(n)), and neither matches the O(1) you get with a mutable array. It's a bit more complicated once you think about which cache lines get written, but mutable arrays massively win on cache behaviour.
It is meaningless mathematically. It's as meaningless as the parent talking about log32(n), because of course log32(n) = log(n) / log(32), so all that changes is a constant factor of 1 / log(32) out front.
I think the subtext is really a statement about constant factors. The parent is saying "if trees are wide then lookups are cheap" and I am saying "if trees are wide then updates may be expensive": to update an immutable binary tree of size n you must allocate log2(n) nodes of size (say) 3, whereas to update a 32-ary tree you need log2(n) / log2(32) = log2(n) / 5 nodes of size 33, so, since 33/5 = 6.6 > 3, the wider tree allocates more per update.
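For concreteness, here's a rough Clojure sketch of that arithmetic (alloc-per-update is a throwaway helper, and the node sizes are the assumed values above, not measured internals of any particular implementation):

    ;; Per-update allocation for a path-copied tree of branching factor b:
    ;; you copy log_b(n) nodes, each assumed to hold b children plus one extra word.
    (defn alloc-per-update [b n]
      (let [depth     (/ (Math/log n) (Math/log b)) ; number of nodes copied
            node-size (inc b)]                      ; assumed node size
        (* depth node-size)))

    (alloc-per-update 2  1e6) ;=> ~60 words allocated per update
    (alloc-per-update 32 1e6) ;=> ~132 words allocated per update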
If you are talking about the Pippenger-Fischer theorem, that's about Turing machines, not machines with RAM. Or is there another theorem I don't know about?
It's a separate theorem. I'm not quite sure where I've seen it before, but https://www.cs.princeton.edu/courses/archive/fall03/cs528/ha... seems to be the right thing. Generally I've also heard it mentioned in the context of Okasaki's book about purely functional data structures.
Ah, I wasn't referring to the point about garbage collection further up the thread. I was just noting that Elm is another example of a strict purely functional language (and so another example of a language that gets purity in exchange for losing access to certain O(n) algorithms).
This doesn't happen when you've got mutation or laziness.
One language I'm aware of which makes this tradeoff is Erlang.