
Strict purely functional languages will sometimes provably introduce a log n slowdown that can't be amortized away. This is certainly a trade-off.

This doesn't happen when you've got mutation or laziness.

One language I'm aware of that makes this trade-off is Erlang.




When you say log n slowdown, what is n here? The length of the program? Or something completely different?


I think the theme is ‘you can’t do arrays; the best approximation is built from trees, hence O(log n) in the size of the array instead of O(1)’.
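
To make that concrete, here is a minimal sketch of the tree-instead-of-array idea, written in Scala purely for illustration (it is not how any of the languages mentioned here actually implement it): a persistent "array" stored as a binary trie on the bits of the index. Reads and "writes" both walk one level per bit, so they are O(log n), and a write copies only the nodes on that path while sharing the rest of the tree.

    // Hypothetical sketch: a persistent "array" as a binary trie keyed on
    // the bits of the index. apply and updated are both O(log n); updated
    // allocates only the ~log n nodes on the path and shares everything else.
    sealed trait Tree[A] {
      def apply(i: Int): A = this match {
        case Leaf(x)    => x
        case Node(l, r) => if ((i & 1) == 0) l(i >> 1) else r(i >> 1)
      }
      def updated(i: Int, x: A): Tree[A] = this match {
        case Leaf(_)    => Leaf(x)
        case Node(l, r) =>
          if ((i & 1) == 0) Node(l.updated(i >> 1, x), r)
          else Node(l, r.updated(i >> 1, x))
      }
    }
    case class Leaf[A](x: A) extends Tree[A]
    case class Node[A](l: Tree[A], r: Tree[A]) extends Tree[A]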


Though I think Clojure has a sequential data structure (its persistent vector) that's O(log32(n)) for all the common ops. (This is the one with all the memeing because someone called it "practically O(1)".)
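
For a concrete feel of the wide-trie idea, Scala's immutable Vector is built on a similar 32-way branching trie (used here only as an analogue, not as Clojure's actual implementation), so indexed reads and path-copying "updates" touch only about log32(n) nodes:

    // Both operations walk roughly log32(n) levels; for a million elements
    // that is about 4, which is where "practically O(1)" comes from.
    val v  = Vector.tabulate(1000000)(i => i)   // 0, 1, ..., 999999
    val x  = v(123456)                          // indexed read: ~4 levels deep
    val v2 = v.updated(123456, -1)              // new Vector; v is unchanged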


Right, and updates are also O(32 * log32(n)) rather than O(2 * log2(n)), and neither is the O(1) you get from a mutable array. It's a bit more complicated when you think about which cache lines get touched, but mutable arrays massively win on cache behaviour.


Why do you have a constant before the log term? Isn’t that completely meaningless?


It is meaningless mathematically. It’s as meaningless as the parent talking about log32(n) because of course log32(n) = log n / log 32, so there is just a constant of 1 / log 32 being put at the front.

I think the subtext is really a statement about constant factors. The parent is saying “if trees are wide then lookups are cheap” and I am saying “if trees are wide then updates may be expensive” (to update an immutable binary tree of size n, you must allocate log2(n) nodes of size (say) 3. To update a 32-ary tree you need log2(n) / log2(32) = log2(n) / 5 nodes of size 33 so, as 33/5 = 6.6 > 3, the wider tree requires more allocation for updates.)
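
A back-of-the-envelope check of that arithmetic, under the same simplified cost model as above (each copied node costs roughly "branching factor + 1" words; real node layouts, object headers and allocator behaviour are ignored):

    // Updating one element of a persistent b-ary trie of n elements copies
    // the log_b(n) nodes on the path, each of roughly b + 1 words.
    def wordsCopiedPerUpdate(n: Double, branch: Int): Double = {
      val depth = math.log(n) / math.log(branch)  // log_branch(n)
      depth * (branch + 1)
    }
    // For n = 1e6: branch = 2 gives ~60 words, branch = 32 gives ~132,
    // so the wider trie allocates more per update, as argued above.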


malloc isn't typically O(n) though, so I don't think that's reasonable math?


If you are talking about Pippenger-Fischer theorem, then that's about Turing machines, not machines with RAM. Or is there another theorem I don't know about?


It's a separate theorem. I'm not quite sure where I've seen it before, but https://www.cs.princeton.edu/courses/archive/fall03/cs528/ha... seems to be the right thing. Generally I've also heard it mentioned in the context of Okasaki's book on purely functional data structures.


Elm is also strict and purely functional.


Elm does not have its own runtime, so this discussion doesn't really apply.


Ah, I wasn't referring to the point about garbage collection further up the thread. I was just noting that Elm is another example of a strict purely functional language (and so another example of a language that gets purity in exchange for losing access to certain O(n) algorithms).



