
Fully populating a 128-bit memory pool would take enough energy to boil the oceans[1]. Even so, addressing it with a 32-way tree requires a depth of only 26.

So the absolute worst case scenario - even if we get science fiction computing - is a small, constant number of indirections.
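
For anyone who wants to sanity-check the arithmetic, here's a quick back-of-the-envelope calculation (Python, purely illustrative):

    import math

    # A 128-bit address space has 2**128 distinct addresses.
    # With a branching factor of 32, each level of the trie
    # resolves log2(32) = 5 bits of the index.
    bits_per_level = int(math.log2(32))      # 5
    depth = math.ceil(128 / bits_per_level)  # 26 levels in the worst case
    print(depth)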

The whole purpose of this type of analysis is to get an intuitive approximation of behaviour as input size grows. The graphs of log32(n) and 1 are essentially indistinguishable, and most programmers have a much clearer intuition for constant time than for O(log32(n)).
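
To make that concrete (Python, just for illustration):

    import math

    # log base 32 of n for a range of realistic collection sizes
    for n in (10**3, 10**6, 10**9, 2**64):
        print(f"log32({n}) = {math.log(n, 32):.1f}")

    # Even at n = 2**64, the result is only 12.8.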

It is not misleading; it is a simplification that is absolutely valid to make.

[1] https://blogs.oracle.com/bonwick/entry/128_bit_storage_are_y...




Agree with what you wrote. However, the article is misleading when it says that because the operations are "constant time in practice," they are "fast."


Unless the article has been edited, it doesn't say that[1].

What the article calls "constant time in practice" is necessary, but not sufficient.

As you say, it's not correct to say that CTIP means you're fast - after all, you could make an HTTP request per array access and still maintain constant time.

But you can say that taking advantage of CTIP allows you to achieve speed.

In my opinion, it takes quite an uncharitable reading of the article to conclude that they mean the former.

[1] (The closest section to your quote) "Elm now has fast immutable arrays. How can that be? Is there such a thing? Thanks to Christian Widera, the new Array library uses some very clever data structures that make common operations like get and set constant time in practice! We will get into the details later in this post, but the big takeaway is that you can have immutability and speed. "


> What the article calls "constant time in practice" is necessary, but not sufficient.

So non-CTIP data structures cannot be fast! There it is again!

This line in particular jumped out at me:

> One way to make these trees faster is to increase the “branching factor”.

And we are led to conclude that the 32-ary tree is faster than the binary tree. But there's necessarily a tradeoff here: a larger branching factor results in fewer node traversals, but more copying and wasted space on insertions and deletions.

So it's not at all obvious that a higher branching factor is faster. After all, in the limit as the branching factor approaches infinity, you recover naive copy-on-write arrays, where every update copies the whole thing.

Basically I'm begging for benchmarks! It would be very interesting to see the performance effects of adjusting the branching factor.
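
In the absence of real numbers, here's a rough back-of-the-envelope cost model (Python, purely illustrative, and no substitute for benchmarking the actual Elm implementation). It assumes an update to a persistent B-ary trie rebuilds one node of width B per level:

    import math

    # Rough cost model for a persistent B-ary trie over n elements:
    #   depth        ~ ceil(log_B(n))   -> pointer chases per get
    #   words copied ~ B * depth        -> allocation/copying per set
    # Ignores cache effects, tail optimisations, and the allocator,
    # which is exactly why real benchmarks would be more interesting.
    n = 1_000_000
    for b in (2, 4, 8, 32, 256, 4096):
        depth = math.ceil(math.log(n, b))
        print(f"B = {b:>4}: depth {depth:>2}, ~{b * depth:>5} words copied per set")

Depth keeps shrinking as B grows, but the copying cost per update blows up, which is presumably why the implementations people actually ship (Clojure, Scala, and now Elm) all settle somewhere around 32.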



