What you are saying in that post is that a hashmap lookup should be O(log(n)), because if you have word size w you need k_w = log(n)/w operations to compute the hash of a key that's large enough to avoid collisions.

However, by that logic the comparison function should carry a similar factor, since by the same argument the key values need to be large enough not to collide. I.e., if the key values are, say, just the natural numbers up to n, you'll need k_w words to store them, and hence a comparison takes O(k_w) = O(log(n)).

Thus a binary tree should also pick up an additional log(n) factor from this, and hence be O(log(n)^2).
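To make the counting concrete, here's a toy sketch (mine, purely illustrative) of how many w-bit words a key occupies, which is what both the hash and the comparison have to read:

    import math

    def words_per_key(n, w):
        # a key needs ~log2(n) bits to distinguish n values,
        # so it occupies ceil(log2(n) / w) machine words
        bits = max(1, math.ceil(math.log2(n)))
        return math.ceil(bits / w)

    w = 64
    for n in (2**16, 2**64, 2**256):
        k = words_per_key(n, w)  # hashing reads k words: O(log(n)/w)
        print(n.bit_length() - 1, k)  # comparing two keys also reads up to k words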

However, the point of big O is to compare, and since both have this log(n) factor, you can just cancel it on both sides, leaving the familiar O(1) and O(log(n)) for the hashmap and binary tree respectively.

Is it sloppy to do that prematurely? Kinda, yes, but it makes comparisons much easier, as you don't have to go cancelling those log(n) factors all the time. And the point of big O is to compare the algorithms, not to give accurate running times.


Okay, so some communication tips:

1) You should have led with that to get to the point immediately instead of cryptically alluding to the existence of such a point and adding an irrelevant example.

2) Your point was already made in an initial reply[A]. You didn't need to make your comment at all, and if you were going to lazily allude to the existence of an argument, you could have just linked that one. Or, just upvote it. Comment sections don't need repeats of points already made unless you have something to add.

To address the point you did add:

3) You're still conceding the core point that you have to go outside the usual big-O model to get hashtables being O(1). In no other case does anyone take big-O to mean asymptotic complexity relative to comparable solutions for the same problem -- not unless that's explicitly stated. You can't have a data structure with constant-time lookup, period.

[A] https://news.ycombinator.com/item?id=43006957


I actually kind of get where you're coming from, I think. I'm trying to understand your argument; I think it goes something like this:

1. "But if the universe grows too big, then the integers get too big!". Fair point.

2. You concede that the Word-RAM model does accurately describe an O(1) hashing operation.

3. We now define a new computation model, where hashing is just O(1).

4. "this is still wrong, because this doesn't model the behavior as keys grow to infinity"

The important thing here is that the hashing oracle isn't defined to even care about integer arithmetic complexity. I think one thing you may be confused about is that it seems like the hashing oracle can only be implemented with integer-sized keys; actually, the thing that people are trying to minimize in hashtable research is the number of calls to that oracle. That oracle can be some genie in a lamp, my deadbeat cousin, or a computer program.
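To make that concrete, here's a toy sketch of my own (not from any paper): the oracle is just an injected callable, and the analysis charges only for calls to it, never for what happens inside:

    # toy sketch: the "hashing oracle" is any callable; the analysis
    # counts how many times we invoke it, not what it costs inside
    class CountingOracle:
        def __init__(self, fn):
            self.fn = fn  # genie, deadbeat cousin, or computer program
            self.calls = 0

        def __call__(self, key, m):
            self.calls += 1
            return self.fn(key, m)

    oracle = CountingOracle(lambda key, m: hash(key) % m)
    slot = oracle("some key", 1024)
    print(oracle.calls)  # the only quantity the analysis charges for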

---

I think you even get that on a logical level, though, so the crux probably goes something like: "The word-RAM model forces my integers to be of bounded size, but for higher-level intuition, I just forget that? What? And then I stitch them back together? I mean, sure, given that it's O(1), I get it, but there's no way this 'stitches together' cleanly; it's not actually O(1)!"

I didn't even know this term, but it seems people have invented one for exactly this: the transdichotomous model. Basically, the word size (integer size) scales with the problem size. So I imagine it's just taken for granted that transdichotomy sort of "propagates up" your stack of logic, informally, if you want to account for each and every operation.
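For instance (my illustration, assuming w = ceil(log2(n))), the log(n) word-count factor from upthread collapses to a constant:

    import math

    def words_per_key(n):
        w = max(1, math.ceil(math.log2(n)))  # transdichotomous: w grows with the problem
        return math.ceil(math.log2(n) / w)   # so a key always fits in O(1) words

    print([words_per_key(n) for n in (2**8, 2**32, 2**128)])  # [1, 1, 1]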

I mean, for me, the first paragraph was mostly fine - analyzing abstract models asymptotically w.r.t. various kinds of cost is already something I'm familiar with, so just saying hashing is an O(1) cost by definition makes sense.

The proofs don't have to know what the hashing oracle actually is - the "integer" in the Word-RAM model is not the same as the "integer" produced by the hashing oracle, even though both have "O(1)" cost within their respective models. I think viewing it as hooking them up together hurts you.

A Word-RAM integer operation is O(1) because the model is transdichotomous, and in this specific hashtable analysis the hashing oracle is O(1) simply by definition; we only care about the number of calls to that function.

Even though the intuition is that they hook up together - recall that your hashing oracle backend can be your drunk cousin at 3am spouting out a consistent, uniform random mapping into the range [0...n-1] - we're not really interested in anything else.

Oh, and also, the oracle must take two parameters, (key, size of space), and the assumption is that, for each size of space, each key is mapped uniformly at random but consistently. Although in the paper they don't even need to deal with variable hashtable resize schemes, so they never need to redefine this.
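A sketch of that interface as I understand it (my code, not the paper's): lazily fix a uniform value per (key, size) pair and replay it on repeated calls:

    import random

    class DrunkCousinOracle:
        # toy oracle: uniform in [0, m-1], consistent per (key, m)
        def __init__(self, seed=3):  # 3am
            self.rng = random.Random(seed)
            self.memo = {}

        def __call__(self, key, m):
            # first call for (key, m) draws uniformly; later calls replay it
            if (key, m) not in self.memo:
                self.memo[(key, m)] = self.rng.randrange(m)
            return self.memo[(key, m)]

    h = DrunkCousinOracle()
    assert h("alice", 8) == h("alice", 8)  # consistent
    print(h("alice", 8), h("bob", 8))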

But again, we make these "constants" by saying, "I don't care how the hash function is performed, as long as it satisfies these properties; all I'm gonna try to do is implement an algorithm that does the minimal hashing possible." That's not nefarious; that's standard abstraction practice.

I did try to sit with your doubt for a while though and this is the best explanation I can come up with. If I didn't understand, oh well.

https://en.wikipedia.org/wiki/Word_RAM https://en.wikipedia.org/wiki/Transdichotomous_model


The problem is, big-O doesn't use the word-RAM model mentioned in the comment. That's something you have to bolt on to get the O(1) hashtable result. Which is the very point I was making originally: you have to go outside standard big-O and incorporate practical considerations, which isn't done anywhere else as a matter of course.

> That's something you have to bolt on to get the O(1) hashtable result.

No, you brought that on by claiming not considering it was an error.

In standard big-O you take as an assumption that memory operations are O(1), comparisons are O(1), and calculating a hash value is of the same order as making a comparison (and vice versa).

Once you do that, a hash table becomes O(1) on average, as a lookup is just an O(1) hash calculation followed by an O(1) memory access (assuming no collisions on average).
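For instance, under those assumptions a toy chained table does one hash plus one expected-O(1) bucket scan per lookup (a sketch, not production code):

    class ToyHashMap:
        def __init__(self, nbuckets=16):
            self.buckets = [[] for _ in range(nbuckets)]

        def get(self, key):
            b = self.buckets[hash(key) % len(self.buckets)]  # O(1) hash + O(1) index
            for k, v in b:  # expected O(1) entries if collisions are rare
                if k == key:
                    return v
            raise KeyError(key)

        def put(self, key, value):
            b = self.buckets[hash(key) % len(self.buckets)]
            for i, (k, _) in enumerate(b):
                if k == key:
                    b[i] = (key, value)
                    return
            b.append((key, value))

    m = ToyHashMap()
    m.put("x", 1)
    print(m.get("x"))  # 1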


Again, big-O is only relevant w.r.t. a specific cost analysis.

Hashtable analyses don't care how you get the hash value - it can take exactly 72 hours to arrive, but as long as the oracle satisfies some probability distribution guarantee, it's good. That is an O(1) operation: a single hash.

Like, there's no going "outside" for big O; there's no "standard". It's always backed by some notion of what you are computing. There are pointer machines, Turing machines, automata, and circuits for your complexity analysis. Pointer machines can't address at offsets directly; Word-RAM can. Different tradeoffs for different analyses - pointer machines often model GC-collected languages and pointer-heavy data structures, while Word-RAM deals with more compact ones.

Again, as mentioned in the paper, big O isn't being used to count the total number of operations; the analysis is only trying to *minimize the number of calls to the hashing oracle*. So your analyses can get even more fine-grained.

You might even be excited to learn about the external memory model. In that model you have two kinds of memory: a local memory where *every single operation* is free, and an external memory you can't compute on directly - you must first fetch from it. You fetch blocks of size B, your local memory has size M with B <= M << N, and the external memory has size N. Every operation in local memory is free, and the only thing you can do with external memory is read/write a block of size B. The question then is how to fetch the minimum number of blocks for your specific algorithm.
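A toy sketch of that accounting (my own, with made-up sizes): work inside fetched blocks is free, and only block transfers are charged:

    import math

    B = 4  # block size
    external = list(range(20))  # N = 20 items in "slow" memory

    transfers = 0
    total = 0
    for start in range(0, len(external), B):
        block = external[start:start + B]  # one charged block transfer
        transfers += 1
        total += sum(block)  # local computation: free in this model

    print(total, transfers, math.ceil(len(external) / B))  # transfers == ceil(N/B)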

I was originally confused by algorithms in this model, too - sometimes the "real" number of operations would be an order of magnitude more than the number of block accesses, but we just don't count them? How does that make sense?

Turns out this actually models real hardware surprisingly well - it's often faster to simply recompute and do extra work in order to minimize memory accesses. It's used a lot in databases. Although I can't find the exact wording "external memory model", the FlashAttention paper uses the wording "IO-aware" and analyzes block transfers like this on the GPU, so... yeah.

So going 'higher up' in the abstraction stack doesn't necessarily sacrifice practicality either; in fact, it can sometimes help.

Sure, we can view the intuition as chaining together layers of abstraction; I get that view. But each layer functions perfectly logically, independently of the others. It's just like good decoupled software practice.

> you have to go outside standard big-O and incorporate practical considerations, which isn't done anywhere else as a matter of course.

Hopefully I've explained it better. Different models of computation are a fundamental part of theoretical computer science since, of course, as you've pointed out, you have to be very careful about defining operations. At the same time, part of what's great about CS (and math in general) is that you can specify, "Assuming these conditions, what can I logically derive?" - and that's done everywhere.



