
I'm really sorry I didn't see your comment earlier. I like your solution but am a little stuck on the limitations.

1.) It requires applications to specify the entire transaction in advance. ACID transactions allow you to begin a transaction, poke around, change something, and commit. You can derive the transaction by running it on a replica and then submitting it to the primary, in which case this becomes an implementation of optimistic locking, complete with transaction retries.
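
To make (1) concrete, here's a minimal sketch of that derive-on-replica pattern in Python. All of the names (replica, primary, commit_if_unchanged, Conflict) are hypothetical stand-ins, not anyone's actual API:

    class Conflict(Exception):
        pass  # raised by the primary when a read version has changed

    def run_transaction(txn_logic, replica, primary, max_retries=5):
        for _ in range(max_retries):
            # Interactive phase on the replica: poke around freely,
            # recording (key, version) for each read and the intended writes.
            read_set, write_set = txn_logic(replica)
            try:
                # One-shot phase on the primary: commit only if every
                # read version is still current.
                primary.commit_if_unchanged(read_set, write_set)
                return True
            except Conflict:
                pass  # somebody else won the race; re-derive and retry
        return False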

2.) As you pointed out, the trick is to keep the array small. However, this only works if you have few conflicts; Jim Gray, Pat Helland, and friends pointed this out back in 1996. [1] Keeping conflicts rare in turn seems to imply a very large number of buckets for any non-trivial system, which contradicts the conditions for high performance. In the limit you would have an ID for every key in the DBMS.
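
On (2), a back-of-the-envelope birthday-problem estimate shows how fast the bucket count has to grow. This is my arithmetic, assuming uniform access and one bucket per transaction, both of which are generous for real workloads:

    import math

    # P(some pair of k in-flight transactions shares a bucket),
    # birthday approximation over `buckets` uniformly chosen buckets.
    def p_conflict(k, buckets):
        return 1 - math.exp(-k * (k - 1) / (2 * buckets))

    for b in (1_000, 10_000, 100_000, 500_000):
        print(b, round(p_conflict(100, b), 3))
    # 1000 -> 0.993, 10000 -> 0.39, 100000 -> 0.048, 500000 -> 0.01:
    # roughly half a million buckets just to keep 100 concurrent
    # transactions under a 1% conflict rate.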

3.) Finally, what about failures? You'll still need a distributed log and leader election in case your fast machine dies. That implies coordination, which once again slows things down to the speed of establishing consensus on the network to commit log records.
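
On (3), the latency floor falls out of quorum replication itself. A toy illustration with hypothetical names (a real implementation would pipeline the appends in parallel rather than loop over them):

    def commit(record, followers):
        acks = 1  # the leader's own local append
        for f in followers:
            if f.append(record):  # one network round trip per follower
                acks += 1
        # Durable only once a majority has the record, so commit
        # throughput is bounded by consensus round-trip time no
        # matter how fast the single executing machine is.
        quorum = (len(followers) + 1) // 2 + 1
        return acks >= quorum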

Incidentally, Galera uses an algorithm similar to the one you propose. [2] There are definitely applications where this works.
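
The heart of certification as the Galera docs describe it is a deterministic overlap test that every node runs in the same total order. An oversimplified sketch (real writesets carry key ranges, sequence numbers, and more):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class WriteSet:
        keys: frozenset  # keys the transaction wrote

    def certify(ws, committed_since_snapshot):
        # Abort if our writes overlap any writeset committed after
        # our transaction's snapshot; otherwise it is safe to apply.
        return not any(ws.keys & other.keys
                       for other in committed_since_snapshot)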

[1] Gray, Helland, O'Neil, and Shasha, "The Dangers of Replication and a Solution," SIGMOD 1996: https://dsf.berkeley.edu/cs286/papers/dangers-sigmod1996.pdf

[2] Galera Cluster documentation on certification-based replication: https://galeracluster.com/library/documentation/certificatio...



