
I'm one of the founders of RethinkDB. At no point did we ever work on an in-memory database product.

RethinkDB is immediately consistent by default. In our earlier days we did spend a lot of time optimizing our serializer for solid-state drives, but we never designed it around "the tenet that [the database] was 100% in memory."




Curious what kind of optimizations you did for SSDs.


Here are a few links (from previous HN threads and blog posts) that dive into some of the optimizations we made around SSDs:

* https://news.ycombinator.com/item?id=4795443

* http://www.rethinkdb.com/blog/on-trim-ncq-and-write-amplific...


The trick involves writing sectors (say 4K, depending on the SSD model) sequentially within aligned blocks (say 1M, also model-dependent), so that firmware that can only erase one physical block at a time can handle the writes efficiently. This mattered especially in 2010.
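A minimal sketch of the alignment idea described above (the 4K/1M sizes and the function name are illustrative assumptions, not RethinkDB's actual serializer code): writes advance sequentially in sector units, and any write that would straddle an erase-block boundary is bumped forward to the next aligned block, so the firmware only ever sees sequential, whole-block fills.

```python
ERASE_BLOCK = 1 << 20  # 1 MiB erase block (model-dependent assumption)
SECTOR = 4096          # 4 KiB write sector (model-dependent assumption)

def next_write_offset(cursor, length):
    """Return (start, new_cursor) for the next sequential write.

    If the write of `length` bytes would cross an erase-block
    boundary, skip ahead to the next aligned erase block instead
    of splitting the write across two physical blocks.
    """
    assert length % SECTOR == 0, "writes are issued in whole sectors"
    block_end = (cursor // ERASE_BLOCK + 1) * ERASE_BLOCK
    if cursor + length > block_end:
        cursor = block_end  # jump to the next aligned erase block
    return cursor, cursor + length
```

For example, a 4K write at offset 0 stays put, while an 8K write starting 4K before a 1M boundary gets pushed to the next 1M-aligned offset.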


If you are interested in SSD optimizations, you may want to read this research from Microsoft: "The Bw-Tree: A B-tree for New Hardware" [1]

1 - http://research.microsoft.com/apps/pubs/default.aspx?id=1787...



