I'm one of the founders of RethinkDB. At no point did we ever work on an in-memory database product.
RethinkDB is immediately consistent by default -- in our earlier days we did spend a lot of time optimizing our serializer for solid-state drives. However, we never designed it with, "the tenet that [the database] was 100% in memory."
The trick involves writing sectors (say 4K, though it can depend on the SSD model) sequentially within aligned blocks (say 1M, again depending on the SSD model), so that SSD firmware, which can only erase one physical block at a time, can handle the writes efficiently. Especially in 2010.
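To make the idea concrete, here's a minimal sketch (not RethinkDB's actual serializer, and the 4K/1M sizes are just the illustrative values from above): buffer fixed-size sectors and only issue writes in full, erase-block-aligned chunks, so the drive never has to erase-and-rewrite a partially filled physical block.

```python
import io

SECTOR = 4096          # illustrative sector size; varies by SSD model
ERASE_BLOCK = 1 << 20  # illustrative erase-block size (1 MiB)

class SequentialSerializer:
    """Toy append-only serializer: accumulate 4K sectors and flush
    them sequentially in 1 MiB erase-block-aligned chunks."""

    def __init__(self, f):
        self.f = f
        self.buf = bytearray()

    def write_sector(self, data: bytes):
        assert len(data) == SECTOR, "callers hand us whole sectors"
        self.buf += data
        if len(self.buf) == ERASE_BLOCK:
            self._flush_block()

    def _flush_block(self):
        # The file offset is always a multiple of ERASE_BLOCK, so each
        # physical write maps onto exactly one erasable block.
        assert self.f.tell() % ERASE_BLOCK == 0
        self.f.write(bytes(self.buf))
        self.buf.clear()

# 256 sectors of 4K = exactly one 1 MiB aligned block on disk.
out = io.BytesIO()
s = SequentialSerializer(out)
for i in range(256):
    s.write_sector(bytes([i % 256]) * SECTOR)
```

A real implementation would also need O_DIRECT-style aligned buffers and a policy for flushing partial blocks on shutdown; the point here is only the aligned, sequential write pattern.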