
> and the old one is sorted and written to an sstable asynchronously

This doesn't happen.




No? Am I misremembering? Their wiki says:

The logfile is a sequentially-written file on storage. When the memtable fills up, it is flushed to a sstfile on storage and the corresponding logfile can be safely deleted.

[...]

Background compaction threads are also used to flush memtable contents to a file on storage. If all background compaction threads are busy doing long-running compactions, then a sudden burst of writes can fill up the memtable(s) quickly, thus stalling new writes. This situation can be avoided by configuring RocksDB to keep a small set of threads explicitly reserved for the sole purpose of flushing memtable to storage.
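
For that last point, here's a minimal sketch of how those reserved flush threads might be configured through the C++ API (option names, thread counts, and the DB path are illustrative only and vary by RocksDB version):

    #include <rocksdb/db.h>
    #include <rocksdb/env.h>
    #include <rocksdb/options.h>

    int main() {
      rocksdb::Options options;
      options.create_if_missing = true;

      // Memtable sizing: a bigger write buffer delays flushes but uses more RAM.
      options.write_buffer_size = 64 * 1024 * 1024;  // 64 MB per memtable
      options.max_write_buffer_number = 4;           // how many memtables may pile up

      // Reserve background threads so flushes aren't starved by long compactions.
      // (Older releases expose max_background_flushes/max_background_compactions;
      // newer ones fold both into max_background_jobs.)
      options.max_background_flushes = 2;
      options.max_background_compactions = 4;
      options.env->SetBackgroundThreads(2, rocksdb::Env::Priority::HIGH);  // flush pool
      options.env->SetBackgroundThreads(4, rocksdb::Env::Priority::LOW);   // compaction pool

      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/example-db", &db);
      if (!s.ok()) return 1;

      delete db;
      return 0;
    }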


The in-memory memtable gets converted to an sstable.

The WAL is ONLY read after crashing, to fill a new memtable.

Your comment read as "the WAL is sorted and converted to an sstable":

> If a WAL file gets too long then a new one is created and the old one is sorted and written to an sstable asynchronously.
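
To make the flow concrete, a small sketch against the public C++ API (path and options are placeholders): Put() goes to both the WAL and the memtable, Flush() turns the memtable into an sstable, and the WAL is only read back when the DB is reopened after a crash.

    #include <cassert>
    #include <rocksdb/db.h>

    int main() {
      rocksdb::Options options;
      options.create_if_missing = true;

      // Open() replays any leftover WAL into a fresh memtable (crash recovery);
      // this is the only time the WAL is read.
      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/example-db", &db);
      assert(s.ok());

      // Put() appends the write to the WAL and inserts it into the memtable.
      s = db->Put(rocksdb::WriteOptions(), "key", "value");
      assert(s.ok());

      // Flush() writes the memtable out as an sstable; once that is durable,
      // the corresponding WAL file is obsolete and can be deleted.
      s = db->Flush(rocksdb::FlushOptions());
      assert(s.ok());

      delete db;
      return 0;
    }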


Seems a bit pedantic. The memtable is (when writes are being fully flushed) a derivative of the WAL, or vice versa if you like. They hold equivalent data, organized differently (yes, yes, modulo tombstones). Anyway, you're right; I was being lazy in not writing out "memtable" explicitly.


WAL splitting isn't connected to memtable flushing; they are separate processes.



