Hacker News

Tangential point: one thing that has always bothered me about WAL is that it supposedly exists to help maintain data integrity and recover from crashes, yet the WAL file itself is written (committed reliably to disk) in batches rather than after every change to the database, apparently for performance. Doesn't that defeat the purpose? How haven't things broken down despite this? Not specific to SQLite, but databases in general. I've never found an answer to this.



I think that depends on the setting of PRAGMA synchronous.

I'm not an expert on this, but I think the idea is to separate durability from database corruption. When synchronous = NORMAL instead of FULL, you can potentially lose (committed) data in WAL mode if a power failure happens at just the right moment, but your database won't be corrupted. No data will be half-written. Each transaction will either be fully there or fully missing.

https://www.sqlite.org/pragma.html#pragma_synchronous
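A minimal sketch of that trade-off using Python's stdlib sqlite3 module (the file path is just a temp file for illustration). With journal_mode=WAL and synchronous=NORMAL, SQLite syncs the WAL at checkpoints rather than on every commit, which is exactly the "recent commits may vanish, but the database stays uncorrupted" behavior described above:

```python
import os
import sqlite3
import tempfile

# WAL mode requires a real file; :memory: databases have no journal.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# Enable the write-ahead log.
conn.execute("PRAGMA journal_mode=WAL")

# NORMAL: the WAL is fsync'd only at checkpoints, not at every COMMIT.
# A power cut can lose the most recent commits, but never corrupts the
# main database file. FULL would fsync the WAL on every commit instead.
conn.execute("PRAGMA synchronous=NORMAL")

conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
sync = conn.execute("PRAGMA synchronous").fetchone()[0]
print(mode, sync)  # wal 1   (NORMAL is reported as 1)
```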


You can still batch writes and block before returning from a request to maintain durability. This improves throughput at the expense of latency.

Since SQLite is single-writer, I'm not sure whether it does this. But this (batch yet block) is how I understand Postgres works.

Of course you can turn off the blocking too, e.g. by setting Postgres's synchronous_commit to off, so the WAL is flushed on an interval rather than at each commit.


You only need to achieve durability on a COMMIT.
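Concretely, that's why wrapping many statements in one transaction is the standard SQLite speed trick: the intermediate writes need no sync, only the COMMIT does. A small sketch with Python's sqlite3 (autocommit disabled so BEGIN/COMMIT are explicit; the file path is just a temp file):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "batch.db")
# isolation_level=None puts the module in autocommit mode, so we control
# transaction boundaries ourselves with explicit BEGIN/COMMIT.
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("CREATE TABLE t(x)")

conn.execute("BEGIN")
for i in range(1000):
    # None of these inserts needs to be individually durable.
    conn.execute("INSERT INTO t VALUES (?)", (i,))
conn.execute("COMMIT")  # the only durability point: one sync covers all rows

count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)  # 1000
```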



