
But can't having separate applications give you better performance and isolation than one single-threaded process?

Or are you comparing separate processes to multiple threads in a single process?




There's no universe where a single-threaded embedded persistence implementation is slower than a single-threaded application synchronously talking to a single-threaded database over the network stack.
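
To make that concrete, here's a rough sketch (plain Python; a socketpair stands in for the network stack, and the numbers are illustrative, not a benchmark) of what one synchronous round-trip per operation costs compared to staying in-process:

    import socket, time

    N = 10_000

    # In-process "persistence": appending to a buffer in the same address space.
    buf = bytearray()
    t0 = time.perf_counter()
    for _ in range(N):
        buf += b"set k v\n"
    in_proc = time.perf_counter() - t0

    # The same operations bounced through a local socket pair: one synchronous
    # request/response per op, standing in for client -> server -> client.
    a, b = socket.socketpair()
    t0 = time.perf_counter()
    for _ in range(N):
        a.sendall(b"set k v\n")
        b.recv(64)             # "server" reads the command
        b.sendall(b"+OK\r\n")  # and acknowledges
        a.recv(64)             # "client" waits for the ack
    rtt = time.perf_counter() - t0
    a.close(); b.close()

    print(f"in-process: {in_proc:.4f}s   socket round-trips: {rtt:.4f}s")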

As far as isolation goes, if you are worried about the properties of reading and writing data to the disk, then I simply don't know what to tell you. Isolation from what?


Why the network stack? On the same host, IPC over shared memory is a normal thing.
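
For example, a minimal same-host shared-memory sketch using only the Python stdlib (a real setup would add synchronization between the processes, which this omits):

    from multiprocessing import Process, shared_memory

    def reader(name):
        # Attach to the same segment by name and read what the parent wrote.
        shm = shared_memory.SharedMemory(name=name)
        print(bytes(shm.buf[:5]))  # b'hello'
        shm.close()

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=64)
        shm.buf[:5] = b"hello"
        p = Process(target=reader, args=(shm.name,))
        p.start()
        p.join()
        shm.close()
        shm.unlink()  # free the segment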

Performance-wise, I do not know of a nice portable way of flushing changes to disk durably that does not block (as, e.g., fsync does).
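
The blocking is easy to see: write() returns once the kernel page cache has the data, while fsync() stalls until the device reports it durable. A quick sketch (the wal.log filename is made up for illustration):

    import os, time

    fd = os.open("wal.log", os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    t0 = time.perf_counter()
    os.write(fd, b"record\n")   # returns once the page cache has the data
    t1 = time.perf_counter()
    os.fsync(fd)                # blocks until the device reports durability
    t2 = time.perf_counter()
    os.close(fd)
    print(f"write: {(t1 - t0) * 1e6:.0f}us   fsync: {(t2 - t1) * 1e6:.0f}us")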

If you own the whole system and can tune the whole kernel and userspace to run a single application, sure, why overengineer? Otherwise, software faults (bugs, crashes due to memory overcommit, the OOM killer, etc.) take down a single process, and that can be less disruptive than a full stop/start.


> On same host IPC over shared memory is a normal thing.

Not for Redis.

> I do not know of a nice portable way of flushing changes to disk securely that does not block (like, e.g. fsync does).

If you use Redis, you're either not waiting for writes to be acknowledged or you're waiting on fsync. You always fsync, whether it's in-process or not, or you're risking losing data.
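
Redis exposes exactly that trade-off in its AOF configuration:

    # redis.conf: durability is the fsync policy, the same trade-off as in-process
    appendonly yes
    appendfsync always     # fsync on every write: durable, slowest
    # appendfsync everysec # fsync once per second: can lose ~1s of writes on crash
    # appendfsync no       # let the OS decide: fastest, weakest guarantee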

Which process does the blocking doesn't affect performance; the cost is getting the data onto disk in the first place.

> Otherwise software faults (bugs, crash due to memory overcommit, oom killer etc.) take down single process, and that can be less disruptive than full stop/start.

Even worse: Redis crashes and now your application (which hasn't crashed) can't read or write data, perhaps in the middle of ongoing operations. You have a whole new class of failure modes.



