
PostgreSQL does not have the concept of in-memory tables.

Except for temporary tables which are wiped after each session.




That's simplifying things a bit.

Postgres has a shared memory cache (shared_buffers), which can be sized comparably to a Redis instance, so your operations will all happen in memory, with some background processes putting data onto disk for you in case your computer shuts off. Storage won't be on the hot path.
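A minimal sketch of sizing that shared cache; the 8GB figure is illustrative, not a recommendation:

```shell
# Size shared_buffers (Postgres's shared memory cache); ~25% of RAM is a
# common starting point. Requires superuser and a server restart to apply.
psql -c "ALTER SYSTEM SET shared_buffers = '8GB';"
# shared_buffers cannot be changed with pg_reload_conf(); restart instead:
pg_ctl restart -D /var/lib/postgresql/data
```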

BUT, postgres still has ~6x the latency [1], even when run from memory.

[1] https://medium.com/redis-with-raphael-de-lio/can-postgres-re....


Pretty pointless benchmark since it does not use prepared statements (that much is obvious from the low TPS, plus I confirmed it with how he ran the benchmark). You need to pass "-M prepared" to pgbench. And it is very possible that the Redis benchmark is equally flawed.

If you have to parse and plan queries every time PostgreSQL is obviously much slower than Redis. It is much more interesting to see what happens if prepared statements are used.
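A sketch of the comparison being suggested, using pgbench's built-in workload; database name, client count, and duration are arbitrary choices:

```shell
# Initialize a pgbench dataset at scale factor 50 (database "bench" assumed).
pgbench -i -s 50 bench

# Baseline: simple protocol, so every query is parsed and planned each time.
pgbench -M simple -c 8 -T 30 bench

# With -M prepared, each query is parsed/planned once per session and then
# only bound and executed, which is the fair comparison against Redis.
pgbench -M prepared -c 8 -T 30 bench
```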


The article is flawed, here's an issue of someone fixing some of the issues:

https://github.com/raphaeldelio/redis-postgres-cache-benchma...


Thanks, I spotted the lack of prepared statements and stopped looking after that but you spotted the rest of the issues too. And, agreed, it is likely that the Redis benchmark is flawed too but I do not know Redis well enough.


But the whole point here is that PostgreSQL will be used for other tasks e.g. storing all of your business data. So it will be fighting for the shared cache as well as the disk. And of course storage will still be involved as again you can't have in-memory only tables.

And having a locking system fluctuate in latency between milliseconds and seconds would cause all sorts of headaches.


If you're small enough that you're considering co-hosting the app and DB, the odds are good that your working set is small enough to comfortably fit into RAM on any decently-sized instance.

> And having a locking system fluctuate in latency between milliseconds and seconds would cause all sorts of headaches.

With the frequency that a locking system is likely to be used, it’s highly unlikely that those pages would ever get purged from the buffer pool.


Once again: PostgreSQL does not have in-memory tables. So:

a) Writes will be happening on a slow, highly contentious disk.

b) You have no guarantees that the data will be in-memory.

Both of which make it a poor solution for use cases such as locking.


As pointed out by blackenedgem above: PostgreSQL has tablespaces, and one may simply declare tables which should stay in RAM in a tablespace built upon a tmpfs with enough reserved RAM to store them all. There is only a small associated burden (restoring the tablespace when starting PG).


> And of course storage will still be involved as again you can't have in-memory only tables.

That's fine. If what you want is a single-node deployment combined with in-memory purism, then SQLite has your back.


No but it does have the concept of tablespaces. If you want you can map RAM to a disk location, set that up as a tablespace, then tell postgres to use that tablespace for your given table. Also set the table as UNLOGGED while you're at it.

A bit more work, yes, and it could be simplified, but it's fully supported if you control the stack.
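The steps described above, sketched end to end; mount point, size, and table definition are illustrative and assume you control the host:

```shell
# Create a tmpfs mount owned by the postgres user (2G is an example size).
sudo mkdir -p /mnt/pg_ram
sudo mount -t tmpfs -o size=2G,uid=postgres,gid=postgres tmpfs /mnt/pg_ram

psql <<'SQL'
-- Point a tablespace at the RAM-backed directory (requires superuser).
CREATE TABLESPACE ram_space LOCATION '/mnt/pg_ram';

-- UNLOGGED skips WAL writes; the table is truncated after a crash,
-- which is acceptable for ephemeral data like locks or a cache.
CREATE UNLOGGED TABLE app_locks (
    key         text PRIMARY KEY,
    held_by     text,
    acquired_at timestamptz DEFAULT now()
) TABLESPACE ram_space;
SQL
```

Since tmpfs is wiped on reboot, the tablespace directory (and anything in it) must be recreated before Postgres starts, which is the restore burden mentioned elsewhere in the thread.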


Yes. Putting a Postgres tablespace on a RAM disk (tmpfs) does wonders. Even if NVMe is comparable to RAM in bandwidth, whatever Postgres has to do to ensure that data is written to durable storage as part of a commit is still significantly slower than RAM.

Highly recommended for running tests, especially in CI/CD pipelines. This simple change can speed up DB-heavy tests by 30-50%.
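One way to get this in CI without touching the host: run the test database in a container with its data directory on a tmpfs mount. Image tag, size, and credentials here are assumptions:

```shell
# Entire data directory lives in RAM; the container is throwaway.
docker run -d --name pg-test \
  --tmpfs /var/lib/postgresql/data:rw,size=1g \
  -e POSTGRES_PASSWORD=test \
  -p 5432:5432 \
  postgres:16

# Optionally trade durability for speed too -- fine for test databases
# whose contents you never need to survive a crash.
docker exec pg-test psql -U postgres -c "ALTER SYSTEM SET fsync = off;"
docker restart pg-test
```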


Well if your working set fits RAM your tables will be stored in memory in the shared buffers.


> PostgreSQL does not have the concept of in-memory tables.

Is that relevant though? Some benchmarks on the web show Postgres outperforming Redis in reads as well as low-volume writes (less than 1k key-value pairs), and Redis only beating Postgres for high volume key-value writes.


> Redis only beating Postgres for high volume key-value writes

I wonder what type of use cases have high volume writes.

Maybe queuing, locking and pub/sub ?


Usually none of them. Queuing is almost never high volume, and pub/sub rarely is.




