
Man, I’ve wanted the “compatible SQL engine with only RAM storage” for testing for YEARS. Closest I got was some shenanigans with MSSQL’s LocalDB.



I use Docker to spin up new PostgreSQL DBs on the fly
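
Roughly this, if anyone wants to try it (image tag and password are placeholders):

    # -p 5432 lets Docker pick a free host port;
    # find it afterwards with `docker port <name> 5432`
    docker run -d --rm -e POSTGRES_PASSWORD=test -p 5432 postgres:16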


Same here, and I run the same set of migrations that run in production. To be clear, this is only done once per test session, not for individual tests, and the tests are written in such a way that they don't interfere with each other.

The overhead is actually pretty small, less than 10s. I'd say that's too much for unit tests, but well within the tolerable range for integration/functional tests. Compared with the time I'd spend hacking together some brittle and unrealistic in-memory alternative, I much prefer to use a real database.
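
A minimal pytest sketch of that setup (the helper names here are made up; the real container setup and migration runner depend on your stack):

    import pytest

    @pytest.fixture(scope="session")
    def db_url():
        # start_test_postgres, run_production_migrations and
        # stop_test_postgres are hypothetical helpers standing in
        # for your container setup and migration tooling
        url = start_test_postgres()
        run_production_migrations(url)  # same migrations as prod
        yield url                       # shared by the whole session
        stop_test_postgres(url)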


I do that too but it’s a noticeable overhead, I guess it wouldn’t be great for mass-testing scenarios.


RAM FS + fsync off?
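
Untested sketch, but something like this should get close (the -c flags are real Postgres settings; the tmpfs mount keeps the data dir in RAM):

    docker run -d --rm \
      --tmpfs /var/lib/postgresql/data \
      -e POSTGRES_PASSWORD=test \
      postgres:16 \
      -c fsync=off -c synchronous_commit=off -c full_page_writes=off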


Those solutions still have a high overhead. There's ACID compliance, serialisation in memory, maintaining indices, and many other layers. Compare that to an ideal testing solution with no initialisation cost, where an insert is literally: parse the query, add a new entry to a list, done.


But you want to be testing against something that is as close as possible to the deployment environment. So if that means ACID, indices, etc., then that's what it is.


You can still implement those in a trivial way that behaves like production. For example: if some column has a unique index, scan all rows and compare the new value; you don't need an actual index for it (and definitely not a fancy concurrent-access btree). For transactions/MVCC you can literally make a copy of everything. See the sketch below.
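
A toy Python sketch of that idea, purely illustrative: the "unique index" is a linear scan, and a "transaction" is a full snapshot you can restore.

    from copy import deepcopy

    class TinyTable:
        """A list of dict rows; 'unique index' = linear scan, no btree."""
        def __init__(self, unique_cols=()):
            self.rows = []
            self.unique_cols = unique_cols

        def insert(self, row):
            # enforce uniqueness by comparing against every row
            for col in self.unique_cols:
                if any(r[col] == row[col] for r in self.rows):
                    raise ValueError(f"duplicate value for unique column {col!r}")
            self.rows.append(row)

    class TinyDB:
        """'Transaction' = copy everything up front, restore on rollback."""
        def __init__(self):
            self.tables = {}

        def begin(self):
            return deepcopy(self.tables)

        def rollback(self, snapshot):
            self.tables = snapshot

    # usage
    db = TinyDB()
    db.tables["users"] = TinyTable(unique_cols=("email",))
    snap = db.begin()
    db.tables["users"].insert({"email": "a@example.com"})
    db.rollback(snap)  # the insert is gone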


Have you tried postgresql with libeatmydata?
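
For reference, libeatmydata is an LD_PRELOAD shim that turns fsync() and friends into no-ops, so typical usage is just wrapping the server command (library name/path varies by distro):

    eatmydata pg_ctl -D /tmp/testdb start
    # or explicitly:
    LD_PRELOAD=libeatmydata.so postgres -D /tmp/testdb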


I'd build this, but I'm not sure if I'd be able to get anyone to fund it.



