
Hey, your first statement is true but your second is not.

The disk engine does store data in memory; that's part of the design. You wouldn't want to use a database that doesn't take advantage of caching.

The last benchmark panel, https://raw.githubusercontent.com/scottrogowski/mongita/mast..., shows cold starts, where I test it without the cache. In that case, it does hit the disk.
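(Mongita's actual benchmark harness isn't shown in this thread; the sketch below is a hypothetical illustration of what a "cold start" measurement generally means: constructing a fresh client for every run so no in-process cache survives between measurements. `make_client` and `op` are placeholder names, not Mongita APIs.)

```python
import time

def bench_cold(make_client, op, runs=5):
    """Time `op` against a freshly constructed client on every run,
    so nothing cached in a previous run can speed up the next one."""
    timings = []
    for _ in range(runs):
        client = make_client()            # new client => empty in-process cache
        start = time.perf_counter()
        op(client)
        timings.append(time.perf_counter() - start)
    return min(timings)                    # best-of-N cold time
```

A warm-cache benchmark would instead construct the client once and loop only the `op` call.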




> The last benchmark panel, https://raw.githubusercontent.com/scottrogowski/mongita/mast..., shows cold starts, where I test it without the cache. In that case, it does hit the disk.

Okay, so looking at the first two tests - "Retrieve all documents" and "Get 1000 documents by ID" ...

If you switch the order around, does it make a difference to the benchmark? Because I suspect that the first test preloads all records into RAM, and the second test simply searches RAM, which is not what we usually do with SQLite. We don't cache all records before searching.

Switch those first two tests around, and let's see if it makes a difference.
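The ordering problem being described can be shown with a toy store (this is illustrative only, not Mongita's actual engine): if "retrieve all documents" runs first, it pulls every record into the cache, so the subsequent "get by ID" test measures RAM lookups rather than disk reads.

```python
import time

class TinyStore:
    """Toy document store: a read hits simulated 'disk' unless the
    document is already in the in-memory cache."""
    def __init__(self, docs, disk_delay=0.0005):
        self._disk = dict(docs)
        self._cache = {}
        self._disk_delay = disk_delay

    def get(self, _id):
        if _id in self._cache:            # warm path: RAM hit
            return self._cache[_id]
        time.sleep(self._disk_delay)      # cold path: simulated disk read
        doc = self._disk[_id]
        self._cache[_id] = doc            # cache fill as a side effect
        return doc

    def get_all(self):
        # "Retrieve all documents" warms the entire cache,
        # which skews any by-ID test that runs afterwards.
        return [self.get(i) for i in self._disk]
```

After one `get_all()`, every subsequent `get(_id)` is a pure cache hit, which is why swapping the test order changes the numbers.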


You're right. Fixed. SQLite is more clearly the winner, but Mongita appears to squeak ahead in ID lookups: https://github.com/scottrogowski/mongita/blob/master/assets/... (to be fair, that might be due to the thin translation layer I built). Regular MongoDB struggles a lot, and as far as I know I'm not being unfair to it in any way.

Thank you for pointing that all out.


An O(n) cache size is a very novel approach in computer science. This "cache" is a crucial part of the implementation and limits the project's applicability.


An O(n) cache size is fine (and IMHO, preferable) for small datasets. For large datasets, you're correct: cache eviction is important when memory usage gets too high. I didn't explicitly have it in the README as something that needs to get done, but now it is. So thank you for pointing it out.
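For anyone curious what bounded eviction looks like (this is a generic sketch, not Mongita's planned implementation), the standard approach is an LRU cache that caps memory at O(capacity) instead of O(n):

```python
from collections import OrderedDict

class LRUCache:
    """Bounded cache: when full, evicts the least-recently-used entry,
    keeping memory at O(capacity) regardless of dataset size."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)         # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # drop least-recently-used entry
```

Python's `functools.lru_cache` does the same thing for function results, but a database engine typically needs an explicit structure like this so it can invalidate entries on writes.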



