If you’re hitting the disk, aren’t you losing some of the advantages of using an in-memory database in the first place? Or would it still be more performant than a traditional RDBMS due to optimized in-memory data structures?
There is a compounding effect from combining memory-optimized features with a distributed architecture that can put many more cores to work and cache far more data across a cluster:
- In-memory row stores. Super fast for updates and point lookups
- Memory-optimized hash joins that minimize cache misses. Great for analytical/reporting use cases
- Vectorization for columnstore query processing. Super fast aggregations that work best when data is cached in memory
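To make the row store vs. columnstore trade-off above concrete, here is a minimal, hypothetical sketch (not any particular database's internals) contrasting a row-oriented layout, which favors point lookups and updates, with a column-oriented layout, where an aggregation scans one contiguous array and only touches the bytes it needs:

```python
import array

# Row-oriented layout: each row is a tuple (id, category, price).
# Good locality for reading/updating a whole row at once.
rows = [(i, i % 10, float(i)) for i in range(100_000)]

def sum_rowwise(rows, col_index):
    # Row-at-a-time scan: walks every row and picks out one field,
    # dragging the other columns' bytes through the cache as it goes.
    total = 0.0
    for r in rows:
        total += r[col_index]
    return total

# Column-oriented layout: one contiguous array per column.
# A scan over a single column reads only that column's bytes,
# which is the access pattern vectorized (SIMD) execution exploits.
price_col = array.array("d", (float(i) for i in range(100_000)))

def sum_columnar(col):
    # sum() over a contiguous array of doubles stands in for a
    # vectorized columnstore aggregation.
    return sum(col)

assert sum_rowwise(rows, 2) == sum_columnar(price_col)
```

Both functions return the same answer; the point is the memory access pattern. In a real engine the columnar scan additionally benefits from compression and SIMD instructions, which is why aggregations over cached columnstore data are so fast.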