
> While the VM backend helped, we found that it still wouldn't stay within the bounds we set, and would continually grow no matter what we set. We did report the issue but never came to a good solution in time. For example, we could give Redis an entire 12GB server and set the VM to 4GB, and given enough time (under high load, mind you) it would climb well above 12GB and start to swap, more or less killing our site.

We came across this same issue while implementing a Redis-based solution to improve the scalability of our own systems. Someone filed an issue reporting it: http://code.google.com/p/redis/issues/detail?id=248

Basically, antirez confirms that Redis does a poor job of estimating how much memory it uses, so you'll need to adjust your redis.conf VM settings to compensate. For anybody relying on Redis's VM, I'd recommend writing a script that loads your server with realistic data structures at the sizes you expect in production (a sketch follows below). You can then compare the memory limit you configured against the actual usage at which swapping starts, and set redis.conf according to the limits of your box. For example, we run Redis 2.0.2, and using lists of ~50 moderately sized items, we found that configuring Redis to use 400MB actually let it grow to 1.4GB before swapping; we set our configuration accordingly. Mind you, this may all change with diskstore and later versions of Redis, which are supposed to be more memory efficient.
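
As a rough illustration, here's a minimal sketch of the kind of load script I mean (not our actual one). It assumes redis-py on a Linux box; the key names and 512-byte payload are made up, and it reads RSS from /proc because Redis's own INFO number is exactly the estimate we don't trust here:

    import redis

    r = redis.Redis(host="localhost", port=6379)
    pid = r.info()["process_id"]  # Redis reports its own pid via INFO

    def redis_rss_bytes(pid):
        # Resident set size as the kernel sees it (Linux-specific).
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1]) * 1024  # VmRSS is reported in kB
        return 0

    payload = "x" * 512  # hypothetical "moderate size" item; tune to match production

    for i in range(200000):
        key = "loadtest:list:%d" % i
        for _ in range(50):  # ~50 items per list, as in our test
            r.rpush(key, payload)
        if i % 5000 == 0:
            used = r.info()["used_memory"]  # what Redis thinks it is using
            rss = redis_rss_bytes(pid)      # what it is actually using
            print("lists=%d  used_memory=%dMB  rss=%dMB"
                  % (i, used // 2**20, rss // 2**20))

Run it until the box starts to swap; the gap between used_memory and RSS at that point tells you how much headroom to leave below physical memory when you pick a vm-max-memory value.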

For those curious, our Redis-based solution is helping us scale some write-heavy activities quite nicely, and has been running stably.



