Has anyone ever had problems with Redis?

I expect it to always work and it always has. I really like it but am worried I trust it too much now. Please tell me I'm fine to trust single instances!




It's been one of the least problematic things in our infrastructure. We keep it around for a bit of internal caching, transient state, and some non-critical queueing. We have a couple of redis nodes that we've had for years. Saying it is a key-value store is selling it short: what really makes it useful is things like queues, sets, ttls on keys, etc. The API has dozens of operations and variants of operations. Mostly, redis is rock solid and stable, but because it is not a transactional datastore you should not rely on it preserving your data. Bearing that in mind, you have to plan for the worst. IMHO, treating it as a transient thing that can go at any point and that you don't back up is a sensible thing to do. Blindly restarting and wiping a redis node should not cause any harm. In practice this rarely happens, but when it does, we simply restart.

Redis cluster is more about availability than it is about consistency. If you are aware of that, it's a fine solution.

A couple of things I do with it:

- buffer log messages before we send them to elasticsearch via logstash. This is a simple queue (first sketch below). Technically it's a single point of failure, but worst case we lose a few minutes of logging, and this happens very rarely and typically not because of redis. This node is configured to drop older keys when memory runs out; we added that after a few log spikes killed the node by exhausting memory. Since then, we've had zero issues.

- we have a few simple batch jobs that we trigger with an API or via a timed job in our cluster. To prevent these jobs from running twice on different nodes, I implemented a simple lock mechanism via redis (second sketch below). Nodes that are about to run first check whether they need to, and abort if another node is already running the job or recently completed it. This does not scale, but it works great and I don't need extra moving parts for routine things that we run a couple of times a day.

- some of our business logic ends up looking up or calculating the same things over and over again. We use a mix of on-server in-memory caching and shared state in redis for this (third sketch below). Keys for this have a ttl; if a key is missing, the logic simply recalculates the value.
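
First sketch: the log buffer above is essentially a Redis list used as a queue. This is my own illustration with the redis-py client, not the parent's actual code, and the key name "logbuffer" is invented. The "drop older keys when memory runs out" part is server-side configuration (maxmemory plus a maxmemory-policy), not anything in the application.

    import json
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def buffer_log(event: dict) -> None:
        # Producer: append the log event to a list acting as a queue.
        r.rpush("logbuffer", json.dumps(event))

    def drain_one(timeout: int = 5):
        # Consumer (what logstash effectively does here): block until an
        # item is available, pop it from the head, and decode it.
        item = r.blpop("logbuffer", timeout=timeout)
        return json.loads(item[1]) if item else None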
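
Second sketch: the batch-job lock can be a single SET with NX and an expiry. Again a rough illustration with redis-py under my own assumptions (key and function names are invented); this is the simple single-instance lock, not Redlock. Letting the key live until its TTL expires also covers the "recently completed" case the parent mentions.

    import redis

    r = redis.Redis(host="localhost", port=6379)

    def try_acquire(job_name: str, ttl_seconds: int = 3600) -> bool:
        # SET key value NX EX ttl: succeeds only if the key does not exist,
        # and the key expires on its own, so a crashed worker cannot hold
        # the lock forever.
        return bool(r.set(f"lock:{job_name}", "running", nx=True, ex=ttl_seconds))

    def run_nightly_report():
        if not try_acquire("nightly-report"):
            return  # another node is running it, or ran it recently
        do_the_work()  # hypothetical job body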
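
Third sketch: the TTL'd lookup cache is the usual cache-aside pattern: check Redis, recompute on a miss, write back with an expiry. Function and key names here are invented for illustration.

    import json
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def get_rate(customer_id: str) -> dict:
        key = f"rate:{customer_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)          # cache hit
        value = compute_rate(customer_id)      # hypothetical expensive calculation
        r.setex(key, 300, json.dumps(value))   # 5 minute TTL; recomputed after expiry
        return value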

Once you have redis around, it's tempting to find ever more uses for it, which is a bit of an antipattern. It does queueing, but you probably should use a proper queue if you need one. It can store data, but you probably want a proper database if you are going to store a lot of data, etc. It's great for prototyping though. Use it in moderation.


Only problem I ever encountered was mainly my fault -- my Redis instance was used to hack my server (the attacker manipulated Redis data and dumped it to overwrite /etc/passwd, etc). I was an idiot and hadn't locked down my installation. Luckily my provider had disk snapshots.


Yup, same thing happened to my VPS. I had redis listening on a tcp port instead of a unix socket and I didn't have a firewall set up.


Sounds interesting. Can you share what happened and how, in more detail?


There's actually a writeup of this technique on the Redis blog: http://antirez.com/news/96

In my case they overwrote ~/.ssh/authorized_keys, /etc/group and /etc/passwd as well.
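
For anyone reading along, the standard mitigations are to not expose the port at all and to neuter the commands this attack relies on. A sketch of the relevant redis.conf directives (the directives are real, the values are placeholders, and this is my own suggestion rather than what was running here):

    bind 127.0.0.1                 # or listen on a unix socket only and set "port 0"
    protected-mode yes             # Redis 3.2+: refuse outside connections when no password is set
    requirepass change-me          # placeholder; use a long random password
    rename-command CONFIG ""       # the attack relies on CONFIG SET dir/dbfilename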


Pretty much the only problems I've seen it cause are due to people not understanding its role in the infrastructure, not defects in Redis itself.

Most people run Redis in-memory only (in my experience, at least). Those who don't run it purely in-memory usually sync to disk only periodically, whether they intend to or not.

The only problems that crop up from that pattern come from users (especially new ones, or people who haven't worked with Redis before) forgetting that it's fundamentally an ephemeral cache. Eventually maintenance or a failure drops the in-memory dataset, and then a wide variety of disasters follow because it was being treated as a source of truth, or as a durable datastore.

In situations where the ephemerality of in-memory data was consistently known (or when disk persistence was configured with some thought), I have had the same experience as most others here: Redis was one of the most reliable, least surprising pieces of infrastructure present.
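
If the on-disk copy is supposed to mean something, it's worth choosing the persistence settings deliberately rather than taking the defaults. A minimal redis.conf sketch (values are illustrative, not a recommendation):

    save 900 1            # RDB: snapshot if at least 1 key changed in the last 900 seconds
    appendonly yes        # AOF: log every write operation
    appendfsync everysec  # fsync the AOF once per second, bounding loss to roughly 1s of writes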

...except for TTL handling with read-only replicas, I guess. That behavior (TTLs can get ignored on replicas) was really rough and surprising, but is fortunately now fixed. Shame on me for running an old enough version to keep getting bitten by it.


I did, but I am not sure the use case was right. I inherited an old Web Forms project and tried switching from Memcached to Redis. It didn't work out because of the sheer volume of object serialization/deserialization: the .NET Redis library was causing massive garbage collection storms. I was probably asking it to do too much, though.



