
Tons of reasons, but the main one is that cache is shared mutable state, pretending not to be. It has all of the ugly attributes of global variables, especially where knowledge transfer and reliability are concerned.

In a read-mostly environment you can often more easily afford to update the state all at once. It’s clear what the effects are because they happen sequentially. The cost of an update isn’t fanned out and obscured across the codebase, where you or your team can delude yourselves about the true system cost of a suspect feature.
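To make the "shared mutable state, pretending not to be" point concrete, here's a minimal hypothetical Python sketch (the names `prices`, `_cache`, and `get_price` are made up for illustration): once a value is cached, updating the authoritative store no longer changes what readers see, exactly like a stale global variable.

```python
# Hypothetical illustration: a cache is shared mutable state that can
# silently diverge from the source of truth.

prices = {"widget": 100}   # the authoritative store
_cache = {}                # the cache: global, mutable, shared by all callers

def get_price(item):
    # Populate the cache on first read, then serve from it forever after.
    if item not in _cache:
        _cache[item] = prices[item]
    return _cache[item]

print(get_price("widget"))   # 100 -- the cache now holds a copy

prices["widget"] = 120       # the "real" state is updated...

print(get_price("widget"))   # ...but readers still see the stale 100
```

Without the cache, the second read would sequentially observe the update; with it, every call site that touches `get_price` now implicitly depends on invalidation logic that lives somewhere else (or nowhere).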




I agree that caching is mostly a bandaid fix. But IMO if it's used judiciously -- namely in response to a demand for a quick fix of a performance problem -- it can be OK mid-term.

As for shared mutable state, yes, that's true, but what are the alternatives? Whether it's memcached or Redis or an in-process cache (like Erlang/Elixir have), the tradeoffs seem mostly the same.


> namely in response of a demand for a quick fix of a performance problem

Caches are addictive. The first one is 'free' (easy), and people start wanting to use that solution for all their problems, especially social problems (we can't convince team A to get their average response time to match our SLA, so we'll just cache them to 'fix' it).

They defer thinking about architectural problems until later, when they are so opaque that "nobody could blame you" for having trouble sorting them out. But I do. Blame them, that is.



