
Probably it just depends on the dataset. The issue we have with our service (http://lloogg.com) is that we need to keep the last N log items for every site. Asking for the latest M items should be fast, and pushing a new log line onto the list should be fast. Every MySQL configuration we tried was unable to reach the 10k writes/second we reach with Redis, and that's no surprise: even when you use MySQL as a plain btree implementation you pay a lot of overhead, starting with the protocol and the format of the statements.
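
To make the access pattern concrete, here is a minimal sketch with the redis-py client. The key naming ("log:<site_id>") and the 10000-item cap are just illustrative assumptions, not the actual LLOOGG schema:

    import redis

    r = redis.Redis()

    def push_log_line(site_id, line, max_items=10000):
        key = "log:%s" % site_id
        # O(1) push of the newest line, then trim so the list
        # never grows past max_items.
        r.lpush(key, line)
        r.ltrim(key, 0, max_items - 1)

    def latest(site_id, m):
        # O(M) fetch of the most recent M lines.
        return r.lrange("log:%s" % site_id, 0, m - 1)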

The idea of encoding things as JSON or other formats inside a blob/text field is just an ugly hack. People use it because they are desperate, not because it's good computer science. They started with MySQL, they know MySQL, they hacked with MySQL, so of course they will try to fix their site with MySQL.

The json+blob approach can work as long as the data stored in these fields is trivial to serialize and deserialize. But what about having a 10,000-element list in every blob, where every page view needs to append an element?
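
For contrast, a sketch of what that append looks like with json-in-a-blob: every append is a read-modify-write of the whole column, so the cost grows with the size of the list. The table layout is made up, and sqlite3 stands in for MySQL just to keep the sketch self-contained:

    import json
    import sqlite3  # stand-in for MySQL, same read-modify-write shape

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sites (id TEXT PRIMARY KEY, log_blob TEXT)")
    db.execute("INSERT INTO sites VALUES ('example', '[]')")

    def append_log_line(site_id, line):
        # Pull the entire blob, deserialize, append one element,
        # reserialize and write it all back -- O(N) per page view.
        (blob,) = db.execute(
            "SELECT log_blob FROM sites WHERE id = ?", (site_id,)
        ).fetchone()
        items = json.loads(blob)
        items.append(line)
        db.execute(
            "UPDATE sites SET log_blob = ? WHERE id = ?",
            (json.dumps(items), site_id),
        )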

So: great hack, you found a way to work with the tools you have, but this does not mean in any way that fast persistent key-value DBs have nothing to say in the web-scale theater.



