Hacker News

Haven't had trouble into the low hundreds of millions of rows on a single RDS server with 16 GB of RAM. YMMV.



We had about half a billion rows in one table back in 2008, and were doing thousands of updates/inserts/deletes per second. Can't remember how much RAM we had, but we sure did have a few spinning disks.


Is that web scale?


You run MongoDB as a backing store. Very web much scale.


I prefer /dev/null for write-heavy workloads that need long-term storage. There’s plenty of data in modern physics to suggest that there’s no information loss from going into a black hole, so there shouldn’t be any problems. Put “architected web scale data singularity with bulletproof disaster recovery plan” in your CV. You don’t need to mention the recovery plan is to interview for new jobs when someone starts asking to access the data.


Haha. Good one.


Literally pays for itself in scale


Anyone who wants to get these references: https://www.youtube.com/watch?v=b2F-DItXtZs


"Does /dev/null support sharding?" - literally dying over here, so good


Someone needs to remake this, just global search and replace “web scale” with “blockchain”.


This is Google: https://www.youtube.com/watch?v=3t6L-FlfeaI&t=12s

I don't even know how to count that low


Same. We've got 300 to 400 million rows in one table and our queries still complete in the 100ms range!


Any partitioning?




