A lot of startups engage in a sort of cargo-cult architecture. Their reasoning goes something like this:

1. Amazon/Facebook/Google have a lot of traffic.

2. Amazon/Facebook/Google use X to scale horizontally. ergo:

3. My little startup should use X and scale horizontally.

What they fail to realize is that most of these companies would be ecstatic if they could scale their machines vertically, if they could focus on great user features instead of having to figure out how to shard in the application layer. You should never forget that Amazon, Facebook, and Twitter all started out as pretty basic LAMP stacks and built their own tools only when it was obvious that no existing tool would do. I think Google is the exception, because their MVP was in fact a web-scale application. So by all means: vet your idea, get some customers, get traction, and scale the cheap way by buying more RAM for as long as you possibly can.
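
To make "shard in the application layer" concrete for anyone who hasn't had to do it: the application itself decides which database a given row lives on, and every query path has to know about it. A toy Python sketch; the shard names and routing helper here are made up for illustration:

    import hashlib

    # Hypothetical shard list; in practice each name would map to a
    # separate database server.
    SHARDS = ["users-db-0", "users-db-1", "users-db-2", "users-db-3"]

    def shard_for(user_id):
        # Hash the key so a given user's rows always land on the
        # same database.
        h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
        return SHARDS[h % len(SHARDS)]

    # Every data-access path now has to route through this:
    #   conn = connect(shard_for(user_id))
    # instead of talking to one central database.

That's why those companies would rather have been buying RAM: the routing logic leaks into everything.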




Yes. But let's look at the other side too.

Before Google came along and showed the business people the benefits of horizontal scaling, any software engineer would be automatically considered crazy if they suggested an architecture that wasn't built on a central RDBMS.

So you have to weigh it against the other cargo-cult. How many startups along the way have failed due to the inability to scale horizontally?

How many have failed due to too much cost and complexity associated with re-engineering an architecture in which the assumption of fully ACID transactions permeates the entire codebase? (While the phone is ringing off the hook because production systems are falling over under load.)
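
To make that concrete, here's the kind of code that permeates such a codebase. A minimal sketch using Python's stdlib sqlite3 as a stand-in for the central RDBMS; the point is that one atomic transaction spans two rows, which stops working the moment those rows live on different shards:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (user TEXT PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                     [("alice", 100), ("bob", 0)])
    conn.commit()

    def transfer_credit(from_user, to_user, amount):
        # Fully ACID: both updates commit together or not at all.
        # Trivial on one RDBMS; impossible as-is across shards.
        with conn:  # commits on success, rolls back on exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE user = ?",
                         (amount, from_user))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE user = ?",
                         (amount, to_user))

    transfer_credit("alice", "bob", 25)

Re-engineering means replacing every such transaction with two-phase commit, sagas, or a redesigned data model, and doing it while production is falling over.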


I realize this may not be a popular opinion on HN, but there's something to be said for planning ahead. I've seen this story before and already know how it ends: you wait until the last possible moment to switch to a more horizontally scalable system, and the next thing you know you're spending more time and money maintaining the "cheap" solution than it would have taken to switch to something like Cassandra beforehand. To make matters worse, your service is crashing, the short-term fix takes a day and requires two or three people to do the replica-switch shuffle, and the long-term fix will take a couple of weeks, if you can find the time for it at all.
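
For context on why something like Cassandra scales out by just adding machines: it partitions data with consistent hashing, where keys live on a ring and a new node takes over only the slice adjacent to it. A toy sketch of the idea, nothing like production code:

    import bisect
    import hashlib

    class HashRing:
        # Toy consistent-hash ring: adding a node remaps only the
        # keys between it and its predecessor on the ring.
        def __init__(self, nodes):
            self.ring = sorted((self._hash(n), n) for n in nodes)

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        def node_for(self, key):
            h = self._hash(key)
            i = bisect.bisect(self.ring, (h, "")) % len(self.ring)
            return self.ring[i][1]

    ring = HashRing(["node-a", "node-b", "node-c"])
    print(ring.node_for("user:42"))  # a key always maps to the same node

Growth becomes incremental (add a node, move one slice of keys) instead of a forklift migration under load.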

Long story short, I have to grant that you shouldn't worry about scaling up too soon or too quickly. But don't go to the opposite extreme and put it off until the last possible moment.


Amazon did not start out as a LAMP stack; it was more like a "DNBC" stack:

D igital Unix

N etscape Commerce Server

B erkeley DB (NoSQL!)

C code (linked into the HTTP server)


> Berkeley DB (NoSQL!)

I'd call BDB "pre-SQL" rather than "NoSQL".
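
Either way, the programming model was the same: a persistent hash table of byte strings, with no schema and no query language. Python's stdlib dbm module exposes essentially that interface, so as a rough illustration:

    import dbm

    # The whole "pre-SQL" model: persistent key/value pairs of bytes.
    # No schema, no joins, no query planner.
    with dbm.open("catalog", "c") as db:  # "c": create if missing
        db[b"user:42"] = b"some serialized record"
        print(db[b"user:42"])

BDB wasn't rejecting SQL; it simply never had it, which is what makes "pre-SQL" the better label.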


Granted, I'm speaking from secondhand experience, but for the last couple of years I worked closely with some former and current Amazonians from their frontend and order-workflow teams. I do know for a fact that much of their frontend is/was in Perl, and that Oracle databases have their place. But I guess that wasn't what they started with.



