The Five Stages of Hosting (blog.pinboard.in)
747 points by stilist on Jan 30, 2012 | 179 comments



This is an amazing article. I had intended to write such a blog post myself.

One aspect I intended to cover (and will now do so here) is that of cost. I get very frustrated by cash-strapped startups which from day one are expecting to be 'web-scale' and need to turn up machines at the drop of a hat. Let's get real. I HOPE you have to worry about that... much, much later.

I'm quite pleased you started your article mentioning solutions like Heroku. Unless you have some special needs not met by a PaaS, this is where you should start. You should be writing code, not managing servers (this coming from an operations guy who has been managing servers for 12 years, and worked at a webhost). Once you scale far enough that it's worth hiring someone to handle the maintenance of your application stack, OS updates, security, etc. -- THEN move on. Not a moment before.

Cloud servers are realistically not the price/performance/low-maintenance solution for MOST startups. You should get a VPS (Linode) or dedicated server (from a reputable company which can offer quick SLAs on replacing parts -- like 2 hours at voxel.net). Dedicated servers are cheaper than you think. I pay Voxel $180/mo for an 8GB quad-core 1TB box on 100mbit/sec. It outperforms servers costing twice as much in EC2 -- and that's not counting bandwidth or storage. Concerned about reliability? Buy TWO, in different datacenters. You're STILL saving money, and you have the exact same level of maintenance overhead as AWS (OS, updates, full application stack), while reaping the performance benefits of bare metal.

You do NOT want the headaches of colocation. You cannot pay your staff enough to stay near the server 24/7, and the cost of keeping extra parts on hand eats up any savings over dedicated.

Your startup is not Google, so I won't get in to having your own datacenter. (Well done pointing out that they're not getting advice from your blog)


I had some poor experiences with dedicated servers at well known providers. Essentially if the hardware fails, they'd rather fix it than get you into something new so you can be back up and running. My site ended up offline for several hours, several times, so that some tech could run hardware diagnostics, etc.

EC2 may be expensive, but if anything goes wrong, I can boot another instance in seconds and abandon the old one. They fix it on their own time.

Sure, I could buy more servers so that one downed server doesn't affect me, but that's twice as expensive. On EC2 I don't have any penalty for abandoning an instance for any reason. I've actually moved data centers in EC2 when the performance profiles were better on the other side for the same size instances. It was a temporary difference, but migration was pretty much painless.

Your Voxel suggestion may have a 2 hour SLA for replacing parts, but what happens when they don't know what is wrong?

That said, I agree that a person should try for the PaaS and evaluate all the options fairly. I tried Linode, several dedicated hosts, and then finally moved to EC2 on a previous project.


I agree, I love EC2 personally. Their mini server plan (or whatever it's called) is great for keeping a production version of a project running while it's still in an initial phase (very low user base), and then you can scale easily without having to learn anything new, since you can run things basically the same as you do locally (I run Ubuntu locally and mainly develop for Tomcat, so it makes it really easy to keep things the same on there). I looked at Heroku, but as soon as I realized it would be rather expensive just to get regular database access (so I could run straight SQL if I needed to, etc.) it was out.


FWIW - you can run command-line SQL for all heroku instances: https://github.com/ddollar/heroku-sql-console


The micro instance seems incredibly slow for me running Rails. It runs out of memory while doing a git index-pack.


The micro instance seems to be routinely throttled, for random reasons. My vm was choked for days despite only pulling in ~200 visitors per day. Upgrading to the Small instance has removed all throttling.

I'm running a standard LAMP stack on it, nothing crazy.


Hmmm, yeah, I have noticed weird behavior a few times. However, I run a Java Spring app on there and most of the time it works; it's pretty much just me using it, though. I would think that Ruby does use more memory than Java.


What dedicated hosts did you use?

We've been using Softlayer for years with ~100 servers and have never had this problem. No host is perfect but overall it's been a good experience and a lot cheaper than EC2.


I love Softlayer. Excellent facilities, hardware, network and people. I just wish their RAM pricing wasn't so insane... $200/mo forever to add an $80 8GB stick.


This is certainly true for the common case, but there are certain types of workloads where dedicated is the only way to go. Where I work we were doing a bunch of compute heavy work and we started out on EC2 and recently switched to dedicated hardware, which resulted in a 30-50x performance/price increase.


You can find the full voxel SLA here: http://voxel.net/sla

The nice thing about going with a larger dedicated host like Voxel is that they have extra hardware & servers on hand.

In fact, they even have instant-provisioning of dedicated hardware: http://voxel.net/voxservers

I worked for, and have used, dozens of dedicated hosts over the years (I spend far too much time on http://www.webhostingtalk.com ), and can totally agree that these are common issues with dedicated servers.

To each their own; just sharing my $.02 perspective on the situation ;)


The problem I have with EC2 is their configuration. They have small instances and then large instances -- nothing in the medium range with 4 cores, 2 GB RAM, 64-bit, etc. So if I were a small company and I needed 2 medium instances (which IMO is what most people want), I'd end up paying .34 * 24 * 30 = $244.80 on Amazon, whereas on Linode I pay $60 per month.

I run my staging boxes on EC2 and production on Linode. The micro instances are good for staging/dev at just $15/month. No other provider I know gives you a VM at such low cost.
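For anyone doing this math themselves, here's a rough sketch of the comparison in Python; the rates are just the ones quoted above (circa 2012), so swap in current pricing before drawing any conclusions:

    # Back-of-the-envelope monthly cost comparison, using the numbers quoted above.
    HOURS_PER_MONTH = 24 * 30

    def ec2_on_demand_monthly(hourly_rate, instances=1):
        # On-demand EC2: you pay per instance-hour, all month long.
        return hourly_rate * HOURS_PER_MONTH * instances

    def flat_rate_monthly(monthly_rate, instances=1):
        # VPS/dedicated style: one flat monthly fee per box.
        return monthly_rate * instances

    print("EC2 large, always on: $%.2f/mo" % ec2_on_demand_monthly(0.34))
    print("Linode plan quoted:   $%.2f/mo" % flat_rate_monthly(60))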


Dedicated servers are cheaper for the same performance, so you can afford 2.


I agree that you should be writing code and not managing servers, but in my limited experience PaaS only replaces managing servers with learning the platform's APIs, learning idioms specific to that platform and working your way out of the lack of libraries you are accustomed to.

A lot of programmer overhead.


This has been getting better. The platform-specific knowledge necessary to get a Heroku app running on Cedar is pretty much limited to knowing you need to put "heroku run" in front of your rake tasks or command line utilities, and perhaps 3-4 sets of arguments to back up a database, restore one, and restart your app.

Compared to the amount of knowledge necessary to effectively and reliably run a dedicated server, it's honestly pretty trivial.


I've had to rewrite gems and Rails code to work properly with Heroku. The biggest drawback to Heroku IMHO is the read-only filesystem. It makes simple tasks like creating ZIP files on the server more challenging.


They've actually moved from a read-only to an ephemeral filesystem with Cedar. You can write to your heart's content, but can't really rely on the files being there. In most cases where you'd want to write files, you really should be using something like S3 anyway.


Heroku's Cedar stack does not have a read-only filesystem. However, it is ephemeral, meaning it is not durable and it's not a guarantee that things you put on the filesystem will stick around between requests.


This happened to me as well. If I had to do it all over again, I would have started pushing to Heroku from the start.


I think you are implying that cloud servers are different from Virtual Private Servers. I am new to this, so forgive my ignorance, but how are they different? Is it just that, effectively, with standard cloud server options you can spin up new/multiple instances, whereas with a VPS you've just got the one?

Or is it more about how it is all handled in the background?


So Amazon uses some of the same technology (Xen), but their packaging, support infrastructure, and pricing model make it very different from a VPS offering.

With a VPS you typically get bandwidth, storage, etc. included -- and it's 'uncomplicated': you pay one monthly fee, and it covers it all.

AWS can be viewed as both fault-tolerant and as carrying an additional probability of fault due to the extra complexity they've built in (they've gone down for substantial periods due to hiccups in that complexity).

A good VPS provider will make the particular system you're on fault-tolerant on its own: dual power supplies, RAID arrays, etc.

AWS doesn't care about making any particular machine fault-tolerant, because their model is that you should spin up a new instance, and throw away the one that failed.

That's great for a 'web-scale' enterprise that has invested the resources to make their site operational with that mindset. For early-stage startups where dealing with a failed instance means their site is down until they create or restart a new one, the advantage goes to VPS IMHO.


One of the differences is how VPS and "cloud" are metered: typically monthly (possibly prorated daily) for VPSes and hourly for cloud servers like EC2. You can spin up multiple instances in both cases; more differences come into play if you start using cloud-specific features like load balancers, cloud storage (i.e. S3), etc.


Using Heroku is not free of programming overhead, as you have to bake in Amazon S3 as a dependency.


That implies your users are uploading files. Certainly not the case for all startups, but yes, this could certainly be an additional overhead. Still, compared to running and configuring a VPS, that's nothing.


Also, if you're handling user-uploaded content you shouldn't just be writing it to local disk. You're going to have to figure out how to do off-instance blob storage at some point if you scale beyond a single server or want any redundancy. S3 is just about as good as it gets for blob storage today.
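If it helps, a minimal sketch of what "write it to S3 instead of local disk" looks like -- this assumes boto3 with AWS credentials already configured, and the bucket name and key are placeholders, so treat it as illustrative only:

    # Minimal sketch: push an uploaded file to S3 instead of relying on local disk.
    import boto3

    s3 = boto3.client("s3")

    def store_upload(local_path, key, bucket="example-app-uploads"):
        # After this call the local copy is just disposable scratch space.
        s3.upload_file(local_path, bucket, key)
        return "s3://%s/%s" % (bucket, key)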


as someone that has done colocation, dedicated hosting, and VPS, i'm a huge fan of dedicated hosting.

colocation was expensive and the hardware problems were all mine. i was pretty much tied to my local datacenter because i didn't want to ship a server around (which would be at least a day of downtime). pricing can be hard to compare because of power/space/bandwidth. if the equipment i colocated didn't have IPMI support, it could sometimes take up to a half hour to have a datacenter tech be able to put a remote console online when there were problems. at the end of it, i had a bunch of servers that were worthless on the resale market due to their age.

VPSes were never a serious option for the reasons stated in this article. it's impossible to track down performance problems when a dozen other VPS customers on the same server are taxing the CPUs and disks. i do use one that i pay $10/month for just to run a network monitor for some off-network perspective. they can be useful for single-task servers that don't need a lot of processing power like dns servers.

with dedicated servers, though, you can signup on a website and within a few hours have a complete server with modern CPUs, disks, and lots of memory assembled, tested, and connected to the internet with a remote console waiting for an o/s installation. when hardware goes bad, the server provider has lots of spare parts waiting around to be swapped in for free. and the best part of all, when you're ready to upgrade or move to a different provider, you just cancel the account and let the provider worry about what to do with the old hardware. i have a handful of these on various providers costing between $140-$190 a month for something like a core i5 ~2ghz with 8gb of ram, 2 big sata drives, and 100mbit ethernet with more than enough transfer every month.


Off-topic piece of advice: Not capitalizing your sentences severely degrades the readability of your post.


i have a handful of these on various providers costing between $140-$190 a month for something like a core i5 ~2ghz with 8gb of ram, 2 big sata drives, and 100mbit ethernet with more than enough transfer every month.

This seems ridiculously cheap compared to something with comparable RAM/CPU on EC2.

Is there a catch?


No catch, you've just discovered how ridiculously expensive EC2 is.


Then why do so many startups (and even established businesses like reddit and Netflix) use EC2?


Reddit and Netflix have entirely different models at their 'web-scale'.

They actually DO need to be able to scale servers up and down based on demand-usage, time of day, growth patterns, etc.

For them, the extra EC2 cost is negated by being able to spin up an extra 400 instances in the evenings, and turn them back down after everyone goes to sleep.

Under the dedicated server model they'd ALWAYS need to have an extra 400 dedicateds to handle that peak load (and in fact would need to have far more to handle additional spikes, projected growth, etc.) Those dedicated servers would sit idle, costing them money most of the time.

This is the difference between 'web-scale' and your typical startup's usage pattern.


> They actually DO need to be able to scale servers up and down based on demand-usage, time of day, growth patterns, etc.

Cost wise, your best scenario is usually going to be dedicated or colo + EC2 or similar for overflow / peak.

If you do just dedicated hosting you need to leave enough spare capacity that you feel comfortable handling the spikes for whatever the worst case provisioning time your host has.

It's still cheaper than EC2.

But if you do dedicated + ability to spin up EC2 to take peaks, you can go much closer to the wire with your dedicated hardware, and increase the cost gap to EC2 massively. You don't need as much spare capacity to handle peaks any more. You don't need as much spare capacity to handle failover.

It's rare for it to pay off to spin up EC2 instances to handle intra-day "normal" load changes, though -- most sites don't have differences that are pronounced enough over short enough intervals for it to be worth it. If you do the dedicated + EC2 model, your EC2 instances need to be up no more than 6-8 hours or so on average per day before it becomes cheaper to buy more dedicated capacity.
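A back-of-the-envelope way to find that break-even point; both prices below are made-up placeholders (not actual AWS or dedicated quotes), picked only so the arithmetic is easy to follow:

    # Break-even for "dedicated baseline + EC2 for peaks": how many burst hours
    # per day before renting costs as much as one more always-on dedicated box?
    DEDICATED_MONTHLY = 180.0   # hypothetical extra dedicated server
    EC2_HOURLY = 0.80           # hypothetical on-demand instance of similar capacity
    DAYS_PER_MONTH = 30

    breakeven_hours = DEDICATED_MONTHLY / (EC2_HOURLY * DAYS_PER_MONTH)
    print("Break-even at %.1f burst hours per day" % breakeven_hours)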


Also, reddit used to have different budgets for servers (lots) and people (very little), so anything that reduced system administration was a win.


I don't think you'll find that reddit actually spins up extra instances though. They simply run 240 instances (right now). So it still seems like an odd choice, given they could probably reduce that to about 50 instances of dedicated hosting.


Also, they probably get much better rates from Amazon than you or I.


Thanks for the explanation. This comment (and indeed this whole thread) has been very useful to me.


You should not make your own technology judgements based solely on what other people are using. There are too many people jumping on the cloud bandwagon who have no clue what they're doing.


Making technology decisions based on what others are doing is the fastest way to make minor decisions. There's immense value in asking "what does company x do?" choosing that option, and moving on. In my experience time deliberating is time wasted--it's better to make a decision and move on. You can (and should) change later.


> Making technology decisions based on what others are doing is the fastest way to make minor decisions.

Sure, but a minor decision is who prints your business stationery or what kind of toilet paper to buy.

What hardware/networking platform to build your service on is not a minor decision. It will fundamentally affect the architecture of your entire system, and getting the decision wrong will potentially be expensive-with-a-capital-Failed-Company.


Reddit very much regretted using Amazon EC2 EBS disks for their site, according to their blog. It killed their site for almost entire days at a time, many times, over a long period. It took them months to undo that decision.


That was just EBS, not EC2 or any of the other AWS products. Most of AWS is fairly solid AFAIK, just expensive at certain scales.


Because of the cloud fad. People have always made bad IT decisions based on the flavor of the month. They still do.


Paying $140-$190 per month is already ridiculously expensive.

I pay about $3 per month for a very low-end xen VPS. Sure, I'm not running anything at all resource intensive on it.. but if I was maybe I'd spring for a higher-end VPS for $6, or if I really wanted to get crazy I'd find a really solid one for about $20 a month.

At $140-$190 a month, you'd better be getting a bunch of high-end dedicated servers and fantastic support, or you are getting seriously ripped off.

Hell, given the very modest needs of most small startups, having to shell out even $20 a month is a ripoff.


$140 for a second-gen i5 and 8GB of RAM is pretty reasonable.

A $20 VPS is only for pet projects. For a startup it's worth it to invest in one or more dedicated servers. Having some redundant firepower in dedicated servers, so that you don't have to worry about upgrades for a while, is a sweet spot between saving cost and scalability.


My host of many years charges 60 EUR for a similar setup as a managed server: "unlimited" bandwidth (whatever you can pump through 100Mbit within a month), RAID10, etc., plus 75 GB of external backup. And at another big hosting company you get an even bigger package for 45 EUR, though not managed.

However, I remember that hosting in the US is surprisingly expensive for some reason, and has been for many years. Don't know why.


What company are you using?


It may be Hetzner (Germany). They have a 45 € root server with an i7 quad-core, 16 GB RAM, 2 x 3 TB disks (link: http://www.hetzner.de/hosting/produktmatrix/rootserver-produ...). I'm very content with what they provide, even good/friendly support.


In my opinion a very good choice. I run an EX4 box there and I'm more than satisfied with the service they provide.

Recently I fired up a support request around 5am and got a response at 6am when their support team picks up work. During the day you usually don't have to wait longer than an hour for a support response.

They have awesome hardware for reasonable prices... I migrated everything I own from EC2 to one EX4 box with 8 cores and 16 GB of RAM for less than I paid at EC2.


Also have a look at OVH's Kimsufi brand: http://www.kimsufi.com/fr/

i7 + 24GB RAM + 2TB disc + 100Mbps for €50

(they geo-restrict their plans, and I think only the French get unlimited bandwidth; UK people have to survive on "just" 15TB/month, for example)


http://www.kimsufi.co.uk/ is showing unlimited for all three plans when I visit (from a UK IP address). I might have to get one of the lower spec (celeron/atom) machines for myself, do you have personal experience of the company?


Careful of the small-print: "The traffic is unlimited. If you exceed 5TB/month for the Kimsufi 2G, 10TB/month for the Kimsufi 16G or 15TB/month for the 24G Kimsufi the connection will be limited to 10 Mbps." My understanding is that if you do get restricted, the restriction isn't automatically lifted the next month. You might want to check with pre-sales to find out exactly what the process is.

I have had a box with them on and off (more on than off) for the last couple of years. I'm not doing anything critical there, but have also had no issues. Everything seems rock solid (which it should be -- Wikipedia says that "The company has six datacentres housing more than 100,000 machines.") You can reinstall from the web control panel, they rent KVMs, not sure what else to add.

The major things people tend to bring up in forum postings are:

Kimsufi is the "non-professional" brand, and hardware support is slower than the OVH parent. Nothing ridiculous, but you aren't going to get a bad drive replaced within a couple of hours either.

OVH has "low quality bandwidth". Of course, people making this complaint never quantify exactly what they mean by this. I just tested on my "24G" box, and see 30-40MB/s to cachefly, 90MB/s to Leaseweb in Amsterdam, 75MB/s to Linode in London, 3-4MB/s to Linode in Newark and Atlanta, 12MB/s to SoftLayer in Dallas, 10MB/s to SingleHop in Chicago, 11MB/s to Joe's Data Centre. (Of course these are large test files.)

If you're interested, there's no setup fee and no minimum contract length (check this for yourself, obviously), so I'd say go for it.


I might be interested in that.

5TB as a limit really shouldn't be a problem for what I would use it for.

Then again, for what I currently do, 10Mbit would not be much of an issue either, to be honest, though I'll ask about the procedure to reset this if I do hit a sudden bandwidth spike one month.

There is VAT on top of that listed price, but even with that it is still a good deal if the kit and company are up to scratch. Last time I looked there was an extra cost for paying monthly, but that seems to have been removed.

> You can reinstall from the web control panel

That sounds interesting for a dedicated box. Are they using some form of SAN rather than each machine having its own local drives (so they can "reimage" your machine just by dropping its volume on the SAN array and creating a fresh one from a template)? Or is this less instant (more like: make the request via the control panel and in X hours a passing support tech will plug a USB stick in and reboot to reinstall)?


Local hard drives, and I guess the (re-)install just boots from the network.

You can customise the partition layout, but otherwise get a standard set of packages for whichever distribution. There's a choice of the usual Linux suspects in 32- and 64-bit, FreeBSD, Open Solaris and Windows Server 2008.

The process is fully automatic and a CentOS install takes about 20 minutes.


I am using netclusive, but I think they only have a German language site. Others and with English site are Server4you or Hetzner.


No; you can get a VPS with those specs in that price range too. It's EC2 that's extremely expensive. I moved $1000/mo in fully utilized EC2 instances to dedicated servers for half the cost.


One more vote for dedicated hosting. It's a sweet spot for people who need anywhere from a handful to hundreds of servers. You have to get very large or have unusual requirements before colocation makes economic sense.


The problem is the "o/s installation" part, while I can install Ubuntu which is fairly easy these days, I'm not a sysadmin and I'm pretty sure I wouldn't be able to configure it properly. Do you recommend any book that'd teach me how to set up a dedicated server from start to finish, including all those little details of OS configuration?


I am a sysadmin, and frankly no book will teach you sufficiently.

Either you use Linux all day every day for some time, or hire someone to consult with you. That's my opinion anyway from years of consulting with people :)


I'm usually on Fedora, but I barely have to touch anything.


> i have a handful of these on various providers costing between $140-$190 a month for something like a core i5 ~2ghz with 8gb of ram, 2 big sata drives, and 100mbit ethernet with more than enough transfer every month.

With who?


http://m5hosting.com and http://singlehop.com are two that i'd recommend.

one thing to watch out for with dedicated hosting providers is that the networks are sometimes not so great. you may get a great deal on hardware and tons of bandwidth, but if it drops packets all the time, it's not worth it.

make sure they segment you off onto a vlan (see http://jcs.org/mitm for why), make sure they are well peered, make sure they actually have staff at or near the datacenter they're running things from, and make sure they have the ability to block certain traffic from reaching you if you need them to (see http://jcs.org/sip for why) so it doesn't count towards your bandwidth total.


There are plenty of options. Have a look at https://www.snelserver.com/ (they just sponsored a server for a project I am affiliated with) or Hetzner - http://www.hetzner.de/en/hosting/produktmatrix/rootserver-pr... 1and1 also has some nice offerings.

If you go this route you can just choose to run virtualization yourself with VMware ESXi, or KVM and libvirt.


I have that exact setup with Voxel.net for $180/mo. I just bought an 8-core, 16GB, 120GB SSD & 1TB HDD box w/ a 1gbit/sec connection for $141/mo from Incero.com.


Curious about incero.com. Seems like essentially a one-man company renting rackspace from Netriplex. How comfortable are you with that as a point of failure?

I'm also uneasy with people that put up a page like this which basically makes it seem (unless you read the first sentence carefully) that you are in the secure hands of a much larger organization.

http://www.incero.com/datacenter-network

http://www.linkedin.com/company/incero-llc

Thoughts? (Note all blog posts are by "Gordon".) I have fewer issues, of course, if the idea is redundancy.


Yes -- I should've prefaced that this is a combination performance test, staging server, hot failover and evaluation of the company. They appear to have been in business for about 3 years (whois info + archive.org), and have positive reviews on WebHostingTalk: http://www.webhostingtalk.com/showthread.php?t=1120499&h...

Info in that post seems to indicate more employees:

Real office location in Austin, TX

Experienced owners; over 8 years experience in the web industry, over 50 years combined business experience

I haven't even gotten my Incero server yet (just ordered it Friday evening), but I am impressed by their offering -- such as IPMI/KVM, which is often not offered by fly-by-night hosters (and not offered by EC2 in the form of out-of-band access).

I also happen to be in a unique situation of not needing 100% uptime for my startup - a few minutes of downtime to switch to a hot failover is a non-issue; so if Incero proves competent to me, I'll still give them serious consideration for production hosting (again, with proper hot-failover backup elsewhere); despite potentially being a one-man-show.

Difficult to say which is the lesser of two evils:

* Being the customer of a small business, having to potentially wait for response

* Being one insignificant customer among millions when there's a datacenter meltdown at AWS, with no phone number to call, and no neck to strangle until you get back up -- at the end of a long list of small customers.

I only casually mentioned Incero here, vs. Voxel in many other replies, as I believe voxel is the more appropriate choice for most startups.


I once rented servers from a company that came highly recommended on WHT, back in 2004 or so. One day, all their servers went offline. There were hundreds of posts in the thread at WHT. They never came back. It turned out they were operating at a loss and had stopped paying their bill to the data center for so long that the building finally pulled the plug on their racks.

I'm reminded of this every time I log in to my now 12-year-old PayPal account. I see "1 open case" in my "Dispute Resolution Center" -- it's the case disputing my prepayment for the servers to this host, which PayPal said they would leave open forever, and if the principals of that company ever opened a new PayPal account they would have to resolve those disputes with all their former customers first.


I've had a quarter rack at the same place for many years now. I make a point of stopping by in person from time to time to get a feel for how they are doing business-wise -- just walking through the office, talking to the employees, things like that. They are actually on the expensive side, but I like that it keeps out the riff-raff that could tax the network or create problems (when I say expensive, I mean for bandwidth).


In the end I guess I would go (in addition to redundancy) with the organization that is smaller and hopefully has more expertise. I feel more confident that the smarter people will be able to solve the problem, and that the smaller shop has the motivation to deal with something rather than live with the personal aggravation of it not working. In a large organization it's easy to hide behind someone else or make excuses.

Ultimately better to have a person or two that will pick up their cell phone when they are in the mall on a Saturday or out for dinner but are quite willing to tackle the problem when they arrive home. Not because they have to but because they care about what they do and have a conscience.


TIP: I see a lot of people calculating off-the-cuff AWS prices for comparable hardware somewhere else and declaring how expensive it is.

Don't forget that 3yr reserved pricing is 48% cheaper than the on-demand costs, so once you know what your hardware reqs are on EC2, you can purchase some reserved instances and more or less cut your costs in half. Pricing out any hardware configuration on EC2 using the on-demand pricing is tear-inducing.

For Day 1 release, probably not an option. But at the 6-month mark you probably have a much better idea of what hardware your startup needs and can adjust accordingly.
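A sketch of how the on-demand vs. reserved comparison works out. The upfront fee and discounted hourly rate below are invented for illustration, not real AWS numbers; the point is just that the one-time reservation fee gets amortized over the term:

    # On-demand vs. reserved: reservation fee paid once, then amortized over the term.
    HOURS_PER_MONTH = 24 * 30

    def on_demand_monthly(hourly):
        return hourly * HOURS_PER_MONTH

    def reserved_monthly(upfront_fee, term_months, discounted_hourly):
        return upfront_fee / term_months + discounted_hourly * HOURS_PER_MONTH

    od = on_demand_monthly(0.34)
    rs = reserved_monthly(upfront_fee=1200.0, term_months=36, discounted_hourly=0.12)
    print("On-demand $%.0f/mo vs reserved $%.0f/mo (%.0f%% cheaper)"
          % (od, rs, 100 * (1 - rs / od)))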

2 CENTS: For the folks that need something better than a Micro and less expensive than a Large, don't forget about the Medium instances.

They aren't in the primary section but down in the "High CPU" section; they are an excellent fit for work that isn't quite big enough for a Large.


The hostel

You decide to run a site on 'shared' hosting (e.g. dreamhost, 1&1, godaddy (shudder), etc.) because it looks really cheap and they list so many "unlimited" things.

The good:

It's surprisingly pretty cozy for your startup wordpress blog about fish, although at that point you start wondering why you didn't just get a wordpress.com account instead, but you justify it by saying you were able to put your custom design theme with lasercats this way. You're not sure why your friends are saying the site feels slow.

The bad:

Oh, you like to program huh? Sorry our python version is 2 years behind. What's this ruby stuff you speak of? You're hosting user-submitted content on your bunk bed? Get out! Or upgrade to our overpriced "VPS" solution! You give up on the site and throw money away.


The thing is, the price difference between this and a VPS is too dramatic. For example, I can get Lunarpages basic hosting, with "unlimited" bandwidth and "unlimited" storage, for $4.99/mo with support for Rails apps and Python (not Django, however). Not to mention a free domain name.

However, the cheapest Linode VPS is $19.99/mo.

I would LOVE to have a vps to play around on, I would have so much use for it and yet I'm a student living in London and all my money goes towards the cost of living. It makes much more sense for me to go with the cheaper shared hosting as I just can't afford anything more.

As an aside, I found http://virpus.com/ the other day and they are selling VPSes starting at $3/mo. Does anyone have any experience at all with them?


The price difference between shared hosting and VPS is non-existent. http://www.lowendbox.com/ is a site dedicated to low-end VPS offerings under $7/mo. They have tons of offers listed and guides like "Yes, You Can Run 18 Static Sites on a 64MB Link-1 VPS" (http://www.lowendbox.com/blog/yes-you-can-run-18-static-site...). To be listed there, $7 must be the maximum recurring price, not a limited-time offer. Some are really good.

The real difference is not in price; it's that in shared hosting you don't get the option and associated responsibility of managing your software stack; in VPS you do.


I've heard quite a few people knocking virpus for unreliability and bad support, though I've never used them myself. http://www.lowendbox.com/ is a good resource for cheap VPS information, though be aware that I wouldn't put anything "production" on just one (I have a couple of cheap VPSs that playthings and some non-sensitive off-site backups go on, but anything of any import does not just go on one of them). Also, be wary of "burstable RAM" - more often than not it is far more trouble than it could possibly be worth (the feature might make sense in an environment where you know the likely use patterns of the whole host and its guest OSs, but it just causes problems in a shared environment IMO).


I've heard good things about http://prgmr.com/ and they have 5/6/8/12 -dollar tiers.


If the amount of traffic they offer feels adequate, good for you. Otherwise, practically anyone else will offer more traffic these days.


I use prgmr to host a website and a couple of private git repos; I would definitely recommend it.


I'm on the EC2 free usage tier and having no problems for my minimal use of a VPS. For just playing around and testing stuff, it's great.


Ah, I hadn't seen this before! Going to register now.


Rackspace Cloud starts at a little over £7/month for their cheapest offering.

They also give you some ability to "scale up" by simply asking for a bigger instance, and they handle moving your server over.

The only real downside is the default password. Unlike AWS, you are given a root password, which is pathetically easy to crack, so the first thing you need to do is change that.


Gotta love those "CPU slices" that ensure you'll never dream of touching those "unlimited" limits.


CloudSigma is in Europe, and has a little lower barrier to entry compared to Linode. I'm on the lookout for an even cheaper solution. Virpus seems to be almost too cheap, so where is the catch?


That's what I'm trying to find out. TBH I'm not looking to use it for a startup, more for actually playing around with all the new stuff I see on HN. But yeah, it does look a bit dodgy being SO cheap.


yeah. I'm currently using dreamhost shared and despite my gripes, it's not bad for the lay person. Dare I say I recommend it if you want to run a static-page driven site (e.g. jekyll / hyde / etc.) . I want to see how far they take unlimited and see if they'll handle my several-gig photo collection soon.

Hell, my dad uses godaddy of all things, but at least he has his tiny wordpress site up to help his non-tech business.


Dreamhost handles my photo collection fine.

Also, Dreamhost at least gives you shell access, which is a big plus. And, contrary to a comment above, you can build your own environment. I have an up to date python 2.7 virtualenv from easy_install, and same for ruby.


Disregarding that you can get VPSes for less than that, is $20/m really that much of a problem? That's 5-7 fewer trips to the cafe, 3-4 fewer beers at the bar, or 1-2 fewer times eating out per month. If you really wanted to you could find $20/m.


Ha, well said. I could never understand why GoDaddy didn't bother to improve its Rails support. I tried more than once to get a Rails app up on GoDaddy only to find that they were still supporting a version from 5 years ago. Seems like it would be an easy way to get several thousand paying customers who just have simple Rails applications.


An important addition to the article is that, as you descend the list, you run out of people to blame.


"people to blame" only helps if you work for someone else. When you are the boss? more "people to blame" means more people that can cause serious problems if they can't do their job.

Having other people that can do something for you so you don't have to is a good thing. Having someone else to blame, if you work for yourself, is the downside to outsourcing, not the upside.

Outsourcing is great; but make sure you can always move to another provider if you have trouble with your current provider. Your boss might let you off the hook if a provider screws it up, but your customers won't.


I wrote a similar article trying to segregate hosting stages (though beware I'm a hosting company promoting a product) - http://blog.bytemark.co.uk/2011/11/03/the-cloud-is-your-inst... . tl;dr version is that I think the ultimate flexibility is in your application's install script. If you can deploy to one host, or several, and collapse or split out caching layers, databases etc. depending on resources available, you have a truly portable application that's ready to scale and/or move ISPs.


An option was left out which is to have your own T1 running into your own facility (that is not a data center). We did this for years ('96 to 2004) and had much better up time than with colocation with diverse paths and biometric security. We didn't need a generator either. Just an array of industrial batteries hooked up to a power inverter with a line conditioner could keep the equipment running for 24 hours if utility power went down. (This is much cheaper than anything you would buy commercially from APC or Triplite).


Well, that does trade off against risks like your T1 getting knocked out by a backhoe, or perhaps power failures between you and the other end of it (e.g. if I bought a business DSL connection from AT&T to run a server at my apartment, I could lose it in a general power outage when the DSLAM's battery ran out).

But I can well believe it's better than many co-lo experiences, although all of mine have been positive. Right now I'm helping a friend with one: due to it housing Protected Health Information we pretty much have to run it on our own servers, which I built and he put into a co-lo that he's worked with before for this sort of thing.


"trade off risks to your T1 getting knocked out by a backhoe"

Well luckily no backhoe but I did keep, at my own expense, Westell DS1 NIU's around because I had a situation where one went bad and it took the Verizon tech time to go and find one. So I bought a few so he would have the parts around. I also made sure when they strung the fiber from the connection point to our office (several hundred feet through other offices in the ceiling) that it was in orange conduit as opposed to just strung through the building (they were just going to run it like phone wire). I also had them bring an extra fiber through, as well as a fishing wire in case it was ever needed for anything in the future.


A T1? Can you even get hosting that slow these days? At hundreds of dollars/mo for a T1 (or more, depending on where you are at), you can get a dedicated gigabit connection and hardware colo'd somewhere. A T1 in an office is laughable these days.


Depends on what stage you are at and what type of site you are hosting. You get the advantage of being able to have unlimited hardware space and of not paying the colo a premium for electricity, which, in many cases if you aren't serving up video, could be a trade-off worth making. You also get the hardware physically where you are, which can be a benefit. It may not be the desired option the majority of the time, but it belongs on the list as an option.


A dedicated line is hundreds of dollars a month in most of the US. Electricity is also not-at-all cheap in many office parks/buildings.

You'll run out of hardware space quite rapidly, simply because the electrical density necessary just isn't there in many office environments (how many boxes can you fit on a 20A circuit today?).

Colo / hosting providers exist and sell for tens of dollars a month, and handle all of that pain and suffering. I can't imagine when self-hosting would ever be actually cheaper over the long run.


No-one has mentioned mixed hosting, whereby you pick from each category.

Something like Varnish you want a lot of RAM for, dedicated suits that well.

Web servers tend to be numerous and are just computational power (stitch all this gubbins together and return a string); their number varies according to demand and they suit virtual servers really well.

Now databases, these really need good disks, lots of RAM, decent CPUs. They are best dedicated or colocated. When things go wrong with a database server you really want to be able to rule out the invisible magic of other hosts, and the voodoo of being a virtual machine.

The best thing I can hope for whilst I scale is to find a provider that will sell me dedicated and public cloud instances that can live on the same VLan and still be reasonably priced.

I'm currently still totally with Linode, but with 9 instances, and an over-heated database server I know I'm getting close to the limits of what I can do there without re-focusing on splitting up the app when I could be adding new features.


This is an excellent write-up; I like the metaphors, and it doesn't feel too much like it's trying to force a preordained conclusion.

One option I have trouble fitting in, though, is the "run your own server locally". This might be #5, except it's often seen as actually a lower-class option than #4, rather than a step up: before you go all out with a colocated server, how about just a machine with Apache sitting in the office hooked up to your office's business-SDSL line?


> how about just a machine with Apache sitting in the office hooked up to your office's business-SDSL line?

I don't consider that an option for real hosting. There are a lot of reasons why it is bad.

Internet: DSL, or whatever your office has, is probably not that reliable, and single-homed (your ISP goes down, so do you).

Cooling: Offices are not designed to cool servers. The AC gets turned off at night and on weekends. Airflow is bad.

Power: Once you start running more than a few servers you will need to add special wall/roof mounted AC. The combined power of the servers and AC will cost you thousands per month.

Need: If you don't need more than a few servers, you don't need your own servers. It will cost less to rent a little space on someone else's server (VPS).

Hard: As the article says, "Hardware is hard." You have all the downsides of Condo and Manor, with none of the upsides. Power, cooling, out-of-band console, internet, networking, backups, provisioning, monitoring, the list goes on. And none of it will be to the quality of a datacenter. It's a lot of time and money for nothing.

"But ask not for whom the pager beeps — for sysadmin, it beeps for thee." I'm the sysadmin, and I don't want my pager going off because it's a three day weekend and the temperature in the server closet reached 120F (I've seen the temp reach 120F in about 30 minutes when the AC failed).


it's probably not included because keeping your production server outside of a datacenter is kind of a silly idea. you save some money, but you don't have the cooling, the power conditioning, the network redundancy, or the security of a proper facility.


I suppose it depends on your use case, but I don't see those as necessarily worse failure modes than other options. For example, Reddit has in the past been down for >20 hours at a time with its cloud solution. With a local solution, you would want offsite backup for the catastrophic "building burned down" failure mode, but most local failures can be solved in that kind of timeframe with the "drive to Frys, buy new server, and restore from backup" recovery plan.

Now if you can't afford any downtime, a local server is probably not a good idea, but then neither are many of the cloud alternatives. Also depends on size, of course; one or two local servers is a more reasonable proposition than 35 servers randomly thrown under desks. (Though the "so uh, does anyone remember which room 'thor' is in?" moment used to be a classic startup rite of passage.)


You've raised what I consider to be the elephant in the room, and you can see all the responses quick to tell you why you can't do that. It simply doesn't fit with what most people consider acceptable, which is that one must be a user for everything internet related.

Most of us have at least ~5Mbps/2Mbps (up/down) connections at home that are always on with minimal latency. The gaming service OnLive shows that these connections are adequate for most people's needs. Home/office connections will only get better, and I think the idea of running one's own server(s) for things like family and/or small offices will make more sense than some 3rd party corporation's "cloud." PIM, email, IM, pictures, videos etc. simply don't need to be subject to the whims or TOS of a company or outside one's own control.

This is what I do currently, and it certainly is a kludge at the moment, but I believe it can be improved to the point of appliance-usage eventually.


This is against many consumer ISP ToS and they can/will arbitrarily start blocking traffic on certain default ports depending on how draconian they are about it. I agree that hosting on a home server is an option (even a good one) for some people and usecases, but you are still subject to the whims of someone else, and in this case you often have no recourse as you are violating the ToS.


reddit goes down because they abuse(d) the shit out of AWS. judging cloud hosting by their failures is pointless. EBS is not meant to be a large scale memcached server.

i've done the "drive to fry's, buy a new server, restore from backup" type recovery. it sucks, you don't want to do it, but that's not what i'm talking about. if you're self-hosting at your office your run into all sorts of problems that datacenters solve. the cleaning staff at the DC will never unplug your server. you won't spill beer on it. if the internet goes down, they can fail-over to another connection, your office probably can't. the DC is closer to the backbone, giving your customers less latency. adequate cooling and smoother power means longer life for your hardware. and as you grow it's much easier to grow in a DC than having the server portion of your office gradually take up more and more space.


Just curious, how has reddit abused AWS?


They didn't. They just tried to store data on EBS volumes, you know, like they are supposed to. But EBS performance is very bad, and incredibly variable. So they would end up getting timeouts trying to write to the volumes, which broke their DB replication, which took the site offline for hours to re-replicate.


I've been running my own servers out of my closet right here, on my ADSL (though formerly business SDSL) line, for over 10 years. A PC on the floor is fine to start with. It can run MySQL, your framework of choice and give you something you can show people. Evidence: I ran and wrote the backend to a site, starting with two PCs on a DSL line, before ultimately being acquired by EA (stopover at Rackspace). For a couple hundred bucks (2x 3GHz 4G RAM) you can have a zillion times more flexibility than any cloudy free plan (Heroku, etc.).


What kind of uplink do you have?

An app that starts getting into the thousands of users will quickly saturate a normal ADSL line (1-2mbps up), and an SDSL/T1 connection is many times more expensive than a remote dedicated server.


Well, you move to hosting before you have thousands of users, silly! I'm talking POC/MVP stuff. My uplink is something like 384Kbit, plenty for showoff purposes.


That speed guarantees you a load time of at least 5 seconds for the average HTML page (with a single visitor), assuming best conditions. Even shared hosting, which is almost free, will fare better than that.


Oh do please supply the numbers you're working with.


300kB in an average HTML page, 47kB/s top transfer speed. It's physics.
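The worked arithmetic, for anyone who wants to check it (best case, single visitor, ignoring latency and request overhead):

    # Best-case transfer time for the average page over that uplink.
    page_size_kB = 300.0      # average HTML page + assets, per the figure above
    uplink_kB_per_s = 47.0    # roughly what a 384 kbit/s uplink gives you
    print("%.1f seconds" % (page_size_kB / uplink_kB_per_s))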


384kbit (48kB/s) is plenty? You are kidding, right?


I should have asked...what requires more than that? Besides "thousands of users," I mean.


Nope.


He forgot to add in #0 The parents' basement

Good: You (almost) have complete control over everything

Bad: Mom accidentally unplugged the power while vacuuming


You forgot another popular option that a lot of people use as an alternative to the "monastery" stage: shared hosting (shudder).


Agree. I was surprised they overlooked this one. Perhaps they couldn't think of an appropriate metaphor. Prison? Orphanage?


I'd suggest slum. It's a place to sleep, but you have no security and no property rights. It's generally unpleasant, and the whole thing could burn down at any moment. And you have (practically) no recourse if you don't like what you're getting. Even the best hosts have laughable SLAs (if they have one at all), and no meaningful performance guarantees.


Show me any reasonably priced service that has a SLA worth reading.

Nearly all retail providers have some sort of "if we have downtime, if you complain, we will refund you for the time you were down" SLA. The pennies you get don't compensate you for the time it took to complain, much less the lost business from being down. Look at all the companies advertising a 100% SLA; I mean, even 99.9% is unrealistic in most cases without some very expensive hardware or application-layer redundancy.
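For reference, here's what those SLA percentages actually permit in raw downtime per 30-day month (simple arithmetic, no provider-specific assumptions):

    # What common uptime percentages allow in downtime per 30-day month.
    MINUTES_PER_MONTH = 30 * 24 * 60

    for sla in (99.0, 99.9, 99.99):
        allowed = MINUTES_PER_MONTH * (1 - sla / 100)
        print("%.2f%% uptime allows %.1f minutes of downtime per month" % (sla, allowed))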


> Prison? Orphanage?

Whorehouse.


When you're doing something even remotely technically interesting, shared hosting isn't an option. However, if you need something for, you know, the restaurant down the street, your nephew's school blog, or a small shop that just wants a Google Map and a contact form, I don't think shared hosting is such a bad option.

It's really cheap ($20 for a VPS might be low for you, but it isn't for your nephew), doesn't matter that much when it's down, usually easy to set up, etc.


Yes. I'm not disputing the usefulness of shared hosting, in fact I called out the fact that it had originally been omitted from the list.


Virtual Servers (item 2) are essentially a form of this, albeit with slightly more protection. Debugging application performance in a virtualized (and opaque) environment is...a challenge.


I don't think it would be fair to subsume shared hosting under "2. The Dorm Room" just because of performance characteristics. A virtual server gives you a lot more freedom to choose your software stack than most shared hosting plans can ever approximate.

Even when it comes to performance, there's a world of difference between the average GoDaddy hosting plan and a medium-sized Linode.


NearlyFreeSpeech + static site = really cheap hosting that works. Of course, static sites are really easy to host...


One thing no one has mentioned is the tax implications of the different choices for startups - something which can be as important as the technical side of things.

The OECD model tax convention, which has been adopted by many pairs of countries for their double taxation avoidance agreements (DTAAs), uses the term "permanent establishment" - "a fixed place of business through which the business of an enterprise is wholly or partly carried on". If a business derives income attributable to a fixed place of business in another country, they will likely need to deal with the tax department of that country. At best, this will mean considerable expenditure (by a bootstrapped startup's standards) on legal compliance, and at worst, could mean considerably more tax is paid.

For hosting stages 1-2 in the article, no fixed place can be attributed to the business purchasing the hosting (CPU resources are shared and no fixed CPU is assigned to the business). Disk space may temporarily be associated with a business, but this is under the control of the service provider - it is disk space and not physical disk sectors which are being rented, and the provider is free to move data.

However, stages 3-5 mean that a business has a very definite physical location (i.e. place of business) in another country.

These types of non-technical issues can often outweigh the technical ones in terms of business priority.


Have you seen this be an issue in practice? Our place of business is our registered office.


It is a place of business if it meets the definition, and there can be more than one.

The definition is somewhat ambiguous, but it is likely that if there is an actual physical place (even if it is just a server) used to offer goods or services for sale to the public, there is a good chance that those sales are attributable to that server, and will be taxed in the country that server is located in.

This might not be a problem for an established corporation with the sales volume to justify the tax accountant expenditure and payment overhead needed to comply with tax law in multiple countries, but for a bootstrapped startup it can be an important consideration.


Aww, this is a gem. Beautifully written and none of the bias and factual errors that you commonly see in articles on this subject.


Plus the metaphor pretty much holds up. That is rare in this kind of post.


The article isn't completely accurate in that it states that you need to build your application in a certain way in order to host it on Heroku, which isn't true. I just don't see Heroku in the same category as something like GAE.

Heroku places no requirements on your code that you wouldn't find through general best practise when building scalable applications. A lot of people will cite the read-only filesystem as a special requirement (which requires S3 or similar), but this is a common requirement with clustered systems. Yes you might have a local SAN that you can use as a local filesystem but the point is the same.

With the multiple applications I've deployed to Heroku, I don't think any would fail to run on a 'regular' VPS as is. There's no Heroku-specific code in there, period. In fact, where I have changed my approach to better suit hosting on Heroku, it's generally been a change to a better approach that would suit all types of hosting.


I'd like to add that for different parts of your application, or website, it's ok to use different services.

For example, a large majority of tech startups have a WordPress blog that is totally separate from their actual web application. In many cases it also drives the marketing front-end of their website.

So while my main application may be on Heroku or AWS, I like to fire up a free PHPFog account (I don't work for them; you could use DotCloud or likely another alternative as well) and have my WordPress install set up there. It's insanely easy to set up (it's all Git based), and the free account will get you a long way.

It's also nice to know that the WordPress install lives on an entirely different server, so if you get slammed with great press, your entire stack isn't feeling the heat. There are security fears here as well, so I like having it separate.


Or, if your Wordpress site gets hacked, your entire site isn't p0wned. I have a friend who had his WP site hacked last week. Fortunately, he hosts his application on a separate server, so there were no (major) security issues.


My app has a Wordpress install in a subfolder /blog/. Any suggestions on how to break that out of the app? Apache proxy? 301 redirect to a new subdomain?


No one seems to be commenting about the Stately Manor :) Datacenters come with their own set of interesting problems that are just as much fun to solve as programming puzzles: thermal imaging goggles to design airflow, designing extra redundancy into power systems while adding alternative energy to reduce power costs. In this category, rack space and power consumption are almost as important characteristics as RAM/GHz.


The best part? There's no one "right way"; each one has its own ups and downs. Too few authors acknowledge this these days -- most think they have all the answers to everyone's problems.


I've been stuck between #3 and #4 for years. I could really, really use more than the 8GB of RAM my largest (rented) servers have. Not being able to build high-RAM servers limits the sites I can build and the features I can offer on the ones I already operate.

For example, I built something like Mixpanel 2 years ago, but I never launched it because in load testing it really didn't take a very large client's worth of data to exceed the 4GB of RAM I could afford for a server, and hitting disk would make reporting far too slow. Buying a new server for each client (who may decide to cancel the next day) was not something I wanted to commit to. http://i.imgur.com/DAOEA.png

I've ended up at Softlayer after trying a number of hosting companies over the past 10 years. They want $25/mo/GB for RAM. It's almost like they want you to pay the one-time cost of the hardware every month... and then some!

Yet, supporting all this on my own, I can't really colocate -- I don't have the expertise or the money to get it off the ground, nor can I be available 24 hours a day to drive to a data center and fix things if something breaks. I have no employees. I have 60k users to support myself.

It seems like I'll be stuck here for a long time.


Hetzner offers servers with 16GB of memory at €49/month (~$64), and 1and1 starts at $129.99. I don't know the current state of dedicated server hosting in the USA, but nowadays you can get a lot of hardware very cheap.


This may be off topic, but... At the moment we're mostly using option two (Linode VPS), and it's working pretty well. I've been repeatedly tempted by option four (colo), but I'm kind of daunted by the task of getting started. I've been burnt by option three (renting a dedicated server) before - it seems like the worst of both worlds.

Can anyone recommend a good guide to getting started with colo? Obvious questions include:

Where do you go to buy a cheap server? Can you just have it shipped direct to the data centre, or do you need to configure it yourself? How does that even work for startups in a different country to the data centre?
Is there anything I should look for in a data centre to make it easier? Do any offer out-of-band consoles?
What sort of costs are we talking about? Is there a break-even point beyond which you really should colo instead of using a VPS?
Is there a detailed tutorial anywhere on "getting your first coloed server up and running without bricking the stupid thing and needing to spend thousands of dollars getting a data centre technician to fix it"?

And so on. :) It seems like there are a LOT of resources to hold your hand as you get up to speed with Linode-type services, but colo is dark magic.


I worked full time in a DC for a couple of years, and pretty much all of the above is possible, but you will pay for what you don't do yourself. Most servers in the wild, at least where I work, are Dell or HP (both with 4-hour hardware replacement plans for about $1k/yr). If you buy your own, you should set it up and configure it before you ship it out, and typically there will just be a racking fee ($0-100); if you just send out bare metal, the OS install etc. will be extra. Typically, colos will have a KVM available for you to rent/use for setup or troubleshooting hardware issues (i.e. boot problems), if you want to go that route. EVERYTHING is negotiable. Not being a dick client goes a long way in getting things done. I.e., even if the colo has a policy that any work being done is $X/hr, if you are cool, don't open tickets for inane things, and have exhausted all your options, you can typically squeeze a lot of free work out of us.

How were you burnt with the dedicated server? IME, those are guaranteed NO downtime: failed hardware is replaced at no charge and immediately, and a dedicated server failure takes top priority for the guys on the shift. Working in a fairly high-pressure/low-reward/24x7 environment is taxing; I sometimes miss it, but I like dev a lot more.


I don't see it as being as simple as picking one from the list. I use a hybrid approach: colo the servers you need for the cost savings and performance, then back up your data to the cloud (Amazon S3) with server images ready to launch (Amazon EC2) in case you have a major hardware problem at the colo.
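
For the backup half, here's a rough sketch (assuming boto3; the bucket and file paths are hypothetical) of shipping a nightly database dump from the colo box to S3:

    # Rough sketch: push a nightly database dump from the colo server to S3.
    # Assumes boto3 and existing AWS credentials; the bucket and paths are
    # hypothetical examples, not a real setup.
    import datetime
    import boto3

    s3 = boto3.client("s3")
    today = datetime.date.today().isoformat()

    # The dump itself would be produced earlier, e.g. by pg_dump or mysqldump.
    s3.upload_file("/backups/app-db.dump", "my-colo-backups",
                   "db/app-db-" + today + ".dump")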


When the last company I was with bought some colo rack space (we hosted our own data center but had a need for some colo), we purchased the server, installed it, tested it, and set up our own console on our IP range. I don't know what the costs were because that wasn't part of my job.

As far as a guide to not bricking the thing: our particular colo required DC power, which is what we configured our Dell with. We opened up the box when we got it and -- aw, we can't turn it on. Luckily we had another AC power supply, so we swapped that in (hurray, modular!).

When you go to the colo, bring fuses, particularly if this is the first time you're wiring DC. Grab however many you think you'll need. Then grab some more because you'll futz it up.

Test your console (iLO, IPMI, etc) to make sure it's functioning before you leave. Most things at this point you can fix remotely.

As far as swapping hard drives and the like goes, we decided to maintain that ourselves. RAID1 + hot spare on the OS drives keeps things running until you can get over there to swap the failed disk.
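
If you use Linux software RAID (mdadm), a tiny monitoring sketch like the one below can tell you when it's time for that drive-swap trip; this is just an illustration and assumes md arrays reported in /proc/mdstat (hardware RAID controllers ship their own vendor tools instead):

    # Tiny monitoring sketch: warn when a Linux software-RAID array is degraded.
    # Assumes md/mdadm, which reports array state in /proc/mdstat; a healthy
    # two-disk mirror shows "[UU]", a degraded one shows "[U_]" or "[_U]".
    with open("/proc/mdstat") as f:
        mdstat = f.read()

    for line in mdstat.splitlines():
        status = line.rsplit("[", 1)[-1]      # text after the last "["
        if line.strip().endswith("]") and "_" in status:
            print("Degraded array detected:", line.strip())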


Maybe you should test the waters with http://macminicolo.net/. I haven't used them (yet) but it's relatively cheap, and they will handle lots of things for you (including buying the mini). A dual-core mini can do a lot of work.


Neat idea, but their bandwidth overage charges are brutal. $0.80/GB?


This is a great list. I've noticed that I always hate whatever option I'm on that's short of "the condo." And even if you have a condo (or a subdivision of blades), there are still the inevitable screwups that leave you cursing your colo. But we live in an imperfect world. We just have to spend money engineering for it.


Love the article, just one note: I think #1 is not necessarily as constrained as it may sound. Especially with Heroku, I think it is very easy to upgrade later on for most setups; there isn't too much special code or convention involved. The major constraint is probably the read-only file system, but I'm sure using Amazon S3 is not that bad.


Great article, and this thread is great for checking out some new hosts for dedicated servers. Pricing has gotten pretty good in the last year, so I think it's a good time to upgrade from PaaS to dedicated hosting again.

Has anyone had experience with a hybrid dedicated/cloud model? I'd love to stick our Postgres servers on dedicated hardware but then be able to spin up web servers as cloud servers when needed for traffic spikes.
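
Concretely, the cloud half could be as simple as launching pre-baked web server images that point back at the dedicated database. A hedged boto3 sketch (the AMI ID, instance type, and DB hostname are all placeholder assumptions, not a recommendation):

    # Hedged sketch: burst extra web servers onto EC2 during a traffic spike,
    # pointing them at the Postgres box on dedicated hardware.
    # The AMI ID, instance type and DB hostname are hypothetical placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",     # pre-baked web server image
        InstanceType="c5.large",
        MinCount=1,
        MaxCount=4,                          # burst capacity for the spike
        UserData="DB_HOST=db1.example.com",  # the dedicated Postgres host
    )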


Going along with the metaphor, shared hosting is like crashing on your friend's couch while you get your life in order and look for a place to live.


    the homeless  
    -- (*.tumblr, *.posterous and *.wordpress.com)
 
    the billboard space 
    -- stackoverflow, facebook, twitter


I don't currently host anything with them, and I do find the prices a little much (especially RAM, what are they thinking?!), but for dedicated hosting, I think Softlayer has to be mentioned. Their network connectivity and multiple data centers (including Europe and Asia) make them, in my mind, the best alternative to AWS.


As a small web shop, we went the other way. From 3 (dedicated hosting) to 2 (VPS). It's quite nice to know that when a piece of metal breaks you just get a new machine on the fly.

You do need a good host, though. Quick e-mail replies and pro-active management of server load are absolutely vital. Happy to have found one (in NL) :)


Feel free to mention the name of your hoster.


I think the secret is that dedicated server costs can only come down so far, but VPS prices should keep getting better, for increasingly better hardware, as time goes by.

$50/mo can get you a 4-core RAID10 Xen VPS, which is plenty powerful and almost as flexible as dedicated.


I'm thinking of getting another rack at the colo place that I use now. Anyone looking for space on that rack who doesn't have high bandwidth or power needs, feel free to contact me and I will see if this makes sense for both of us.


One missing advantage in #3 is the ability to launch instances across continents.


I'm Brazilian, and EC2 just opened in São Paulo (then I realized it's kinda expensive).

I'm not sure if 40ms vs 160ms latency is so important for many kinds of web sites.

I'm considering a Linode VPS in Dallas.

What are other Brazilians using?


What exactly do you want to do?


Where would you go to find a "condo"? There are no links in the article. What are pricing/reliability/support like when you have your own server in someone else's data center?


What this (very good) post calls the "condo" tier is what the hosting industry calls "colo" or "colocation". Search that term and you'll get more options than you can probably wade through. The "Rackspace" of this space (the biggest name for good-quality, mid-tier-sized setups) is http://www.equinix.com/

Huffington Post, Gawker, BuzzFeed, CafeMom and AdMeld all colo with http://www.datagram.com/; most of them are within a few feet of each other.


Equinix is a little on the expensive side... Just a little bit.


I've found a lot of good information and deals at webhostingtalk.com. It is just a forum where web hosting providers big, small, and in between meet and share.

Disclaimer: I am not affiliated with webhostingtalk -- just a user there.


Search for colocation in your area, then call and ask. I'm paying around $70/month for a 1U server with a fixed amount of bandwidth. Reliability and support (aside from network issues) are up to you.


Most datacenters should be pretty good on support and reliability of things like the connection, power, etc. Otherwise it's all up to you to support your box and to make sure you buy reliable hardware.


Search for [colocation].


Also make sure to hire someone who has done it before if you want to go that route. "Racking" is an art-form in itself and the cabling alone can turn into a real problem[1] down the road if you take it too lightly for too long. You also want to plan for redundancy, power density etc. from the start.

Sorting out messy cabling or adding redundancy to a badly managed rack on live hardware is not fun.

[1] http://www.vibrant.com/cable-messes.php (for reference, this is what it should look like: http://royal.pingdom.com/2008/01/24/when-data-center-cabling...)


Not a huge fan of zipties, honestly... there's a point where it's less trouble to just pull the cable and see what it's attached to, especially since we all get lazy when it comes to labeling our source/dest endpoints.

Most of the shots of "cable messes" don't really look that unacceptable to me. I don't complain about messy cables until they start to look more like http://static.cray-cyber.org/General/LARGE/Cyber_860_BP_Skys... .


Matter of pain threshold I guess. Personally I prefer spending the odd hour on tidy zipping and labeling (yes, both ends of the cable) over cable-pulling during an emergency at 4am...


As a matter of opinion, I use zipties for permanent cables (tray runs, other mostly fixed cabling), and velcro ties in-rack where they're more likely to be updated. But then again, zip ties are cheap and it's sometimes just easier to cut them off than mess with the velcro.


Indeed, when I wrote "zipping" I actually meant velcro.

We're doing it pretty much the same way (in some racks we even have a velcro strip on the side as a "parking zone").


Avoid zipties as much as possible. A roll of velcro cable ties cut to length is much easier to handle and reconfigure.


They forgot my preferred hosting method from my pot smoking college days.

A second-hand Pentium II in the corner of the lounge, running Slackware, with ports 80 and 21 forwarded to it from a cheap Belkin router on your domestic DSL connection.

Good: Basically free (assuming the power bill is in your landlord's name), and you can host whatever the hell you like.

Bad: Someone might spill the bong water over the power strip and ruin your uptime.


Regarding the datacenter: yes, correct, you will need some divine help...


The power of strong metaphors strikes again.


Why is it that a well-written site can't scale quite large on Heroku? (Or similar - I use Heroku, so I'm biased.) Perhaps I'm naive, but I feel one can go straight from Heroku to stage 5 if you truly have a blowout.

According to their website (http://success.heroku.com/) some pretty large websites run there, including Urban Dictionary and Rapportive.

Sure, it may cost more, but not more than a full-time sysadmin, and you are buying efficiency and flexibility. You can buy a lot at Heroku for $10,000/month (the minimum cost of a full-time deployment/sysadmin/dbadmin), including, I'd imagine, some rather hands-on support.

This article seems to downplay the great advances that have been made in "cloud" deployment. IMO, a cloud service like Heroku beats the pants off of self-operated virtual servers and, debatably, some of the higher "stages."


Why is it that a well-written site can't scale quite large on Heroku?

Is anyone claiming that? The linked blog post doesn't make that claim.


It doesn't directly claim that; however, calling these "The Five Stages of Hosting" implies that an application will likely progress through these stages as it grows, and that application platforms will quickly be outgrown when any sort of scale is reached. That's a pretty clear implication from the title, and I'm calling for counter-evidence, because I don't think it's accurate.

I also host applications on Heroku that I hope will grow, and if that's a bad choice, I'd like to hear others' opinions.


I think companies outgrow PaaS platforms because they need more flexibility, not because the platform won't scale.


I think you pointed it out: it's the cost. I love Heroku too; however, I can definitely see how Heroku can be much more expensive in the long run. 37signals just recently showed how their recent RAM addition cost just a tiny fraction of what they would have paid at Amazon.


Pinboard needs to filter some of the bookmarks that are coming into the site. In the recent stream I constantly see links to depraved sexual acts, including physical and sexual abuse.

Often these links are fan fiction but sometimes they are not.

Who is reading this stuff and why? What kind of behaviour does this inspire? And why do all Pinboard subscribers need to be exposed to this?



