Why we moved away from AWS (blippex.github.io)
176 points by karli on Sept 23, 2013 | 99 comments



EC2 was designed for elastic computing: on-demand, high-computation (low-memory) workloads that scale up and down.

With that in mind, pure EC2 is a terrible choice for general web application hosting.

If you're using the complete AWS set (S3, SimpleDB, etc.) then it makes more sense, as things like DB hosting can be pushed out to the services designed for them. But if you're gonna fire up a Windows box, stick SQL Server on there and use it as a general web app hosting environment, then it is a terrible choice.

Unfortunately, it's a choice that still appears to be easy for management to justify: it doesn't require a server admin to use, and it doesn't require mirroring or backups because obviously Amazon EBS volumes can't die, they're in the cloud. The extra cost and lower performance are obviously just an OK side effect of these benefits.

(Yes, I'm being sarcastic here, but it's all arguments I've seen made.)


I know this is a tangent, but I think it's a worthwhile one to mention that backups and redundancy are not the same thing. There have been a few high-profile ventures (including businesses) that had to shut down because they lost all of their redundant data in some way. Redundancy doesn't save you from malicious people who've gained access to your systems. It doesn't save you from errors (oops, dropped the wrong DB; thankfully it's .... replicated virtually instantly across all RAID volumes and clustered DB instances). It doesn't save you from the one building holding all your data burning down or getting flooded. It doesn't save you from software bugs (yours, or in firmware, the kernel, the DB, etc.) corrupting data.


This is why you make backups from your physical hardware to onsite storage, but also replicate those backups to Amazon S3 and inhibit the delete functionality so you need MFA in order to complete the delete.
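
For anyone who hasn't set this up: the "inhibit the delete" part on S3 is done by turning on versioning with MFA Delete on the backup bucket. A minimal sketch using boto3 (the current AWS SDK for Python) for illustration; the bucket name, account ID and MFA serial/token below are placeholders, and note MFA Delete can only be enabled with the bucket owner's root credentials:

    import boto3

    # Sketch: enable versioning plus MFA Delete on a backup bucket.
    # Bucket name and MFA device/token are placeholders.
    s3 = boto3.client("s3")

    s3.put_bucket_versioning(
        Bucket="example-backup-bucket",
        VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
        # Serial/ARN of the root account's MFA device plus the current code.
        MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    )

With that in place, permanently deleting an object version requires a valid MFA token, so a compromised access key alone can't wipe the backups.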


As I said in the blog post, we still love AWS, it's awesome, we used it for many other projects, and S3 is great in combination with EC2. But in some cases it makes sense to think about it, maybe it saves you something!

As I said, we really miss the simplicity of AWS, one mouse click and you have a load balancer, etc.

PS: trust me, AWS EBS volumes can die, and this is a pain! :)


I think the trick here is figuring out how much day-to-day workload you can host in a more traditional cost-effective way and how much elastic workload you can use EC2 for.


Yes, the beauty of that is that if you can handle spikes with EC2 (or any other cloud provider) quickly, then you can load the servers that handle your base load much, much higher.

You might not even need to spawn EC2 instances very often - many sites have daily variations that are too small for it to really be worth it. If your hosting is cheap, spawning EC2 instances for more than 6-8 hours per day might already be more expensive than renting more servers on a monthly contract. But just having the ability might make the difference between aiming for a peak utilization of, say, 50% of your servers, in case of unusual peaks or server failures, and aiming for a peak utilization of 90%+.

That can make a huge difference in cost.
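
A rough back-of-the-envelope illustration of why the utilization target matters (every price and load figure below is made up):

    # Back-of-the-envelope comparison; all numbers are hypothetical.
    peak_load = 100          # arbitrary load units at peak
    server_capacity = 10     # load units one rented server can handle
    monthly_rent = 60.0      # $/month per rented server
    ec2_hourly = 0.50        # $/hour for a comparable on-demand instance
    spike_hours = 40         # EC2 instance-hours needed per month for spikes

    def monthly_cost(target_utilization, burst_to_ec2):
        # Rent enough servers that peak load stays at the target utilization.
        usable = int(server_capacity * target_utilization)
        servers = -(-peak_load // usable)  # ceiling division
        cost = servers * monthly_rent
        if burst_to_ec2:
            cost += spike_hours * ec2_hourly
        return servers, cost

    print(monthly_cost(0.5, burst_to_ec2=False))  # -> (20, 1200.0)
    print(monthly_cost(0.9, burst_to_ec2=True))   # -> (12, 740.0)

Even after paying for the occasional burst hours, planning around 90% utilization rather than 50% cuts the monthly bill substantially in this toy example.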


Yes, that's exactly the trick! With our previous startup we spent around $60K/year on AWS :)


pure EC2 is a terrible choice for general web application hosting

"Terrible" is seriously overstating it. There are a lot of advantages to AWS (I understand you said "pure", but that really makes no sense in the context of AWS) that can justify the price premium -- ELB, elastic IPs, the ability to spit out AMI images (and machines from them) at will, the private networks, the firewall, etc. The fantastic network capacity (I am always wary of services like OVH that offer "unlimited" anything, because it is always limited, and unlimited means that your peers will be saturating switches because it's "free").

Flexibility is a big part of why EC2 comes at a premium, and it is an easy justification in many shops. Even if you aren't using ELB today, and won't have to spin out machines, etc., that flexibility has significant value.

I say this having machines at AWS, Digital Ocean and OVH. OVH is very, very bare bones, and you'd better have an escape hatch because the simplest configuration error can leave your machine incapacitated and beyond reach (adding the KVM option is usuriously expensive -- like $350 per month per machine).


> I am always wary of services like OVH that offer "unlimited" anything, because it is always limited, and unlimited means that your peers will be saturating switches because it's "free"

Comparing bandwidth between OVH and AWS is a little cheeky. Bandwidth on AWS costs an absolute fortune, not remotely economical for bulk transfer.

The switch saturation problem doesn't even necessarily go away if you instate X TB/month data caps. I would have thought local switches could handle it anyway, with cheap boxes typically only having 100Mb ports.

Some data centers don't even charge for internal traffic, which means you're still exposed when cheap VPSs and dedis are used as P2P file sharing nodes and are exchanging a lot of traffic within the building.

In any case, I'm just grateful the multi-terabyte range is so affordable; bulk transit costs in the data center have been falling year-on-year for a decade, and lots of hosts don't seem to have passed on the benefits.

Incidentally, OVH do have different SLAs across their server range. The low end stuff is "best effort", the more expensive options are supposed to be "guaranteed". They even tell you what switches they use.

> OVH is very, very bare bones, and you'd better have an escape hatch because the simplest configuration error can leave your machine incapacitated and beyond reach

Their network boot facilities are pretty handy. As long as you use a sensible filesystem you can always network boot their recovery option and access your files (and I think chroot in?). The lack of KVM is annoying though... especially when you're like me and compiling and running custom kernels (but you can network boot one of their kernels as well).


The $750/month savings cited here is not real†, but for the sake of argument let's pretend it is.

Is $750/month a significant amount of money for the company? In the USA, this is perhaps the cost of one engineer-day, and one could raise a year's worth of this money by successfully applying for a single additional credit card. (Not that I recommend bootstrapping with credit cards. But it has been done.)

Of course, it may be the case that a company could improve customer satisfaction, and therefore revenue, by double-digits by improving performance on optimized hardware. But if this is the case, where is the discussion of that? Where is the data: A/B testing, customer satisfaction, churn rate, monthly revenue? They should be front and center.

† Without getting into the reduced redundancy, the additional complexity of hosting multiple unrelated services on each instance, the "additional maintenance" referred to in the post, the lack of server capacity to cover emergencies and staging and load testing and continuous integration, and the risk involved in switching infrastructure out from under a working business-critical application... any estimate which doesn't include the cost of engineering time is wrong. All changes have engineering costs. Just talking about this idea is costing engineering time.


Yes, $750 is about one engineer-day. Someone is now going to be spending at least a full day per month managing your new hardware, running security patches, etc. Even if your sysadmin guy is cheaper than an "engineer" it's not going to be cheap.


You realize you need to do all that on the EC2 instances as well, right?

This is the common disconnect I see when people tout The Cloud as a solution to having system administrators - that somehow that instance of Linux running in EC2 doesn't require the same maintenance as a physical one. It does.


Surely you didn't mean to type "physical". Even if Amazon did nothing else, they'd still order, receive, unpack, assemble, rack, stack, power, cool, and network their servers with an economy of scale that I can't possibly match.

And who could ever claim that AWS requires no maintenance? It takes plenty; I should know. But the problem isn't that Amazon is necessarily less expensive, or more expensive, or more reliable, or less reliable. All of that depends on the context. The problem is that the context is rarely reported in this genre of blog post. These posts tend to fixate on the size of the hosting bill. This is the year 2013, and unless its business model is hopelessly flawed, the hosting bill is one of the smallest problems a new company will ever have.

But maybe I'm wrong about that, so I wish these writeups would provide more context to explain why I'm wrong in this or that particular case, and by how much. Yes, I see the hosting bill is down. But are the savings significant to the business? Did the migration take one engineer-day, or twelve, or thirty-eight? Did it reduce the size of the codebase or increase it, and which modules were affected? Is the time required for testing and reliable deployment up or down, and by how much? How has your planning for various disaster scenarios changed? Are you getting more or fewer alerts in the middle of the night?


No, you don't really. You don't need to spend time considering and researching different load balancers to see which one is the best for your use-case, running through your company's purchase process (in itself a big project), lead time, physical install, configuration, and monitoring. If you want an AWS load balancer, click EC2 > Load Balancers and config one. From "Hey, I'd like a load balancer" to having a functioning, active load balancer in literally less than five minutes. No jaunt to the colo necessary. And that's just one item - rinse, repeat for a pile of other aspects as well.
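
For what it's worth, you don't even need the console click; the same five minutes works from the API. A sketch with boto3 for illustration (the load balancer name, zones, ports and instance ID are placeholders):

    import boto3

    # Sketch: create a classic ELB with one API call and attach an instance.
    elb = boto3.client("elb", region_name="us-east-1")

    elb.create_load_balancer(
        LoadBalancerName="example-web-lb",
        Listeners=[{
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "InstancePort": 8080,
        }],
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )

    elb.register_instances_with_load_balancer(
        LoadBalancerName="example-web-lb",
        Instances=[{"InstanceId": "i-0123456789abcdef0"}],
    )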

It's not true that AWS gets rid of the need for sysadmins, but it's absolutely not true that you do all the same sysadmin tasks on a cloud service.


This is why we have managed hosting. You can pay someone else to do all that, on a real, physical server, on a network they manage, and have it still come out as much cheaper than AWS. Yes, the turnaround time might be more than 5 minutes. Or, depending on who you go to, it might be less.


I agree. This is an optimization. For many startups, it's not worth prematurely optimizing hosting costs, especially when figuring out the MVP/establishing market/growing customer base.


$750 x 21d x 12m = $189K

Do you think $189K per year is an average salary? It is not.


It's about right for salary + payroll tax + benefits + office space in Silicon Valley.


Blippex says they're in Austria... so SV prices don't figure into this at all.


It is true for Silicon Valley, but not for the country as a whole.


You're seriously underestimating how much it costs. $750/day is probably low-balling it for anywhere in the country if you also count hiring costs, employee turnover, and training costs.

Also, for the skills implied, have you ever tried to hire a systems administrator who has experience in production environments with all of those aspects of back-end web servers? It's not easy, and it's not cheap.


I have, uh, pretty convincing evidence that $750/day is in the ballpark for the fully-loaded cost of a devops engineer in Boston. It certainly doesn't buy two days of an engineer in Boston.

But Boston is Boston, and SV is SV, and this is estimation, so I'll happily concede a factor of two. Okay. Suppose $750 buys you two engineer-days per month. Same question: Is the $750 important?


AWS is just not very cost-effective in terms of performance per dollar, especially when it comes to storage performance (my own specialty). It only appears that they are because of the hourly billing and a human inability to compare quantities across nearly three orders of magnitude (hours vs. months) intuitively. Now that there are hundreds of vendors with hourly billing, as there have been for a while, it's easy to see how much they suck in terms of cycles, packets, or disk writes per dollar. They still have the most advanced feature set, they have by far the best public-cloud network I've used (out of nearly twenty), there are still good reasons to use them for some things, but don't go there to reduce permanent-infrastructure costs.


I just completed a project at an organization-owned datacenter where we wasted 4 months on needless BS to deploy about 12 servers.

My team's time is easily worth $500-600/hr, so we easily wasted $300k. So the fact that my internal datacenter provider can give me a VM that costs 20% of what EC2 charges or disk that is more performant at a similar cost is interesting trivia, but isn't saving money.


For comparison, I moved a bunch of servers into a local datacentre a few months ago. It took us ~3 days to get a rack assigned and access arranged, and a couple of days for a single person to move and set up the physical servers. The fully loaded cost of the time spent adds up to about $4k. Total leasing cost of the equipment (some of it has already been written off, but let's assume it was all new), rack cost, and 10Mbps CIR is about $1600/month for servers totalling about 80 x 2.4GHz cores and ~20TB storage. Of course we need to factor in some maintenance cost, but the time spent managing the actual hardware, as opposed to supporting the application environment (which we'd have to do regardless of hosting), adds a few hundred a month.

Comparing EC2 costs to what sounds like a completely botched project isn't very fair, in other words. Of course there are worse alternatives than EC2 as well.


So your whole team spent 4 full time months to get 12 servers deployed? That organization sounds rather degenerate.

We colocate at a datacenter and can get cabinets pretty easily. We've done this for over 10 years now. When we aren't growing or shrinking I spend about an extra 4 hours per month because we have physical servers rather than use something like AWS.

12 servers would probably take us about an extra 6 of our person-hours to get up and running vs AWS. If we needed a new cabinet it might take a couple of days, but we aren't actively working - we put in a request, and they tell us when it's ready for our use. We don't sit and twiddle our thumbs while this happens, and we do it before the development side of the project is completed.

We've talked about AWS before for the redundancy and convenience but the price and the extra headache of dealing with the inconsistent performance never made sense for our use.


> That organization sounds rather degenerate.

That may be true, but it doesn't seem that uncommon.


"My team's time is easily worth $500-600/hr"

Clearly not.


Does your anecdote translate to other organizations, though?

In my own case, my company ditched AWS in favour of getting our own rack with about 10 custom servers. We have a full-time sysadmin, so nobody's time was wasted on the transition; whatever stuff the developers (who are also $500-600/hr people) were needed for during that time was valuable, because it forced us to rewrite the deployment system, which would have been required at some point anyway.

What was the "needless BS" you had to do?


who are also $500-600/hr people

Did you sneak an extra zero in there? Even fully-realized, I'd say $100-$200/hr tops in a prime market.


My apologies -- yes. I was actually thinking in a different currency.


OVH specializes in deployment though, there is NO "deployment BS". You get the machines nearly instantly. It doesn't one-click scale/unscale like AWS since you need to go purchase more machines and do the deploy, but the machines themselves are available as soon as you pay for them.


Agreed, the features/cost tradeoff is why we're still with AWS!


Just so you know, OVH has just _halted_ its dedicated server offer.

TL;DR from today's French blog post:

Our offers were so competitive that too many customers wanted them, and we're losing money if we don't keep customers for at least 2 years. Sadly, they migrate to new offers before that. We're halting dedicated servers until we figure out what to do.

[edit] Link: http://www.ovh.com/fr/a1186.pourquoi_160sold_out160


Discussion here: https://news.ycombinator.com/item?id=6399569

In summary: their main problem was no "installation fee", meaning the barrier to hopping to a newer server every couple of years just wasn't there. If their new offerings were priced competitively to attract new customers, they would also be priced similarly to how the older hardware was priced when it sold a couple of years ago, so anyone on the older hardware would jump to new boxes.


Thanks for the update. Would you mind linking to the article?


If you move to Rackspace, stay away from DFW, the Dallas datacenter. It's over-booked, the network has constant issues, VMs on the same host machine as you can cause your VM network issues, the list of problems never stops.

We recently switched to Azure from Rackspace, but we're still evaluating whether it will work for us long term. Azure's issues are that you have to request core count increases, and you can't capture an image of a VM without shutting it down. Also, you can't just give your VM a regular SSH public key; you have to generate SSL-like certs. Also weird is that a lot of the documentation is only for the Windows side of things, even though you can get some of that stuff to work on Linux, and that you do so by installing an SDK even though you might not be installing an application, just running your own stuff on a VM.


I'd stay away from Rackspace London as well. Horrible horrible experience.

1. Noisy neighbours impact you all the time

2. The staff are really poorly trained and don't know how to troubleshoot.

3. They're expensive.

4. Their control panels are really bad, constantly being updated and migrated, and are just a complete mess.

5. They've had several major network outages that have lasted for quite a long time (hours) that they blame on "upstream routing issues" despite supposedly having multiple redundant upstream carriers.

6. They'll randomly reboot your box without notice. If you open a ticket there's an almost certain chance they'll just reboot your box no matter how much you ask them not to.

7. The IO on the boxes is really bad.

8. They don't proactively monitor any of their servers, and their "new fancy" monitoring product only goes down to 5 minute resolution, so it's worse than Pingdom, for example.


Cloud Monitoring (disclaimer: I work on it) can actually be configured to poll as often as every 30 seconds from each location, or just every 30 seconds in the case of agent checks. I believe we default to 1 minute intervals, but if you want to change it you can browse to your check in our Control Panel and click the little edit icon where it says "Period: 60 seconds".


This is either brand new (within the last few weeks) or your coworkers don't know anything about it. The whole monitoring thing has been a farce for a year or more, as it's been coming real soon now, then in beta, then severely limited, then costs money, etc.


We have had a rather similar experience with RS support. Essentially, when you call in, what happens is you get to talk to some people who literally have no clue and are just "call" masters... they HOPEFULLY, after some time, pass you on to L1 technicians who have A clue... and it goes on and on and on like this until someone more senior takes over and resolves the issue. Worth the extra money? Nah. The only advantage I'm seeing is non-ephemeral instances, though you should be prepared for failure in the cloud and not expect miracles.


Was this on the 1st-gen and/or 2nd-gen Rackspace cloud servers?


On all gens. They're all terrible.


For balance, my previous venture hosted (still does) on RS London and we had a good experience with them. The few times we needed support, they were excellent.


I too have had connectivity issues with Rackspace. Also, anecdotally, I've heard that Hetzner, the main solution recommended by the article in the OP, crashed non-stop for someone in the past.

You just can't beat AWS right now for reliability, feature set and speed. We started using them recently and they are a tiny bit more expensive. But it's the difference between fresh air and breathing carbon monoxide.

At least so far.


Just to add another data point, we've been using a 4GB cloud server in the Dallas datacenter for 9 months now and it has been solid. (Solid meaning works as expected, no outages/problems.)

Maybe we lucked out with who else is sharing the hardware.


How many server(s) did you have with RS?

We only have 2 mid-sized virtual servers in DFW and things have been working flawlessly for us..


Really the issue is that virtual hosts can affect each other quite a bit on Rackspace compared to, say, AWS. If your server behaves poorly, Rackspace can and will shut it off. One of our non-critical servers ran out of memory, thrashed swap, and was shut down in pretty short order by Rackspace. Which is good, sort of, I don't want to hurt other customers. Still, getting it turned back on was not a very fast process.

So it is kind of a roll of the dice. Are the other customers on your hardware well behaved? Will they stay that way?

It is a trade-off, you get way better performance if the other virtual hosts on the box are quiet. But if you plan your capacity around those quiet periods you can be in for quite a shock once the hardware gets busy. I've run critical servers on hosts like this and it can be a headache.


It's because you only have 2 servers, which reduces the likelihood that you're sharing a host machine with a misbehaving vm. You're probably also not using a load balancer or taxing the network much yourself.


Yep!

That's why I was asking about the performance with more VMs, I don't use many virtual servers at RS for my day job.


AWS isn't really a solution for people trying to run a "small" project on a fixed amount of servers 24/7.

It's great if you want to be able to:

- provision lots of machines without delays

- launch and terminate new instances to cover load spikes

- do geo-redundant failover (aka: a datacenter in Europe, Australia, the US, ...)

- have 'plug and play' components like load balancers (ELB), storage (S3), databases (RDS), queueing services, ...

- ...

Amazon provides a lot of things that cheaper solutions will have a hard time achieving (e.g. the backup space redundancy that OVH provides will probably be quite a bit less 'secure' than S3/Glacier).

That being said, these premium features are something that a project might simply not need. We run some of our jenkins build slaves on OVH. We don't need to launch new ones all that often and the bang for the buck makes them very much worth considering.


I'm running a small project on a fixed server 24/7 and AWS makes sense for me. Why? I'm a one man team supporting a research project. I have no ability to self host. I have no time to look around at a lot of options and trying to figure out all the details of every offering. I need a server that has good uptime and good performance. Most of all, telling my users that we're hosted on Amazon makes them feels secure - it isn't going anywhere. Believe me, for a certain class of users, this is important.


A dedicated host would most certainly be a better (and cheaper) option for you, but hey, if you don't have time to look around, I suppose it's a reasonable trade-off.

>I need a server that has good uptime and good performance.

Then a single EC2 instance is not a good option for you. Terrible up-time, and terrible performance.


Can you supply more details - maybe I am missing something. My EC2 instance has been up for 249 days now and my node.js webserver instance seems very responsive. I still think it's a reasonable trade-off in terms of cost. My time is expensive, and to be honest even a few hundred dollars a month extra in server cost is not important. This is a research project, not a commercial website, so my needs may be different than most.


I may not be hitting the points that macspoofing was trying to make, but at least in my experience, you can get much better value with a different host (like DigitalOcean or Linode) where the setup time is minimal and the performance benefits are substantial. However, if your priority isn't performance/dollar, then the trade-offs are subtle and insubstantial and EC2 is fine.


We did a test of an EC2 data centre setup vs. our existing physical data centre setup and the largest issue - that is, if you ignore the 3x cost - is the network latency and general quality of the local network in the availability zones.

No amount of optimization could eliminate the 100-150ms penalty imposed by the EC2 network vs. our dedicated hardware. The local network was congested and "noisy" in the sense that ping times were highly variable and had high packet loss, and the number of hops to the internet at large were high, and the baseline latency to the world was also high.

As for instance lifespan, we had numerous instances just "disappear" and then needed to be recreated. We were running a hundred or so for our test so YMMV.


I would have thought build slaves would be a great fit for AWS, since you can boot them up as needed, and turn them off when not needed (at night).


We run some Virtualbox builds on them which AWS doesn't support. Our build slaves are pretty busy (I'd say jobs running 16+ hours a day). There isn't really that much possible cost saving towards an OVH server :)

Also: HDD performance on "basic" Amazon is slow and RAM is expensive :(


For larger companies that usually don't rely on VPS providers and the like, AWS can still be a compelling offering for new ventures, as you don't have to commit resources (capital) to in-house infrastructure for a project that might not work out (as it's opex, not capex, just shut it down if it fails).


NeoCities is currently using OVH. We were using Hetzner but we ran into issues when our server was the victim of a DDoS attack, and Hetzner responded by null-routing our server's IP address for a few days. OVH has better DDoS mitigation strategies (supposedly), so that's why we're switching.

I've used AWS before in corporate work, and I have to say I was very unimpressed with it. The prices for what you get are exorbitantly high. I've heard people say "they are affordable for corporate standards", but my reaction to that is just that their previous hosts were even worse about it. Every hosting solution I have had other than AWS has been cheaper.

More important to me than price, though, is the knowledge. I really don't like that AWS is a "black box" of mystery meat. I don't know how most of the systems are implemented under the hood, which means I can't predict the failure points of what I'm implementing. The only way I could piece together the capabilities of AWS systems was through anecdotal information in blog posts. We would have servers fail and be given no explanation as to why. And many of the interfaces are proprietary, which means that moving to an alternative is not an option. Not to mention the APIs are not particularly stellar (a lot of XML). The only options for persistent storage are network drives and local disks that go away on shutdown, which is not a particularly good choice of options.

With OVH, I get a server. I know what a server is, how to back it up, and what its fail points are. If OVH does something I don't agree with, I can move to another company and have exactly the same environment.

I'm not saying AWS is useless (again, I've used it for corporate environments before), but it's hard to justify the high cost when you're on a budget, especially when you can't even determine if the tradeoff is worth it.


My current startup is using AWS for everything and I have to admit I was eager to get my hands on it since it seems to me that familiarity with AWS will be a good thing for me personally and professionally.

I almost get a sense that people are signing up for AWS because, well, I'm not positive about this, but it seems like it's trendy. Possibly some startups don't realize AWS is just providing you with pre-installed systems that you could easily install yourself? I don't think it's a bad decision necessarily, because depending on your size you may not want to devote any time to configuring servers. Maybe some people who have made that choice could set me straight?

My gut is telling me that, for my current situation, the main benefit of AWS - the automatic scaling - will be quite expensive by the time we actually do need to scale. So we will probably be looking elsewhere for hosting at some point in the future, much like the article suggests.


What about OpenStack? OpenStack seems like the best of both worlds with being able to manage both your own hardware as well as burst to your OpenStack host's resources on demand. There are multiple OpenStack providers like Rackspace, HP, and many more. This means that if you don't like one provider, you can easily move to another OpenStack provider without being locked into 15 different AWS services. You may need to schlep your physical servers to a different datacenter, but that is still easier than decoupling your service from AWS.

From experience, I have seen that the price of performance on AWS is much higher than at companies that buy their own hardware. Knowing what resources your service needs as a baseline can be helpful when picking which machines should be reserved instances, but you may as well just buy your own hardware if you want the best performance/price.


AWS is a great place to start if you're not yet sure what resources and scale you need. You can play with various solutions and easily scale up.

It makes developing so much more efficient when you don't have to make major choices up front, and can buy yourself some breathing room by throwing temporary resources at most performance issues while you review your architecture.

That either stabilizes to a point where you have an architecture you can implement more cheaply and efficiently using more traditional hosting solutions, or you come to a point where you really need AWS's flexibility.

One caveat though: don't make your architecture too dependent on AWS-specific services until you are 100% sure AWS is the right choice for the long term.


Compared to custom colocated clouds, you scale, code, and build your stack completely differently. I could not do half of what I do under any PaaS/SaaS.

I avoid disk at all costs (using amounts of RAM that are nearly unattainable on PaaS/SaaS); if disks are hit they must be SSDs; I treat everything immutably, use concurrent/distributed computing, and assume hardware is plentiful (192+GB ECC, 24+ new Xeon cores, etc). I scale completely differently than most. They really get you on RAM; I can build whole servers for what a month of PaaS/SaaS might cost.


I often hear that the best way to use AWS is to host your 24/7 stuff elsewhere and use AWS for the spikes. This makes a lot of sense, but I always wonder what the recommended (i.e. most cost-effective, especially with regard to bandwidth costs) place to host the 24/7 stuff is. For example, moving a ton of data between EC2 and S3 is free (for bandwidth; ignoring request costs), but moving 10TB out costs $0.12/GB, which seems quite costly...

I guess the sweet spot is to use external hosting for your web apps and such and AWS for any large spike-prone batch processing: moving data into S3 is free (though obviously moving data out of wherever else you're hosting probably isn't), use EC2 to process it (possibly on spot instances!) and then move the results (which are much smaller than the raw data for a lot of use cases) back to the 24/7 hosts?
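
To put numbers on that asymmetry, using the $0.12/GB egress rate quoted above (the data sizes are made-up examples):

    # Rough cost arithmetic for the "batch on AWS, serve elsewhere" pattern.
    EGRESS_PER_GB = 0.12      # rate quoted above; transfer INTO S3 is free

    raw_data_gb = 10 * 1024   # 10TB of raw input pushed into S3
    results_gb = 50           # condensed results pulled back out of AWS

    print("egress if you pulled the raw data back out: $%.2f"
          % (raw_data_gb * EGRESS_PER_GB))   # ~$1228.80
    print("egress for just the results: $%.2f"
          % (results_gb * EGRESS_PER_GB))    # $6.00

So as long as the results are much smaller than the raw input, the expensive direction of transfer barely figures into the bill.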

Though my question still remains: where do HNers recommend to host these servers knowing that AWS will be used to pick up the slack and handle irregular/unpredictable workloads?


I currently spend ~$2000 on Softlayer for six servers and use about 30TB of bandwidth. On AWS I would have paid more just for that bandwidth.


And you can pay much less than half that via custom server builds and colocation. It is just a matter of how far down the chain you want to go, given your expertise and sensitivity to hardware costs.


He doesn't actually have to keep replacement parts in the datacenter or have staff close to the datacenter to go and perform replacements or new installs, or worse - pay >$100/hr for remote hands with colo.

Over time it's certainly more expensive to rent, but you get to cancel and move on to better hardware when it comes out, without having to worry about re-purposing or selling old servers.


I don't keep hardware spares for my 300+ server infrastructure, as our hardware provider has 24-hour turnaround on warranty replacements.

As for re-purposing, I have tons of uses for older hardware to do background computation or other jobs. I suspect I can extend the lifetime to 5+ years on most of it, which is quite good in my opinion. You just need to design your system with modularity in mind, which you should be doing regardless of your hosting choices.


Nice post. It is important to note that these things tend to be cyclical. As startups go through various stages of their life cycle, PaaS/IaaS providers update their offerings, and technologies mature or get invented, the appeal may shift between these options. I think that makes it even more important to build your technology stack in a way that is:

1) easy to deploy, migrate and update (using standard deployment technologies) and 2) least dependent on a specific vendor (GAE ;)


OVH is not accepting any new orders. They claim to be sold out of nearly all server types.

And that in a nutshell explains why AWS is a safer choice.


There are numerous other very good dedicated hosts that are alternatives to OVH. The pricing will be slightly higher, but OVH is dirt cheap to begin with compared to AWS. 1TB of transfer with Amazon will cost you almost as much as a nice E3 v2/v3 Xeon server with 16GB to 32GB of memory and 10TB to 33TB of transfer.


OVH actually supports running the Proxmox virtualization distro on their servers. That means you can easily get a 32GB dedicated server with RAID 1 SSDs (around $100/month here in Canada) and spin up VMs to your heart's content. Proxmox also supports running your host nodes in a cluster, which allows for live migration. And if the math isn't already ridiculous, keep in mind that all the running OpenVZ containers (which Proxmox supports) actually share a single kernel, and thus share a good chunk of RAM.

That being said, OVH is notorious for lack of support, and my experience so far (6 months) suggests that using them is not without risk. So at the moment I'm automating everything so that if an OVH engineer does decide to accidentally pull the plug on my server(s), I can failover in an hour or two.


While that certainly seems like a good idea on the surface, it creates a horrible single point of failure for your entire setup. I certainly hope you get more hosts than one and distribute all your VMs across them. You'll have zero failover in case of host failure.


Amazon's win is elasticity, moving your servers up and down often. It's not as big of a win if you have a known quantity of resource utilization over a long time period.

Actually, there is a win to be had there too. If you can spin down your instances with load in an intelligent way, you can save A LOT of money using a combination of reserved instances and on-demand instances.

However, if you had a program that was smart enough about dealing with load and spinning up/down instances and managing cost relative to reserved instances, on demand instances, and spot instances, that could save a ton of money.
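
A crude sketch of the break-even calculation such a program would be doing, ignoring spot for simplicity (all prices below are hypothetical placeholders, not actual AWS rates):

    # Is a 1-year reservation worth it for a given expected utilization?
    HOURS_PER_YEAR = 8760

    on_demand_hourly = 0.12   # $/hour, pay only while running (hypothetical)
    reserved_upfront = 300.0  # one-time fee for the reservation (hypothetical)
    reserved_hourly = 0.05    # discounted $/hour while running (hypothetical)

    def yearly_cost(utilization, reserved):
        hours = HOURS_PER_YEAR * utilization
        if reserved:
            return reserved_upfront + hours * reserved_hourly
        return hours * on_demand_hourly

    for u in (0.2, 0.5, 0.8, 1.0):
        print(u, round(yearly_cost(u, False), 2), round(yearly_cost(u, True), 2))

With these numbers the reservation only wins once the instance runs roughly half the year or more; below that, on-demand (or spot) stays cheaper, which is exactly the kind of threshold an automated optimizer would be tracking.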

That kind of optimization is tricky so it's a lot easier to just switch providers like the OP.


1. A correction to that post: there aren't MANY providers that are around the same price. He said Hetzner, and that is like the ONLY other provider at the same price. And in many cases OVH offers better value than Hetzner.

2. The problem the post mentions about OVH not being elastic is simply true of every other dedicated provider as well. (Actually, StormOnDemand offers dedicated servers at per-minute pricing.) But OVH should have their public cloud ready in October, which means you get a hybrid of cloud and dedicated.


There are more providers, like http://www.redstation.com/, but we only have experience with Hetzner & OVH, so I cannot say anything about the other ones.


I've always found their bandwidth to be by far the most financially obnoxious aspect: $1,200 for just 10TB of bandwidth. You can get far more than that as standard with any number of tremendous dedicated hosts on a $150 box. DigitalOcean charges a mere $0.02/GB for overages, by comparison.

I don't mind paying a premium for the easy systems and integration capabilities that AWS makes possible, but paying such extreme rates for bandwidth (when Amazon no doubt pays next to nothing per gb of bandwidth), is a cost too far.


I think these are good points! I've been held back by AWS prices as well, especially during bootstrapping they are rather high.

The downside you mention at the end, regarding setup time: we use CloudVPS, a Dutch-based company that keeps upping its service in the direction of AWS (currently, when your billing status is OK, new VPSes are set up without human interaction - not milliseconds, but still fast enough for most use cases - and new customers can be running a free trial within a working day or so).


AWS was really cool back in 2007, but the truth is their pricing has not come down in line with the decreasing cost of computing over the years, and now it's pretty expensive.


Another comparison between AWS and VPS hosting. AWS is a Lego with many pieces, if you just use one piece (EC2) you may be better off with the cheaper alternatives.


This isn't even comparing AWS and other VPS, it's comparing EC2 with a dedicated server.

But actually from what I've seen in the wild, a lot of people just use EC2 without the rest of AWS for just general server hosting, so it's a useful reminder not to do this unless you don't care about the bottom line. (And who doesn't?)


This sounds quite a bit like the way you're supposed to use AWS--you spike out your services quickly, figure out how and where you need to grow, and then move to a different service that provides that at a cost-effective level.

I can't imagine building a complete business model around AWS, but using it to begin the growth period seems reasonable.


I am planning to move from AWS to Linode, mainly because of performance. My app is CPU intensive, and I think for such apps you need a high-end EC2 instance. I tried small and medium instances but found them quite slow.

With Linode's 8-core small instances, I could handle 2-3 times the traffic. However, from a management perspective AWS rules.


I was in the same position not too long ago. While I don't think I can make any real recommendations because I don't know your specific requirements, I highly suggest checking out Hetzner and OVH dedicated servers. I found that the ping time from Hetzner to customers in the US did not make a difference for my purposes, and I can get a much, much beefier server at Hetzner than Linode.

To be fair, the negatives I have experienced so far are: Hetzner's management console is pretty poor compared to Linode's (but it gets the job done), and Linode is self-serve with almost instant provisioning while Hetzner seems to take about 12 hours.


If your app is CPU intensive then why wouldn't you look at dedicated?

Switching to Linode is always a terrible idea considering how disgraceful their security and business practices are.


I use linode as well and, given that I follow the industry at least as much as the average HN user, I'm very surprised I haven't heard of these 'disgraceful' practices.

Could you please elaborate?


> Switching to Linode is always a terrible idea considering how disgraceful their security and business practices are.

Can you please elaborate on this? I just signed up, so I'm curious.


>> ... or move it to your own server as we did.

I'm curious ... have you factored in your power costs? People costs (or opportunity costs if your existing staff is re-allocated to server admin tasks)? Additional cost of space for your on-prem setup? Have you factored in the cost of potential downtime? Single points of failure?


There is a dead spot between using EC2 on demand and paying for the 3 year reserved instance, both of which I've found to be practical.

At both ends of that spectrum, however, I've found the pricing to be fairly reasonable. It just might not work for a startup.


How is the 3 year reserved instance practical given Amazon tends to cut prices significantly in a 3 year span? I've seen 1 year terms make sense but never 3 year.


Talking with our account manager, he mentions that the 1-year term is what most people go for anyway - you won't get caught short with long-term price drops, and you have more flexibility when business demands change. Overprovisioned capacity is less painful when there's only 6 months left rather than 30 months...


>there are also downsides when moving it to your server, more system administration, you have to build your own firewall, take care of security & backup, et

Startup idea right there. But then if I thought of it so quickly, somebody probably already does this.


Yea, we have a global public IaaS cloud that puts a real Cisco firewall / load balancer in front of your subnet(s): https://nacloud.dimensiondata.com/


Does anyone have any experience with OVH's dedicated cloud offering?

I'm looking at this as an option vs a small AWS deployment. Seems to offer a lot of the flexibility of virtualization at a much better price/performance point than AWS.


When the company gets big, the best deal is.. surprise.. running your own DC with an AWS-like system for the devs. Much cheaper, also much faster..

Of course, using old school deployment is a mistake (slow, pisses off devs, etc.)


Does anyone offer the equivalent of AWS Security Groups? Does anyone offer free intrusion detection scanning? For me, security groups are a killer feature.
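
For reference, this is roughly what the feature amounts to: a stateful, per-instance firewall driven entirely by API calls. A boto3 sketch for illustration (group name, VPC ID and CIDR ranges are placeholders), which is the bar an equivalent from another provider would have to meet:

    import boto3

    # Sketch: create a security group and open HTTP to the world,
    # SSH only to an office range. All identifiers are placeholders.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    sg = ec2.create_security_group(
        GroupName="example-web-sg",
        Description="HTTP from anywhere, SSH from the office",
        VpcId="vpc-0123456789abcdef0",
    )

    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
            {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
             "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},
        ],
    )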


AWS is ridiculously expensive. The startup I was in was spending like $100,000 a month on it...


Way to give us no useful info.



