
Bigger servers just get divided up into more VMs. As long as it's cost effective to scale vertically, it will continue to happen at the cloud platform level.



It'll be interesting to see what these high core count servers do to The Cloud though.

Fifteen years ago a lot of medium-sized organizations had a full rack of single- and dual-core servers in their offices that cost half a million dollars and consumed >10 kW of electricity day and night.

That made it attractive to put everything in the cloud -- spend $250K on cloud services instead of $500K on local hardware and you're ahead.

But their loads haven't necessarily changed a lot since then, and we're now at the point where you can replace that whole rack with a single one of these high core count beasts with capacity to spare. Then you're back to having the latency and bandwidth of servers on the same LAN as your users instead of having to go out to the internet, not having to pay for bandwidth (on both ends), not having to maintain a separate network infrastructure for The Cloud that uses different systems and interfaces than the ones you use for your offices, etc.

People might soon figure out that it's now less expensive to buy one local server once every five years.


I think it's always been vastly cheaper for the hardware. But it gets more complex when you look at the TCO. With the in-house server you need at least two, in case one fails. Although now probably everyone is running their own mini cloud with Kubernetes, even in the cloud - and that should make it relatively cheap.

Then you need someone to plan, provision, troubleshoot, and maintain the physical servers. So at best a full-time, fully loaded position, which costs the company roughly 2x the salary. And that's only if you know your workload so well that you can guarantee the shape of your hardware usage 3-5 years out. Rarely possible in practice.

I'd say always start with the cloud, and you'll know if or when you could do (part of) it cheaper yourself.


> With the in-house server you need at least two, in case one fails.

That doesn't really change much when the difference is a four-figure sum spread over five years.

> Then you need someone to plan, provision, troubleshoot, and maintain the physical servers. So at best a full-time, fully loaded position, which costs the company roughly 2x the salary.

Would it really take a full-time position to maintain two physical servers? That's a day or two for initial installation and configuration, which gets amortized over the full lifetime, OS updates managed by the same system you need in any case for the guests, and maybe an hour a year if you get a power supply or drive failure.

If the maintenance on two physical machines adds up to a full week out of the year for the person already maintaining the guests, something has gone terribly wrong. Which itself gets balanced against the time it would take the same person to configure and interact with the cloud vendor's provisioning system -- probably not a huge difference in the time commitment.

> And that's only if you know your workload so well that you can guarantee the shape of your hardware usage 3-5 years out. Rarely possible in practice.

Most companies will do about the same business this year as they did last year, plus or minus a few percent, so that's really the common case. And if you unexpectedly grow 200% one year, then you use some of that unexpected revenue to buy a third server.

Where the scalability could really help is if you could grow 200,000% overnight, but that's not really a relevant scenario for your average company operating a shopping mall or a steel mill.


Certainly if you have someone already on the payroll who can take responsibility for the hardware part time, that makes it a lot cheaper. That's a different skill set though, so it's not true for every company.

With respect to changing workloads, I wasn't thinking so much about scale, which I think isn't that hard to plan for, but more about changing requirements. If you add, remove, or change a piece of your stack the cloud gives a lot of flexibility. Add memcached, no problem, spin up some high mem instances. Need more IO on the database server, switch to an instance with fast SSDs, or a bigger instance. I think those kinds of changes are common and hard to plan for. Until it happens you probably don't know if you are disk, network, memory, or CPU bound.

Once your stack is sufficiently mature and not changing much the workload gets a lot more predictable. The cloud is really good for starting out. The danger is it's also really good at locking you in, then you are stuck with it.


> Certainly if you have someone already on the payroll who can take responsibility for the hardware part time, that makes it a lot cheaper. That's a different skill set though, so it's not true for every company.

True, though all the physical hardware stuff is pretty straightforward, to the point that anybody competent could figure it out in real time just by looking at the pictures in the manual. Configuring a hypervisor is the main thing you actually have to learn, and that's a fundamentally similar skillset to systems administration for the guests. Or for that matter the cloud vendor's provisioning interface. It's just different tooling.

> If you add, remove, or change a piece of your stack the cloud gives a lot of flexibility. Add memcached, no problem, spin up some high mem instances. Need more IO on the database server, switch to an instance with fast SSDs, or a bigger instance. I think those kinds of changes are common and hard to plan for. Until it happens you probably don't know if you are disk, network, memory, or CPU bound.

I see what you're saying.

My point would be that the hardware cost is now so low that it doesn't really matter. You may not be able to predict whether 8 cores will be enough, but the Epyc 7452, at $2,025, has 32. Another 256 GB of server memory is below $1,000. Enterprise SSDs are below $200/TB. 10Gbps network ports are below $100/port.

If you don't know what you need you could spec the thing to be able to handle anything you might reasonably want to throw at it and still not be spending all that much money, even ignoring the possibility of upgrading the hardware as needed.
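
As a rough back-of-envelope using those prices (the SSD capacity, NIC count, and the chassis/PSU figure are just illustrative assumptions, not a quote):

    # rough sketch, approximate list prices in USD
    cpu     = 2025       # Epyc 7452, 32 cores
    ram     = 1000       # 256 GB of server memory
    ssd     = 4 * 200    # 4 TB of enterprise SSD at ~$200/TB (assumed capacity)
    net     = 2 * 100    # two 10Gbps ports (assumed count)
    chassis = 1500       # board, PSUs, chassis, rails (assumed)
    total    = cpu + ram + ssd + net + chassis   # ~$5,525
    per_year = total / 5                         # ~$1,105/year over five years

Even a generously specced box comes out to a four-figure sum amortized over its lifetime.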

> Once your stack is sufficiently mature and not changing much the workload gets a lot more predictable. The cloud is really good for starting out. The danger is it's also really good at locking you in, then you are stuck with it.

Right. And the cloud advantage when you're starting out is directly proportional to the cost of the hardware you might need to buy in the alternative at a time when you're not sure you'll actually need it. But as the cost per unit performance of the hardware comes down, that advantage is evaporating.


I don't think the hardware is that trivial. I used to think that, but I've learned a lot more respect for the people who understand how to troubleshoot it, how to purchase compatible components, and spot troublesome products and brands. It's a whole career, it has its nuances.

But you make some good points. In general I agree it's cheaper to vastly over provision than to use the cloud. And you can do things like build an insane IO system for your database, which you can only sort of do in the cloud.

Of course this is an advantage for hosting internal company stuff; for web-facing things you may need to place hardware in remote datacenters, and then you do need people on location, on call, who can service it. You generally have to have much larger scale for that to make sense. Even Netflix, because of the variability of its load, still uses a combination of the cloud and its own hardware.


> I don't think the hardware is that trivial. I used to think that, but I've learned a lot more respect for the people who understand how to troubleshoot it, how to purchase compatible components, and spot troublesome products and brands. It's a whole career, it has its nuances.

I didn't mean to suggest there isn't a skillset there. And that's really important when you're doing it at scale. The person who knows what they're doing can do it in a fifth of the time -- they don't have to consult the manual because they already know the answer, they don't have to spend time exchanging an incompatible part for the right one.

But when you're talking about an amount of work that would take the expert three hours a year, having it take the novice fifteen hours a year is not such a big deal.

> Of course this is an advantage for hosting internal company stuff; for web-facing things you may need to place hardware in remote datacenters, and then you do need people on location, on call, who can service it. You generally have to have much larger scale for that to make sense.

On the other hand you have to have rather larger scale to even need a remote datacenter. A local business that measures its web traffic in seconds per hit rather than hits per second hardly needs to be colocated at a peering exchange.

It's really when you get to larger scales that shared hosting starts to get interesting again. Because on the one hand you can use your scale to negotiate better rates, and on the other hand your expenses start to get large enough that a few percent efficiency gain from being able to sell the idle capacity to someone else starts to look like real money again.



