
Honestly you can go very far with unmetered data plans. We consume 2GB/s, maxed out, which ends up being 5184TB a month. We have millions of users a day, all on a streaming video platform.
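
For anyone checking the math (my own back-of-envelope, assuming the link stays saturated around the clock):

  # 2 GB/s sustained, decimal units, 30-day month
  gb_per_s = 2
  seconds_per_month = 60 * 60 * 24 * 30       # 2,592,000 s
  print(gb_per_s * seconds_per_month / 1000)  # 5184.0 TB/month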

It costs less than $2K/month.

The cloud is crazy expensive. Private servers are beasts, and they are cheap.

Of course, for this price, you don't have redundancy and horizontal scaling.

You also don't have to maintain and debug a system with redundancy and horizontal scaling.




> We consume 2GB/s, maxed out, which ends up being 5184TB a month. We have millions of users a day, all on a streaming video platform.

> It costs less than $2K/month.

The solution in this article is serving on the order of 100TB/month for $400/month, including a high-speed global CDN, their API and database servers being hosted reliably, and redundancy and backup being handled by someone else.

Your solution is hosting on the order of 1000s of TBs a month (ignoring the database and other aspects of this website), but the price is an order of magnitude higher. You’ve also given up all of the automatic redundancy and hands-off management, and you don’t have the benefit of a high-speed global CDN.
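
Back-of-envelope on the unit prices (my own division of the figures quoted above):

  cloud_usd_per_tb = 400 / 100     # $4.00/TB, everything included
  diy_usd_per_tb = 2000 / 5184     # ~$0.39/TB, raw bandwidth only
  print(cloud_usd_per_tb, round(diy_usd_per_tb, 2))

Per terabyte the dedicated option is cheaper, but that unit price excludes everything listed above: the CDN, the redundancy, and the engineering time.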

But more importantly, you have significantly higher engineering and on-call overhead, which you’re valuing at $0.

If anything, that only makes Polyhaven’s solution sound more impressive.

> Of course, for this price, you don't have redundancy and horizontal scaling.

Which is a huge caveat. The global CDN makes a big difference in how the site loads from different parts of the world. Maybe not a big concern if you’re serving static files with a lot of buffering, but they have a dynamic website and a global audience, and they said fast load times are important.

> You also don't have to maintain and debug a system with redundancy and horizontal scaling.

But you have to do literally everything else manually and maintain it yourself, which is far from free.

All of these alternative proposals that value engineering time at $0/hour and assume your engineers are happy to be on call 24/7 to maintain these servers are missing the point. You pay for turnkey solutions so you don’t have to deal with as much. Engineers don’t actually love responding to on-call events. If you can offload many of your problems to a cloud provider and free up your engineers for a nominal cost, do it.


A Python dev who knows Python and Linux is enough to support the uptime needs and evolution of this particular product. Actually, a part-time one.

The entire team is composed of half of one dev.

I'm 100% sure it's way cheaper than anybody who has AWS on their resume.

Of course, some things have to give, like the global CDN and some data guarantees.

Everything is a compromise. It all depends on what is important for your project.

EDIT: also, my comment was not meant to oppose the article, but rather to confirm the view that you should calibrate your setup to your project. Doing so will lead to great savings in hosting costs and project complexity. A lot of projects don't need the cloud.


> The entire team is composed of half of one dev.

Who is on-call 24/7/365, never takes vacation ever, and is always available to fix the website?

It’s weird how much HN hates jobs with on-call requirements, but every time cloud services come up the solutions always involve forcing someone to be permanently on call to save a few hundred dollars per month in hosting costs.


Nobody is on call 24H.

If the system has a problem during the night, the users will wait until the morning.

The world doesn't stop because a streaming service is down for a few hours. It's not a medical service, or something thousands of businesses rely on.

It just loses a bit of money, and users are grumpy for a day because they had to wait until they could access new content.

It's ok.


> Nobody is on call 24H.

This one dev is literally on call 365 days a year and can never be away from a computer on vacation. If he leaves, the project has no one.

How is that not a problem? Surely you can see that this isn’t reasonable for anyone who wants to run a business, or any employee who doesn’t want the website to be their life.

> If the system has a problem during the night, the users will wait until the morning.

> It just loses a bit of money, and users are grumpy for a day

If you’re running a website where extended outages are no big deal and you don’t care about lost revenue, then it’s not really a valid comparison to the typical business website.

Your situation is unique, not a model for other companies to follow.


> If you’re running a website where extended outages are no big deal and you don’t care about lost revenue, then it’s not really a valid comparison to the typical business website.

I think you're severely underestimating how many businesses make a significant amount of money from their website, but don't actually have a full-time developer available. An extended outage would cause significant revenue loss, but it's typically not a problem, because outages are surprisingly rare when you (1) have a very stable traffic pattern and (2) don't spend a lot of time adding features and refactoring. Pretty much every cloud outage we've seen was caused by a human configuration error, not faulty machines.


> I think you're severely underestimating how many businesses make a significant amount of money from their website, but don't actually have a full-time developer available.

No, I’m well aware. But there’s a simple solution to this problem: Don’t try to run and maintain your own servers. Pay a little extra to use cloud hosting and let it be someone else’s problem.

I take issue with these calls to set up and maintain your own custom solutions and servers while also suggesting that the cost of engineering and maintaining such a custom setup should be ignored.

Running your own servers and not having developers is a recipe for an endless stream of contracting invoices that are going to cost far, far more than just using a hosted cloud solution.


I'd be on board with "pay a little extra to rent dedicated servers", but "move to one of the big-three cloud providers" doesn't seem like a sound financial decision for the case presented.


> pay a little extra

The whole idea is that it's not a little extra, but 2 orders of magnitude.


> No, I’m well aware.

No, apparently not.

> But there’s a simple solution to this problem: Don’t try to run and maintain your own servers. Pay a little extra to use cloud hosting and let it be someone else’s problem.

But that's only if it IS a "problem" in the first place. You have defined it as such, although Bitecode themselves said that for them, it simply isn't. (To paraphrase: "If the site is down, then it's down; so what? We'll fix it when we're in the office again.")

Just plain ignoring whether something is "a problem" or not is hardly being "well aware".


> An extended outage would cause significant revenue loss, but it’s typically not a problem

This just seems like a bad decision from a business perspective. You are willing to endure a significant outage that will cost a lot of money but not pay to prevent it? Machines can and will fail.


You keep moving the goalposts.

It seems it's criminal to run a service on the cheap; there must be a terrible human being behind it.

Well no, the dev is not chained to their computer 365 days a year.

A freelancer is hired part-time for the duration of the vacations. It costs a full dev salary for one month, taking training time into consideration, that's all.

> If you’re running a website where extended outages are no big deal and you don’t care about lost revenue, then it’s not really a valid comparison to the typical business website.

Most services can actually go down once a month and still be a viable business. You are not Google or Facebook.

In fact, most human services go down for days: bakeries, lawyers, teachers, plumbers.

The idea that internet services should be up all the time is only in your head. Humans adapt perfectly well.

It's not that big a deal. Most of our software is not as important as we want to think.

If you really want 99.99999% uptime, you're going to increase your service quality by 10% and your service cost by 1,000,000%.
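
To put numbers on the nines (a quick sketch of the downtime each availability level allows per year):

  # minutes of allowed downtime per year for 2 through 7 nines
  for nines in range(2, 8):
      availability = 1 - 10 ** -nines
      minutes_down = 525_600 * (1 - availability)
      print(f"{availability:.5%} -> {minutes_down:,.1f} min/year")

99.99999% leaves you about 3 seconds of downtime a year, which is why chasing it costs so much.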

The funny thing is, the downtime of our service has not been more than GitHub's downtime in the last few years. So honestly the freelancer is mostly hired to have drinks on the house, because monoliths are very reliable in the first place.

> Your situation is unique, not a model for other companies to follow.

Every situation is unique. I never, ever stated it was "a model for other companies to follow". You did.

There is no such thing as "a model for other companies to follow". You must adapt to the situation and goals. Engineering is about compromises.

My post is simply stating the reality that you can get very far with good old tech.

And a lot of projects don't need the cloud or high availability, yet they pay a premium for it.


> A freelancer is hired part-time for the duration of the vacations. It costs a full dev salary for one month, taking training time into consideration, that's all.

You pay a full dev salary for one month every time someone wants to take a break?

It’s baffling that anyone can read an article about someone spending $400/month on cloud services and then start proposing things like this as an alternative.

Engineering labor is expensive. Cloud is surprisingly cheap once you factor in engineering costs.


One month of part-time work once a year as an additional cost is way, way cheaper than anything else.

> Cloud is surprisingly cheap once you factor in engineering costs.

No, it's not, since you need somebody qualified to operate it, and such qualified employees are very expensive. And you will need them on call anyway, since it will break, just in a different way than a bunch of private servers.

I'd argue the cloud would be more expensive, even if hosting were not, because you need a more expensive team to run it.


He never said they don't care about lost revenue. I am willing to guess they considered the lost revenue, weighed it against the cost of having someone on call 24/7, and concluded that the lost revenue was cheaper.

If you look at how frequently sites like Reddit used to have downtime, it doesn't seem to matter too much for consumer products. Half a day of downtime once a year might be completely acceptable.


> This one dev is literally on call 365 days a year and can never be away from a computer on vacation. If he leaves, the project has no one.

How does the cloud solve this?


You can host on AWS and still get hours-long downtime, as has happened recently.

Whether the cost of trying to add another 9 to your uptime is worth the marginal benefit is for each company to decide. Each 9 gets exponentially more expensive. A lot of companies who think otherwise actually can afford (and will, sooner or later, be forced) to be down once in a while.


"forcing someone to be permanently on call"

The company I work for has a lot of stuff in the cloud. We seem to have quite a few positions' worth of people permanently on call.

The percentage of "data center"-type on-call situations has perhaps gone down somewhat since moving into the cloud, but it has not gone to zero, and it was never the majority of problems anyhow.

It seems like you're sneaking in the idea that if they would just pay a lot more money, the person on call wouldn't have to be on call. I'd like to know what cloud you're using, because it doesn't seem to be any of the ones I know of. If your service is critical to your business or project, you've got someone on some sort of call (maybe not overnight call, which is the really rough bit), period, or you've got a business that can disappear at any second.


I think it's bizarre that you're giving cloud marketing material to a person who's in a position to know their own situation, as if you were describing their situation.

Like... what are you expecting to happen? You'll just gaslight them into thinking they don't know how their business works?


I’m discussing the linked article, which is what this comment section is for, and it’s what the parent commenter was comparing to.


Whether or not a site is hosted in the cloud, it will break from time to time. S** invariably happens, no matter what. So even if you host in the cloud, you're going to have problems; they will just be different problems. A developer to back up and support the site will be required, one way or the other. Case in point: polyhaven.com (the subject of this article) is not reachable as I write this.


One has to presume that they have priced this and found the non-cloud version overall cheaper.

Labour isn't expensive if you're operating at the minimum needed to function and your systems are sufficiently operationally stable.


> Labour isn't expensive…

Engineering labor is definitely expensive.

The entire $400/month bill for this linked website will only get you 2-3 hours of consulting time. They’re getting enormous value by offloading the work to someone else and not having to worry about it.


That is based on the idea that a cloud-based service would not need a team to make it work.

I'd argue it actually needs a more expensive team.


You have to do an apples-to-apples comparison though. If you're comparing a single colo'd machine vs a 3000-instance EC2 fleet with a load balancer, API nodes, database nodes (and the requisite DB admin team), and Kafka and DynamoDBs somewhere in there, then the cloud is going to be more expensive to manage.

Barring in-depth research (which I'd love to read if someone has any links), it's not clear on a 1:1 basis what's cheaper: paying for someone's time to research hardware, talk to vendors, run POs, figure out where/how to install the machines (Equinix is expensive), and RMA hard drives as that comes up; versus not paying for that and instead paying a cloud vendor for the privilege. Throw on top a changing hiring landscape (how much 'sysadmins' cost vs 'devops') and it really depends on the size of this hypothetical fleet we're trying to manage and how complicated the backend of the site is. If there's no real backend to speak of, Cloudflare's CDN for static assets is going to be way cheaper, and available now, than anything you could possibly build from scratch that would maybe be ready in a couple of months.


if only there were an `if`


Does it work for video as well?


Thanks! Exactly that! I always tell customers this, and their response is:

-But Google/Facebook/Amazon...

-But uptime needs to be 99.999

-But everyone uses cloud

Most businesses are not a trading market, have fewer than 100 people (i.e., you are probably not another Amazon), and get no bonus from using a cloud/Kubernetes etc.

But it's the same old story; in the '00s I used the ~same arguments against buying OracleDB ;)


And you can tell them all of those are possible, but they need to have a massive budget, not just for the monthly bills but also to hire experts to set things up with those constraints.

I worked at a company once where, from higher up, it was said that they had to have five nines of uptime. We had some really good cloud engineers there (one guy set up a server/internet container for the military in Afghanistan; in hindsight he said they should've just sent a container of porn DVDs), and they really went to town. For five nines of uptime, you're already pretty much required to set up your infrastructure to use multiple availability zones, everything redundant multiple times, etc.

Of course, the actual software we wrote was just a bunch of CRUD services written in Node.js (later Scala, because IDK), on top of a pile of shit Java that abstracted away decades of legacy mainframes.


Unless your compute demand is highly elastic, or your revenue and profit scale with usage, the cloud is probably not for you. At least not in the long term, or for running websites.


> uptime needs to be 99.999

Isn't AWS down like every two months for a few hours? That's far off the 99.999% mark. No one can guarantee 100% uptime, and sometimes it's even better to have that under your control (e.g., have a dedicated server and a backup one from different providers).

My point is that, if you want the highest possible uptime, you shouldn't rely on a single (cloud) provider.
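
The arithmetic behind that (a toy model that assumes the two providers fail independently, which is optimistic):

  a = 0.995                   # each provider alone: ~44 h down/year
  both_down = (1 - a) ** 2    # chance both are down at the same moment
  print(1 - both_down)        # 0.999975 -> roughly 13 min/year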


Any single one of those arguments is actually cargo cult, isn't it?

All means, no end.


Well, most of them are non-IT businesses, so they just know what peers/family etc. told them to do (or what those people do themselves), then they read some one-sided sponsored media and now they ~know what's best: it's Azure with Windows Server Datacenter edition, SQL Server Enterprise and McAfee antivirus ;) But who can be mad about it? It's just not their field of expertise or even interest.


fair enough, however they juggle the buzzwords.

If they didn't, but instead cared about their business (which they know, I hope) and made hard requirements for IT, that would help. IT should then come up with solutions to those real-world problems. We're not talking just about hobbyists, are we?

That's what Eric Evans talks about in DDD, as I understand it.


> fair enough, however they juggle the buzzwords.

Oh, absolutely true, you don't want to look completely clueless in front of people who take your money to set up an infrastructure ;)

>and make hard requirements to IT, that would help

That would make things so much easier. Often there isn't even an inventory of the applications/hardware in use... or the network.

Just one example:

I made a plan for new hardware and network (new cabling etc.), then I walked around the workshop and there was this dusty machine running... I asked what it was... a DOS machine with... wait... TOKEN RING. That machine was an integral part of the whole workshop ;) In the end we made a virtual FreeDOS machine and bought a software converter for the machine protocol; the 25-year-old CNC machine needed a new card (Ethernet instead of Token Ring; very lucky we found that thing). So there was that one little DOS machine no one thought about that could have stopped the whole "modernization".


> you don't want to look completely clueless

That's why I give them money: so they bear with my cluelessness in their field. I'd do it myself otherwise.

Or the other way round: "I hire smart people for a lot of money, why would I tell them what to do?" (Steve Jobs)

> that one little DOS machine

The donkey does the work and the horse gets the fame.


> But it's the same old story; in the '00s I used the ~same arguments against buying OracleDB ;)

But no matter how logically convincing your arguments were, most of the time upper manglement just went on buying Oracle, right...?


Get two servers, stick them in different cities. $4k a month and you have resilience. $6k a month and you have resilience and double your throughput.

Some stuff makes sense to put on IaaS; DNS often does, for example.


$4k/month for 2 servers? That's a little bit too much; 200 euros for two servers at Hetzner is perfectly fine ;)


I say server; presumably it's the entire solution of however many servers, storage, network, etc.

If it’s $2k/month for a non-resilient solution, it’s on the order of $4k for a resilient one (you need to replicate assets both ways, but it’s in that order of magnitude).


You can have redundancy and horizontal scaling with private servers and still end up paying less than what you would on AWS and the like.

I have some clients who use AWS and others who prefer colo and/or dedicated servers from traditional datacenters. The latter group can afford to over-provision everything by 3-4x, even across different DCs if necessary. DCs aren't yesterday's dinosaurs anymore. The large ones have a bunch of hardware on standby that you can order at 3 a.m. and start running deployment scripts on in minutes.


Not including extra human cost in the analysis is just disingenuous. I think to manage private servers of that size you would need at least two extra experts totalling at least $20k/month.


> I think to manage private servers of that size you would need at least two extra experts totalling at least $20k/month.

What? You set up the deployment once, and then you only need to touch it when things go horribly wrong, which is every couple of months, or to make minor quick tweaks and run some updates. Let's be generous, and say you need 10 h/month, which is about 1/16 of a person-month. And if things go horribly wrong, everybody drops what they are doing to fix things, anyway, no matter if you're on AWS, dedicated/colo or run your own data center.
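
Back-of-envelope on those 10 h/month (the $150/h contractor rate is my assumption, not a quote):

  hours_per_month = 10
  rate_usd_per_hour = 150                     # hypothetical rate
  print(hours_per_month * rate_usd_per_hour)  # $1,500/month, vs the $20k/month claimed above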

When you significantly change your architecture/deployment, then you need to put in more time again, but if you build your code with the need to scale and such things in mind from the get-go, that won't come up much, or at all.


> What? You set up the deployment once, and then you only need to touch it when things go horribly wrong, which is every couple of months

Right, which is exactly why people pay extra for cloud managed services.

If things are going “horribly wrong” every couple of months then you must necessarily be on call 24/7 and never take vacation or time away. In practice, you need at least two people to manage on-call coverage so you’re not completely uncovered if someone gets sick, decides to take vacation, wants to travel away from a computer and so on.


Things go horribly wrong with AWS-hosted stuff as well. And a lot of companies have a single-point-of-failure AWS person. While you're not wrong in general, nothing you just wrote is specific to running on dedicated servers vs AWS.


We're talking volunteer-run projects, though. Who cares if it's not available 24/7? Best-effort is good enough. Those managed cloud services also fail often, you just have no information and no recourse about it.


Things go horribly wrong every year or so. The site goes down. It's fine. This isn't Facebook.


What?! Maybe if you hire SV-skilled engineers on location in Silicon Valley, but you can easily serve 2GB/second (on infra worth $2K/month) with one sys-admin dealing with it, and for way less than a whopping $10k/month.


> with one sys-admin dealing with it,

And if that one sys-admin wants to go on vacation? Or travel away from a computer? Or takes another job?

You can never have “just one” admin handling a server and being on-call 24/7.

Would you really want a job where you could never, ever be away from a computer because you’re the only on-call person? This doesn’t work.


> Would you really want a job where you could never, ever be away from a computer because you’re the only on-call person? This doesn’t work.

Works fine for me. $20K a month for two people doing f*k all is insane.


Why would it be necessary to have an engineer on call 24/7? If you do your risk calculations and an outage of 12 hours is acceptable at the expected frequency, you just let the engineer have a nice evening and night and deal with the outage in the morning. If outages are only to be expected once a year and you can tolerate 48 hours of downtime, you don't need any on-call engineer.

Most outages are caused by changes. You can test those, plan their rollout to production, and keep elevated monitoring afterwards to catch problems early. The only remaining problem is hardware failures, and those are very rare as long as you do decent lifecycle maintenance. As others said before: not everyone is Google or Amazon and needs 99.999% uptime.
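
To make that risk calculation concrete (all numbers hypothetical except the 12-hour, once-a-year scenario above):

  outages_per_year = 1
  outage_hours = 12
  revenue_per_hour = 500                  # hypothetical
  expected_loss = outages_per_year * outage_hours * revenue_per_hour  # $6,000/year
  on_call_team = 20_000 * 12              # the two-experts figure quoted upthread
  print(expected_loss < on_call_team)     # True: eat the outage, skip the pager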


But that AWS deployment doesn't manifest out of thin air either. Kubernetes- and AWS-knowledgeable engineers aren't any less expensive.


Are you hiring? What infrastructure engineer are you paying $10,000/month for just to manage 4 servers? LOL


They didn’t mention a fleet size but 2GB/s is a single commodity server.

You don’t even need a single employee to manage a single server…


Which "commodity server" could serve 2GB/s(16gbps) to public internet?


Pretty much anything. How about a Dell R340 with a dual-10G NIC and some SSDs? That's not commodity, that's cheap; a commodity server would be a dual-Xeon, but that's overkill for serving 16Gbps.
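
Quick sanity check on the units (2 GB/s on the wire vs a dual-10G NIC):

  gbits_per_s = 2 * 8              # 2 GB/s = 16 Gbps
  nic_gbps = 2 * 10                # dual-10G NIC
  print(gbits_per_s < nic_gbps)    # True, 4 Gbps of headroom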


I think that is the problem with modern-day web dev. (Sorry.)

With cloud and SaaS, they are so abstracted from hardware that their knowledge of basic hardware and servers, everything from CPU to storage I/O and networking, is close to zero.


At Netflix we’re doing close to 400Gbps on 1U commodity hardware, and it's pretty inexpensive.


And that is 400Gbps encrypted!

Side note: aren't they 2U?


Yep! All TLS.

The production boxes are 2U, but it can be done in a 1U box.


How many DevOps people do you need to manage your cloud? How much do they cost?


You need highly skilled people who are comfortable with AWS or other cloud offerings as well. They have to take care of tons of things, set them up, etc. These are not one-button setups in real life when things get complicated.


What do you imagine these experts would do? Rebuild the servers from raw materials each month, and hand-code all software in ASM weekly?

In the real world, once most hosting platforms are up and running, the maintenance overhead is pretty low.


No, the system is maintained by one single part-time dev. That's the entire dev team.


> extra human cost

Where? Costs vary hugely across the world



