> No better reliability?

Pretty sure EC2 instances and EBS volumes have a lot more redundancy than a single server. You really need two colocated servers to replace a single EC2 instance. Still probably cheaper, but also a larger time investment.
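
A back-of-the-envelope way to see the difference (the availability figures here are made up for illustration, not real EC2/EBS numbers):

    # Assumed availability of one colocated box; tweak to taste.
    single = 0.995
    # A pair only fails if both fail at once (assumes independent
    # failures and instant failover, which is generous).
    pair = 1 - (1 - single) ** 2
    hours_per_year = 365 * 24
    print(f"one box  : {single:.4%} up, ~{(1 - single) * hours_per_year:.0f} h/yr down")
    print(f"two boxes: {pair:.4%} up, ~{(1 - pair) * hours_per_year:.1f} h/yr down")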

If the difference between AWS and colocation is an additional FTE, then AWS is cheaper.




I'm always nonplussed by the "additional FTE" argument; it's obscenely overestimated:

"""We are going to say we used four hours of labor. This includes drive time to the primary data center. Since it is far away (18-20 minute drive), we actually did not go there for several quarters. So over 32 months, we had budgeted $640 for remote hands. We effectively either paid $160/ hr or paid less than that even rounding up to four hours. """

Software- and systems-configuration-wise you aren't really going to spend much more time than you would doing loop-de-loops with AWS configuration anyway. TFTP etc. is just not that tough.
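
For concreteness, the arithmetic from the quote above, plus an assumed FTE cost for comparison (the $150k figure is my own placeholder, not from the article):

    remote_hands_budget = 640      # dollars budgeted, from the quote
    months = 32                    # period covered, from the quote
    hours = 4                      # labor actually used, from the quote
    assumed_fte_salary = 150_000   # placeholder fully-loaded annual cost

    print(f"effective rate : ${remote_hands_budget / hours:.0f}/hr")      # $160/hr
    print(f"works out to   : ${remote_hands_budget / months:.0f}/month")  # $20/month
    print(f"one FTE        : ${assumed_fte_salary / 12:,.0f}/month")      # $12,500/month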


About 7 years ago we brought in the ELK stack for security log ingestion and basic analytics. It took at least three FTEs to maintain the cluster, and we had constant issues with queries crashing clusters and with provisioning/building/repairing the dozens of servers and storage units needed to handle several terabytes of ingest per day.

To be able to throw that behind an infinite, horizontally scalable mechanism would have saved us a lot of pain and troubleshooting.


How many of those 3 FTEs would you have needed anyway, though? That's the point: to compare apples to apples.

Also, dozens of servers is past the point we're talking about, I think.

Bare metal and your own infrastructure make a lot of sense when you're really small (a few servers), and they may start to make sense again at some point when you're really large.


About 7 years ago I did the same: 2 engineers, ~2,500 physical machines (6 were Elasticsearch nodes).

We were highly effective, sure, but like, it’s not as hard as people seem to claim as long as you’re paying for the datacenter space already.


TFTP?


TFTP is commonly used for PXE boot scenarios.

You have a centralized PXE boot server (basically a DHCP + TFTP box). All servers you manage are set to PXE boot, download their image from the central server, then automatically provision themselves through Puppet or Kubernetes or whatever the heck you're doing.
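
To give a feel for how small TFTP really is, here's a toy read-only TFTP server in Python, standard library only (happy path of RFC 1350, no retransmits; the /srv/tftp path is just an assumption). A real PXE setup would use dnsmasq or tftpd-hpa rather than anything hand-rolled, but the protocol itself is this simple:

    import os, socket, struct

    SERVE_DIR = "/srv/tftp"      # assumed location of kernels/initrds/pxelinux files
    OP_RRQ, OP_DATA, OP_ACK, OP_ERR = 1, 3, 4, 5
    BLOCK = 512                  # classic TFTP block size

    def send_file(path, client):
        # RFC 1350: the transfer itself happens from a fresh ephemeral port.
        xfer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        xfer.settimeout(5)
        with open(path, "rb") as f:
            block_no = 1
            while True:
                chunk = f.read(BLOCK)
                xfer.sendto(struct.pack("!HH", OP_DATA, block_no) + chunk, client)
                ack, _ = xfer.recvfrom(64)             # no retransmit logic in this toy
                if struct.unpack("!HH", ack[:4]) != (OP_ACK, block_no):
                    break
                if len(chunk) < BLOCK:                 # short block means we're done
                    break
                block_no = (block_no + 1) % 65536
        xfer.close()

    def serve(host="0.0.0.0", port=69):                # port 69 needs root
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((host, port))
        while True:
            pkt, client = sock.recvfrom(2048)
            if struct.unpack("!H", pkt[:2])[0] != OP_RRQ:
                continue                               # reads only; ignore everything else
            name = pkt[2:].split(b"\x00")[0].decode()
            path = os.path.join(SERVE_DIR, os.path.basename(name))
            if os.path.isfile(path):
                send_file(path, client)
            else:
                sock.sendto(struct.pack("!HH", OP_ERR, 1) + b"not found\x00", client)

    if __name__ == "__main__":
        serve()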

Throw in a few networked power switches (to turn machines on/off in an emergency), IPMI for remote KVM over the network, and a VPN for security, and you're set to run your own small cluster.
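
As a sketch of the IPMI piece (the BMC addresses and env var names here are hypothetical; the ipmitool flags and "chassis power" subcommands are standard):

    import os, subprocess

    BMC_HOSTS = ["10.0.0.11", "10.0.0.12"]          # hypothetical BMC addresses
    USER = os.environ.get("IPMI_USER", "admin")     # hypothetical env vars for creds
    PASSWORD = os.environ.get("IPMI_PASS", "")

    def chassis_power(host, action):
        # action: "status", "on", "off", or "cycle" -- standard ipmitool subcommands
        cmd = ["ipmitool", "-I", "lanplus", "-H", host,
               "-U", USER, "-P", PASSWORD, "chassis", "power", action]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return out.stdout.strip()

    for host in BMC_HOSTS:
        print(host, chassis_power(host, "status"))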

-------

If you need more servers, you buy them, give them a PXE boot thumb drive, and add them to your network.


The centralized PXE boot server is a single point of failure, at least as you presented it.

You also need expertise in DHCP/(T)FTP/PXE boot/hardware/Kubernetes/Puppet/IPMI/KVM/VPN/etc.

It's like that joke about the expert called in to repair an expensive piece of factory equipment. He comes in, takes a piece of chalk out of his pocket, and marks the part that should be replaced with an X. A month later they get the bill: $100,000. The factory manager wants a detailed invoice for such a huge expense. It arrives a week later:

$1 for the piece of chalk

$99,999 for the expertise needed to know where to put the X



