A student of mine did a rather good thesis on de-clouding. In the
process she discovered the term "cloud repatriation" and a fair bit of
literature on the movement to bring control of data and hardware back
'on-prem'.
She also noted that when you search on these terms the main engines
(all run by cloud service providers) return poor results not congruent
with the scale of the phenomenon. They are dominated by the opposite
message obviously heavily SEO'd up to the top, plus shill pieces
pushing cloud services but presented as "critical". Dig deep if you
want to find the real scale of the "anti-cloud" issues.
Her main conclusion was very interesting though: that the big issue
is not finance, reliability, or control - but de-skilling.
As companies move their operations out to the cloud, the real loss
is not the disappearance of hardware from the premises but the loss
of skill-sets. Later they don't even know who to hire or how to
write the JD to bring people back in.
A good example was the broadcast industry. Entire branches of that
industry doing post-production, colour, transcoding, and whatnot moved
it all out to AWS. After price hikes, they wanted to go back to
running their own services. But they can't, because nobody knows
how to set up and run that any longer - especially specialist things
like buffers and transcoders. I mean, try finding a sys-admin who can
just do simple tasks like set up and properly configure a mail server
these days.
Interestingly, I took a ~5 year hiatus from writing code right smack at the beginning of the "cloud phenomenon," and upon returning, I realized how much things had changed, in both skill levels and specialization. Before my hiatus, a dev could/would single-handedly run a site for a small/medium business, handling everything from server procurement to FTPing the files to the server, and even server maintenance. When I got back, this was "a big no-no" and mostly frowned upon. Everything was deployed by "git push" (fancy FTP?). A problem with a server? Just deploy a new one. And if an AWS region goes down ... it's ok, everyone else is down too and we have no idea when it will be back.
That last sentence was something you never said to a boss back in '07-08. You would always know what was wrong and how bad it was within a few minutes of the issue, at least 90%+ of the time anyway. You could probably even guess how long it would be down for within ~10-15 minutes of discovering the issue.
FWIW, I think small/medium size businesses were/are probably the most affected by cloudification since those had the "most to gain" in the beginning. Then again, those were the types of businesses that I had the most interaction with until 2014-ish, I don't know what it looked like from a big-business perspective.
> Handling everything from server procurement to FTPing the files to the server and even server maintenance. When I got back, this was "a big no-no" and mostly frowned upon. Everything was deployed by "git push" (fancy FTP?),
While I do look back with nostalgia on the days of editing files in Notepad++, FTPing them to a live server, then quickly switching to my browser to refresh the page and make sure it works...
Those days are in the past for a reason. No, git is not "fancy FTP". Yes, version control really does add much-needed structure to collaborative projects and change tracking.
You can still run a small business web server on a colocated server and FTP files to it if you want. You just won't find any modern engineers who want to touch that with a 10ft pole. The world has moved forward since then, and for good reason.
I was specifically referencing an old fad (might still be a thing, but I haven't run into it in years) where production is a git remote and a dev pushes their main branch to it to deploy. Yes, version control is amazing, and it was something I used a bunch before my hiatus, but back then it was usually SVN, IIRC.
That's an interesting take. But nah, I like some things the way they are now vs. back then. I love having cheap/easy CI that deploys to prod for me. I love having IDEs that are actually geared towards whatever language I'm writing in this year. I love having access to cheaper coloc space because most people aren't using it. I love having skills that most people don't even bother learning anymore when digging into performance issues. ...
There is much I love about today and I look forward to the future. I was just stating my observations. And yes, `git deploy prod` is a fancy FTP, but probably even worse than FTP if the build actually happens on the production server (which is something I saw more than once) and causes a production outage (also saw more than once).
> Using source control tools as part of the deploy process really is a valuable innovation. It's not an "old fad".
It's only innovative in an 'Apple released it and now everyone is on the bandwagon' way.
I've worked at companies that were using CVS for config file management in the 90s; the concept was already in play, it just wasn't as mainstream and didn't have trendy infrastructure-as-code branding.
Good reasons for some people. But it turned out that those reasons
(which you avoid explicating) were not so good for everyone after
all. Methods upon which you now "look back with nostalgia" had value
which endures, namely control, simplicity, low cost at small scale,
and a basis in standardised, accessible knowledge. These things are
not valued much any more, but in my opinion (and that of others)
that is a mistake.
> But it turned out that those reasons (which you avoid explicating) were not so good for some people after all.
I lived through part of that transition - I'll try and explicate through my own experiences.
The biggest thing automated deployments solved was human error. No more "oops, you forgot to FTP the JPEGs in binary mode" or "accidentally uploaded into a nested www/www folder". The transition was initially to tools like Ansible and Fabric, which automated the transfer steps but had to be triggered manually. This also saved me a lot of time and pain.
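For flavor, here's roughly what that Fabric step looked like - a minimal sketch assuming Fabric 2.x, with a hypothetical host, paths, and service name:

    # Minimal Fabric 2.x deploy sketch. Host, paths, and service name
    # are hypothetical; assumes passwordless sudo for the restart.
    from fabric import Connection

    def deploy():
        c = Connection("deploy@app.example.com")
        # Push one release artifact instead of hand-FTPing files
        # (no more forgotten JPEGs or nested www/www folders).
        c.put("build/release.tar.gz", "/tmp/release.tar.gz")
        c.run("tar -xzf /tmp/release.tar.gz -C /var/www/app")
        c.run("sudo systemctl restart app")

    if __name__ == "__main__":
        deploy()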
However, this still left a gap: it was possible to push broken code into production, so Continuous Integration (and/or Continuous Deployment) became fashionable. The benefit was that you'd never push code that fails tests to production, with the depth and breadth of the tests left up to you.
Incidentally, these developments coincided with my working with ever-growing team sizes. There's no way I could have worked with 5 people all S/FTP'ing stuff to prod. Neither would Fabric + git have been sufficient for four 5-engineer teams working on the same software product. That's yet another benefit of the new way of doing things: it scales.
I think the additional benefits of checking-in your configuration and deployment steps into a version control system are self-evident.
Except realistically AWS is down maybe 2 hours a year, and if you blow an on-prem switch with no configuration backup, or a database server hard-crashes while syncing its backup, you might be down for days.
> if you blow an on-prem switch with no configuration backup or a database server hard crashes while syncing its backup, you might be down days.
Switches are easy and cheap to put in a fail-over config. DBs are also running at least in a primary/standby config everywhere. Besides, if you're running a single DB instance in a cloud VM, the same thing can happen.
And don't get me started on the trouble you can get in with for example GCP SaaS PostgreSQL...
In a former life (before clouds) we had a SAN go down because of a backplane problem. It was down for days while being diagnosed/fixed and then in read-only mode for another couple of days while the disks were in repair mode. I know people say things like "you should have a backup device" or "switches are cheap" but sometimes it IS the backup system that goes down and sometimes that system is VERY expensive.
Your average person doesn't listen to Twitter; they listen to the mainstream news.
And mainstream news will be either "company X offline for hours" if it's just you, or "Amazon cloud outage takes multiple companies offline for hours" - the latter is better.
Average person hears on the news that company X or cloud Y had an outage, at most they have some anger towards the company for a day or so until the next thing in the news cycle comes along.
In my experience, an average person seeing that one of their commonly-used tools is down due to a big tech company failure is just going to proclaim "snow day!" and not really mind at all.
>>That last sentence was something you never said to a boss back in '07-08.
Modern IT is swamped with CYA / no accountability.
Cloud is not the only area where this presents itself; everyone is more concerned about not being blamed for an issue than they are with resolving it. The more vendors you can point the blame at, the better, in their eyes...
> it's ok, everyone else is down too and we have no idea when it will be back.
AWS has never gone down across all regions at once. So we can do something about going down. You build the system to be multi-region and be able to evacuate traffic from one region to another, so when us-east-1 goes down again, you simply route around the damage and use resources in us-west-2 until us-east-1 comes back up. People don't appreciate how much backend engineering goes into keeping something like Twitter up at all times.
Put it another way. Little guys go down when us-east-1 goes down. The big boys stay up no matter what.
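As a sketch of what "route around the damage" can mean in practice - DNS-level failover records, here via Route 53 with boto3 (zone ID, domain, endpoints, and health check ID are all made up):

    # Sketch: DNS failover between two regions via Route 53 (boto3).
    # Zone ID, domain, endpoints, and health check ID are hypothetical.
    import boto3

    route53 = boto3.client("route53")

    def failover_record(role, set_id, target, health_check_id=None):
        rrset = {
            "Name": "app.example.com",
            "Type": "CNAME",
            "TTL": 60,
            "SetIdentifier": set_id,
            "Failover": role,  # "PRIMARY" or "SECONDARY"
            "ResourceRecords": [{"Value": target}],
        }
        if health_check_id:
            # Route 53 needs a health check on the primary to know
            # when to fail traffic over to the secondary.
            rrset["HealthCheckId"] = health_check_id
        return {"Action": "UPSERT", "ResourceRecordSet": rrset}

    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000000",
        ChangeBatch={"Changes": [
            failover_record("PRIMARY", "use1",
                            "lb.us-east-1.example.com", "hc-1234"),
            failover_record("SECONDARY", "usw2",
                            "lb.us-west-2.example.com"),
        ]},
    )

DNS failover is the crude version; the big shops layer weighted routing and application-level traffic evacuation on top, but the principle is the same.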
> After price hikes, they wanted to go back to running their own services. But they can't, because nobody knows how to set up and run that any longer
Who could have possibly predicted that the trap-shaped box was a trap?
The Trough Of Disillusionment is fast approaching. I'm surprised it's taken as long as it has. But then it always seems like the biggest, most expensive mistakes we make as an industry defend themselves, since nobody can bear to admit to themselves the magnitude of resources they set on fire in the process.
Very heartening to know other people are thinking about this. For about 4 or 5 years I've been warning about an equivalent problem in the HPC space, whereby even if on-prem is cheaper, companies will be forced to adopt cloud due to the lack of skilled bare-metal sysadmins.
I should note that these warnings aren't accompanied by much sympathy; companies and academic institutions both do a profoundly terrible job at incubating the skills they need, then bleat at their local governments about new graduates not having the skills to be employable.
It will get worse when the manufacturers of the best-performing hardware are the cloud providers themselves :) I am afraid Amazon will slowly drown out independent CPU manufacturers (and eventually other gear too, but with CPUs the playbook is most evident).
Today, Intel (and probably also AMD et al.) offer them exclusive deals on and even exclusive models of their hardware, due to the insane volumes "hyper-scalers" procure. In parallel, Amazon offers their own CPU arch (Graviton) cheaper than x86 cores to users of their platform in order to make it more attractive - at the insane margin AWS operates at, it can make these instances a loss leader, no problem. As a result, those in the x86 market that do not have a cloud business attached will grow weaker and weaker in comparison, and most cloud-native developer mindshare will shift to the cheaper option (since most engineering work in cloud envs is about reducing cost, period), which is AWS/GCP/... custom CPUs. And you, the random shmuck with an appetite for powerful on-prem hardware, will be left with worse products to buy for a higher price.
I'm not sure this is actually true. Intel and AMD are not thrilled that Amazon and Google have so much market power as their buyers. They are willing to give you a lot of other value-added products and a TON of engineering help along with their CPUs if you are buying a cluster. They will also give you a decent price (not as good as they have to give to hyperscalers, but good), and try to establish a long-term relationship with you.
Also, cloud environments are rarely cost reductions. They are very frequently more expensive than on-prem deployments, but they are used because they are easy to spin up and down and they allow you to engineer things faster.
The 'cool' thing - from a vendor's perspective at least - is that running your applications locally gives you lower confidence when production is a different chip architecture than developer machines, so you have to spin up more pre-prod machines to be really sure your code actually behaves. Or use some other strategy that ends in the same place: more servers to concurrently run more revisions of your code.
> companies will be forced to adopt cloud due to the lack of skilled bare-metal sysadmins.
More likely that they're just not paying enough, and those skilled sysadmins would rather go to any other company that will pay them appropriately for their skills.
This is just another take on the "nobody wants to work anymore" rant that happens when companies can't understand why nobody is accepting their open positions.
Companies are not paying enough, but we also do not have as many opportunities (for sysadmins) to learn things.
Paid internships, junior admin roles, etc. are fewer and farther between; companies are looking for "experienced" people at a time when a lot of the experience is retiring.
Don't get me wrong, there are plenty of us Gen Xers out there with lots of experience, but most of us are also massively overworked with limited opportunities to train the next generation. So pay is only one part of this problem; headcount is the other.
Even if you find a company that has good pay, they almost always undersize their IT dept by 25-50%, and rarely promote internal training or continuing education, or offer entry-level admin positions with the goal of skilling people up.
I think if you want to go back into history and look at things, we had sysadmins who were not paid enough, but who fell into the "give a man a little power" trap and showed a side of themselves that we would rather not see, fed and reinforced by getting blamed for anything bad that happened 'on their watch'. The cumulative generational trauma on both sides of that relationship sets us up for a situation where outsourcing IT to someone who can't sass back or harass us about our deployment plans seems like a good deal. Yeah, it costs about the same and it'll interrupt our roadmap, but I get to 'stick it' to the IT department. Fuck those guys.
Now it's been a while and people are starting to feel nostalgia for the old days.
That is the perception (an incorrect one) that a lot of devs have, even though they have never done, and refuse to do, any sysadmin work.
It never was, has never been, and could never be about "power". I have always been a generalist, with one foot in each of the dev and admin roles.
I am a very good dev in 3 programming languages (not including scripting languages like PowerShell and bash; PHP, C#, and JS are my strengths), and passable in about 8 or 9 others (Go, Python, learning Rust now, etc.). I hold multiple certs and years of experience in sysadmin roles.
There are very few of us out there, people with extensive experience in both. I can tell you, once you have seen both sides of that wall you come away with the clear view that neither side understands the other.
DevOps was proposed as a solution to that, and if DevOps had been implemented as a management style, not a role, it might have accomplished that goal. Sadly, most orgs just eliminated the valuable perspective of sysadmins and, as you said, went "outsourcing IT to someone who can't sass back or harass us about our deployment plans" - and in my experience many devs need to have their plans pushed back on, because they are often wildly inefficient, insecure, ill thought out, and expensive.
I have to wonder if many of the cybersecurity incidents we have seen over the last 10 years could have been avoided if devs lost the chip on their shoulder and listened to the sysadmins more, instead of just rejecting their point of view as a "god complex" like you seem to.
It’s not incorrect. Inaccurate maybe, but not incorrect.
You know why all internet traffic goes over port 80 and port 443? Entire generations of admins who learned that their job was to say “no” instead of “how”. By the time the fingers got peeled away from that issue the damage had already cemented. Similarly ORMs also got a healthy nudge from DBAs who decided it was their job to protect the business from ever doing anything new. Today the priesthood of the DB is all but gone. Devs are expected to do those roles because otherwise nothing gets done.
And similarly, the Cloud doesn’t say “no”, it says “how much”.
Are bad things happening now? Bad things always happen. I read Applied Cryptography while I was still in college, and have worked at a lot of startups and in other small projects, so I am biased toward thinking devs should not be making other teams babysit them. Advice and control are very different things.
Edit: I’ll note this isn’t my opinion, but my opinion plus decades of listening to devs vent about people out of earshot. Sometimes to the point of sedition. There are some big frustrations out there and creative people steer solutions away from frustrations. It’s not an accident we are where we are. People voted with their feet.
>>You know why all internet traffic goes over port 80 and port 443? Entire generations of admins
Now we have conflated network admins and sysadmins. Different things.
The reality is that it was largely ISPs, not sysadmins, who caused that, by throttling, shaping, and in some cases outright blocking traffic.
Throw in some laziness by devs as well; it is wildly inaccurate to lay that at the feet of sysadmins.
> Similarly ORMs also got a healthy nudge from DBAs who decided it was their job to protect the business from ever doing anything new.
Lots of ORMs were written by people who were DBAs, because they were tired of devs who did not know how to construct a proper SQL query and thought that SELECT * was always the correct way to return data.
This idea that devs had to ride in to save the industry from evil DBAs attempting to block everything is complete revisionist history.
>>my opinion plus decades of listening to devs vent
I am not sure how a circle jerk is proof of anything? Devs in many cases may have gotten rid of sysadmins, but now a new beast has emerged that is making life hell for sysadmins and devs alike: the "Cyber Security Analyst". And if the devs thought sysadmins saying no was a problem, they have not seen anything yet. Unlike the sysadmin, cybersecurity will be backed by compliance, insurance companies, and standards organizations. They will have actual teeth and actual power, something the sysadmin never had.
You're right about attitude. Internally, the Google SRE attitude is to never say no. Instead, what's drilled into you is "Yes, but have you considered...". Still, that only works when the SRE org has the support of management, or when there even is an SRE org to be supported. Anyone smaller and organized differently doesn't have that luxury.
Every architecture I've been responsible for has been better because of having conversations with operational people in which they tell me how divorced one of my ideas is from operational reality. Nobody would build a cluster that way because X.
To quote TFA: "Some things are simpler, others more complex, but on the whole, I've yet to hear of organizations at our scale being able to materially shrink their operations team, just because they moved to the cloud."
Something I've observed for some time as well. If Ops was truly outsourced to the cloud service providers then why is devops such a buzzword/trend, hmm?
The complaint in the comment above was that on-prem was cheaper but they couldn't hire people with the necessary skills. Obviously it is not actually cheaper once you factor in hiring the necessary skills.
I suppose that is just another way of expressing my point; comparing hardware on a like-for-like basis, on-prem is cheaper. Once you factor in the hiring cost and adjust for the availability of those skills (since that is never truly reflected in the salary, in my experience) - then cloud becomes "cheaper" not in money but in opportunity-cost and reduced staffing-related business risk.
Generally, with cloud, you still need people with required skills. Engineers who write Terraform ("DevOps") aren't cheap. They certainly cost more than the old-school sysadmin who will set up a bare metal server with shell scripts.
Curiously, at the small chair where I work, the professor goes against that flow and we run our small cluster entirely on our own. We also don't have much money (small chair), so dedicating probably half a year of state-funded PhD time paid off (things are still quite academic around here), as we now have an environment newcomers can start working with in week 1. On top of that, a colleague and I have a niche but interesting skillset: PXE-booting a Slurm cluster, running RoCE and NFS. Our software distribution (as in "compiled sensibly"...) is arguably better than on some parts of the nearby HPC center (continent-wide academic top 10, IIRC). A first-year PhD also has some interest in this and will probably carry it on.
I don't have a lot of insight into academic funding, and I know the meme that PhDs don't get paid a lot, but is it really better to use presumably very highly trained PhDs to do the grunt work of systems administration rather than actual research? Honest question here; it just seems logical to hire dedicated technicians for the non-research work.
There is no money for dedicated techs nowadays. Also, hiring a competent sysadmin to do on-prem HPC Linux might prove challenging. And while it is grunt work sometimes (but so is academic lab research, 90% of which could be rationalized away to a lab tech...), it's not too stupid if the people programming the system know how the system works. Aside from racking servers, that's essentially what SRE does, no?
> hiring a competent sysadmin to do on-prem HPC-Linux might prove challenging
Anecdata: I have 2 customers who have had open roles of this type for more than a year each. I'm pleased to report that they are edging their salary offers up into "reasonable" territory - but the fact remains that someone with that skillset would ordinarily be able to get paid twice as much at any of the 3 major clouds.
Yes, pay is obviously an issue (likely also for the HPC center), though I would take a job there if they agreed to let me hire and apprentice people properly (a lot of the grunt-level education today focuses on clicking through menus and nothing more) and then left me alone to do whatever the current people did. Probably a fast road to burnout, though.
Also, eventually no one will be doing this (deskilling is real), and I really don't want to work for any of the evil empires...
> She also noted that when you search on these terms the main engines (all run by cloud service providers) return poor results not congruent with the scale of the phenomenon. They are dominated by the opposite message obviously heavily SEO'd up to the top, plus shill pieces pushing cloud services but presented as "critical". Dig deep if you want to find the real scale of the "anti-cloud" issues.
Are you really suggesting that search engine providers are manipulating their own search results to bury some sort of hidden "truth" about a people leaving the cloud? Suggesting that people need to "dig deep" to find the real scale of the issue sounds a lot like the "do your own research" line that gets attached to dubious conspiracy theories.
The simple explanation is that your student simply anchored her personal beliefs to an idea that this was a large movement, but her light research didn't match her pre-conceived beliefs. This is how conspiracy theories are born.
Regardless, this claim doesn't hold up to even basic scrutiny. I just Googled for "cloud repatriation" and all of the results are think pieces about why moving out of the cloud is a good thing.
> A good example was the broadcast industry. Entire branches of that industry doing post-production, colour, transcoding, and whatnot moved it all out to AWS. After price hikes, they wanted to go back to running their own services. But they can't, because nobody knows how to set up and run that any longer - especially specialist things like buffers and transcoders. I mean, try finding a sys-admin who can just do simple tasks like set up and properly configure a mail server these days.
If you can't find sysadmins with basic competencies or engineers who can build systems as simple as buffering and transcoding, I can almost guarantee that the real issue is that you're not paying enough for the position.
Either that, or the applicants can recognize from a mile away that your company is not the kind of place that skilled engineers actually want to work for.
Don't mistake your lack of candidates for a lack of skill in the market.
It's less likely that search engines would manipulate this and much much much more likely that the SEO'd commercial content would be dominated by "move to the cloud!" type stuff because that's what's easy to sell these days.
"Don't move to the cloud!" is hard to make money by selling to places not in the cloud.
"Move out of the cloud" is harder to sell to places in the cloud than "move to the cloud" is to sell to places not in the cloud, because (a) bigger orgs with big budgets to make money off of tend to be older and not in the cloud fully in the first place and (b) moving out of the cloud is less turn-key - every client is going to be different and likely lower-margin than pushing clients into the same set of existing packaged cloud services.
That said, I'm not aware of any companies from people in my network that are actively moving out of the cloud... so I also don't buy that it's a big trend.
> Are you really suggesting that search engine providers are manipulating their own search results to bury some sort of hidden "truth" about a people leaving the cloud? Suggesting that people need to "dig deep" to find the real scale of the issue sounds a lot like the "do your own research" line that gets attached to dubious conspiracy theories.
I don't have any knowledge of the particular topic, but you're suggesting that firms maximizing profit is a lunatic conspiracy theory. May not be what you intended, but your toxic comment was a bit nutty.
I think every place I've worked has acknowledged at least one happy accident and moved on. The thing with antitrust laws and anti collusion laws is that when you get big enough, you're not allowed to just notice such things and ignore them. Not unlike how larger animals and people have to be more careful where they step.
I find the "you're not paying enough" argument a little strange. By definition, not every company can pay top of market and get the most skilled engineers.
If the average engineer doesn't have a certain skill, I think it's fair to say that it's hard to hire for it, because it will be for the average company.
Indeed. It's already getting difficult to hire a competent DBA who can end-to-end manage MySQL on bare metal: things like replication, backups, setting up data pipelines to the data warehouse, and what not. I'd give it about 5-8 years before the DBA gets de-skilled.
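For what it's worth, the bread-and-butter part of that skill set isn't arcane. A rough sketch of one of the tasks mentioned, a consistent nightly dump (paths are hypothetical; credentials assumed to live in ~/.my.cnf):

    # Sketch: consistent nightly MySQL dump. Paths are hypothetical;
    # credentials are assumed to live in ~/.my.cnf.
    import datetime
    import subprocess

    def nightly_backup(backup_dir="/var/backups/mysql"):
        stamp = datetime.date.today().isoformat()
        outfile = f"{backup_dir}/all-databases-{stamp}.sql.gz"
        # --single-transaction snapshots InnoDB tables without locking;
        # add --source-data (--master-data on older versions) if the
        # dump should seed a replica.
        dump = subprocess.Popen(
            ["mysqldump", "--all-databases", "--single-transaction"],
            stdout=subprocess.PIPE,
        )
        with open(outfile, "wb") as out:
            subprocess.run(["gzip", "-c"], stdin=dump.stdout,
                           stdout=out, check=True)
        if dump.wait() != 0:
            raise RuntimeError("mysqldump failed")

    if __name__ == "__main__":
        nightly_backup()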
Similarly, managing servers through Chef/Puppet used to be a thing just a decade ago. Now with the onset of serverless, Lambda etc., it's slowly fading away.
The total number of databases in the world has not gone down though, rather the opposite I suspect. It's not that there aren't any competent DBAs anymore, rather it's just that the cloud companies have better offers for the DBAs out there than most small/medium enterprises are willing to pay.
Nobody knew how to use the cloud when it first came out, and AWS and GCP are both fairly obtuse. IAM alone...
Hell, for that matter, nobody knew how to set up a transcode server for live streaming 30 years ago - it's not some ancient art where we lost 50+ years of experience.
I don't believe that people who can learn to operate cloud services can't learn how to rack servers and install software. Ten to fifteen years ago I was in companies where ten percent of the software team was managing the datacenter. Now ten percent is managing kubernetes and AWS and all that. I've seen the cloud be better for smaller shops since you no longer have to have that server management if you only have one or two people, but I haven't seen it "deskill" things so much as "differently skill" them in larger orgs.
> Her main conclusion was very interesting though: that the big issue is not finance, reliability, or control - but de-skilling.
> As companies move their operations out to the cloud, the real loss is not the disappearance of hardware from the premises but the loss of skill-sets. Later they don't even know who to hire or how to write the JD to bring people back in.
So in that respect it's similar to outsourcing any other aspect of your business - there are many examples of hollowed-out suppliers of manufactured products that no longer have the ability to build things themselves.
Yes, I think that's right. It's a general pattern: outsourcing
disempowers in unexpected ways. The "finding", as it were, is that
for some reason people in tech are surprised by this.
They don't readily see that moving to the cloud increases entropy. You
can't easily go back the other way, and so maybe companies should
understand cloud as a one-way process and think much more about the
value they have and will be losing.
> She also noted that when you search on these terms the main engines (all run by cloud service providers) return poor results not congruent with the scale of the phenomenon. They are dominated by the opposite message obviously heavily SEO'd up to the top, plus shill pieces pushing cloud services but presented as "critical". Dig deep if you want to find the real scale of the "anti-cloud" issues.
That's the case for a lot of different things, ranging from tech to politics to pretty much anything, and doubly so if people are selling something or otherwise have a strong interest in pushing something. It's just so much easier to extol the virtues of something rather than point out the problems with something, especially when the problems are not obvious from "common sense" at a glance.
"Searching for something" is often a very biased way of finding out about many things; it's not intentional, it's just that the "pro" and "anti" camps have vastly different levels of commitment to their cause, and you often need a lot of knowledge of context and nuances to make a fair judgement; any hack can say "X is brilliant!" but it's much harder to say "well, X is good in some cases, but [..]"
I largely agree, but setting up a mail server these days is way, way, way more complex than 10 years ago. It's not people losing skills; it's the technology itself becoming overly complex.
I disagree. It's a little more complicated (due to SPF/DKIM/DMARC), and that's annoying. However, I have set up multiple mail servers in the past 10 years, and I don't find the complexity to have increased that much. It is more complex, but not "way, way, way more complex".
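Most of the added complexity boils down to a handful of DNS TXT records. A quick sanity-check sketch using dnspython (the domain and DKIM selector are hypothetical):

    # Sketch: sanity-check a domain's SPF/DKIM/DMARC TXT records with
    # dnspython. Domain and DKIM selector are hypothetical.
    import dns.resolver

    def txt_records(name):
        try:
            return [r.to_text() for r in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    domain = "example.com"
    selector = "mail"  # the selector chosen when the DKIM key was set up

    print("SPF:  ", [r for r in txt_records(domain) if "v=spf1" in r])
    print("DKIM: ", txt_records(f"{selector}._domainkey.{domain}"))
    print("DMARC:", txt_records(f"_dmarc.{domain}"))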
> when you search on these terms the main engines (all run by cloud service providers) return poor results not congruent with the scale of the phenomenon.
I know HN comments bias heavily toward anti-cloud, but the suggestion that all of the search engines are conspiring together to bury some massive underground anti-cloud movement (that conveniently confirms anti-cloud beliefs but is unfortunately just hidden by evil search engines) is laughable.
I have never heard of 'cloud repatriation' until this very thread. So if that's what I'm looking for but I don't know what it's called, I have to use other terms, which are likely to be SEO'd to hell and back.
They do, however, need the quotes. Otherwise you drown in noise about the great cloud. Have you tried searching for performance figures without excluding Amazon, Azure, and Google from the search? You won't find any.
Still it's probably just the general noise of the fully oligopolized internet :).
I started my career as an ops guy supporting web hosting (around 10 yrs back). I learned a lot about Linux, automation, and scripting along the way and became really good at setting things up (web/application servers, PHP, ffmpeg, mail servers, etc.). I became pretty good at Python too, and also learned data structures and algorithms (I didn't have a CS background).
Moved to "devops" profiles since sys admins and regular ops guys stopped being valuable. Picked up puppet, ansible, aws/gcp. Moved to a big company, got bored out of my mind and decided to look for something else.
Questions from the first round of an interview at a small company (they call the round "basics"): advanced AWS and k8s (no Linux or anything else from my resume). I cracked the round but was expecting it to include at least some Linux questions. Next was the scripting round, which had nonsense questions involving a lot of calculation (because ops folks only know basic scripting, apparently). Cracked that too.
What I wanted to say is that there is no value in knowing Linux (for getting a job; nothing beats knowledge of the basics for debugging stuff once you land one) or complex ops stuff, since there is no market for these. Most of the people who were good at complex ops stuff moved into management roles, seeing a lack of market for everything they knew.
I am planning to leetcode some more and move to a dev profile after a year or two, since there is no upward career path for all the ops stuff. There is too much to prepare for devops interviews, since there is no standardized interview template either (advanced Linux, advanced networking, advanced k8s, cloud providers, everything in my resume [a ton of stuff that I'm slowly removing one by one], DS and algos, leetcode).
It is not that there aren't people out there with the required knowledge; we are just removing all the seemingly useless stuff from our resumes.
I am not sure when your student did the research, but it's been quite a few years (2017) since I used the original 451 Research *repatriation* article (as part of a larger set of data) when I was conducting an alternative architecture/skills/cost analysis for a global company, in a private vs public cloud alternatives material. I could not immediately find the 451 Research link, but here is an identical copy of the most important picture in the article [1], which I found online. This most definitely needs re-validation.
I see this problem a lot where a powerful company has the $$$ to do SEO and basically push your search results in a way that is advantageous to them. For example, I moved to a place with black widows. Wanted to check the mortality rate. First post I find is one of those blog posts by a pest control company claiming a certain mortality rate. Search further for official resources and that number is wildly lower.
Same thing with Jerusalem bugs. Wanted to check how careful I should be (I hike at night). The first post is from a pest control company claiming that a bite can cause a fever. Further research into trusted sources mentions no such thing.
I hate what Google has become. I miss the days where the first result in Google was valuable.
Pro-tip: if the post explains something and ends with 'conclusion:', they're pushing a product.
> the big issue is not finance, reliability, or control - but de-skilling. ... they don't even know who to hire or how to write the JD to bring people back in. ... nobody knows how to set up and run that any longer ... try finding a sys-admin who can just do simple tasks like set up and properly configure a mail server these days.
So that is where the next boom will be: a sysadmin-on-demand service. Quick, someone create a start-up which connects those with the required skills to those who need those skills, charge a 30% fee and become the next greatest thing since sliced bread.
> A good example was the broadcast industry. Entire branches of that industry doing post-production, colour, transcoding, and whatnot moved it all out to AWS. After price hikes, they wanted to go back to running their own services.
Do you know what exactly increased in price? AWS usually doesn't increase prices, so I'd be interested if that was an exception or if that's a reason just given to cover another issue they had with AWS.
I'm definitely encouraging publication in a journal (maybe dealing
with industry resilience). Also I want some students to come here and
submit things via Show-HN, but they have to want to do that
themselves.
> try finding a sys-admin who can just do simple tasks like set up and properly configure a mail server these days.
Was just thinking last night that I need to stand up a mail server to send/receive email from my self-hosted products. Then I thought of the legwork involved and said words I never thought I would say: "I should just use SES and be done with it"
I would make the case, especially in the broadcast / entertainment industry, that it was not the cloud that resulted in de-skilling but rather outsourcing / offshoring.
There are famous examples of large media companies screwing over their internal IT depts throughout the 2000s and 2010s.
I'm not sure I agree. It's not de-skilling; it's not needing to hire skilled specialists to implement/operate. If you're using the cloud, you almost never need to consult or assign work to a network engineer. IME, reliance on DBAs is also way down.
> I mean, try finding a sys-admin who can just do simple tasks like set up and properly configure a mail server these days.
Doing this correctly is mostly googling and using reference material to make sure you’ve set it up correctly, which means that anyone with decent Linux knowledge can do it.
I would not trust anyone who claimed they could set up a mail server with no external references.
Having just spent the better part of the last year building a service that deploys to multiple cloud providers, this choice makes sense (or at the very least, I can understand the root why). Beyond cost concerns, there's a staggering amount of complexity and low-quality UX/DX around this stuff that I don't see changing or improving any time soon (if ever).
Once I got into the thick of it, I actually started to have the thought "what would it look like to just run co-located boxes like the old days or do this on prem?" The answer was of course "waaaay simpler."
While I don't think we'll see an industry-wide shift back in this direction, I do see it happening on a non-trivial scale. Especially as things like economic instability and censorship wax and wane and wax in the coming decades.
In the early days of cloud providers, their services were both easy-to-use and cost-effective for many growing companies.
Now, cloud providers offer a giant hairball of specialized services, built on top of other specialized services, built on top of yet other specialized services... such that no single person can comprehend all of them, let alone manage them cost-effectively. Moreover, as all these intertwined services become deeply entrenched in IT infrastructure, migrating away from them becomes incredibly disruptive.
The financial incentives of cloud providers are not aligned with those of customers. The more a customer is sucked and trapped inside those giant hairballs, the better for the cloud providers.
To paraphrase the Dark Knight, cloud providers may have been the heroes at first, but they have lived long enough to become the villains.
A good definition of The Cloud is just "servers in a datacenter" so the discussion is best framed as Private Cloud (on-prem or co-located servers) vs Public Cloud (AWS, etc) or some combination.
People with expertise in running private clouds and using public cloud services can judge each case pretty easily, by taking into account the organization's technical needs, plans, budget, abilities, etc. And for non-experts, the question is even simpler: use the public clouds if or until it becomes a technical or financial blocker. And in that case, hire an expert.
There's really not a lot of room for controversy. It seems most of the arguments come from people without the relevant expertise, who only know how to develop in public clouds, responding defensively. Like people with little programming experience flaming each other over which language is "best".
There's no single "best" cloud option, since they're just different tools for different things. Sometimes either way would work, in which case it comes down to personal preference, and sometimes there's a clearly better option for a specific organization's needs. Framed correctly, it's a "boring" question just like most technical questions should be.
DHH’s preferred architecture is a rails monolith with a single database behind it that you keep scaling vertically. I believe both Hey and Basecamp are still using that architecture.
I think Simon Wardley summarised it well today. (0)
They (think they) have a niche use-case which is a bad fit for the cloud.
These niche use-cases exist, but they are niche, and most companies who think their use-case is so niche that it doesn't fit the cloud are probably wrong.
Chances are you belong to the vast number of companies that should simply put their workloads in the cloud and be done with it.
> simply put their workloads in the cloud and be done with it
I've worked all sorts of cloud environments for about ten years now, and with metal servers before that, and I'm not all that convinced that the cloud is saving all that much in terms of engineering effort. Or if it does, I haven't experienced it.
Sure, no one gets woken up at 4AM because your server's disks are failing, but instead you get woken up at 4AM because all your cloud-native infrastructure is doing $funky_stuff. And yeah, no need to install and maintain servers any more, but now you need to write a whole bunch of code to interface with all the cloud stuff and maintain that. Lots of open positions for k8s engineers and "dev-ops" engineers to make all of that work.
I run my stuff on Linode VPSes, which is "the cloud" as in "renting servers", but not "the cloud" as in all the "cloud native" stuff we have now. My experience is that sometimes the cloud makes things easier, and sometimes it makes things harder. On balance, it seems we just swapped one set of problems for another, and we haven't really gained all that much in the end.
I don't think Hey is running a "niche edge case"; it seems fairly standard stuff to me. That Tweet comes off as quite dismissive.
The tweet thread came off in the completely wrong way to me, and suggested that the person writing it was pretty ignorant about how software engineering actually works. Are you suggesting that everyone should be using [insert dev fad here] for all but a few special cases?
Have you considered that sometimes (really, almost all the time) people run code that is more than 2 years old? Have you considered the costs of things like serverless when you have a reasonably constant load?
I am a huge fan of cloud as a platform for time-shared server rentals. Cloud for "value added" products like serverless and whatever new database they cook up starts to get really funky and weird, in addition to being extremely expensive.
Wardley seems to have a narrow perspective on things, then. Lots of large companies build their own roads and power plants. Hell, I put solar panels on my roof.
In fact, there are very few companies over a certain size that haven't built a road. This kind of overhead is generally expected once you have scale, because the alternatives are far more expensive. Datacenters are similar.
Just to extend that analogy: real estate and construction companies tend to build a lot more roads than other companies. Even small construction companies have built a decently long road on some project, and the smallest have built a long driveway. Specialist road building companies that build a ton of roads exist, but all construction companies can build a good road.
By the same analogy, tech companies should not run from building server racks and/or datacenters. It may not be your company's core competency, but it should be one of your competencies to understand how to run a computer. It really doesn't take that much work to run 10 computers.
People keep using the power plant analogy as if somehow there isn’t a massive shift towards solar and batteries, which are literally building a power plant on your roof.
No one is really "building" anything when they run their server though; it's all built by hardware vendors, you get the software from software vendors, etc. You're just operating it.
I don't really like using analogies for this sort of thing as I think it muddles things more than it clarifies, and quickly leads to discussions about the analogy rather than the topic at hand, but if we must use one I'd say it's more analogous to a business deciding to purchase a company van or lorry. For a lot of businesses, this clearly isn't needed and renting one when you do is fine, but at some point it just becomes easier and more cost-effective to have your own (depending on what you do with it, nature of your business, etc.) in spite of the extra hassle of having to maintain, clean, etc. your vehicles.
I felt the tweet thread was dismissive as well. Hey is running pretty standard Rails things IIRC and I wouldn't consider them niche. They like to make money and they know they can make more of it if they run on baremetal. I think more companies are going to move back to a more hybrid model of cloud + baremetal in the next 10 years. Especially the ones that need more power, flexibility, and cost savings from their infra.
You hit the nail on the head. The problems have just moved from one camp to another. We make our lives easier because we can talk to an API to get our network, compute, and storage, but the complexity in doing so has a pretty high price as well. The current devops trends are mind-bogglingly complex compared to most of the software being deployed with them.
> These niche use-cases exist, but they are niche, and most companies who think their use-case is so niche that it doesn't fit the cloud are probably wrong.
Conversely: I find that the inverse is true very often.
People are very quick to throw their current systems away in favour of the cloud due to a myriad of reasons; be it sticker shock, promise of less headcount, the idea that you can hire people easier or simply because of a Gartner report that your CTO happened to read on a flight.
This causes migrations to the cloud that kinda don’t make sense, they’re borne out of ideology or “not being left behind” and sometimes it fails catastrophically.
We would do well not to cargo cult, but reasonably deduce and argue what our objectives truly are.
A couple years into my career I came to the realization that working in the tech side of a non-tech business is not necessarily about providing what the business needs. It’s about convincing the business to pay for what you want for your own reasons (resume, prestige, continued employment )
I have been on many teams that could’ve easily been in maintenance mode and not changed anything for years. The business would’ve hummed along stably and profitably. But it’s hard to justify your job in IT if you say your roadmap is to just keep doing what you’re doing. Nevermind the people in between business and IT who really contribute to neither.
So we get these pendulum swings and huge projects (move everything to cloud, move everything back, make engineering do ops, make a dedicated ops team, micro segment everything, flat network everything) just so IT management can convince the business they’re providing some value.
Gartner is the “independent” third party that allows IT folks and tech vendors to convince the business they are doing the right thing.
And yeah I’m sure some of these pendulum swings have come at great cost to the business, even halting it entirely (SAP)
Some of Gartner's analysis can be spot on, but god, some of it is so bad I can't help but cringe. Their recent push for low code/no code everywhere, calling this the "final decade of applications", is in the latter category.
There's going to be a mountain of companies going out of business in 5 years due to following Gartner's low/no code advice.
Don't forget that the already beleaguered world of government IT also listens to Gartner. But government doesn't really go out of business when that bad advice leads to bad initiatives, it just becomes even more dysfunctional.
That entire thread is just a dumpster fire of bad analysis. He dismisses everything that doesn’t fit “serverless” as a niche and doesn’t mention cost of maintenance, transition, or lock-in in serverless.
It’s just a super lazy take from someone who likes to use serverless because he doesn’t like to maintain things. It’s essentially an argument to live in a hotel because they can do maintenance at scale better than you.
> don’t forget he built some of the early PaaS / cloud providers
No shit, taking advice from a car salesman on how often to buy cars and what cars to buy is about the dumbest thing you can do.
His entire livelihood and career was built on selling this story that you need to pay others to run all of your infrastructure for you. He’s shilling IT illiteracy rather than pushing for more sustainable on-prem solutions.
Have you ever used something like Wardley Mapping to do the analysis?
Most on-prem solutions I've seen aren't sustainable: massive capex costs, and teams of people that could be better used in other ways just to keep trivial applications running.
IDK the guy, but the whole tone of the thread sounds like he is biased / has some interest.
If you go back 4 years in time, everything in 2022 was going to be serverless. Reality check: it didn't happen. The only "fad" that actually materialized for almost everybody is Kubernetes (which actually helps with going back to bare metal).
Just put everything in the cloud, use locked-in vendor solutions and pay overpriced egress fees, as is the typical VC way to just burn money, because "developers cost more" (maybe in California).
I was a big cloud enjoyer in like 2016-20, but then some Hacker News post opened my eyes to the truth. They are trying to milk everyone: hook every possible person on some free credits, then bump prices and get fat profits, and of course make getting off the platform as hard as possible.
I just don't care and moved to smaller providers for VPS and database, plus Cloudflare for DNS and pseudo-S3 (still almost within the free tier).
I don't think it's as niche as people want to believe. The monetization model of cloud compute clearly has compute acting as a loss leader. The real revenue is picked up on the borderline-egregious egress charges, which are, coincidentally I'm sure, the hardest thing to estimate. Every other provider needs to follow this same monetization, because otherwise their choices are a top line that is an eye-popping figure, or making no money.
> think their use-case is so niche that it doesn't fit the cloud are probably wrong.
My sentiment leans toward host your own for a number of reasons, but it's challenging to make good decisions. Can you trust the person who is telling you that you can do it? Their org chart goes away if they lose this argument, so of course you're going to hear from them how we can do this.
If the team you have has been doing a decent job, maybe you should listen to them. If they've been doing a poor job, then you have to ask why they're suddenly going to be able to provide a level of service you haven't been enjoying for years. My company went through one of these fishtails. I think everybody is happier that we ended up in the cloud, even those of us who will only begrudgingly admit it. As I said in another thread, I think the Hype Cycle for The Cloud will at least give some orgs with a dysfunctional IT department an opportunity to clean house and then start over in a couple of years as informed consumers. Or find a new kind of service provider that distances itself from The Cloud salespitch.
Static on a daily rolling average, that is. You'll be hemorrhaging money if you think moving to the cloud will save you money just because you have a daily trough (see the back-of-envelope sketch below).
The only places I’ve seen it work are:
- massive seasonal demand with negligible usage during off-season (think TurboTax if it weren’t part of Intuit)
- brand new product development where you don’t even know what you need yet or you are going through exponential growth that you can’t get ahead of. (This stage should only last like 6 months though)
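A back-of-envelope illustration of the trough math, with made-up prices:

    # Back-of-envelope: does scaling down at night beat owned/reserved
    # capacity? All prices are made up for illustration.
    on_demand_per_hr = 0.40  # hypothetical on-demand instance price
    owned_per_hr     = 0.10  # hypothetical amortized cost of owned gear

    peak_hours  = 12         # hours/day at full load
    trough_frac = 0.25       # fraction of peak capacity needed off-peak

    cloud = on_demand_per_hr * (peak_hours + trough_frac * (24 - peak_hours))
    owned = owned_per_hr * 24  # owned capacity runs 24/7 at peak size

    print(f"cloud: ${cloud:.2f}/day, owned: ${owned:.2f}/day")
    # cloud: $6.00/day, owned: $2.40/day. The daily trough doesn't save
    # you unless the on-demand premium is small or the trough is extreme.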
You are measuring wrong (look at distributions, not a single number), and moving to the cloud wrong (rearchitect with elasticity and serverless in mind; don't just put it on instances).
I was hoping those lessons would be a bit more common by now.
Lift and shift proved not to work well. It's not cost-efficient for the vast majority of businesses with legacy software.
To use cloud efficiently you have to build for it. Startups are much more efficient in that regard in comparison to monolithic applications running on premise.
> These niche use-cases exist, but they are niche, and most companies who think their use-case is so niche that it doesn't fit the cloud are probably wrong.
Talk about broad, sweeping statements. Care to substantiate?
One under-discussed issue I've noticed is how the size of the compute sold to the end user does not increase at the rate the underlying hardware improves.
It’s reasonable to presume there’s now an entire generation of developers who simply don’t know what’s possible in state of the art hardware.
Due to being kind of on the poor side, using AWS, Azure or GCP for personal projects isn't entirely viable for me. I still do utilize some VPS vendors for when I need a decent uptime, but most of the time it's just a GNU/Linux box that I have SSH access to and I configure everything else myself.
Most other things, I run in my homelab, which is just a few old computers repurposed to act as servers for whatever I might need (especially for storing larger amounts of data, e.g. backups), including things like acting as CI/CD nodes, or test environments etc.
I still write "cloud native" software (for example, 12 Factor Apps) because those make the portability of everything pretty well, as well as use something like Docker and some lightweight orchestration on top of it (like Docker Swarm or Hashicorp Nomad) because it's like having the ability to make configuration for what you need (ports, storage, memory/CPU limits, scheduling, restart policies etc.) rather easily and also version it, more easily than Ansible lets me (which I still use for configuration of the server nodes themselves). Basically like multi-node systemd, once could say.
Either way, it's nice that we still have the ability to base our workloads on open source solutions and learn about all of this stuff on consumer hardware (though certain configurations might be badly supported on choice OSes, like some BSD versions with driver support).
All of this is nice and sounds rather obvious to many over here. I think the HN's hivemind has already chewed through most of the arguments they are making.
What I want to see is what they are going to replace the cloud with, how the numbers compare, and what tradeoffs they have had to make. Preferably a year after they make their transitions. An honest comparison that makes it possible for other companies in a similar situation to clear their doubts (in some cases "clearing doubts" would mean "we're staying in the cloud"; I'd like that kind of honesty).
Thus I'm staying tuned, with a grain of doubt. After all, it's possible that the transition off cloud will take time and may even fizzle out without achieving much. Oh, and I'm wondering what the cost of moving the data itself off cloud is going to be.
My naive sense of things is that, compared to 10 years ago, now that we have k8s and can more easily deploy what we want in arbitrary environments, the cloud proposition becomes more about on-demand, elastic compute and less about specific services. But also that, because of k8s, we can just have a static set of compute and solve 80% of the problem with whatever kind of service we need.
This always seems an odd argument for an industry where for most of the last 20 years early money from optimistic VCs has almost been falling from trees. Compensation for technical people has been rocketing even in businesses that haven't actually built any net-value-generating product after several years. Clearly the investors with serious money have no problem with offering huge amounts of it in the hopes of funding the next potential moonshot among the 99 other failures. Would the kind of capex necessary to buy and manage enough infrastructure for most of these businesses to find their market and start scaling rapidly even move the needle at this point?
I think the cloud can reduce the operations team even at their scale if serverless is used on GCP. Many of the serverless offerings on GCP just get it right. I have never had to think about machine sizes, clusters, etc when using BigQuery. My team and I focus on the SQL. Cloud Run gives you a web or API server without having to really think about machines other than number of cores and memory size. I am currently designing a data pipeline architecture that is 100% serverless to move away from Airflow/Composer. Why? Because my team and I can stay focused on the code and logic of moving data around instead of the operations of managing a managed product.
Would this work on AWS or Azure? Theoretically yes, but I personally think their serverless offerings are hard to use and don't make things easier.
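To illustrate the Cloud Run point above: the platform's contract is essentially "listen on the port given in $PORT". A minimal sketch in Python, using Flask only because it keeps the example short:

    import os

    from flask import Flask  # any HTTP framework works; Flask keeps it brief

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello from Cloud Run\n"

    if __name__ == "__main__":
        # Cloud Run injects the port to listen on via the PORT env var;
        # TLS, scaling, and load balancing are the platform's job.
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))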
The developer experience for AWS serverless offerings, like Lambda, is awful. You can spend hours going down a rabbit hole of API gateway configuration, IAM roles, security policies, VPC endpoints, etc. that gives you very little in terms of useful error messages when it's not quite configured right. But, yeah, once you do set it up, the per-request cost is quite cheap. Still, I think if you consider the lost developer productivity dealing with slow deploys and environment/configuration issues, the costs are much, much higher.
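To be fair, the Lambda handler itself is trivial; it's everything around it that sprawls. A minimal Python handler, assuming an API Gateway proxy integration (that integration expects the response dict shape shown here):

    import json

    def handler(event, context):
        # With an API Gateway proxy integration, 'event' carries the HTTP
        # request, and returning this dict shape produces the HTTP response.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": "hello"}),
        }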
Azure Functions is as easy to use as writing a "Hello World" console application. I have had a few services running like clockwork for a few years now, and I'm very happy with the service overall. The one thing that complicates matters is Microsoft's eagerness to deprecate libraries, frameworks, and best practices every now and then.
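For comparison, a function under Azure Functions' Python (v2) programming model is about this size; the route name is arbitrary:

    import azure.functions as func

    app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

    @app.route(route="hello")
    def hello(req: func.HttpRequest) -> func.HttpResponse:
        # The platform handles hosting, scaling, and HTTP plumbing;
        # this file plus a requirements.txt is essentially the whole app.
        return func.HttpResponse("Hello, world!")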
This is kind of tangential, but back in the early-mid 2000s I remember there being a little utility that allowed you to download individual files from inside of a zip file over HTTP. I want to say that it was made by 37signals but I might be completely mis-remembering the whole thing. Does anybody know what I’m talking about?
I would enjoy a real-world (apples to apples) comparison of the cost of cloud vs self-hosting. There are many speculations and approximations, but some hard facts would go a long way.
I wrote one of the big speculations and approximations. In real life, it is very much workload dependent, but the balance skews a lot more in favor of self-hosting (or cheap hosting companies) than IT leaders (used to) think.
I think the three best reasons to use the cloud are:
(1) If you have a very bursty workload,
(2) if you need compliance with specific government regulations that are expensive to certify, or
(3) if your app is a hobby project or otherwise fits inside the free tier.
A lot of applications fit into one of these categories, and, anecdotally, people running them are very happy with the cloud. Most CRUD apps today do not fit into those categories.
I have also seen companies have a lot of success with hybrid cloud deployments, where they run a datacenter for their base load and scale out into the cloud when they need to do more work.
It's 100% workload dependent. Cloud makes dealing with complexity cheaper in many cases by letting you outsource admin to managed services, and is great if you need elastic scaling. Bare metal, on the other hand, can be orders of magnitude cheaper for compute and especially bandwidth. Cloud bandwidth pricing is completely insane, up to 1000X more than bare metal hosting.
There is no one-size-fits-all advice. You have to model your workload. Take scale into account too: cloud often makes far more sense at a small scale than at a very large scale, since at scale labor costs amortize. There comes a point at which hiring admins is cheaper than paying cloud costs.
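A back-of-the-envelope version of that modeling is easy to sketch. Every figure below is a placeholder assumption, not a quote; the only point is that bandwidth tends to dominate at large scale while labor dominates at small scale:

    # Toy cost model: all numbers are placeholder assumptions --
    # plug in your own quotes before drawing any conclusions.
    CLOUD_COMPUTE_PER_MONTH = 8_000      # managed instances, storage, etc.
    CLOUD_EGRESS_PER_GB = 0.09           # cloud bandwidth is the usual killer
    BARE_METAL_PER_MONTH = 1_500         # leased/colocated servers
    BARE_METAL_EGRESS_PER_GB = 0.001     # often flat-rate or near-free
    ADMIN_LABOR_PER_MONTH = 12_000       # extra ops labor for self-hosting

    def monthly_cost(egress_gb: float) -> tuple[float, float]:
        cloud = CLOUD_COMPUTE_PER_MONTH + egress_gb * CLOUD_EGRESS_PER_GB
        metal = (BARE_METAL_PER_MONTH
                 + egress_gb * BARE_METAL_EGRESS_PER_GB
                 + ADMIN_LABOR_PER_MONTH)
        return cloud, metal

    for gb in (1_000, 50_000, 500_000):
        cloud, metal = monthly_cost(gb)
        print(f"{gb:>9,} GB/mo  cloud ${cloud:>10,.0f}  self-hosted ${metal:>10,.0f}")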
It's heavily dependent on your scope and how much you value your time.
I could spin up a postgres docker container locally in less time than it took me to write this comment. On the other hand, that wouldn't provide read-only replicas, standby replicas, cross-region support, point-in-time backups where I can just click on a date and the database will restore, or access controls connected to my company directory. All of that would take months to engineer properly myself, and after that, continuous monitoring and disk replacement.
Maybe you don't need all this, maybe you do; it's all part of your workload-and-effort equation. It's not possible to just take the cost of a VM and multiply it by the number of hours in a year.
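The "spin it up locally" half of that comparison really is this small, e.g. via the docker Python SDK (image tag and credentials are throwaway, for illustration only):

    import docker  # pip install docker

    client = docker.from_env()

    # One disposable Postgres: no replicas, no PITR, no directory
    # integration -- which is exactly the point of the comparison above.
    pg = client.containers.run(
        "postgres:16",
        detach=True,
        environment={"POSTGRES_PASSWORD": "dev-only"},
        ports={"5432/tcp": 5432},
        name="scratch-pg",
    )
    print(pg.status)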
A million dollar per year RDS bill buys you, list price, a multi-AZ deployment with 1024 state-of-the-art CPUs and 5TiB of memory per replica, which strikes me as much more than a mere tens of thousands of email accounts would ever need.
The price I quoted is for the m6i RDS instances, which use the latest Xeon that Intel has brought to market. You are free to speculate as to whether Graviton is more SOTA than an Ice Lake Xeon, but I know which one I would choose for straight-line performance.
Weird take since Rocket Lake is identical microarchitecturally to Ice Lake. The difference is scale: there are no 2S RKL, and the biggest part has only 8 cores. It's a workstation/small business server part.
We just bought the same (minus some memory) for $200k. Dark fibre costs maybe $2,000 per 50 miles per month (numbers are hard to come by), and those 4 servers at each location don't really need a fancy server room...
That's a pretty good deal on the hardware. List price of a Lenovo ThinkSystem with 2x 32-core Xeon SP Gen 3 and 512GB RAM is $100k each; that's $1.6 million for the quoted RDS setup, and you still need a place to rack it, plus electricity and cooling.
Well, it's Supermicro for us, but indeed a good deal (2x 7713 and 512GB memory at $15k each; somehow the vendor, a big reseller, was 30% cheaper than all the others). Still, list prices are a bad joke with the other OEMs, and the rebates are real...
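Taking the thread's own numbers at face value (and deliberately ignoring power, space, and staffing, which are the real debate), the payback arithmetic looks like this:

    # Rough numbers pulled from this thread, for scale only -- power,
    # space, and staffing are left out on purpose.
    rds_bill_per_year = 1_000_000   # the million-dollar RDS bill quoted above
    diy_capex = 200_000             # the hardware as actually purchased
    oem_list_capex = 1_600_000      # the same hardware at OEM list prices

    print(f"DIY hardware ~= {diy_capex / rds_bill_per_year * 12:.1f} months of RDS spend")
    print(f"OEM list hardware ~= {oem_list_capex / rds_bill_per_year:.1f} years of RDS spend")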
Confusingly, on-premise does not always mean "in the same building". Oftentimes it also refers to dedicated servers in a shared datacenter (aka colocation), or a separate datacenter that only serves a single company if that company is large enough.
- Most businesses don't provide Internet Services to customers, so they wouldn't need fast (internet) links and servers for this
- However, their own employees are typically consumers of services that can be hosted either in the cloud or on premises (which turns the argument on its head)
- Even for businesses that do offer services to customers over the internet, the alternative to the cloud doesn't have to be servers on their own premises. They can use bare metal servers, either colocated or leased in datacenters. This is just fine when their services don't anticipate explosive growth. It doesn't even mean they have to sacrifice fancy "infrastructure-as-code" tooling, though it does mean they have to think about scaling in more terms than just cost.
Or you can just architect your application as a hybrid that can surge into the cloud as needed. Lots of ways to go about it rather than just chucking everything to the cloud and never thinking about it again.
> Can’t wait for their post in 3 or so years about moving their workloads back to the cloud.
What leads you to believe that paying a premium to someone else to use their computers is a solution that fits each and every single case?
Even AWS is clear in how they present their offering: as something that allows projects to grow and scale quickly while avoiding the upfront cost of buying and hosting their own hardware.
> You don’t pay the cloud providers (just) for their computing power.
I feel that your comment is a red herring. Using someone else's computer does not mean taking advantage of computing power alone, and nothing more. In fact, the point of paying someone else to use their system was never just computing power, especially nowadays.
AWS's shared responsibility model is quite clear in stating that end users are ultimately the ones responsible for security and compliance. So your points are moot.
"Flexibility" in this case is so vague that it's meaningless. I mean, is there anything more flexible than owning the entire infrastructure and software stack you chose all by yourself?
That's not at all what the shared responsibility model means... You are responsible for your application's security, not for the security of the underlying infrastructure. That's why it is shared...
The HN community 1) likes building and hacking, and 2) has nostalgia for old technology. This makes them feel a lot of affinity toward "doing things themselves" -- server infrastructure, running desktop linux, avoiding dependencies in their code.
The reality, though, is that the steady march of progress encourages us to outsource what we can to people who are better at the thing that is auxiliary to what we do. I don't grow food because I'm bad at growing. I don't repair my car because I'm bad at auto repair.
I do build software, and my company builds a very specific type of software to solve very specific problems. I'm happy to focus on that, because that's how we make money. Other people are much better at building infrastructure than I am, and so I let them do it for me. If there comes a time when the cloud offerings are either worse than what I can build or too expensive, then I trust someone else will come along and fill the gap in the market before it becomes worthwhile for me to do it.
> I don't grow food because I'm bad at growing. I don't repair my car because I'm bad at auto repair.
These analogies are really bad. It’s not as if this is binary. Nobody is suggesting you fab your own chips or build your own data centers.
But you do cook (presumably) most of your own food, and you drive your own car. You capture most of the benefit of economies of scale and specialization, and then you do the last 5%.
This is what people are talking about. AWS is like eating out for every meal or taking an Uber everywhere. Sure it’s convenient, and has its time and place, but it’s probably not the best default option.
> The HN community 1) likes building and hacking, and 2) has nostalgia for old technology. This makes them feel a lot of affinity toward "doing things themselves" -- server infrastructure, running desktop linux, avoiding dependencies in their code.
It's not nostalgia, it's the cynical wisdom of having survived the "move fast and break things" mentality with which so many businesses shoot themselves in the foot. The more you can do in house, the less you are affected by externalities.
> The reality, though, is that the steady march of progress encourages us to outsource what we can to people who are better at the thing that is auxiliary to what we do.
Outsourcing isn't progress, it's a business strategy that involves shifting responsibilities to a third party. Done right, it's an effective way to build on previous work to achieve an otherwise intractable business goal. Done wrong, it devolves into a shitfest as your success lies at the mercy of some entity whose interests are not necessarily aligned with yours.
> I don't grow food because I'm bad at growing. I don't repair my car because I'm bad at auto repair.
Because those things require serious investments in time and money. But there's plenty of things you can quickly pick up that make no financial sense to outsource. You can buy your own groceries and cook your own meals. You can change your own tires when you get a flat, or change your own oil when the dashboard light comes on. Not everything that can be outsourced should be.
But cloud is not outsourcing what you already know how to build. It requires you to learn new things and do things in new ways, permanently. And now you're stuck learning proprietary things that are intentionally incompatible with each other, from vendors who care about lock-in and money and such.
You don't grow food because it's cheaper to buy; it's all about economics (which takes care of the division of labour; the word literally means "household management"). Free money essentially means people who do random shit go unpunished.
This is true about HN, and a very healthy perspective on cloud. It is very much the minority opinion that using the cloud is like using a repair shop for a car. The vast majority of people think they are saving money by doing it for one reason or another, rather than effectively outsourcing dev competency.
Unfortunately, the competency you are outsourcing is rare and getting rarer, meaning that infrastructure products are getting very expensive (and profitable).
You're understating the case: you don't grow food even if you're good at growing, because your value add as a niche software dev probably brings you much greater remuneration and benefit to your customers than if you were a farmer.