Hacker News
The Best DevOps Is NoOps (medium.com/buttercloud-labs)
52 points by fhirzall on July 31, 2017 | 62 comments



Maybe I'm old school, but I really hope developers give serious thought to this before jumping into the vendor lock-in trap.

This is especially concerning, if not scary, when you start to "outsource" backend business rules to something like Firebase or other BaaS systems.

Using these for a PoC or an MVP, I'm 100% behind it. But using them in production-ready products is a disaster waiting to happen, as it basically puts your company's product under someone else's rules, and if those rules change or, worse, if those companies go bankrupt, migrating to another system could be the death of your product as well.

I'm not against BaaS. I think they're very useful for prototyping, and for micro-services that don't directly affect the main product's business rules (image processing, chat, etc.), but putting all your "golden eggs" into a vendor's system should only be done after seriously weighing the pros and cons.


100% agree with you.

No one accounts for portability. The common mantra is that if portability becomes a problem you will be rolling in VC cash so you can just afford to restart everything from scratch and shrug it off. Basically defer the complicated decisions to later on.

That never accounts for the "grey phase" which a lot of products seem to slide into permanently which is where they are just about scraping by with no investment at all. At this point, the decisions affect your bottom line badly because scaling up customers means an instant cost and geopolitics can mean sudden revenue decreases.

You need to plan for a partial success and a partial success can't handle vendor lock in.

I'd argue, after spending the last 5 years working with AWS, that the learning curve for locking yourself into a vendor yields zero return as well. One product change and you start again.

IaaS yes. Anything else, no thanks.

Edit: also some of the IaaS providers make standard services that compete with their own products difficult. Look at SES. If you run IaaS in AWS you have to jump through a lot of hoops to run an outbound mail relay. Their solution: just use SES; it's really easy! It's not!


The message of the cloud native software stack, most often built around Kubernetes, is that cloud portability enables you to avoid vendor lock-in.

A look at the stack: https://raw.githubusercontent.com/cncf/landscape/master/land...

(Disclosure: I'm the executive director of CNCF and helped make this image.)


Using cloud as a glorified hypervisor isn't really getting the benefits. To be cost effective you really do need to be using the value-added or layered features, and those are proprietary.


I disagree (obviously). There are huge advantages to using the data stores from your cloud provider, but you can pick Postgres or similar options that avoid lock-in. If you use DynamoDB or Spanner, you can work to avoid non-portable features. Many other services like SNS can be replicated with software like NATS.

That said, switching from one cloud to another will never be trivial, so it may be worth using some proprietary services up front if it gets you to market faster. The key thing is to have your eyes open about the choices you make, to ensure you have the ability to switch (or, perhaps more importantly, threaten to switch) in the future if you need to.
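The "eyes open" approach can be made concrete with a thin abstraction layer. Here is a minimal, illustrative Python sketch (the `Publisher` interface and `notify_signup` helper are hypothetical names, not from the article; real SNS or NATS adapters are only indicated in the comments):

```python
from abc import ABC, abstractmethod

class Publisher(ABC):
    """Provider-agnostic publish interface. Concrete adapters would wrap
    boto3's SNS publish call or nats-py's nc.publish, keeping all
    provider-specific code in one place."""
    @abstractmethod
    def publish(self, topic: str, message: bytes) -> None:
        ...

class InMemoryPublisher(Publisher):
    """Test double that records messages instead of sending them."""
    def __init__(self) -> None:
        self.sent: list[tuple[str, bytes]] = []

    def publish(self, topic: str, message: bytes) -> None:
        self.sent.append((topic, message))

def notify_signup(pub: Publisher, user_id: str) -> None:
    # Application code depends only on the interface, so swapping
    # SNS for NATS means writing one new adapter, not rewriting this.
    pub.publish("signups", user_id.encode())
```

The point isn't that the wrapper makes switching free, only that it confines the cost of a switch to a single adapter class instead of every call site.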


Sure, but if you restrict yourself to lowest-common-denominator features so you can migrate between Spanner and CosmosDB then you are paying for features you aren't using and are at a competitive disadvantage to those that can leverage them.


Agreed, at the very least, anything I've made in the last year can be containerized and retargeted pretty easily from there.


I learned this lesson very early on in my dev experience. My app's database was Parse. Parse no longer exists as a service. Before that it was based on StackMob, from which I moved to Parse because StackMob was shutting down. Parse gave a hell of a lot more notice, but I was done with this project at that point.

Now a feature of the app just doesn't work. I won't be rewriting it for another backend.

Guess who doesn't use Firebase. This guy.

As a side note, I currently use RethinkDB. RethinkDB shut down last year. That has had no negative effect on my current project, which could run for a very long time on the last stable version of RethinkDB. Since it was open source, it is now even under development again.


I've seen a tutorial for "make your own self-hosted Firebase with feathersjs". Have you considered just finding a self-hosted alternative to Parse and altering your application or writing a wrapper?


It is my understanding that Parse itself is now open source, so I could self-host the genuine article. The reason I don't is that I really am done with that project. In fact, I'm done with Android apps altogether. I currently am working on a much better project (website).


> it basically puts your company product under someone else's rules, and if those rules changes or worse, if these companies go bankrupt, migrating to another system could be the death of your product as well.

This is basically where standards have to come in. Imagine if electricity were not standardized - you'd have to buy into one frequency or another. Then you'd face the same problem as with using BaaS!

I say there needs to be standardization, so that commodities can be commodities. Companies like to pretend they are some special snowflake selling something unique and un-replicable. That's not at all the case, and I wish more people would call them out on it. Make sure standards exist for a particular commodity offering (e.g., APIs, or via some sort of RFC), or don't use it at all.


> Imagine if electricity were not standardized - you'd have to buy into one frequency or another. Then you'd face the same problem as with using BaaS!

Ever tried traveling around the world (or even just around Europe) with an electric device? Now... once you picked up an adapter at the hardware store, how difficult was it? "Not much of a problem" for most folks, somewhere between "a moderate pain" and "ruined my device" for the few with special issues.


A lot of power supplies are strictly 110V or 230V. They will fry when plugged in in a different country, even with the adapter.


GCP is kind of a good guy here; pretty much every piece of documentation links to an RFC or a standard definition.

Yet believing you can standardise everything is also a fairy tale. Is your own infra standardised?


Cloud services don't save you money, they buy you focus.

There are very few companies today that need custom buildouts, but that doesn't mean we shouldn't keep an eye on portability.


to play devil's advocate...

The article seems to be written in the context of small startups. This is an inherently risky (and risk-tolerant) environment.

The biggest risk is usually internal: fail to make a good product, or the product isn't popular. If your chance of total failure in the next 3 years is already 25% or 90%, then adding 2 or 10% more risk can be reasonable. This is an abnormal situation.

A related concern from a few years ago was FB & Twitter "as a platform". In some cases, the risk paid off. Take Tinder, for example: Tinder doesn't work at all without FB, and plan B is probably terrible.

That's not a risk an existing business could take. Tinder could because they were a startup. Using FB profiles avoided the empty profile problems most upstart social networks have.

Dating is a network-effects problem. So, a 50% (or whatever) increase in profile completion rates could have been the difference between success and failure.

If "NoOps" can genuinely reduce the area of competence a startup needs, and let them focus on application building (or whatever)... that can be an edge. Taking risks to gain an edge is something a risk tolerant startup can do.

Of course, small companies become big companies and these decisions leave a legacy.


> Maybe I'm old school, but I really hope developers give a serious thought before jumping into this vendor lock-in trap.

What vendor lock-in do you mean though? If you're using AWS, Google or Heroku, they all support Node, Python and PHP for instance and have several options for file storage, SQL and NoSQL. Migrating away is always going to be painful (although you can make this easier for yourself with abstractions in your code) but you can still host on cloud services in ways that don't tie you in.

I agree if you start heavily relying on features that only one company provides you might be in for some trouble but hosting + coding it all yourself carries significant risk as well.
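Those in-code abstractions can be tiny. Here is a hypothetical sketch of a storage interface in Python (`BlobStore` and `LocalStore` are made-up names for illustration; an S3-backed subclass using boto3 would be the cloud counterpart):

```python
import pathlib

class BlobStore:
    """Minimal storage interface; provider-specific code lives in
    subclasses, so the rest of the app never imports a cloud SDK."""
    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError

    def get(self, key: str) -> bytes:
        raise NotImplementedError

class LocalStore(BlobStore):
    """Filesystem-backed implementation: handy in development, and an
    escape hatch if you ever need to migrate off a hosted object store."""
    def __init__(self, root: str) -> None:
        self.root = pathlib.Path(root)

    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()
```

Swapping providers then means writing one new subclass rather than hunting down storage calls scattered across the codebase.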


I get the feeling the OP was talking more about things like Firebase than EC2 or Cloud Compute. Of course, GCP and AWS will allow you to do things that lock you in, but as you stated, you don't have to do that to use their other products.

The idea that some people are using other companies backend systems to build their own company that completely relies on those systems creeps me out.

If I hire a backend developer who creates an API that talks to my DB, then that developer walks away, at least I can continue with my business while I run around finding a new developer. I don't need a new codebase and I don't need to shutdown operations. If a hosting company goes under but I host my own code there, at least I can redeploy elsewhere and get on with life. If I use a BaaS company and they go under/discontinue my hosting/etc then I can neither move, or hire another company to continue the work. I have to start again, and for anything significant, that's probably going to kill my company first.


Salesforce has been running essentially a BaaS for nearly two decades and they show no sign of slowing down.

IMHO their platform is garbage compared to something like Firebase, but no businesses seem to be worried about portability. In fact most are doubling down on building teams to build and support Salesforce.


"In fact most are doubling down on building teams to build and support Salesforce."

I'd sooner attribute that to Stockholm syndrome, in much the same way that Oracle shops tend to double down on Oracle products - not because Oracle products are actually good, mind you (they're absolutely abysmal, in my observation, and even the ones acquired from Sun are rotting corpses at this point, Java being perhaps the sole exception), but because it's cheaper to keep buying in than to break out of the vendor lock-in.


Sounds like we agree then! I'd rather stick to generic technologies so I can migrate away if needed. If the service being offered gave a big advantage but didn't have any realistic migration paths then I'd be very cautious. All I mean is that between the major cloud providers there's a decent overlap of common services you can use such that the advantages are hard to ignore. I think we're at the stage where doing everything yourself with some generic VPSs or your own machines isn't economical or productive if you value your time in a lot of cases.


really hope developers give a serious thought before jumping into this vendor lock-in trap

... and these are the people who refuse to write SQL in case they get locked into a particular database...


I thought solutions like Cloud Foundry and adhering to the "Twelve-Factor App" methodology help eliminate the risk of vendor lock-in.


This is so misguided I don't even know where to start, and it's the same line of reasoning that has been degenerating the meaning of DevOps for a while.

1. There is no such thing as NoOps (when something's in production, whatever needs to be done to keep it running qualifies as Ops - does your serverless platform ensure your backups can be properly restored and your application doesn't start crashing left and right in the middle of the night because of bad data and/or user input?).

2. So much advice on this subject from companies that have no legacy, and from companies that _will_ have no legacy (because they'll run out of money in a couple of years). This kind of advice means nothing in the real world.


"NoOps means that application developers will never have to speak with an operations professional again."

I really hope this is a joke. Application developers have a hard tendency towards "getting the job done" without thinking of optimisation and scaling, which will lead to gigantic costs. Ops people are not only for maintenance, they are also the ones thinking about scalability including costs. If you get rid of this layer you will end up running your business at a much higher operational price tag than you should and you will lose money.


I spent some time at a startup where I believe (thanks to profiling) that I produced a first-pass patchset demonstrating bug-for-bug parity that would save them 10-20% of their substantial AWS bills due to efficiency improvements / removing some very bad early decisions.

Of course they made those savings by retrenching me instead (and hiring more cheap juniors when the time came) as they were too scared of their own code. I believe two years later my code reviews are still in the queue \o/


Hey pmlnr, author here. I think what the post is trying to say is that the ops team will still be at the company, but the communication overhead across teams will be less of a burden. Your ops team will still be thinking about cost and scalability, but your developers can focus on shipping features. We also mention that this is a good approach for early-stage startups looking for product-market fit, i.e. scalability and server costs haven't yet become an issue at this stage.


No offense, but the concept of a DevOps culture was created specifically to get rid of the silos you've just described as being a good thing.

Without the communication overhead, your Ops team won't know enough about the product and what it does to appropriately plan and implement the infrastructure. If your developers don't know what the infrastructure can look like, they'll be making guesses at what resources are available and may just end up building and shipping features that cause major problems in production. The result? Generic build outs that cost more and run worse, all because teams don't talk to each other and nobody understands the requirements.

If your company is in such an early stage that all you're doing is prototyping, then sure, it really doesn't matter so much. The second you're going into production, you'd better get a competent team who can deal with their own infrastructure (even if it's IaaS), and communicate with each other or it's going to be painful to just keep going, never mind grow.


> "Your ops team will still be thinking about cost, scalability - but your developers can focus on shipping features."

> "i.e scalability and server costs still haven't become an issue at this stage"

What you're describing is a hackathon demo, not a product. When you expect something to go hockey-stick, you simply can't afford not to plan ahead for scalability.

Many don't, and you can see them fail to cope with the stress that unexpected interest puts on their services. (Think Mastodon, or any random blog that gets the HackerNews hug of death.)

Developers _should_ be aware of this. Shipping features should never be ahead of serving what you offer live.


Every single product, including the ones that go hockey-stick, starts out as a prototype that doesn't need to be prematurely optimized for scaling. That doesn't mean you shouldn't plan ahead with best practices for performance and so on, but the vast majority of startups will not reach product-market fit with the first version of their product. It takes months or years of iteration to be able to put up a hockey-stick growth curve.

Offering up a refined, live version of the product is non-negotiable and completely achievable with the tools mentioned in the post, but adopting services that simplify your infrastructure processes gives you the focus that an early stage product needs.

I'm 100% with that you should control your infrastructure if you're an established company, brand, or product, but we're coming at this from a startup's standpoint where speed of iteration & shipping features make a huge difference in the eventual outcome of the startup.


Sorry if I misphrased it; I didn't mean premature optimisation, I meant planning for it.

EDIT: my biggest question is still why the focus is always on new features rather than stability. Think from the point of view of the customer - e.g. when you're using something - and take, let's see, Skype: they are sacrificing everything stable in order to catch up with the competition. I'd be extremely surprised if they are doing better this way than they would by focusing on being rock solid with a small feature set. The latter results in small but steady growth - not the kind VCs prefer, for certain - because people fall back to it when the other shiny thing fails and eventually just stick to it, because 'it just works'. Don't get me wrong, incremental new features are good, but the working core product - the one that is making the money - should always come first, ahead of the shiny. Obviously my logic assumes the product is making money.


Shipping features is always at odds with having a stable product. The former is the goal of the engineering team, the latter the goal of the operations team. DevOps is a culture of communication between those, not some magical hybrid of both that makes the site run.


I see two problems:

1. "Ship, ship, ship!" - Yes, a speed advantage over competitors - especially with a new, innovative product - is a key factor for startups. BUT: does anybody really believe in the transition from prototype to a scaling product? The article talks about the "Startup Lifecycle with DevOps / NoOps". Show me a business that has made this transition, planning ahead for a rewrite and a budget for new admins.

2. Cloud lock-in. This should be taken very seriously by any startup that wants to live longer than a technology cycle. If you choose to build your platform on top of cloud technologies, you give up control over functionality and storage. IMO any tech business should be able to handle at least web and application server architectures for their platform (I agree that mail is something different that should be left to mail providers).


Fully agree with this. I see people all the time recommending a VPS, a Digital Ocean droplet, an AWS EC2 box etc. for a company website/app/service because it's "easy" and "any developer worth their salt should be able to admin a server" to save maybe tens of dollars a month. Heroku can be an order of magnitude less effort for example.

I can manage a server manually but I don't want to waste my time doing that. It's never going to be as robust as a cloud service that has a team of staff doing it for you either. Anything that requires me to SSH in makes me cringe now to be honest; it's just way more low level and manual than I want to get involved in when I could be coding.


> I can manage a server manually

Ansible? Puppet? Chef?


Yes but it's still much more effort than a cloud service that doesn't require you to write, test and maintain scripts like this. The less I have to care about security updates and what state is stored on a disposable server the better.


I've done my fair share of work helping out with botched cloud migrations. The reality is that it is absolutely essential to be intimately familiar with the platform you deploy with.

As long as you are in the business of deploying software, you have to know what you are deploying and how. You can call this person ops, devops, whateverops, or just the guy with most knowledge about Linux.

You are going to end up troubleshooting stuff, and the farther away you are from the hardware, the more dependent you are on your tooling and the people who know how to use it. (Especially storage. Don't get me started on storage.)

Anyway, the point is that if you're going with Lambda then you need someone who knows Lambda. They may be easier or harder to find than people who know Linux, but don't wait until it's too late, as that's going to be expensive. And conversely, invest time in learning the platform before you use it.


sigh

Disclaimer: I'm an ops guy (technically SRE by job title).

There's so much more to Operations than just running pure infrastructure. It's not only about bare metal and application server configuration or maintaining your CI/CD pipeline.

Data life-cycle (backups), capacity planning, incident management, monitoring and KPIs are just some of the items from the top of my head.

I'm not saying that developers can't do that, it's just... if they do, they are doing Operations and you effectively have Ops in your organisation.

Ops is not only about installing and managing LAMP stacks.


My favorite example of a NoOps organization https://www.reddit.com/r/cscareerquestions/comments/6ez8ag/a... . Now also a NoDatabase organization heading towards NoCustomers. Having some ops guy in the loop is like having insurance against a class of easily avoidable but company ending/maiming mistakes.


I've been saying this for a while.

If you are spending less than $25k/mo on Heroku or equivalent, you're not ready to move off it yet.

A lot of times when I talk to people with a high Heroku bill looking to move to bare metal, I end up being able to optimize it by 1/3rd or more just by picking dyno types, scaling appropriately, and consolidating workers. You can't really do that when you're hiring headcount.


You can't really do that when you're hiring headcount.

I'm curious, why not? Can't you do it on a regular basis, like refactoring?


Because "refactoring"/"consolidating workers" in headcount means firing people, and that's pretty cruel to do on a pure cost-cutting basis in a small company :)


How can you compare the monthly price versus VM types/capabilities?

What does my bill say about the performance of my app in an e.g. Amazon environment?


he/she means you can probably get away with just adding more dynos or whatever to your PaaS to get the performance you need, because the costs of moving out will be significantly higher (opportunity costs and time spent)

eventually if you're spending thousands per month on a PaaS it might be time to reconsider


A) you can't

B) Nothing in particular

It's not about the application or any technical concerns. It's purely a business decision - above $25k a month you have room to hire people and still save money, below that you really strictly do not.


It also doesn't take into account the startups that can't use said serverless architectures, like Fintech or Healthtech. You inherently need to understand how data moves through your platform when you have tight compliance restrictions. It's why both Firebase functions and Lambda aren't HIPAA compliant; unsurprisingly, the old-school tech like RDS, EC2, EBS, and VPC are. A sweeping statement that if you're an early-stage startup you need to be NoOps is a bit silly in this context. It shows you're not considering what you're trading for that platform.

I'm more for having better tooling for orchestrators like Kubernetes. You still get low-level control, but most boring ops can be automated away pretty easily, and you don't trade away portability across cloud vendors. If your dev team is already using docker/docker-compose then it's not much of a step up to deploying a service in Kubernetes.


Looks like an advertisement piece for their services.


Yes, it's just an ad. Never have to speak to operations is naive nonsense - chuck it over the fence and pray that site failover, backup/restore, etc just happen by magic. Cloud is great, serverless is great but TANSTAAFL.


Whenever I'm reading one of these ads masquerading as blog posts I always close the browser tab the moment I come across the 'here at x company..' statement.


It seems odd that Google App Engine isn't mentioned, since that's been NoOps for many years before the others came along.


If you think being a "software engineer" means you only write code and ops/security is none of your concern, you are terrible. The best software engineers understand the importance of operations and security for the code they release to production.


And if they weren't so busy putting a steering wheel on the skateboard, they might have some time to think about how the seatbelt fits, and bumpers. Speed kills.


NoOps means that application developers will never have to speak with an operations professional again.

Unless it is the middle of the night and your site is down. Then you really want to be able to talk to an operations professional. Idiots.


While a completely stateless and auto-scalable infrastructure is desirable even to Ops people themselves, reality doesn't always allow for it: not everything can be event-driven, and persistent data will always have to be managed by somebody.

Plus, NoOps conceals the risk of devs devolving into even dumber IDE users who can't even type "sudo systemctl start mysql" at the terminal.

NoOps as a company-wide philosophy isn't tenable.


> Plus, NoOps conceals the risk of devs devolving into even dumber IDE users who can't even type "sudo systemctl start mysql" at the terminal.

You can say the same thing about ops people being "dumb command line users who can't debug assembly language."

Higher abstractions and fewer reinvented wheels are one of the major overarching goals of computing. If I never had to type "sudo systemctl" ever again for anything but a fun antique restoration project then I would be very happy.

That said, I completely agree that for any company other than maybe some early-stage startups to go completely NoOps today, in 2017, is not a good idea.


"A few examples of NoOps platforms that fit the above criteria are Heroku, Amazon Lambda, and Google Firebase."

I don't know about Lambda and Firebase, but Heroku is not "NoOps" in my experience. You still have to deal with dyno configurations and linking things together, and even have to deal with security updates every once in a while (Heroku's "stacks" are not supported forever).

Meanwhile, this sort of vendor lock-in is a great way to murder a startup before it even learns how to walk. These services are not cheap; hiring a "devops engineer" or a proper sysadmin will almost always pay off in the long run, since they'll be much cheaper (and much better at their intended purpose) than the likes of Heroku when you actually do need to scale beyond the prototyping stage.


Half of my work is to develop and maintain a devops console for the application. I could indeed have spent that time building new features, but we are not necessarily interested in what else we could add, but rather in what else we could leave out. In limitation, the master first reveals himself ("In der Beschränkung zeigt sich erst der Meister").


Seriously, people, developers are NOT paid for piling up features. I don't even know where to start, but this article, and I guess the guy representing the company, totally failed to understand a few concepts here and there.


DevOps: "Captain, we need to divert power from the warp core to the forward shields."

"Make it so!"

NoOps: "Make us go."


We are moving to NoOps in the classical sense, but are not there yet. Serverless is a good step, though.

With the ever more complex delivery pipelines it does make sense for someone to build delivery infrastructure. And devs are a better investment when they write app code than delivery infrastructure.


"i hate to break it to you, but you still have to monitor shit in heroku, you knuckleheads."


> NoOps means that application developers will never have to speak with an operations professional again.

So basically the exact opposite of DevOps?



