So You Wanna Go On-Prem (lusis.org)
256 points by friendzis on May 23, 2016 | 102 comments



I honestly thought this was satire for the first half of the article. When did working on SaaS products exempt people from understanding how to deliver software? Should we just remove the first [S] from SaaS?

I see this attitude a ton in conversations with startups. A founder describes their whiz-bang thing, a question comes up about how this works in larger environments, usually followed by a mumbled reply about virtual appliances. Virtual appliances (and to some extent, Docker containers) are not the solution, for so many reasons I might run out of space in the comment field listing them. The short version: OS updates, security updates, networking issues, customer-side diagnostics, size, and support for the customer's specific virtualization platform. Docker is great if your customers all use Docker and you have the update process sorted out, but that is probably a small fraction of your total market.

In other words, build actual installable software that runs on some set of supported operating systems. Make a DEB, an RPM, maybe an MSI. Build an installer. Have a nifty splash screen. Add desktop links. Don't lose revenue because you can't be bothered to figure out omnibus, nullsoft, or bitrock.

If you are building software, keep in mind that customer environments are insane and should be treated as hostile. Every bit of your software and packaging needs to be paranoid, defensive, and respond well to failures. When something goes wrong, customers are not your QA team (you have one, right?). Don't make them run a thousand commands for you. Build actual diagnostic features into the product. Some organizations (hint: they have lots of money) don't let your icky code talk to the internet. Offline activation, offline updates, and offline diagnostics are super important if you want to count these folks as customers.
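To make that concrete, here's a rough sketch of what a built-in "collect diagnostics" command can look like. The paths and bundle layout are made up for illustration, not taken from any particular product:

    # Minimal sketch of a "collect diagnostics" command that works offline:
    # bundle logs, config, and basic host info into one tarball the customer
    # can hand to support. LOG_DIR / CONF_DIR are hypothetical install paths.
    import json
    import platform
    import tarfile
    import time
    from pathlib import Path

    LOG_DIR = Path("/var/log/myapp")
    CONF_DIR = Path("/etc/myapp")

    def collect_diagnostics(out_dir: Path = Path("/tmp")) -> Path:
        bundle = out_dir / f"myapp-diag-{int(time.time())}.tar.gz"
        host_info = {
            "hostname": platform.node(),
            "os": platform.platform(),
            "python": platform.python_version(),
        }
        info_file = out_dir / "host-info.json"
        info_file.write_text(json.dumps(host_info, indent=2))
        with tarfile.open(bundle, "w:gz") as tar:
            for src in (LOG_DIR, CONF_DIR):
                if src.exists():
                    tar.add(src, arcname=src.name)
            tar.add(info_file, arcname="host-info.json")
        return bundle

    if __name__ == "__main__":
        print(f"Diagnostics written to {collect_diagnostics()}")

The point is that support never has to walk anyone through a thousand commands over the phone: one command produces one file, and it works on an air-gapped box.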


The problem with maintaining the install is that it really slows down your deployment of features and evolution of the ecosystem. It's not just a matter of extra know-how. SaaS is more than just ease of use, it's a very, very rapid and competitive delivery of new features day in, day out. Companies are adopting SaaS not just because they don't want to maintain the infrastructure / install software. They're adopting SaaS because they want the latest and greatest so they can compete better against their colleagues that don't have the latest and greatest.


Vendors have responded to this challenge for a while with Long-Term Support builds to complement the standard product cycle. Get a major set of features that your on-prem customers want, freeze the features, and keep 12, 18, 24 months between major feature updates.

Most companies of any complexity are going to be unable to integrate into their business processes all of your latest and greatest features as fast as the fictional SaaS dev team can release them (then revise, then revise again).

For most of these businesses (read: most businesses), your software will be less of a time suck if it has a moderate release cycle with a predictable, well-advertised lead time to new changes, allowing IT departments to evangelize and pre-educate on the new capabilities.


Yes, that model works, but it is going to use up resources that simply will not be an issue if you're running entirely as a service.


You just have to charge enough to make it worth it.


I guess so, but the whole point here is that there are a lot of hidden costs you might not have considered if you've operated 100% SaaS.


My wild-ass-guess starting point is at least $2.5m/y with a 5 year commitment to make it anywhere near worth it.


> so they can compete better against their colleagues that don't have the latest and greatest

But there's nothing exclusive about SaaS, and often our rivals' logos are already on the SaaS vendor's pitch deck.

Buy for equivalence, build for advantage. SaaS provides out-of-the-box solutions for basic functionality like HR or payroll and allows a company to deploy its programmers where they can build an advantage over their competitors in the domain of business logic.

SaaS is the new MS Word; just buy whatever everyone uses and get on with using that to do the things that actually make money.


As a consumer of a number of on-prem installs, I'd just like to add something positive in what will probably be a lot of negativity regarding selling your product on-prem.

The on-prem situation is evolving just as the SaaS/cloud world is evolving as well. Selling an on-prem product can be very lucrative if done correctly, and that can mean intentionally not selling to customers who don't have their act together. As more sophisticated customers install "private clouds" with OpenStack and the like, you can potentially have a SaaS on-prem use the OpenStack API natively and have a much better sales/maintenance story, rather than boxing up a static VM image and trying to get it, or the customer, to scale horizontally with lots of configuration tweaking.

I do agree that monitoring/logging can be problematic, but sometimes there are fairly standard OSS offerings that can be plugged in if the customer already has a managed monitoring platform keeping tabs on lots of other things. The more adaptable (literally adaptable -- adapters to various solutions) your software is due to good design, the easier it will be for you -- or even the customer -- to adapt it into their stack.


We went the other way, and it was far easier. Actually, our product is an Atlassian add-on, and Atlassian itself went from OnPremise to Cloud.

It was, however, a long migration for both Atlassian and the add-on vendors. It was necessary because the market wants to try products without installing them. Now Atlassian is able to make cloud-level changes to their software: they can use CD, live analytics, mutualized authentication for all tenants, etc.

However, their API for cloud addons is, and will be forever, completely different from the API for onpremise addons. It's a major difficulty for the ecosystem. Many vendors extract a .jar that they reuse in both onpremise and cloud addons, but the cloud runtime that connects this .jar to the cloud API is at least as big as the jar itself (because we don't benefit from the host's services such as user management, storage or logging).

On-premise, which we started with, was nonetheless easier for us than Cloud. The customers, even enterprise ones (we have about 28 of them), are in charge of Ops, which cuts off half the work for us. And on-premise is 3x more revenue for us per year, with the downside that it's a one-off, non-recurring sale by default.


One thing I would call out: don't do CI to these people. Eat the pain and cut honest-to-goodness versions. Ideally, cut them on the quarter, so that your customers can comfortably expect a New Version to be available at a specific time.

I used to be on the other side of this desk, so.... trust me. It makes it easier when we can all refer to a specific version. You can throw us specific patches or whatever. From the developer standpoint, you're not delivering a continuously delivered system, you're delivering a "server app" of a specific version.


I think you mean "don't expect to do CD to these people".

CI is "developers check in code and everything is integrated daily".

CD adds "and then we push it out live if tests pass".

I've worked on software products that attempted CD, and it's not gotten a lot of great acceptance. Customers don't really want constant notifications about having to upgrade, except for very serious fixes. And I don't know many IT departments in big companies that are like, "sure, go ahead and auto-update".

CI, on the other hand, is wonderful. You don't have the mainline off in this broken state for weeks while you're adding features.


Whoops, yes, you're right. Misnomer. >.>


> don't do CI to these people. Eat the pain and cut honest-to-goodness versions.

You say this as if these are opposing things. If you're not doing CI, you're not delivering software properly. On-prem or SaaS is irrelevant.

Making release 'versions' is slightly different than continuous deployment (maybe that's what you meant?). There are many ways to do it, I'm a fan of having a 'release/1.2.3' branch (and having the CI system/build scripts automatically pick that version number up and embed it in the source and all output files), but some people use tags to do the same thing.

QA team can take the release builds for testing, and dev can merge in bug fixes (either specifically from a gitflow bugfix branch or by integrating in the master branch, depending on your workflow and what other work has happened since the branch was created). Once you have a final GA build, you pin/save it and give it to your customers.

Your CI system should absolutely be building the final package you give to your customers (whether it's a zip/tarball, .dpkg/.rpm or setup.exe). If someone has to do something to "get it ready" for customers after the CI build is done, you're not using CI properly.
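As a rough sketch of that wiring (the release/x.y.z branch naming, the _version.py target, and the dev-version fallback here are assumptions; adjust to your own workflow):

    # CI build step: derive the version from a release/x.y.z branch and embed
    # it in the source tree so every artifact self-reports its version.
    import re
    import subprocess
    from pathlib import Path

    def current_ref() -> str:
        # Most CI systems also export the branch name as an env var.
        return subprocess.check_output(
            ["git", "rev-parse", "--abbrev-ref", "HEAD"], text=True
        ).strip()

    def derive_version(ref: str) -> str:
        match = re.fullmatch(r"release/(\d+\.\d+\.\d+)", ref)
        if match:
            return match.group(1)  # e.g. "1.2.3" from release/1.2.3
        sha = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"], text=True
        ).strip()
        return f"0.0.0.dev0+{sha}"  # non-release builds get a dev version

    def embed_version(version: str, target: Path = Path("myapp/_version.py")) -> None:
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(f'__version__ = "{version}"\n')

    if __name__ == "__main__":
        version = derive_version(current_ref())
        embed_version(version)
        print(f"Building version {version}")

Whether you key off branches or tags, the important part is that the same CI pipeline that runs your tests is the thing that stamps and produces the artifact you ship.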


From experience, on-prem customers won't accept continuous updates anyway. They have multi-month change freezes.


Sometimes more - if your product is used in an internal app that has a 6+ month release cycle, they'll be updated in sync (if they're updated at all)


Stable versions are the correct way to do it, but it certainly has a cost. You have to support a version with bugs you fixed months ago. You will have to backport security patches and other critical fixes. If you have refactored the code since the stable release, merging fixes back to the release branch is much more work. So people will resent the person who made extra work for them by refactoring. Soon everything has the wrong name and functions are 1000 lines long because nobody dares to refactor.


> You have to support a version with bugs you fixed months ago.

Technically, you do this with any serious SaaS. You cannot just update your service without regard to existing integrations, and guess what? Those companies are already integrated with you. Having to integrate again is an additional cost, and I promise you, at that moment, they will investigate other options. After all, they are already having to spend additional time/money to update.


This is a great post. We've been talking to a lot of people about going "on-prem". This post is right on in that the deployment is the relatively easy part. The ongoing operations is where you will get killed.

Luckily, for those that are thinking about doing this, for the end customers "on-prem" many times means single-tenant, private cloud (usually AWS) which makes things slightly easier (as the op mentions).

The vendors that do it right make their multi-tenant cloud app and "on-prem" environments as similar as possible. So that means containerizing and using an orchestration layer like Kubernetes or Mesos everywhere (multi-tenant and single-tenant). Then you need to make sure you have a secure and scalable way to access all environments for upgrades, patches, support. We open sourced an SSH framework specifically for this purpose[1].

Bottom line: no special snowflakes.

Edit: for more info feel free to reach out: http://gravitational.com/vendors.html

[1]: http://gravitational.com/teleport/


> for the end customers "on-prem" means single-tenant, private cloud (usually AWS)

Nope. On prem means in their own private data center. Also possibly airgapped from the outside.


Depends on the customer. Often, the motivations that push them to demand on-prem can be fulfilled just as well by a less-demanding single-tenant SaaS solution; you just have to convince the customer's IT department that such a thing exists.

Sure, there are some (mostly in defense, finance, and a few other security-sensitive fields) where physical on-prem is non-negotiable, but in my experience customer demands for on-prem are far more likely to be driven by outdated and/or overly-conservative IT practices than by real need.


For some, but depending on the market (say, DoD), on-prem means air-gapped. Basically, in the last 5 years fewer than 10% of our customers agreed to that sort of managed AWS VPC deal.

A lot of the time, the direct customer doesn't even want the hassle either; they want a solution that works. But they are forced by rules and regulations, and by their own customers.


What you're describing is a private, separated installation on the Internet. In all cases "on-prem" is inside of a customer's datacenter/office.


I've learned to live with the meanings of things getting mangled. Once marketing requires that a definition change it's pretty much game over. You can try to fight it but quite often it's the marketing department and customer working together to change how something is defined because it's easier than changing outdated policies or thinking. At that point you're just a stick in the mud, pining for the good ol' days when clouds were just called "other people's computers".


Which is what this article does too - they very specifically distinguish between what they call truly "on-prem" and "single-tenant", and one of their suggested strategies is to make customers go for the latter if at all possible.


I mean, the article linked describes the two scenarios both as "on-prem."


If you're big, AWS is a huge waste of money.


If you're big and a typical company that's bad at IT, AWS is a massive cost savings. Most HN readers probably aren't very familiar with the costing and the line-item levels of bureaucracy in Fortune 500 enterprise IT that make AWS bills of $200k / mo for maybe 10 lightweight websites (< 200k http requests / mo) on an intranet look like a stupid cheap bargain. Maybe the oldest crowds do, but past customers of mine have readily paid $5 MM / month for basically a couple Wordpress sites that are on 24/7 deathwatch by human eyes, because they've failed to ever get basic monitoring working or it's cheaper to have a human watch than to pay for resources that properly monitor the site for outages and simply reboot it. The places that deploy and operate applications (and their infrastructure) like it's 1994 dominate much of the Fortune 100, federal, and state governments. For every hip project with containers and configuration management there are 20 legacy applications operated with legacy methods.

These companies want to buy software to throw over the fence to the same people and processes, because they've invested decades in their creation, and innovation / eliminating waste is so damn difficult in such a culture.


I don't see how a company terrible at managing IT will magically improve with AWS. Maybe they won't get killed as badly by EMC, and will just shift the wastage to various AWS services.

I'm not knocking Amazon -- just saying that you can deliver most enterprise workloads cheaper.


Because they will be outsourcing a big chunk of that work they do inefficiently (backups, monitoring, provisioning, etc.) to Amazon, which is much better at it. Hence the proliferation of things like AWS's hosted email system [1]. Anyone in our Bay Area bubble of efficient IT would say keeping that running for a 2,000-user business is probably... what, half of a full-time equivalent of workload, averaged over long periods of time? Less? But Amazon can charge $4/user/month (in this hypothetical company that's round about $100K/year) and it is totally a steal, because a lot of companies will not be able to keep it running anywhere near that budget.

And that is the market they're going for - note that the free trial is 25 users. This is not a product for the small company that just doesn't have the capital to invest in their homegrown solution - that's Google Apps for Work.

[1] https://aws.amazon.com/workmail/


Never mind that $100k is probably reasonably close to what you'll be paying your IT guy in the Bay Area anyway, and the Amazon service is 24/7, probably better than even the best person you could hire, and never even reads the Google recruiter emails.

And that's just pure cost. If you're in the Bay Area bubble, you probably have much more valuable things for your systems guys to work on.

Point being, even for a Bay Area company, even at twice the pure cost of personnel, this looks like a pretty good deal.


Enterprises are abominably bad at IT as a rule and horrifically inefficient, partly because they have made terrible decisions they are stuck with for decades or longer. The raw costs they pay to bloodsucking contractors, nickel and dimed year after year, are systemic, for example. Are you familiar with DoD contract price structuring, by chance? Furthermore, are you aware of how desperate people are to deploy software onto something like AWS? There are a lot of reasons why the AWS office is in Virginia and is at least 50%+ security engineering in background. Trying to secure enterprise applications on-premise with the tools of the 90s is frustrating and typically an exercise in massive time wasting, since most companies never even thought about automating the work because the money was pouring in so fast.

The amusing irony of it all is that IT is typically a cost center, yet by trying to cut costs you wind up costing yourself so much more over time.


Could you provide some data to back up your claims? I've seen quite a few comments like yours claiming that the cloud service providers are so expensive, and yet I've yet to see anyone proving it except for the few exceptions.


I think azernik's reply in the other branch of the thread illustrates very well how well-scoped, scalable SaaS workloads are very amenable to cloud. His example is Amazon WorkMail; my real-life example is Office 365, where Microsoft can essentially offer Exchange, Lync, SharePoint and Office apps for less than it would cost us to deliver Office apps + Exchange -- and make a margin.

That's old news, as Forrester reported back in 2009. https://static.googleusercontent.com/media/www.google.com/en...

It's just a matter of doing the math. It's very situational, and unfortunately I don't have anything that I've done that I am at liberty to share. I can say that in my case, at least 75% of apps we've looked at were rejected for AWS/Azure because they clearly cost more (25%) to operate, or savings were very marginal and our data about stuff like outbound bandwidth requirements was insufficient.

Keep in mind, my employer operates two shiny, efficient datacenters. We drive significant vendor savings based on purchase volumes. For a startup with no capital, AWS/Azure/etc is a no-brainer.


Not true. There are big and sophisticated companies (the likes of Apple and Netflix) that use AWS.


With the big consumer facing apps.

Show me where they run their Oracle Financials in AWS.


First hit on Google

https://aws.amazon.com/enterprise-applications/oracle/

Also things like SAP HANA are now on AWS.


For us, "on-prem" means "in our private AWS cloud, with no dependencies on the Public Internet". For critical services for engineering, we want to know that your service is not going to stop working if Github or some CDN goes down or has latency problems. We want to know that you aren't going to toggle some feature switch on your service and expose us to some new bug. We want to know how much headroom your software has, so we know if it can scale with us.

I think this concept of "on-prem" being "in our private cloud, firewalled from the internet" is pretty well-understood among consumers of AWS or other IaaS products.


Yes, that is true. I meant it more that a new class of "on-prem" is being created.


Teleport looks really cool–I can't wait to try it out. Thanks for releasing it.


This topic is mildly amusing because I remember when all (or most) software was sold as "on-prem". Then everything moved to SaaS...and now everything is moving back to on-prem!

I used to work for Atlassian which made most of its money at the time I left in 2013 from on-prem software. It was low-touch, high-volume sales to people downloading and installing software on their own servers. They still sell a ton of it too.

If your stack is a hodge podge of web servers, shell scripts, and cron services, I think better advice would be based on how to make your application more packageable before moving to that model.


> I used to work for Atlassian which made most of its money at the time I left in 2013 from on-prem software. It was low-touch, high-volume sales to people downloading and installing software on their own servers. They still sell a ton of it too.

IIRC most of their apps are Java based so the JVM ends up being their VM to normalize the installations. That would definitely simplify things and you see it even today with on-prem apps running in containers to normalize things (ex: that's apparently how NPM runs).

> If your stack is a hodge podge of web servers, shell scripts, and cron services, I think better advice would be based on how to make your application more packageable before moving to that model.

Live by the cloud, die by the cloud!


No, "everything" is not moving back on-prem, just like "everything" never moved to the cloud.

But a great many things did move to the cloud, even more were conceived there, and practically all of them are going to stay there. You can successfully run a whole business of a very respectable size without managing even a single on-prem device, and that's great, and that's not going to change.

The push back to on-prem isn't a move, it's an expansion. Old/large/government orgs, which for various good and bad but mostly immutable reasons can't let their data off-prem, are getting interested. It's a new market, not a change in the old.


I agree it's not a dichotomy. However, it's also not a "new" market, and it's not just dinosaur enterprises buying on-prem software.


Or you could copy Tableau and shove it all into a VM.


That's how Github Enterprise is distributed for on-prem. Virtual appliance.


Which made GitHub enterprise a nightmare to operate compared to Atlassian's products.


Can you elaborate? (Dev here with not very much ops experience.)


No, this article is literally about why you should do everything in your power to never go "on-prem".


For those who haven't come across it before and have already dockerized there is a nice solution for some of these issues from replicated (http://www.replicated.com).


We use Replicated, and it was insanely easy to set up. We had an on-prem version ready to go in a few days. Definitely recommended!


We do what we can to ease the pain of going on-prem for modern SaaS companies :)


Ahh... my favorite subject! Can't resist :) SaaS companies are leaving a lot of money on the table by insisting on "making the world a better place" from their US-East AWS region ;-) The world, especially the enterprisey world of lucrative contracts, is much larger than that.

Gravitational [1] specializes in taking complex multi-server SaaS stacks into private cloud environments and providing ongoing management for them.

[1] http://gravitational.com Full disclosure: I work there.


In a similar vein, BitNami makes money licensing full-stack open source components with an integrated installer. I used it/them for a few years and while I wasn't smitten, it got the job done until something better came along. If you want a one-click installer for your mvc + db app, it might be worth looking at.


Strange that the author mentions two ways of doing this (managed here vs. managed there), and spends the rest of the article saying how those are not in any way fun, but never mentions the obvious third option: As Is.

I sell a self-hosted source-included license of Twiddla, as a single breakout of the current codebase, with the understanding that no further updates will be provided once we're all satisfied that you're up and running correctly. You can do whatever you like with the code, and I'm happy to sell you additional consulting time to help with what you're doing. But you're essentially on your own.

It pretty much dispenses with every concern in the article.


How do you deal with updates? Do you just sell them a newer version, like with MS Office major upgrades in the past?


No updates. Hence the "As Is."

They're buying the version of the website as it exists at the point of time when they buy it. The assumption being that it works for them now, it should continue working for them in the future, regardless of what direction the .com goes.


What about vulnerabilities then?

Or is it assumed that the customer uses it behind a firewall / VPN solution?


In this case, we're talking about software built on the Microsoft stack, so most of that is Windows Update's job (patching servers manually isn't really something that needs to happen in this world).

As to vulnerabilities in the handful of 3rd party libraries that we use? In the 10 years that Twiddla has been around, we've had exactly zero cases where we had to patch something from our end for security reasons.

I guess there's something to be said about avoiding the tall skinny (and wobbly) tower of 3rd party dependencies that seems to be the norm these days in web app development.


> In the 10 years that Twiddla has been around, we've had exactly zero cases where we had to patch something from our end for security reasons.

That is quite good compared to the rest of the industry. Nice work!


If you're deploying a SaaS system on premises, because a particular company needs it, then it's quite likely that this is also the same type of company that would have a standard procedure to expose only a single port of that server to a specific list of internal computers only on a network level.

The kind of companies that don't have everything behind a firewall / VPN would also just use SaaS directly, instead of requesting the hassle of an on-premise setup.


Every time someone tells you "You can buy a server for $2,000, why pay Amazon $4,500?", show them this article.

I've done on prem in the past, it's a real pain and I still remember it as a horrible experience.

That being said, I think it's pretty easy to combine. One of the things we had in the earlier days was hosting the CI and other build/compile servers @ the office while all the rest was in the cloud.

Even if you have some customer requiring something hosted on prem you can still host ALL OTHER clients on the cloud and this client on prem etc...

Good post.


Most of the points in this post apply to hosting installs for your customer in "the cloud" too.


This is a subject near and dear to my heart, albeit from the "enterprise" side of the table, not the scrappy startup side.

I've spent a non-trivial amount of effort trying to convince software companies to spin an on-premise version of their software. Sometimes with success, and sometimes not. I understand that it's quite attractive to rely on AWS or GCE abstractions to the point where it would be nearly impossible to break it apart for installation on my boxes, but it seems like it's leaving quite a lot of money on the table.

There are perfectly valid reasons to require on-premise applications: in-house interpretation of regulatory requirements, protection of intellectual property, etc.

Of course, I have a right to ask and they have a right to say no, but I wish more companies would consider making an on-premise solution. (Slack, I'm looking in your direction. Please come take the money I'm currently giving to Atlassian.)


Given how much code is on github, I no longer think protection of intellectual property is a valid excuse. It may be a requirement from your boss.


That's not really how thinking about security works. "But everybody else is doing it" wasn't a valid argument in school, still isn't.

As a thought experiment, (ignoring the fact that it's not an appropriate store for such things, and only thinking about security towards third parties) how would you feel about a company storing your PII/medical records/credit card number in a private repo on GitHub?

That's how your boss thinks about the company's intellectual property.


Why would I feel any different if they stored it on Amazon cloud, github or on their own systems?


I'm sorry, but I have absolutely no idea what you're trying to say. Would you mind providing an alternate phrasing for your comment?


The problem is with licensing and trusted deployments. Having done the high level of the architecture for one possible solution, I can tell you we are about 3-4 years out before this is common practice.


yes Yes YES! One thousand times yes! I previously worked at a SaaS mobile device management company (apperian) that took the on-prem plunge to score big enterprise / government contracts. The ops / support team quadrupled and there were so many headaches. This article nailed it.


Oh god yes.

Just wait until you're attempting to support a distributed ML system through some fucking idiot sysadmin (well, "sysadmin") who god only knows how he still has a job, but HE CANNOT RELIABLY USE SSH. Because of their security requirements (i.e. no internets for you, including package servers), getting updated packages onto their 4-year-old Red Hat installs was a nightmare. And they wanted to run very high CPU requirement distributed solvers inside 4GB single-processor VMs. The fun was just endless.


Jeff Meyerson's recent interview with Julien Lemoin discusses the reasons Algolia has not created an on premises version. [spoiler: it is contrary to the strategic focus on performance, the same reason Algolia runs on metal it owns.]

http://softwareengineeringdaily.com/2016/04/17/search-servic...


Having done On-Prem in a past life, this article nails it on the head. It's a difficult problem to solve and the support costs are very real. You need a team (or teams) dedicated to OP if you're going to do it right. It can be lucrative and a huge differentiator for your product, but it is far from easy.


Great article, rings very true.

I'd say if you're in the early adopter phase and you desperately need big enterprise traction then do the deal with the devil and go on-premise.

Otherwise run a mile.

Worth noting is that sometimes a sales technique is to agree to on-premise (since your competitors probably won't) and then look for opportunities to remove it from the deal further down the line - the customer may even come to the conclusion themselves that it's not a good idea.


As an operations person who has dealt with deployment of different "on-prem" software in the enterprise, please keep in mind that smaller organizations have different needs than the larger enterprise. For a smaller organization, a VMware disk image (commonly referred to as an appliance) works quite well. But for larger organizations with operations folks we expect to be able to scale individual pieces of the stack or use our own managed infrastructure (e.g. MySQL, Redis, RabbitMQ).

If you're targeting the larger enterprise you can expect to have a somewhat more technical IT person to deal with, and if it's a technical organization, people who are keen on scaling the application/infrastructure independently.


> But for larger organizations with operations folks we expect to be able to scale individual pieces of the stack or use our own managed infrastructure (e.g. MySQL, Redis, RabbitMQ).

Puppet professional services consultant here. I work with lots of "enterprise" teams on "ops stuff" on-premise.

Between 2010 and 2016, I observed a change in teams interested in trying to scale individual components. Back in 2010/2011 teams did expect to scale out and manage individual components of the stack. Today, nearly every team I work with is like, "Oh you ship your own postgresql server in the product? Awesome, I'd much rather use that (and make it your problem)." Same is true for pretty much every component in the stack.

I don't know exactly why this is the case, but I have two suspicions. First, every Ops team I work with has too much on their plate. Way too much. They're having trouble finding people to hire. They're happy to offload a potential maintenance issue, particularly if they're paying for the software. Second, this might be a reflection of crossing the chasm to the mainstream market sometime between 2011 and now, so the teams themselves might be inherently different.

What surprises me is that it doesn't really matter what industry the customer is in today. Government, technology, retail, insurance, finance, whatever... No team is interested in taking these maintenance issues on unless they _have_ to. Making a product they don't need to tune and scale component-by-component is a highly valued feature. Both for SaaS and On-prem products.

I think this "feature" is the common root of all the things cited as issues in the article. Making this stuff easy to operate is _hard_, hard enough for SaaS and _much harder_ when on-premise.


It depends on what the organisation has standardised on.

If there are people who know PostgreSQL already and they have the tooling for it, then they want to reuse that stack. If the organisation has standardised on MySQL/Oracle/SQL server/something else, then learning about and supporting PostgreSQL are additional headaches.

If PuppetDB was to support multiple backends, then you would find that every organisation would want to run on their own stack.

Ops has always been overloaded. That is the biggest reason why enterprise support contracts are a thing.


> If there are people who know PostgreSQL already and they have the tooling for it, then they want to reuse that stack.

I haven't run into these people in all of my experience on-site with paying customers. I'm not saying you're wrong, just that either I've gotten very lucky with my sample, or they do exist but they don't really matter from a product perspective.

For example, I have run into a few Ops teams who interface with separate teams who have DB admin roles. Universally, the Ops team prefers the included DB "feature" because it allows them to avoid the overhead of dealing with the database. It's unfortunate the "overhead" in this case is collaborating with the DB admin team, but it's the reality I see on the ground.

> If PuppetDB was to support multiple backends, then you would find that every organisation would want to run on their own stack.

I'm not sure that's true. I'm sure it's not true for _every_ organization. However, assuming it is true, I think it's a good example of why supporting multiple backends for PuppetDB would be a poor product decision. It wouldn't provide much value to Puppet the company or to their customers and it would increase the cost of support and maintenance tremendously for both parties.

The same applies to any on-premise software product. This is probably why pluggable backend data layers as a feature are so rare in on-premise commercial software products. Why bother?


After years working at a company that has a 10 year old on-prem and had a saas app as well, here is my POV on this:

On premise vs saas is a trade off in the type of customer support.

With on-prem, you do have support complicated by customers having unique infrastructure, and varying degrees of knowledge and position within their organization. Some people are adept, some are not. Some support keeps you screen sharing for hours, some doesn't.

In general for on-prem, support is done via your customer support app, professionally, as you're usually dealing with an IT department.

On the other hand, SaaS customer support volume/time is a measure of how often you break stuff for ALL (some, if you're lucky) of your customers at the same time. Often an issue is experienced by many rather than a single person. "Always" (ish) it's your fault instead of being an issue of a customer's configuration or infrastructure. Often Twitter becomes the public stomping ground for complaints, as you get end-users going to Twitter rather than an issue bumped up from an end user through their IT before reaching you.

In considering whether you want to support on-prem, a key point is to keep your software simple. For example, optionally (and easily) logging to files instead of <third party here> to reduce requirements on external systems (Elasticsearch).

Making the app as simple and easy to run as possible is pretty important. Definitely consider very strongly whether you need to rely on that SaaS app for your app to run.
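A rough sketch of that "log to local files unless something else is configured" idea (the MYAPP_LOG_* variable names and handler choices are made up for illustration, not from any particular product):

    # Default to a plain rotated log file that support can ask for; only hand
    # logs to an external system if the customer explicitly configures it.
    import logging
    import logging.handlers
    import os

    def build_logger(name: str = "myapp") -> logging.Logger:
        logger = logging.getLogger(name)
        logger.setLevel(logging.INFO)

        sink = os.environ.get("MYAPP_LOG_SINK", "file")
        if sink == "syslog":
            # Hand off to whatever the customer already aggregates (rsyslog, etc.).
            handler = logging.handlers.SysLogHandler(address="/dev/log")
        else:
            handler = logging.handlers.RotatingFileHandler(
                os.environ.get("MYAPP_LOG_FILE", "myapp.log"),
                maxBytes=10 * 1024 * 1024,
                backupCount=5,
            )
        handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        return logger

    build_logger().info("service started")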


> In considering whether you want to support on-prem, a key point is to keep your software simple. For example, optionally (and easily) logging to files instead of <third party here> to reduce requirements on external systems (Elasticsearch).

We have extensive experience doing this over here, and this is key. Keep things even simpler than you think they could ever be, because "simple" times gazillions produces terrific economies of scale.

During interviews I'm often asked if I have experience on a product "at scale". Interviewers often suppose that can only mean the typical tiered load balancers / front-end code / backend code / data persistence setup, where you spin up new dynos/AWS instances of what is really (taken as a whole) a single cohesive application to meet demand, and they ultimately get dismissive when I tell them I'm not working on a 1M req/min product... but there's a whole other, orthogonal dimension to "scale" which most products have firmly locked at 1, and believe me, everything suddenly follows a square law when your code and architecture can have an impact on each dimension, especially the one that involves humans who don't have the same level of intimacy with your product, team, or methods.


Doing on-prem is hard but worth it. The sales cycles look different from SaaS, but I don't see it going anywhere for a long time.

The key to this is a field team and a good support infrastructure.

Operationalizing it is half of it, and (echoing another poster here) the key to making it economical.

Our on-prem stuff targets Hadoop/Spark clusters though, so typically there are already ops folks in place who understand how our stuff works. This obviously won't be true for all scenarios though.


Depends on your customers, but a decent amount of the stuff complained about here is stuff you may need to do in your SaaS anyway even if it's not deployed on-prem. The article mentions that in the SaaS version you update daily and don't maintain 6-month-old versions, but that's often not really realistic. If customers depend on stable functionality and versions, the fact that you control the hosted install doesn't mean you can just break things whenever you want. Yes, you can more easily do careful, planned updates since you control the whole environment. And you can run off of one integrated code-base. But you often need to conceptually support "6-month-old versions" in the sense of having announced and planned upgrade and migration paths, supporting parallel versions of APIs, etc. Many of the problems are at least similar, though not identical. Now if you can use a Facebook-style "move fast and break things" approach with no care for versioning or legacy support, you can bypass all of those. But if you were in a market where you had potential on-premises customers in the first place, how often is that true?


Just use replicated. http://replicated.com. Easiest way to on-premify your SaaS app.


I wonder if there's an inverse 'So you wanna go SaaS' for on-prem only companies...


Going to SaaS can be quite an issue depending on the reliability of the company developing the software. One of the companies I'm dealing with as a reseller had the bad habit of delivering releases with semi-critical issues and even regression bugs, so my colleagues and I decided to act as a buffer, doing a pass of QA ourselves and, depending on the results, making the versions available or not to our customers. Sometimes it took a year before getting a good enough release. Quite annoying to do this ourselves, but at least our customers are satisfied with the quality.

Now, a year ago the partner decided to deliver a SaaS version, which made complete sense commercially. Then the SaaS became public a month ago and we keep encountering critical issues, like features not working, translation files older than the last on-prem release, or even being unable to log in because somebody installed a patch and never checked anything afterwards (and never told us about it either). But this time we cannot buffer anything, so they are basically shooting themselves in the foot.


Things I learned after 10 years of dealing with enterprise customers.

1. They want on premise.

2. They want key account people to talk to.

You don't need multi-tenancy. You often get your own server to run on AND a local ops guy. It's a bit like an exclusive cloud.

Yes, the deployments are a bit harder, since you often can't simply drop in a new version over the internet. This is why you need more ops to handle this all.

But if I wanted to sell to enterprises I wouldn't go into the cloud in the first few years after starting the company.

If the system works fine, it's a no-brainer to deploy it in the cloud and you already have all the ops people to maintain this...


Has anybody here considered leveraging the on-prem offerings of cloud providers, e.g., Azure on-prem (http://www.geekwire.com/2015/microsoft-brings-azure-on-premi...) or Bluemix Local (http://www.ibm.com/cloud-computing/bluemix/local/)?


> If they suck so much, why do them?
> Money.

Yap.

> when you move to any sort of on-prem model, your operational and support problems are multiplied.

No kidding. Multiplied by orders of magnitude. Difference between looking up a query in Splunk vs. directing someone possibly less technical to ssh in and try to read logs to you over the phone, then figuring out the problem from that. Or even worse, having to urgently put people on a plane to fly them onsite to solve the problem. And often when they return they can't bring any logs with them.


On-premise installs can be marred with problems. I have a case where the customer's IT refused to grant access to their network. The PM on the business side had to start WebEx, share his desktop, and allow outside access to do the install via his desktop. Talk about punching a huge hole through their firewall.


...and he would probably have been disciplined for it or at least had a stern talking to from IT.

This is a very common situation and in fact is the major reason behind them wanting on-premise solutions. You don't want your SaaS talking to the big wide world.

If you're thinking you should be able to do on-premise installations remotely... it's already gone over your head!


CoreOS actually makes this significantly better with their Cloud Config capabilities. With CoreOS and Docker, we've successfully deployed this at a number of companies.

With that said, there has been plenty of support to be handled, and we need to get better at doing multi-node deploys, but we're learning. :)


Well, you're quite lucky to be allowed to use CoreOS. In our case, only a minority of customers didn't want to dictate the distro to be used.

In one extreme case, we had to install on SLES 10 (in 2014), without root or sudo, and then hand over the machines, since we weren't allowed direct access to the production system. And maintenance involved sitting with one of their sysadmins and telling him what to do :)


This is my life right now. Fighting customers for on-site installs. This hits too close to home...


As an employee at an enterprise that often asks SaaS companies for on-prem solutions... this is very on point. That said, we do pay a lot.


Going on-prem also means building a different system: it has to be more streamlined, self-contained, and serviceable.

The quality of this is tested once the customer wants to upgrade his system from version X to version Y.


We[1] drive the bulk of our revenue with on-prem. Many companies prefer to keep their SSH key management in house, and we actually count on that.

However, we've invested a LOT of time into making our distributable a simple daemon. It's very easy to manage and install. We offer two on-prem versions: one is Pro which provides the basics (starts at around $3,500), and one is Enterprise which costs a lot more and supports external Redis, multiple app servers, web servers, LDAP integration, etc. Sales between the two are split approximately 30/70 in terms of revenue. In other words, we make a lot more on the Enterprise versions, but we sell less of them and the support costs are definitely higher.

Just my 2c - but for us, on-prem has been a huge win and has taken us to profitability. In truth, however, we've probably focused too much on on-prem and not given enough focus to SaaS/cloud.

For a startup that's just starting to hit profitability, we've actually been very happy with focusing on on-premise over SaaS, in contrast to the article, but our product was (re)designed with both scenarios in mind from the very beginning. We actually re-wrote the entire product before offering an on-prem version.

In other words, just as described in the article: making technology changes like SQL to something else 'on the fly' might be hard but not impossible with SaaS... but with on-premises, it's almost impossible to work through hundreds of on-prem installations of some previously chosen tech, and the need for data migration, installs of new supporting packages, etc. I used to work as a tech consultant for a big company and one of our huge clients decided to upgrade all of their on-site journaling systems (running a PII-300!) from Windows to Linux. Remotely. Through an ISDN. Yes, this is possible if you have multiple drives available in the box with careful syslinux. Yes, it's also pretty insane. (We had about 95% success across around 10,000 stores worldwide.) Changing horses in the middle of the race is something to avoid where possible.

It's easy to see that you're probably going to be stuck with your early technology choices if you go on-premise, unless you really build fantastic, auto-data-migration tooling that never fails, so make wise decisions and stick with SaaS until you're certain that you are sticking with your tech choices.

(We don't differentiate between "on-prem" in your datacenter versus your VPC or (your) cloud. To us, you're installing/managing/running it and we're not, so, to us, that's on-prem. Other providers probably define this differently.)

1. https://userify.com (sudo and SSH public key management.)


Synology DiskStations are an affordable option (<$1000) for on-prem data storage accessible to your web application. They run a standard LAMP stack out of the box (basic database and file storage) and have redundant/RAID storage.

A few tips from experience: if you can get them to put it on a dedicated IP, even better; use Synology firewall to limit access to your servers IPs, or VPN (do not leave on public net, because ransomware); set to auto-update the OS (because the customer never will!); still have another backup (even a USB-attached 1TB solid state drive, but ideally another physical location); disable all unused services (to limit potential exploits); if customer allows, give them a user and you a user - enable logging and now you can demonstrate if/when you had to access their box for maintenance (feel good factor)


> Synology DiskStations are an affordable option (<$1000) for on-prem data storage accessible to your web application. They run a standard LAMP stack out of the box (basic database and file storage) and have redundant/RAID storage.

You're joking right? What enterprise would install a consumer NAS in their data center to use for "data storage accessible to your web application"?

[1]: http://www.amazon.com/Synology-DiskStation-Diskless-Attached...


Can you explain? Sure, they're marketed to consumers because their audience is largely there, but if configured securely (to only talk to your infrastructure, etc.) they offer a very affordable option to store data behind a customer's walls.

"In their data center" implies you're thinking large enterprises. Small businesses/industries may have requirements to keep data on-prem but without enterprise budgets.

(Your link returns 404 for me)


> Can you explain? Sure, they're marketed to consumers because their audience is largely there, but if configured securely (to only talk to your infrastructure, etc.) they offer a very affordable option to store data behind a customer's walls.

Couple of separate points.

First off, forget "affordable". The customer that's paying for an on-site version of something has a very different definition of "affordable" than whatever you're considering. Something that costs $500 literally rounds to zero.

Second, nobody, and I mean NOBODY, is going to let you plug your own box onto their network to watch some other box that you've deployed on their network. In fact, you're not going to have your app on your own box there either. The vast majority of customers will want to deploy your software on whatever version of RHEL they have licenses for (or CentOS if they're "modern"). At best you'll deploy your app in a VM that (hopefully) plays nice with whatever VM kit they're using.

Third, that consumer NAS won't fit in a rack. Is it just going to be plugged into a wall socket with a crossover cable connecting to your "app box"?

> (Your link returns 404 for me)

Sorry about that. I chopped off the rest of the gunk at the end of the URL and ended up taking too much off.


I think Synology has a rack mounted product, but it is still a terrible idea ;)


Ignoring the SMB market: if you send something like a Synology to an enterprise customer you will get a bad reputation. Better off just getting whatever rackable box from Dell or HP will serve your purposes. You also get the benefit of their hardware support, which can be onsite with parts within 4 hours or less depending on the support you pay for.



