Tell HN: Read up on your GitHub Support SLA
193 points by zamalek on June 7, 2022 | 135 comments
A few weeks ago we experienced an outage for a few hours: we got 500s from both the NPM and NuGet private GitHub feeds. This halted our CI pipeline, so work on trunk ceased for a few hours. Downtime happens. What's alarming is that, as an enterprise customer ($250/user/year), we only got a response 11 days later containing, essentially, these things:

* "Check again." If the turnaround on a support ticket is truly 11 days, we would have been facing a 22 day outage (as we'd expect yet another 11 days after responding "it's still happening").

* "You have no SLA."

* "If you want support, I can direct you to our sales team."

If, god forbid, Git access (or everything) had been down, we would have been scrambling to continue business, we wouldn't have had a number to call, and we likely would have had to pony up the cash.

I strongly recommend you take a good look at where you fall here: https://github.com/premium-support. Note that "< 8 hours" under "Enterprise" means absolutely nothing, as those times aren't guaranteed (per your contract). A more honest value in the column would be "N/A."

You have to get hold of sales to learn about premium support pricing; it isn't publicly disclosed. This is likely to prevent you from budgeting for premium support only in the event that you need it.

If your business continuity depends on GitHub Enterprise, and you don't have Premium support, you need to pay, plan, or change.




As Enterprise-plan customers we also had a support ticket recently ignored for 8 days (not hours) until I pinged on it. That's super unusual for paid SaaS. I can't think of any other vendor that ignored my support tickets for that long (not even "we're looking into it"), including the dreaded Google.


I had an AWS bug ticket, on a paid support plan, sit for 6 weeks before I found someone high enough on the chain to get it looked at.


> before I found someone high enough on the chain

Many orgs have difficulties with escalating issues outside of the support organization. AWS never _ignored_ a ticket I opened.


This ticket had zero replies or movement until I went nuclear.


What was the ticket about?


This is why you need to have somebody learn Unix and figure out how to run things locally. Or just kick back and relax when the systems you elected to depend on, but do NOT control, go tits up. It’s really on you.


There's a middle-ground here, which is far more reasonable and seems to be a critical missing piece of OP's company: Disaster recovery. Ask, what's the minimum we need to do to keep the business running when our provider(s) disappear or go down for an extended period of time? And then implement those steps.

When you pay someone else to handle your data, there is a lot that can go wrong. GitHub could go down, they could lose (or corrupt) your repos, they could accidentally delete your account. The nice thing about git is that it's absolutely trivial to clone repositories. There is _zero_ reason not to have a machine or VPS _somewhere_ that does nightly pulls of all of your repos. When Github goes down, you'll lose a lot of functionality, but at least you have access to the code and can continue working on the most urgent things.
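
A minimal sketch of such a nightly mirror job, assuming a repos.txt file of clone URLs and a local backup path (in practice you might enumerate repos via the GitHub API instead):

    #!/usr/bin/env python3
    """Nightly mirror of every repo listed in repos.txt (one clone URL per line)."""
    import subprocess
    from pathlib import Path

    BACKUP_ROOT = Path("/srv/git-mirrors")  # hypothetical backup location

    def mirror(clone_url: str) -> None:
        name = clone_url.rstrip("/").rsplit("/", 1)[-1]
        target = BACKUP_ROOT / (name if name.endswith(".git") else name + ".git")
        if target.exists():
            # Existing mirror: fetch all refs, pruning branches deleted upstream.
            subprocess.run(["git", "--git-dir", str(target),
                            "remote", "update", "--prune"], check=True)
        else:
            # First run: create a bare mirror clone (all branches and tags).
            subprocess.run(["git", "clone", "--mirror", clone_url, str(target)],
                           check=True)

    if __name__ == "__main__":
        BACKUP_ROOT.mkdir(parents=True, exist_ok=True)
        for url in Path("repos.txt").read_text().split():
            mirror(url)

Run it from cron and you have an off-provider copy of every branch and tag.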

I'm not clear on the details but OP's issue seemed to be around a broken CI system. At its heart, CI is just the automatic execution of arbitrary commands. Every repo (or project consisting of multiple repos) _should_ have documentation for building/testing/deploying code outside of whatever your CI system is. If your source of truth for how to use your code is in the CI system itself, then your documentation is very lacking and yes, you are susceptible to outages like these.
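
To illustrate that last point, a minimal sketch of a repo-local entry point that both developers and the CI config can invoke, so the CI definition stays a thin wrapper (the file name and the npm commands are placeholders, not anything from OP's setup):

    #!/usr/bin/env python3
    """build.py: single entry point for restore/build/test, runnable locally or from CI."""
    import subprocess
    import sys

    # Placeholder commands; substitute your project's real tooling here.
    STEPS = {
        "restore": ["npm", "ci"],            # restore dependencies
        "build":   ["npm", "run", "build"],  # compile/bundle
        "test":    ["npm", "test"],          # run the test suite
    }

    def run(step: str) -> None:
        print(f"--> {step}: {' '.join(STEPS[step])}")
        subprocess.run(STEPS[step], check=True)

    if __name__ == "__main__":
        for step in (sys.argv[1:] or list(STEPS)):
            run(step)

When the hosted CI is down, running "python build.py" on any machine with the right credentials keeps the most urgent work moving.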


CI build processes often require credentials, sometimes ones that are in some sort of twilight zone¹ to devs. IIRC, GitHub doesn't provide a straightforward way to clone those credentials.

¹e.g., the devs "don't" have access to the credentials, except they're in the CI workflow, so technically they do. But I've worked at a number of companies where security will happily bury their head in the sand on that point.


To the extent that that works, it's not really a middle ground, so much as choosing "run it locally" for the things you really need, and "kick back and relax" for the nice-to-have but nonessential stuff.

Because if the things you really need actually keep running when your provider(s) disappear or go down for an extended period of time, you're running them locally anyway, and might as well get the benefits of that effort all the time.


Heck, your entire host could go down, what do you do then? If it's not a major infrastructure provider like Azure, AWS, or GCP (I forget if that's what they call theirs) then you're kind of SOL. Outages can and will happen; the question is, how bad is the next one? If they are too frequent, you have to evaluate whether it's really the provider or your application. If it's the former, you may want to consider a new host, or get with your hosting provider and have them figure out why you keep seeing problems.



Our contract is up for renewal in a few days, so we'd likely struggle to pull off self-hosted in that time. Self-hosting has been the watercooler talk, but we're currently migrating away from Circle CI (to GitHub <facepalm>), so it was on the cards for next year.


You might find GitLab as a self-hosted service interesting.

Spin up an Ubuntu machine and install the Omnibus package and you have the basic functionality running in about half an hour, plus another half an hour for the CI runner.


Setting it up is the smallest part of any competent operation. Capacity planning, monitoring, planning for outages and recovery, updates, backups (including testing recovery)… that is what takes the effort. Setup is almost always on the happy path. When things go wrong, you start over. But once you have a substantial investment in terms of data stored, that is unlikely to be an option. Now you need to figure out and solve the actual problem, which likely requires intimate knowledge of how the system works. Nobody acquires that knowledge by browsing the quick-start installer docs.


It's already not done by Github. Invest the time and you will be on a better path.


I used to do ops for a living, so I'm a bit aware of the tradeoffs involved - and for me, the math doesn't work out. We'd need three or four people with knowledge of the setup - I can't be the only one, since I want to go on vacation from time to time. Others have the same right. Sometimes people fall ill or are unavailable. We could scrape together enough folks and train them, but it's honestly not something I want to track and invest time in. GitHub has had its outages, but none that would affect us massively.

If you already have internal infrastructure and a moderately competent operations team for that infrastructure, the calculus for you may be different. Blindly assuming that I’m wrong is not a sign that you’re aware of the tradeoffs.


Your logic is unfortunately being cast into the reason-devoid abyss of HN commenters consistently overestimating the value of the lone wolf, "competent" Linux admin.

Say you don't understand opportunity cost in software development without saying it.

I won't delve too deeply into the obvious: most "competent Linux sysadmins" have a very over-inflated sense of their own skill set, and tend to make for toxic team members.

Most software development shops are in the business of developing their particular software, not deploying and self-managing DVCS, much less hosting, monitoring etc...

Sure, could one person set up a Git/GitLab system? Absolutely. Can they operationalize it effectively? Not really... the bus problem is a thing and anyone that thinks tying the entirety of a system's uptime to one individual is an operational improvement over GitHub's outage SLA is deluding themselves.


Just joined a company that self hosts Gitlab (and everything else, there's zero cloud) but is 100% remote. So far everything has been seamless and there's a large enough infra team to solve these issues if they arise :)


As I wrote, it's a matter of what you focus on - I've run CI systems and internal git hosts for large organizations as part of my work. There are very valid reasons to do so, but cost alone is rarely a compelling one - an on-site enterprise GitLab license is roughly as expensive as a hosted GitHub seat, and the community edition is somewhat limited.

And it's definitely possible to run gitlab or any other git hosting solution on-site with little downtime. There's no magic or arcane knowledge involved. It just takes serious effort to do so - more than a single lone wolf sysadmin can provide. All their skills are worth nothing if they're sick and in hospital or on a beach holiday.


At some point you will need a pack of fierce sysadmins, not the lone wolf, as dangerous as he might be. If you forget to scale your ops team, you're gonna have problems. Guess it's a strategy thing: do I want to rely on a third party or do I want to manage my own people and processes for this? In any case I have to assess the risks and plan ahead.


> At some point you will need a pack of fierce sysadmins, not the lone wolf, as dangerous as he might be.

Maybe, but also maybe not. And then that still doesn't mean I want them to focus on running git/gitlab. I mean, we're doing stuff that revolves around the Rust compiler, and we have operations people easily capable of running gitlab around, but their primary task is something else - they're building systems on top of that. Do I want to re-task them - or even just side-track them - into running gitlab?

Once you reach a certain size, you can have an internal ops team that's responsible for providing internal infrastructure, but to what extent is that really different from giving github/gitlab money? They'll be about as far removed from the individual teams they're serving as github is. Is that really something I want to put organizational effort into, distracting the org from achieving its goal? It's all tradeoffs.


It takes like fifteen minutes to set things up. Every startup needs a competent Unix sysadmin.

EDIT: a Threadripper will do for CI. Quick as you like.


GitHub is far, far more than just a git repo. Issue tracking, project boards, commit status/check systems, deployment tracking and monitoring, fully fledged CI and deployment pipelines (actions/workflows) written in their own flavor, etc. All sorts of webhooks, complex arrangements of teams and access controls, cross-repo, cross-org and cross-enterprise account linkages. Large object storage, container registries, and package repositories. And of course, the existing context of all this stuff; setting up an alternative != completely migrating and validating everything from the original.

Replacing all that with something as scalable, flexible and agreeable with potentially thousands of global developers is far more than '15 minutes' of work. Several orders of magnitude more.

Even on the git repo question alone, if you're an enterprise of some size, you'll have hundreds or maybe thousands of repos that could be potentially gigabytes in size (for any one repo) for code alone. Moving to a self hosted solution requires far more than just throwing some threadrippers and enterprise drives at the problem. And that's assuming the best outcomes.

A competent UNIX sysadmin would be the one yelling not to throw the baby out with the bathwater here, because they would know just how hard this stuff is at scale.


Again, a Linux/Unix admin is worth their weight in gold-pressed latinum.

Pop in a self-hosted GitLab install, configure SAML or AD auth for SSO. It's all Git, so importing all commits (and not losing history) isn't hard - just tedious.

For the testing pipeline, use Selenium on a 32-core Threadripper running Linux, with 1/4 TB of RAM. You can get upwards of 400 headless Chromes on that.
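
For reference, a minimal sketch of one such headless-Chrome check (Selenium 4 style; the URL is hypothetical and the box needs a matching Chrome/chromedriver):

    """One headless-Chrome Selenium check, the kind you would fan out in parallel."""
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument("--headless=new")  # run Chrome without a display
    options.add_argument("--no-sandbox")    # often needed inside CI containers

    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://app.internal.example/login")  # hypothetical app under test
        assert "Login" in driver.title
    finally:
        driver.quit()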

Throw in NodeRed for overall process automation (think: tying in disparate APIs with a low code environment).

I've done this, with the exclusion of the Selenium checks themselves (there was a QA team for that), in like 2 weeks.


150 lbs of gold-pressed latinum costs a lot more than GitHub Enterprise, and for a marginal improvement in uptime. And you need 3 of them if you want on-call, which you should if you're trying to beat GitHub's availability.


It's literally not all git. Every single thing I mentioned outside of the git repository itself is not git, and makes up a significant amount of services that would require disparate, specific replacements and buy in and compatibility with all of the developers, teams and units of a company. It's a vast, extremely costly amount of work.

Just throwing up a server somewhere running git and a few software packages is nowhere near the same thing.


Gitlab has all those non-git features, and then some. Migrating to it from Github might not be easy, but it's definitely worth it IMO to invest in the product that gives you more options, instead of getting locked in to something like Github.


Some deployments might be able to leverage GitHub Enterprise, which is a $231/user/year GitHub-in-a-VM-image. It's pretty much the GH source code (running through a modified copy of Ruby so the on-disk .rb files are scrambled).

https://github.com/pricing

https://docs.github.com/en/enterprise-server@3.2/admin

Active/passive HA is possible: https://docs.github.com/en/enterprise-server@3.2/admin/enter...


GitHub Support here. This does not live up to the level of service we strive to offer our customers. We would love to follow up directly to learn more about this incident as it appears we have opportunities to learn and improve from this experience. I understand the irony of asking to open another Support ticket, however, @zamalek please do open another Support ticket with subject “Hacker News follow up” and a Support Manager will reach out.


What a boilerplate PR response. Today it's Github, but tomorrow it'll be the same story, different managed service. OP's problem isn't even specific to Github, the entire "aaS" industry seems rife with over-promise, under-deliver.

Yeah, you want to "take this offline" and handle OP's case individually, and perhaps OP gets their demands met, some sort of service credit, or something.

But what the rest of us want is real systemic change, to where managed services like this actually give us real value, and not "marginal value, but nothing works like it ought to and you're going to spend a lot of time working around the bugs and gluing the parts together", where every bug report (which has to be filed as a support ticket) is met with "you're holding it wrong", or where the service goes down and the status page gaslights us, or …

Buying managed services instead of letting engineers do their jobs is today's "nobody got fired for buying IBM".


I'm not defending github specifically; I have no experience with their customer service and have no reason to believe they are or aren't garbage. I'll assume they are garbage.

To be fair, what do you expect the first representative to do/say? They responded in 3 hours, and it's unlikely they can do much beyond escalate this issue internally. We can't expect them to say "We're going to do sweeping changes" at this point.

Github in the past has done sweeping changes for things such as youtube-dl. They created a large blog post about it, including having both programmers and lawyers review every DMCA request, allowing the most minimal amount of changes to comply, etc. That type of response takes time and coordination.

Even cloudflare with their CEO/CTO can't offer sweeping changes in a HN comment. There's layers to this. You can only really expect damage control from a HN comment.


They could have posted from a non-throwaway (or at least non-anonymous) account and identified a clear point of contact. As it is with such vague instructions, it would be far too easy for the issue to get lost in the Rube Goldberg machine of their support infrastructure again.

Disclosure: I have no specific experience with GitHub support, but I have experience with other support organizations and "send us a new ticket" can easily result in a repeat of the original bad experience. I'm not saying this would necessarily happen to the OP, but we also don't have any assurance at this point that it wouldn't happen.


> OP's problem isn't even specific to Github, the entire "aaS" industry seems rife with over-promise, under-deliver.

I agree with this perspective significantly more than I don't. My (again, smaller) disagreement lies with cost: the entire "aaS" industry generally is in the ballpark of $15-$30/user/year [edit: /user/month]; you are getting what you pay for. $250/user/year is in a completely different class.


At $30/user/mo, you're getting into a fair portion of a dev's salary, and "Run Gitlab" is competitive. But then instead of a support person who is going to give me the runaround, I have an actual engineer who has the power to actually fix things.

But it's also not "$30/user/mo", either. It's that, plus (my salary * time spent on support tickets and outages), plus storage costs, plus compute costs.


> the entire "aaS" industry generally is in the ballpark of $15-$30/user/year

That's $1-3/user/mo. That's very low for SaaS. You may get that for indie products or bulk pricing (for thousands to millions of users, e.g. end-users).

[edit: fixed in parent comment]


That was a typo, I meant /user/month.


OK, so just a note that GitHub Enterprise is $21/user/mo. It's a lot more than their base paid plan (which is very cheap at $4/user/mo) but just slightly above average for an offering of so many features.


Damage control mode: activated.

For a company as large as github (and microsoft), I would expect support metrics (mean time to response, max time to response) to be known to management.


We have plenty of tickets like this too - in the Enterprise Premium tier. One about SLA, about 10 for Actions misbehaving... you should really just have a look at your own metrics and not only give white-glove treatment to someone complaining on the internet. For the amount of money you take for Enterprise Premium you really ought to offer it to everyone without complaining.


Applying a good-faith interpretation.

It's possible that there is a bottleneck in the system that is invisible - possibly intentionally, by someone in the middle - to people higher up in the chain. This is a way for them to break that blockage and put a spotlight on it.

What this does now is create a specific marker 'Hacker News follow up' that cannot be swept under the rug. There will be eyes on this particular issue, both inside and outside the company.


11 days is a very long response time, but do you really think a quicker response would have changed the end result? GitHub is a massively multi-tenant SaaS. When they have an outage there’s really not much customers or GitHub’s support team can do other than wait for the outage to be resolved.


Amazon and Azure have much larger multi-tenant concerns and are able to pull off an SLA.


Really? I've been ignored on Azure tickets for longer than their stated SLA. During outages, too.


Anecdata: we haven't seen that with Azure. At least you have an SLA, so they'll refund/discount you.


That's incredible to me. Their support is completely and utterly inept, AFAICT.

Just today, I've been asked what availability I have for a meeting between the hours of 9pm and 6am (and no, that's not a typo!).

You can't paste errors into the ticket. Their support system doesn't even support all of printable ASCII. The reps don't read the ticket before responding. If you can't replicate the error in front of their eyes, it doesn't exist, and even if you can, they won't believe you. I've literally had them defend 56 kbps as being "good enough throughput", I've had "What is your ISP?" in response to "we get 500 Internal Server Error from $service" (and … our ISP is you, Azure…) … yeah. The list of idiocy we've seen just goes on and on and on.

Their support team wants to talk about "how can we architect a cloud system that'll blah blah enterprise blah", I just want valid API calls to succeed. I'm an eng, I can do the building, I just need APIs that aren't hot garbage.

(The outage I allude to in the above comment is actually mentioned in passing in [1]. I had to link to that blog post in the support ticket! (So thank you, Scott Helme, for writing it.))

[1]: https://scotthelme.co.uk/lets-encrypt-root-expiration-post-m...


“Microsoft Azure Application Gateway was no longer connecting to servers using a Let's Encrypt certificate”

Why would you expect TLS to work correctly on a HTTP load balancer that can’t do HTTP properly either?

Microsoft App Gateway is incompatible with Microsoft’s own SharePoint server for example because it doesn’t send a user agent header in the health monitor probes, which about 50% of all web apps require. Similarly it can’t send authentication headers either so it can’t monitor web apps that enforce security.

It’s a security product incompatible with security and encryption.

Let that sink in and then look at what you’re paying for this on your next monthly bill.


> Why would you expect TLS to work correctly on a HTTP load balancer that can’t do HTTP properly either?

Well, it doesn't exactly advertise that on the tin, now does it?

And we're not using it as a security product (I know there are people out there after "WAF"s and … stuff … but that's not us); AFAICT, it is Azure's offering equivalent to AWS's ELB.

(It is tempting to remove it from the architecture, but unfortunately, we're integrating with a third-party in this case that wants it that way. Everywhere else we need an HTTP proxy we use nginx…)


Most of my clients are big enterprise, and they insist on ticking the "WAF" checkbox. Unfortunately, App Gateway is pretty much the only Azure-native option for this. Front Door also has a WAF module, but it's even worse. Microsoft markets both as "security" products.

Speaking of Front Door: it's a CDN, TCP accelerator, and TLS offload performance product that slowed down web site performance in every test that I've ever done with it. It doesn't even begin to approach the features of some of its competitors. For example it still can't do HTTP/3 or TLS v1.3. Or Brotli, or ZStandard, or multi path TCP, or anything really that helps performance.


> do you really think a quicker response would have changed the end result

Communication matters a lot. A response within 24 hours means they would be ten days ahead. Yes, that is better than being 11 days behind.

Waiting is what the customers do, the provider needs to communicate, at the very least, that they are aware of the problem and looking into it.


Multi-tenant SaaS do have outages that affect small numbers of customers sometimes (e.g. a bug triggered by data specific to a customer). If their support isn’t paying attention to tickets, they may not even know there’s a problem or they may be slow to respond.


Really? $250/user/year, for how many users?

There was recently a thread about someone asking about password management solutions for an organization of 10k employees, and someone recommended 1Password, which I find hard to believe could ever make financial and operational sense. [0]

Now, these prices from GH can make sense if you are a team of 10-50, but I wonder what would be the point where it wouldn't take a bean-counter to say "can't we just hire someone else to do that in-house, part-time? It would cost as much or less, but at least we would have a lot more control over the system and we wouldn't be at the mercy of a company that treats us as cattle."

I see on your profile page you are on Matrix. Would you mind if I add you there to chat (or add me @raphael:communick.com)? Seems like your company could have so many different cheaper and better solutions for their needs, I've been thinking for a while whether there could be a market in "open source strategy consulting".

[0]: https://news.ycombinator.com/item?id=31582161


> a thread about someone asking about password management solutions for an organization of 10k employees, and someone recommended 1Password, which I find hard to believe could ever make financial and operational sense.

Ha, these costs are a DROP in the bucket for businesses at 10k employees.

> "can't we just hire someone else do that in-house, part-time? It would cost as much or less, but at least we would have a lot more control over the system and we wouldn't be at the mercy of a company that treats us as cattle."

Good luck with this. You're missing things like:

- Maintaining the software (upgrades, etc.)

- Maintaining the hardware (you buying bare metal? cloud? who's upgrading servers? etc.)

- Hiring/firing costs

- Do developers want to work with a system that isn't GitHub and be productive?

- etc. etc.


Yeah it’s odd to read this.

Even $20 a month is not worth wasting time over. It is a margin of error compared to the salary of that user.

1Password for 10k users? What would that be, 80k/month? There are probably pretty big volume discounts too.

Payroll for 10k people is like $30m-70m.


This kind of logic I would expect only from a bean-counter, and a bad one to be honest... Why would anyone pay $80k/month to solve a problem that could be solved with 0.5 FTE?

Not just that... if I'm working in the IT department of this company (surely they have one) and hear about such a deal, I'll have three thoughts:

1. Can I do it myself? Give me a raise and I'll take the extra responsibility. Everyone wins.

2. If they are throwing this much money out of the window, I'll go knock on my manager's door and ask for a raise.

3. If they say they can't give me a raise for any bullshit reason, I'll immediately lose trust in upper management and I'll start looking for a job the next minute.


And that’s the sort of thing I’d expect from a programmer! The operational complexity of anything operating at scale extends far beyond the immediate cost. For example, play your scenario out: they give you a raise, you start managing something, and then you decide to leave? Then what? Or, you turn out to be incompetent and make a mistake that takes 24 hours to fix: that’s thousands of people unable to work! That’s going to cost the business a lot more than 80k.

Ownership is a burden, ownership at scale is a nightmare. Paying a third-party to own something for you is fantastic value in the default case, until you have a strong business case for taking on that ownership burden. “One of our nerds says he’ll own it for 50% of the cost” is a terrible option, it’s nightmare waiting to happen.

If I had 10k people, and I could pay $50k to offload ownership of some critical infrastructure to a third party, I wouldn’t even blink. That’s great value.


> sort of thing I’d expect from a programmer!

Well, this is still called Hacker News. Am I in the wrong place?

Anyway... you've created many strawmen here, where should I start?

> you decide to leave? Then what?

My "hiring" for it would imply defining a proper budget for it and a set of conditions negotiated a priori. It's not really "I will just do the job myself", I'm talking about "ok, you are willing to pay $80k/month to have this solved. Here are the 5 other different plans and solutions that we can implement and that will cost less than that, which one gets the go-ahead from upper management?"

> make a mistake that takes 24 hours to fix: that’s thousands of people unable to work

It's still coming out ahead of Github that took 11 days to solve?

Also, what is that joke about HackerNews being overloaded with traffic whenever there is a github outage? Or the one about half of the internet's GDP being tied to AWS?

Seriously though, the answer would be "you don't migrate everyone at once". You'd start with these migrations on a project-by-project basis, starting with the less critical projects on the new system and slowly weaning off on your dependency of the big vendor.

Bonus: by migrating your systems you will have some kind of redundancy. If GH goes down, the teams could use the opportunity to move to the new system. If it works, the teams gain confidence and can accelerate the migration. If it doesn't, it becomes an opportunity to learn something out of a sunk cost.

> If I had 10k people, and I could pay $50k to offload ownership of some critical infrastructure to a third party

Paying $50k is not giving you any guarantee that your business is robust. You are just paying for CYA.


it was tongue in cheek because you insulted the bean counters :)

There’s not much more I can say because you’ve just outlined an operationally expensive strategy without appreciating the costs. I recommend spending some time in a big organisation, it’s hard to appreciate the enormity of the challenge in orchestrating people until you’ve witnessed it first hand.

Amazon, for example, has thousands of employees dedicated just to running systems to support the other employees! And that’s out of necessity, because third-parties cannot meet their needs — if a third-party could, Amazon would absolutely use them in a heartbeat (as they do already in many cases).

After my time at Amazon, I gleefully pay for 1Password for my org because the thought of what you outlined, in a growing org, would keep me up at night.


> I recommend spending some time in a big organisation

My very first comment started with "I'm really not cut out to work for a big company". I am well aware that I do not want to do this.

> After my time at Amazon, I gleefully pay for 1Password

But that's exactly the type of cultural issue that I am talking about. So many people going to work at FAANGs and when they leave they think that mentality should be applied everywhere.

The first issue in this thinking is straightforward: I know that in your mind your company is the greatest thing in the world, but it is not Amazon. YAGNI applies beyond software development.

The second problem: FAANGs can operate like this because they are making so much money per employee that it simply does not matter to them. But this mentality when applied to a smaller company, can be the difference between 6 months or 2 years of runway. And every time that a company outsources this is a missed opportunity to learn how to do it more effectively. Instead of thinking "this would keep me up at night", I'd rather think "we are doing it the hard way, but that makes us more resilient and increases our chance of survival".


At $15k/month per employee on salary alone, 1Password costs <0.1%. I’d argue the complete opposite of your conclusion: the smaller you are, the more important it is to focus exclusively on the things that move the needle. Reducing my employee cost by 0.1% does not move the needle. If I spend my time debating the merits of self hosting a password manager, I’m not spending my time growing my business.

The reason businesses like Amazon can be successful is because they focus on what matters, and that’s what startups need to embrace too.


It's not just 1Password.

It's 1Password, it's Jira, it's Github, it's Salesforce, it's Tableau, it's Google Docs, it's Dropbox, it's Figma. It's all the services that have viable alternatives, but you just don't want to try out because... it's easier to think it's not worth it?

> The reason businesses like Amazon can be successful is because they focus on what matters, and that’s what startups need to embrace too.

That says more about our different views of what constitutes "success" than anything.


I’ve been that IT guy that thinks like you do many years ago. I get it.

But then I became the boss. And we pay for many of the services you listed and more (google docs, gitlab, tailscale, 1password, JIRA, zendesk, Okta, gusto, quickbooks, etc, common tools used by startups). We have 11 employees.

I’ve spent more time typing this comment than the time I’ve ever spent wondering whether they’re worth the cost.

And they all total probably $200 a month per employee or more. And they still pale in comparison to our payroll.

They’re all no brainers especially given how small we are.

Those tools allow us to perform at the same level security and compliance-wise as much larger companies. And they liberate our time so we can focus on adding value to our customers rather than futzing around with unstable internal systems.

Again, I get it, the 80k number seems like a lot. What we're all trying to tell you in different ways is that 80k is _nothing_ to a company with 10k employees. They probably spend that much on stationery and cleaning supplies.


We have been talking about two separate things, and I reckon they are getting conflated. To be more clear, one is about the "Closed SaaS" vs "Free Software SaaS", the other is "outsourcing vs doing in-house".

My argument is more against the former than the latter. And it's not just about cost. It's about lock-in. It's about shitty customer support that takes 11 days to respond to a ticket. It's about building your systems on the "no-brainer cloud provider" that leaves you empty-handed during the semi-yearly outage, when all you can do is pray it gets resolved quickly and console yourself that your competitors are offline just like you.

The point on the second issue was not just "they could do it in-house and cheaper". It is also "they are paying this much money and they are still in a weak position, where they have no control over some critical piece of the organization". I would understand if someone takes this route as a temporary measure while better processes are being developed and put in place, but if paying $1M/year is accepted as the natural way of things, it seems like management is saying "we are incapable of doing it ourselves, and we are too lazy to even care. Let's just hope that the money keeps coming, and if it doesn't we'll just lay some people off".


I realized about a decade ago that I don’t actually care about the code being open source as much. It’s the data and file formats that need to be open.

That way I have a path to migrate to a different product. That’s the true lock in for me.

Never noticed any difference or cared between gitlab being “open source” vs GitHub being closed source. As long as it’s an unaltered git repo and there are apis for my data.


If you use Github only for its repo hosting capabilities, sure. But there is CI, issue tracking, discussion history, the OAuth server and everything else that is built around the repo hosting and is used to lock people into the platform.


>Why would anyone pay $80k/month to solve a problem that could be solved with 0.5 FTE?

I'm not sure who's the bean counter here. Maintaining a service, with the same availability as Github, is 3 FTE minimum, unless you are expecting your 0.5 FTE to also be on call. "I can hire an intern to replace $X" seems like the bean counter attitude. Do you have an SLA? Can management claw your raise back when your bespoke system has more than a day of downtime? If you go on vacation and the system goes down, can management expect you in the office on the next day? Remember they are paying you an $80k/yr "bonus" for three nines of availability. Or are you just going to tell the other 10k employees "hey it won't go down while I'm in Tahoe, just trust me."


> Maintaining a service, with the same availability as Github, is 3 FTE minimum.

Keeping GitHub up is a complex problem, because they are trying to offer the same service to millions of people. It is a totally different beast than having to manage a service for thousands of people. Github needs multiple datacenters in multiple regions. A company self-hosting needs one server and can handle thousands of users. A lot less complexity, a lot fewer moving parts.

> when your bespoke system has more than a day of downtime?

What is "bespoke" about Gitea? Or Gitlab if you want an all-stop-shop solution? They can also do all the enterprisey things that one expects from Github.

Even if some catastrophe hits the server, your downtime will be measured by the time that it takes to provision a new server (hopefully automated), run some (hopefully automated) install scripts and restore the backup. All this work is a one-time cost, and any decent sysadmin should be able to handle it in minutes.


Here's a bean counter comment: Your self hosted site goes down. What's the cost per hour of my 10,000 users not being able to use it?

The same as the cost per hour of GitHub going down, I agree, but GitHub will have 20 people working on fixing it. We'll just have to wait for you?

What do I tell my 10k users?

How about security? Are you going to manage the provisioning of those users, the access control? How about audits? How about scaling? Your magical .5 FTE will do all this on top of your daily duties?

The calculus is never "just give me a raise" – it's not paying 1Password 80k vs paying some IT person that much. It's just that that money supports a core function for 10k people.

A former-boss once told me when he became a CTO of a very large company – he said, "at our scale, most of our conversations revolve around risk rather than cost. we even start to wonder, will this vendor be able to cope with our level of demands? what if they go out of business? what if they want to quit? what is the cost to our organization in time and money to have to switch to a different vendor/product/etc?"

Who knows, a company of that size might save half that money in cybersecurity insurance premiums just by adopting 1Password. (guessing here...)

It's just way more complex than you think.


> We'll just have to wait for you?

Already answered elsewhere: Github is infinitely more complex to keep up than an internal company service. If Github goes down, it can be down for a few hours. Or worse, it fails in a way that affects only a handful of users and it takes them 11 days to respond to your support ticket.

If an internal service fails, the sysadmin (not the "guy who was brought in to set up gitea") can run a troubleshooting playbook and, in the worst case, have a whole new server running from the latest backup in a few minutes. Even better, your internal monitoring detects that the service went down and it can be reprovisioned automatically.
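
A rough sketch of that watchdog idea, assuming a health endpoint and an existing reprovisioning script (both hypothetical; in practice your monitoring/orchestration tooling would do this for you):

    """Watchdog: poll an internal Git service, reprovision after repeated failures."""
    import subprocess
    import time
    import urllib.request

    HEALTH_URL = "https://git.internal.example/api/healthz"  # assumed health endpoint
    FAILURES_BEFORE_ACTION = 3
    CHECK_INTERVAL_S = 60

    def healthy() -> bool:
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
                return resp.status == 200
        except OSError:
            return False

    failures = 0
    while True:
        failures = 0 if healthy() else failures + 1
        if failures >= FAILURES_BEFORE_ACTION:
            # Hand off to whatever rebuilds the server from the latest backup
            # (Ansible playbook, Terraform run, etc. -- placeholder command).
            subprocess.run(["./reprovision-from-backup.sh"], check=True)
            failures = 0
        time.sleep(CHECK_INTERVAL_S)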

> Are you going to manage the provisioning of those users, the access control?

Sigh...

A company at this size would have already a SAML/LDAP/AD/SSO solution for their other services. You just integrate your service with it.

> most of our conversations revolve around risk rather than cost

I don't disagree. I just think that is just a way to say "I want to pay to buy some CYA".

I fully believe that someone managing a larger organization has other things to look at in terms of priority, but at the same time I think accepting this blindly leads to some complacency.

It's like software developers who are so used to their top-of-the-line workstations hooked to their 3-monitor and fiber internet who forgets that their users might be connecting to their website through a $50 phone on a 3G connection.

There are other ways to reduce your risks. Prioritizing open source is one of them, as that gives some sort of "built-in" protection against vendor lock-in. Giving more autonomy to your departments so they can independently choose their IT solutions, eliminating the chance of systemic risks. Adopting a solution that lets you either outsource or bring it in-house without expensive switching costs. And all of those could be applied no matter the size of the organization, yet it seems that everyone wants to think they are some Silicon Valley unicorn and feel justified in spending ~$1M/year on a password manager.


One of the variables nobody's mentioned is whether your clients will listen to excuses and accept the burden shift using a vendor implies.


>A company self-hosting needs one server and can handle thousands of users. A lot less complexity, a lot less moving parts.

Just because you have one server doesn't make you immune to downtime. If the server goes down at 8am EST (5am PST), who is on call to fix it? Or is the server just down for 3 hours until you get into work (assuming you live on the West Coast)? That's what I mean by 3 FTE. If this is core to your company and you require 24/7 uptime, then either one person is on call 24/7 or 3 people doing 8 hour shifts.

>Even if some catastrophe hits the server, your downtime will be measured by the time that it takes to provision a new server (hopefully automated), run some (hopefully automated) install scripts and restore the backup. All this work is a one-time cost, and any decent sysadmin should be able to handle it in minutes.

And if it happens at 1:30am? Or if it happens while that Sysadmin is on vacation? I don't think you are considering that your solution has a bus-factor of 1 for a service that is depended on for a 1000 other people. The idea that an $80k/year cost is preposterous for a core service only applies to people who think humans are fungible, cheap robots who don't sleep, get sick or take vacations.

I'm not saying Gitea or Gitlab are bad products. Plenty of companies self-host Gitlab with their own teams. But the idea that it costs _much_ less than Github Enterprise once you get to company sizes of 1,000+ is absurd. We haven't even considered what happens when Gitea or Gitlab starts serving you 500s because your company hits some use case that the OSS developers haven't thought of. Who fixes that? Now you are looking at sponsoring development of that. Or do you fix that? Can you guarantee any SLA on fixing any bug in Gitea? The company is moving to Ubuntu 420.69 LTS. What's the timeline on getting Ubuntu 420.69 in CI? Is that .5 FTE engineering hours?


Sorry, I was abusing the terminology. When I say "0.5 FTE", I don't (necessarily) mean that you'd be hiring someone to work part-time. What I am saying is that if you have an IT team, the time that gets dedicated to this particular service should be at most 20h/week.

Presumably, your IT team will have many other projects to work on, some of them will be on-call, some of them will be on vacation, etc, etc. But when allocating the resources, I'd guess that 20h/week is plenty to such a service, even for a corporation of thousands of people.

Does that help?


> Maintaining a service, with the same availability as Github, is 3 FTE minimum,

Someone should tell every employer I've ever had this.


>This kind of logic I would expect only from a bean-counter

I think you're not doing a very good job of updating your expectations based on what other folks are saying. Across multiple threads you have a whole variety of folks trying to explain their reasoning, and you're dismissing them all out of hand. You don't seem to be giving much consideration to the idea that you could be wrong in at least some reasonable organizations (rather than all the folks you're replying to).


> This kind of logic I would expect only from a bean-counter, and a bad one to be honest... Why would anyone pay $80k/month to solve a problem that could be solved with 0.5 FTE?

I think you're underestimating what it would take to solve the problem (even one as "simple" as this one). But, let's assume there is an off the shelf open source solution that is perfect for your environment and requires no changes and integrates easily in your corporate processes without any work (this is already very unrealistic...).

Then, you need to run it and provide 24/7 uptime for your 10k users, which are presumably spread across multiple timezones. Your 0.5 FTE is going to be on call 24/7? Good luck filling that position.

Finally, how are you going to support the 10k users? Is that 0.5 FTE going to answer all the support requests?

I'm not saying it's bad to do things in house - and there are really good reasons to do so. You just have to be realistic. It's going to take a whole lot more than 0.5 FTE to run even a "simple" service like this with the kind of reliability and support your 10k users will demand.

Suddenly that $80k/mo to make it someone else's problem doesn't sound so bad :)


> Finally, how are you going to support the 10k users? Is that 0.5 FTE going to answer all the support requests?

I know I'm cherry picking but the outsourced solution is taking 11 days to say "try again" so maybe whatever that employee takes is an improvement, heh



> Why would anyone pay $80k/month to solve a problem that could be solved with 0.5 FTE?

Depends on the structure of your org. Is the 0.5 FTE able to maintain add-ons for all the browsers, deal with on- and off-boarding, maintain multiple apps for multiple platforms, and write end-user documentation at the same level as 1Password does?


Pay $80k/month to 1Password for a year, and one year later you (and all 1Password customers) are still dependent on them.

Contribute $800/month to the vaultwarden developers, one year later you (with the help from all the other companies that understand the benefits of open source) will end up with a FOSS product that can be as good or better than 1Password: https://news.ycombinator.com/item?id=31582369


> This kind of logic I would expect only from a bean-counter, and a bad one to be honest

And your logic is so out of touch with reality that I don't know whether you're trolling or not.


> Ha, these costs are a DROP in the bucket for businesses at 10k employees

First, it's not just the cost. It's the fact that a company of 10k employees does not need to be at the mercy of a third-party vendor. And if the company is going to go with the line of "we want to outsource everything that is not core to the business", then they should be asking themselves what their "core" business is that warrants 10k employees in the first place.

Second, if it's just the occasional SaaS that they need and they can't find or train people to do it, I'd understand it. The problem is when you look at the typical early-stage startup who raises $500k for a seed stage round and they think it is normal/expected to burn $10k/month on Github/Jira/CI/Contentful/Dropbox/Figma/etc/etc/etc, when there are alternatives that could work and people don't even try them.

> You're missing things like:

No, I am not. Check the comment that I referenced. I suggested a scenario where you can hire someone to implement the solution and have an ongoing support contract for a fraction of the cost from 1Password.

> Do developers want to work with a system that isn't GitHub and be productive?

Have they even tried? Surely any Engineering Manager worth anything should be able to at least back up their choices based on more than "we are using this because this is what everyone else is using"?


> It's the fact that a company of 10k employees does not need to be at the mercy of a third-party vendor.

Oh man, where do I even start here. Have you ever worked in a company with 10k employees? In IT at a company with 10k employees?

> Github/Jira/CI/Contentful/Dropbox/Figma/etc/etc/etc, when there are alternatives that could work and people don't even try them.

So let me get this straight. You expect every organization with 10k employees to:

- Find talent to build, operate, and maintain said systems

- Keep that talent

- Train employees on said systems. Cheryl in accounting has only used SAP and you want her to understand how to use XYZ?

- Spend the time NOT having the solution available to them while they wait for said system to be built/implemented.

- And and and...

> No, I am not. Check the comment that I referenced. I suggested a scenario where you can hire someone to implement the solution and have an ongoing support contract for a fraction of the cost from 1Password.

You absolutely are. You clearly have no understanding when it comes to what is required to "just hire someone to do it".

> Surely any Engineering Manager worth anything

Yaaa, no true scotsmen!

The trope of "just hire someone" needs to die. Orgs (of all sizes) regularly do buy vs build analysis all of the time. Saying that one is better than the other unilaterally is just plain wrong.


> Orgs (of all sizes) regularly do buy vs build analysis all of the time.

The point is not "buy vs build". The point is that one does not exclude the other.

Sally from accounting has only used SAP? Fine, keep paying for it, but also invest in the development of an alternative. Also go to Sally and offer training in this alternative. Ask Sally what is missing in the alternative compared to what she's used to. Take her feedback to the developers and tell them "If you solve X, Y and Z, we might be able to drop SAP and switch to you".

> You expect every organization with 10k employees to find talent to build, operate, and maintain said systems.

No. I'm expecting that some of them will be able to do it in-house. Others will look for a third-party vendor that does not lock them in. Others will continue using the proprietary solution BUT will set aside part of their budget to invest in open source alternatives, as a way to create an open source alternative that can help them negotiate with the proprietary vendor.

And others could do all of it, or none of it.


> The point is not "buy vs build". The point is that one does not exclude the other.

This whole discussion is literally "buy vs build" and the fact that you don't know that means you're way out of your league here.


Something troubling you? Is making personal attacks going to make you feel any better? If putting me down helps your self-esteem: my very first comment started with "I was not cut out to work for a large organization". So cheer up, you get to wear the big man pants, ok?

The one thing that bugs me though... Maybe it is me being out of my league, but why can't bigger companies at least foment the development of alternatives that reduce their dependency on third-party, closed service providers?

Another thing that I fail to grasp: why is it that smart and distinguished people like you always put a response in absolute, all-or-nothing terms? Case in point: when presented with one possible strategy, your counterargument was "do you expect all companies to do this?". The answer is (obviously) negative. In my childish mind, I thought it was possible to have different companies doing different things. Can you explain why this type of thinking is so clearly wrong? What do they teach in the Big Boys League that show how that absolute conformity and total adherence to the rules is the surest way to win?


> but why can't bigger companies at least foment the development of alternatives that reduce their dependency on third-party, closed service providers?

They do. This is literally called "buy vs build".

> why is it that smart and distinguished people like you always put a response in absolute, all-or-nothing terms?

Because you started the conversation with absolute, all or nothing terms, which I countered. Read these two statements and tell me which is absolute, all-or-nothing and which isn't:

   *It's the fact that a company of 10k employees does not need to be at the mercy of a third-party vendor.*

   *Orgs (of all sizes) regularly do buy vs build analysis all of the time.*

> What do they teach in the Big Boys League that show how that absolute conformity and total adherence to the rules is the surest way to win?

Lay off the conspiracy kool-aid, will ya? Lots of reasonable people are disagreeing with you because we've all experienced this, not because some holier-than-thou entity is telling us to.

> Something troubling you?

Not the slightest, thanks though.


Nice to know all is good with you; it's a shame though that you resorted to personal attacks and name-calling.

> This is literally called "buy vs build".

"Buy vs build" presupposes an either/or decision. Like there is no point in building an alternative once a company has decided to buy something or vice-versa. It also seems to implicate that if a company chooses to "build", that they will be taking all of the burden and costs of development to themselves.

What I'm talking about is beyond the buy/build dichotomy. What I am advocating is for companies to treat all and any systems that they depend on as a risk, and never stop looking or financing the development of F/OSS alternatives that can mitigate these risks.

This does not mean that all companies should stop using Github and implement their own source repo/CI systems. It just means that companies should look for a way to hedge their bets.

And if you think that this is something that companies do "all the time", I'd be eager to know of any, e.g., Design Agencies that support/contribute to the development of Gimp/Inkscape as insurance against their heavy dependence on the Adobe Suite. Or companies (of any size) that allocate part of their budget to fund F/OSS alternatives to the SaaS solutions that they use.

> Lots of reasonable people are disagreeing because we've all experienced this

What I am seeing is "lots" of people attacking a strawman like the one you created. Maybe my phrasing was off, but my original question was "what would be the size of the team where having someone in-house to mitigate the risk of this dependency becomes clear?", and somehow this got translated (by you and others) to "why don't they just do everything in-house?".


> it's a shame though that you resorted to personal attacks and name-calling.

> but I wonder what would be the point where it wouldn't take a bean-counter to say

Name-calling you say?

> Maybe my phrasing was off, but my original question was "what would be the size of the team where having someone in-house to mitigate the risk of this dependency becomes clear?"

It wasn't.

> It's the fact that a company of 10k employees does not need to be at the mercy of a third-party vendor.

Re-read what you wrote again. You're just not discussing in good faith...we're done here.


I didn't call you a bean-counter. You called me directly for "being out of my league".

> Re-read what you wrote again.

I read these two sentences you put there, and I still can not see what is contradictory about them.

The idea I am trying to convey from the very beginning (even from last week's conversation) is "I think that buying from a closed SaaS when there is an open source alternative is stupid. Not only does it cost more, it also does not really mitigate risk and it just gives an excuse to a manager to punt the problem to someone else. If however the open source solution is lacking some key functionality or you think that the best solution at the moment is to go with the established player, at least throw some money to the developers of the open alternatives as a hedge and eventual way out. The open source solution can give you optionality. You can do it in-house, or you can outsource it, etc."


> Not only does it cost more, it also does not really mitigate risk and it just gives an excuse to a manager to punt the problem to someone else.

> You can do it in-house, or you can outsource it

You are 100% correct companies should do this AND better yet this is exactly what is referred to as "Buy vs Build"[0]. You keep trying to convince me and others that companies ARE NOT DOING THIS ANALYSIS. I'm telling you that they most certainly are because it's literally what I do for a living (tech strategy consulting). And guess what, the reason companies keep coming back to closed-sourced providers (GitHub, etc.) is because over and over again we conclude it makes financial sense to do so.

> at least throw some money to the developers of the open alternatives as a hedge and eventual way out.

Now, this part is a hilarious suggestion. No one does this or would do this, because it makes zero financial sense.

[0] - https://medium.com/adobetech/when-to-build-vs-buy-enterprise...


> You keep trying to convince me and others that companies ARE NOT DOING THIS ANALYSIS.

I am not arguing that they don't do "buy vs build" analysis. What I am saying is that they stop here, where it would be smart if they look for other options.

> No one does this or would do this, because it makes zero financial sense.

This is what I am saying that companies DON'T DO, and I am arguing that not doing this is short-sighted. There are plenty of strategic reasons that make it reasonable to invest in open source projects:

- It is good PR. [0]

- It can help with hiring: [1]

- It can be used as a negotiation tactic when dealing with different vendors. This is just an extension of the "Smart companies commoditize their complements" [2] idea from Joel Spolsky. Going back to the 1Password example: even if you still want to continue using 1Password, the "threat" of a viable open source alternative is enough for them to not try to jack up prices. By investing small sums in open source projects, you might be able to negotiate the price down on the closed services that you depend on.

- It reduces the risk of becoming locked into solutions from third-party vendors. Even if your company "keeps going back to GitHub", it might be wise to contribute to the development of an alternative as a backup plan. So, if push comes to shove and GitHub outages become so problematic that they affect your business, there is a way out.

Not to mention that it can be tax-deductible, so you can get all these benefits without really affecting your bottom line.

[0]: https://blog.sentry.io/2021/10/21/we-just-gave-154-999-dolla...

[1]: https://news.ycombinator.com/item?id=31679118

[2]: https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/


> I've been thinking for a while whether there could be a market in "open source strategy consulting".

It does exist, but not surprisingly, people who want to pay less usually aren't keen on paying lump sums (if anything at all) to someone who could teach them how to do it.


Self-hosting Gitlab typically consumed about 1/8 of an FTE on projects I've worked on, which comes out to a break-even point of 50-100 users, depending on your sysadmin salaries.


> Now, these prices from GH can make sense if you are a team of 10-50, but I wonder what would be the point where it wouldn't take a bean-counter to say "can't we just hire someone else do that in-house, part-time? It would cost as much or less,

$250/user/year; a $50k yearly salary is the equivalent of 200 users.

I don't think a $50k salary is a lot. On the other hand, I've heard you should roughly double salary costs to approximate the cost to the employer (including all overhead). So a $50k/yr local Gitlab (or some such) admin only becomes viable, salary-wise, at 400 paying users. And if you have 400 users dependent on that one employee for their daily work, you need backup: at least another 0.5 FTE - so 600 users before the business case makes sense salary-wise.

So that's not really soon - if you're reaching that size, you want professional support, not what a team of 1.5 employees can cobble together.
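For what it's worth, here's a rough back-of-the-envelope sketch of that break-even math. The $250/user/year figure is GitHub's published Enterprise price; the salary, the 2x overhead multiplier, and the 1.5 FTE are just the assumptions from the comments above, not authoritative numbers:

  # Break-even estimate: self-hosting staffing cost vs. per-seat SaaS fees.
  # All inputs are assumptions from this thread; plug in your own numbers.
  GITHUB_COST_PER_USER = 250   # USD/user/year (Enterprise list price)
  ADMIN_SALARY = 50_000        # USD/year, assumed sysadmin salary
  OVERHEAD_MULTIPLIER = 2.0    # fully loaded cost ~= 2x salary
  ADMIN_FTE = 1.5              # one admin plus a 0.5 FTE backup

  staffing_cost = ADMIN_SALARY * OVERHEAD_MULTIPLIER * ADMIN_FTE
  break_even_users = staffing_cost / GITHUB_COST_PER_USER

  print(f"Self-hosting staffing cost: ${staffing_cost:,.0f}/year")
  print(f"Break-even at ~{break_even_users:.0f} paying users")
  # -> ~600 users (ignores hardware, storage, and the value of vendor support).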


There is another thing that goes in favor of SaaS: Capex vs Opex. Most of these expenses are tax-deductible, so from a bean-counting point of view the number would have to be a lot higher than the sticker price.

But when I say "hire someone else to do it", it wouldn't have to be an employee. It could be a consultant, or a third-party IT support service. I really don't think this kind of work would require a full FTE. You'd hire someone to do the initial setup and then run a support contract to ensure things keep going. With such a setup, I think you could pay less than what the vendor charges while also getting more control over your operations.


The level of risk and the effort needed also scale with the user base. Supporting a local Gitlab or similar for 20 users is more like 0.05 FTE, whereas it can easily be 0.5 FTE or more for 100 users.


$250 per year for people who get paid, say, $250k per year.

So that is 0.1%; I'm sure the productivity boost from GH can be greater than that for knowledge workers. Even for people getting $25k, the productivity boost could be more than 1%.
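As a quick sanity check of that arithmetic (using the $250/user/year price and the two hypothetical salaries above):

  # GitHub seat price as a fraction of salary, for the two salaries above.
  github_cost = 250  # USD/user/year
  for salary in (250_000, 25_000):
      print(f"${salary:,}/yr salary: a seat costs {github_cost / salary:.1%} of it")
  # -> 0.1% and 1.0%, respectively.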

That said, this kind of support turnaround is just bad and should be called out forcefully.


Bean-counting enterprises do all kinds of annoying shit to deal with the GitHub cost, like automatically disabling user accounts if unused for 30 days, or splitting into multiple orgs.


How do multiple orgs reduce costs?


Limiting the number of licenses and requiring business justification, so one part of the org can't on-board 50 users who will probably never need to commit anything. I imagine at some point GH might get variable pricing like New Relic, where you pay less for a read-only user.


I suppose it's about setting different tiers for different orgs: for instance, a higher tier for the org with your main dev repos and deployment/CI system, and a more basic tier for orgs with your translators' and QA team's repos, etc.


Go ahead, I have it public for a reason :).


Redundant Git access and repo access can be dealt with by using Artifactory. But then you have to hope Artifactory works. The SaaS offering isn't very stable and self-hosting is a huge pain in the butt.

You could also architect your business not to go down when an upstream vendor goes offline, but maybe I'm crazy.


> You could also architect your business not to go down when an upstream vendor goes offline, but maybe I'm crazy.

This is what I'm wondering about the most, honestly. With Azure DevOps we have deployment slots, so if a deployment fails, no swap happens and the currently available online version stays online. Something about putting all your eggs in one basket, but if you're going to do it anyway, make sure you handle a pipeline failure cleanly. A failure to build should not take down production, ever.


Can confirm Artifactory is a PITA.

Used to use Rietveld and now using Gerrit. You have to buy in to that code review methodology and you lose a lot of the other things (issue tracking, CI), but those can be handled elsewhere by better/more-specific tools.


This is pretty surprising to me: even as a free GitHub user who probably has an above-average amount of contact with GitHub support, I've never had to wait more than 30 minutes to an hour for a reply to a ticket.

If they really give you worse service than free customers when you don't have a premium SLA, that's a little troubling and they should maybe rethink their support strategy - but I also imagine that helping with outages is a little more complex than regular support tasks (account recovery, sponsors, etc.).


I've struggled with their support for a few years now. One of my repositories, a very popular one, is used as an example in some of GitHub's own educational resources, which causes loads of spam issues, PRs and comments on said repository.

Support has refused to so much as acknowledge the problem for two years now, if not longer.

It got so bad I stopped paying for GitHub entirely and will never give another cent.


Reading SLAs can be complicated, but it definitely pays off both in terms of money and reduced risk. Here's a dissection of the S3 SLA as an example: https://alexewerlof.medium.com/dissecting-the-s3-sla-26421dd... - and Amazon's is quite good.


It says right there:

Guaranteed SLA?

No


Yes, it says so on the page zamalek helpfully linked.

But it doesn't say so on the page you get to if you select "Pricing" from GitHub's homepage. Nor, as far as I can tell, does it say so on any page linked to from that page.


On the pricing page I see that Enterprise includes:

Standard Support - GitHub Support can help you troubleshoot issues you run into while using GitHub. Get support via the web.

Premium support is listed as "available" (so not included):

With Premium, get a 30-minute SLA and 24/7 web and phone support. With Premium Plus, get everything in Premium plus your own Support Account Manager and more.


Right, from which you can infer that you don't get a 30-minute SLA. But not that you get no SLA at all.


An SLA is a promise to do something in a specific amount of time. If you don't have that promise, you don't have an SLA.


> With Premium Plus, get everything in Premium plus [...]

Off topic, pedantic, but still...

That is horrible writing. And that's not even going into the "and more".


Why would they need to list things that _aren't_ included? Where would that list stop?


Generally, it would list the things that aren't included in that tier but are in higher tiers. Like they do for most of their features.


That makes sense.


That's basically my point: take the time to check your coverage.


It's time to move away from GitHub. There are better alternatives such as Codeberg or Gitlab.


I'm pushing for Gitlab (who have an SLA built into their similarly priced offering), but obviously the stakeholders need to decide how we eat this risk.


See my comment here: https://news.ycombinator.com/item?id=31411169 - I don't think GitLab is better on this front.


I use both GitHub and Gitlab for contributing to various open source projects. In my experience Gitlab is far superior. Github is too sluggish and buggy compared to Gitlab.


Gitea is very pleasant to self-host...


I always see Gitea recommended, but is it as mature as self-hosted Gitlab? From authentication methods (SAML, Cognito) to code review, issues, and wiki?


Lacks search functionality (among other features)


True, I forgot Gitea!


Gitea has CI now?


Gitea and Drone CI work really well together, and they are so much lighter and easier to deploy and manage compared to Gitlab that it's not even funny.


Does "lighter" really matter in this case? From first-hand experience with Gitea, I can tell you that it most certainly is not more reliable than Gitlab. I've gotten 500 internal server errors from just clicking around the UI too fast while it was running locally on my machine. Constant errors and bugs after running it for about 4 months were ultimately what pushed me to Gitlab.

Don't know anything about Drone CI, but it seems to me like a lot more work to manage two separate systems than it is to manage a single Gitlab instance, where everything is already nicely integrated.

I'm not a sysadmin by any stretch of the imagination, but I was still able to set up Gitlab, Gitlab CI with a bunch of runners, Gitlab Pages, and offsite backups with very little effort on my self-hosted instance. All I had to do was edit one config file to enable the various services (backups were with borg and systemd, via the "gitlab-ctl" CLI).

Been running that setup for probably 2 years now, with the only outages being external internet/power outages. I'm the only user, so my usage isn't going to reflect OP's needs at all, but it's still more reliability than I ever got from Gitea.


There is also Woodpecker CI if you want a lightweight Drone fork.


Not built in, but it's somewhat documented and supported by the community. Take a look at the docs and previous HN threads.


I think you're comparing free GitHub to Codeberg. GitHub is a lot more than code hosting for open source.


Technically, git is distributed version control and should be workable even if GitHub is down. You could even fetch from your coworkers directly, for instance. We've really fallen a long way if GitHub is a single point of failure.
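A minimal sketch of that peer-to-peer fallback, assuming a coworker's machine is reachable over SSH (the remote name, host, and path here are made up for illustration):

  # Add a coworker's clone as an extra remote and pull from it directly,
  # bypassing GitHub entirely. Host and path are hypothetical.
  git remote add alice ssh://alice-workstation/home/alice/src/project.git
  git fetch alice
  git merge alice/feature-branch   # or cherry-pick / rebase as needed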


They sell a CI/CD pipeline service that one can build their infra around. It is supposed to work "in the cloud", with the availability/reliability of the cloud... and this is one of their selling points.


I'm not sure what service GitHub thinks it is providing these days. It's like they're trying to lose customers.


I spent 2 months trying to get someone at GitHub to add another user. If you have GitHub Enterprise, and are paying them, you have to go through sales to add users.

Good fucking luck getting them to call you back or respond if you aren't adding 1000+ seats. It's unbelievable.



