It cut your cloud services bill by 93%, but how much did it increase your engineering bill?
If your engineering time is free, then this calculation is complete. Otherwise it is not.
Does that 93% saving pay for a DB engineer, or for enough of your developers' time to build the same quality of redundancy you'd get from a DBaaS?
This calculus is going to be different for every DB and every company, but the OpEx impact of switching to dedicated servers is a bit more complex than you suggest above.
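That calculus can be sketched as a simple break-even calculation. All the numbers below are hypothetical placeholders, not figures from anyone's actual bill; the point is only the shape of the comparison:

```python
# Hypothetical break-even sketch: do the cloud savings cover the extra
# engineering cost of self-hosting? Every number here is made up.

CLOUD_MONTHLY = 10_000       # hypothetical DBaaS bill ($/month)
SAVINGS_RATE = 0.93          # the 93% cut discussed above
ENGINEER_HOURLY = 100        # hypothetical loaded engineer cost ($/hour)
SETUP_HOURS = 160            # one-off migration effort
MAINTENANCE_HOURS = 20       # ongoing ops work per month

monthly_savings = CLOUD_MONTHLY * SAVINGS_RATE
monthly_ops_cost = MAINTENANCE_HOURS * ENGINEER_HOURLY
net_monthly = monthly_savings - monthly_ops_cost
setup_cost = SETUP_HOURS * ENGINEER_HOURLY

if net_monthly > 0:
    breakeven_months = setup_cost / net_monthly
    print(f"Self-hosting pays for itself after {breakeven_months:.1f} months")
else:
    print("Self-hosting never breaks even at these rates")
```

With these toy numbers it breaks even quickly, but flip the engineer rate or maintenance hours and the sign of the answer changes, which is exactly why it differs per company.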
(a) I’m talking about projects I host in my free time
(b) My server budget is fixed.
So, for me the choice was between "use cloud tools, and get performance worse than a Raspberry Pi" or "run dedicated, get more performance, storage, and traffic than I need, and actually be able to run my stuff".
For less than the price of a Netflix subscription I’m able to run services that can handle tens of thousands of concurrent users, with terabytes of storage (and enough traffic allowance that I never have to worry about it).
And setting it up cost me only a few days.
For me it was a decision between being able to run services, or not being able to run them at all.
Sure, hobby/spare-time projects are one of the cases where it's perfectly reasonable to self-host; it's often fun to learn the underlying tools by rolling your own DB, and doing so can save you some cash (at the expense of your own time).
However, that paradigm is not really applicable to GitLab's OpEx calculation; they have to pay their engineers ;)
Yes, it might be more affordable. They seem to think it is, as they have chosen to go with self-hosted.
My point is simply that your posts above didn't address the complexity of their calculation, since they didn't factor in the costs of switching to self-hosted.
For me, personally, going from cloud servers to rented dedicated servers cut my bill by 93% – more than an order of magnitude – at the same performance.
In fact, for me it’d be cheaper to run 10x as many dedicated servers than to use cloud solutions.
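The "more than an order of magnitude" claim follows directly from the arithmetic: a 93% cut leaves you paying 7% of the original bill.

```python
# A 93% bill cut means paying 7% of the original price,
# i.e. roughly a 14x reduction -- more than an order of magnitude.

savings = 0.93
remaining_fraction = 1 - savings            # 0.07 of the original bill
reduction_factor = 1 / remaining_fraction   # how many times cheaper

print(f"{reduction_factor:.1f}x cheaper")   # prints "14.3x cheaper"
```

It also explains the 10x remark: even running ten times the hardware at 7% of the unit price lands at 70% of the original cloud bill.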