How to save $13.27 on your SaaS bill (dgerrells.com)
179 points by rustystump 86 days ago | 76 comments



This makes me tired. I know it's supposed to be humorous self-deprecation, but it's soul-crushing to see the pseudo-real-time thought process behind the fantastically over-engineered setups from my day jobs. All for someone's humble blog?

Obligatory HN footnote: My blog costs $6 a month to serve HTML from DigitalOcean. Landing in the top five links a few times on HN didn't make the Linux load blip much past 0.20. GoAccess analyzes nginx traffic logs for free, if you want to know which countries are scraping your pages.
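
If you want to try it, the whole setup is roughly one command (assuming nginx's default combined log format; paths are illustrative):

  goaccess /var/log/nginx/access.log --log-format=COMBINED -o report.html

It can also tail the log into a live dashboard with --real-time-html.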


> All for someone's humble blog?

Maybe they did it to have fun


I'm guessing here, but part of the critique might be with that definition of fun.


A lot of places I've worked at never gave me the chance or opportunity to use all the fanciful technologies we read about so often. Building your own blog was often the only outlet to explore them.


If you are only serving static content, it's hard to beat GitHub pages.


Cloudflare Pages is better overall because it’s trivially easy to integrate with DNS for your custom domain/Cloudflare Workers, and it handles staged changes better IMO. You can point it at a GitHub repo, so unless you have a complex build it’s easy to set up.

Unfortunately, IME it’s not a super well-polished product (I can’t for the life of me get their CLI “wrangler” to log in on a headless machine, and their HTTP APIs are not documented well enough to use for non-git file sources, so I can’t get it to work in my not-so-special dev environment setup). So it’s only better if you can get it to work, although that’s something you’ll probably figure out in the first 5-10 minutes of using it.


But Cloudflare has a growing monopoly on internet traffic that is worse for the internet than the privacy-busting laws being passed. If you are a technologist worried about the distributed nature of the web, you should avoid it.


As opposed to Microsoft...


You can use custom domains on GitHub. Wouldn’t go near Cloudflare.


Why?


GitHub Pages is pretty bad for static content with its universal

  Cache-Control: max-age=600
that can’t be changed. Your assets should have much longer expiry and hopefully be immutable. Just get a server, it’s cheap and you can do proper cache control and you’re not beholden to your Microsoft overlord.
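
For example, a minimal nginx sketch of that split (assuming a build step that emits content-hashed files under /assets/; paths are illustrative):

  # hashed assets: cache for a year, never revalidate
  location /assets/ {
    add_header Cache-Control "public, max-age=31536000, immutable";
  }

  # HTML: always revalidate so visitors see updates right away
  location / {
    add_header Cache-Control "no-cache";
  }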


What does that matter?


With long expiry/immutable assets, only the HTML needs to be refetched from the server on refreshes or subsequent visits, instead of everything after merely ten minutes. On slow and/or high latency networks the difference can be huge. And you don’t even need to intentionally refresh — mobile browsers have been evicting background tabs since the dawn of time, and Chrome brought this behavior to desktop a while ago to save RAM (on by default).


But they are not refetched if ETag is used properly: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ET...

The most you'll have is some HEAD requests.
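
Roughly, the revalidation exchange looks like this (illustrative URL and ETag value):

  $ curl -I https://example.com/app.js
  HTTP/1.1 200 OK
  ETag: "abc123"

  $ curl -I -H 'If-None-Match: "abc123"' https://example.com/app.js
  HTTP/1.1 304 Not Modified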


By refetch I mean re-requested, which can return 304 responses. You still have to do a roundtrip for each resource in that case, and many websites (including static ones) have this waterfall of requests where html includes scripts and scripts include other scripts, especially now that some geniuses are pushing adoption of native esm imports instead of bundling. The roundtrips add up, and good luck if your link is unreliable in addition to being high latency. Compare that to proper caching where a refresh doesn’t request anything except maybe the html. I have experienced the web on such a link and it’s a shitshow.


Seems like a problem with the websites and not the cache expiry.

I have also experienced this, back in the dial-up days. I agree it's not pretty, and most don't even consider it.


Ok. I don't think this is a big deal for the vast majority of blogs, like mine, that are hosted on GH pages. It's just HTML and some photos that are unique per post. But I also don't see why GH would put the number so low.


Because they have this one max-age for everything, from things that should be refetched frequently (HTML, unversioned scripts and stylesheets, etc.) to things that should be immutable (versioned scripts and stylesheets, images, etc.). They don’t understand your website. You do, and you can set exactly the right headers for the best user experience. Btw, you can set the right headers with Netlify and Vercel as well.
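
On Netlify, for instance, that’s a _headers file in your publish directory (paths here are illustrative):

  /assets/*
    Cache-Control: public, max-age=31536000, immutable
  /*
    Cache-Control: public, max-age=0, must-revalidate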


Presumably so that when it shows the checkmark and says your website was updated, you don't go there and wonder why it hasn't updated for you.


I'm not confident a "cheap server" (like a $5/mo DO droplet) would be able to withstand being on the front page of HN, but I am pretty confident a GitHub Pages site could.


I had a blog article of mine on the HN front page a few years ago and the nginx serving static pages from an ultra cheap VPS didn't even break a sweat.


Is there a workaround for this within the realm of GitHub Pages? My initial search concludes: without GitHub allowing custom headers, no.


Or Cloudflare Pages. As far as I can tell, static content is served at no cost and dynamic requests have very generous free limits (something like 100k requests/day).


I love GH Pages. This one almost didn't fit on it because I wanted to serve a small DB file to the client rather than pay for a remote one. Luckily I was able to keep it under their pretty generous file size limit.

https://github.com/dgerrells/liftme

https://dgerrells.github.io/liftme/


The main downside of GitHub pages is that they don't support running your own Jekyll plugins from _plugins; sometimes it's just a lot easier to write a bit of Ruby code. That said, you can just generate stuff locally and push the result, but that's the main reason I've been using Netlify.


Can't you do pretty much anything in GitHub actions?


You mean run Jekyll and deploy "manually"? That should work, yeah; didn't think of that actually. But the standard "GitHub Pages" deploy won't work with custom Ruby.
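
A rough, untested sketch of such a workflow (assumes your Gemfile pulls in the custom plugins; action versions may have moved on):

  name: Build and deploy Jekyll
  on:
    push:
      branches: [main]
  permissions:
    contents: read
    pages: write
    id-token: write
  jobs:
    build:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - uses: ruby/setup-ruby@v1
          with:
            bundler-cache: true
        # runs custom plugins from _plugins, unlike the stock Pages build
        - run: bundle exec jekyll build
        - uses: actions/upload-pages-artifact@v3
          with:
            path: _site
    deploy:
      needs: build
      runs-on: ubuntu-latest
      environment: github-pages
      steps:
        - uses: actions/deploy-pages@v4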


My main server costs $0, but only because I sold my soul to Oracle: https://www.oracle.com/cloud/free/


Souls really don't go for much nowadays, do they. Faustian bargains used to at least get you some magic powers and renewed youth.

"I sold my soul and all I got was a $5 virtual machine"


Yeah. I'd much rather pay $5 (or $20) to the little guy than to the giant company


Not as many little guys anymore, much consolidation :(


The difficulty with the "sold your soul" meme is that preserving your soul is a moving target. I've got some Oracle free tier instances. They get deployed with nixos-rebuild, same as anything else. The main difference between them and any other virtual server provider is when I've got to do something that requires logging in to the overwrought web interface, it's slightly less friendly than other providers (the IP config is a bit weird, too).

Using an offering from a specific company is not selling your soul. Selling your soul entails adopting something in a way that you become reliant upon it, giving whomever controls it leverage over you. The chief one these days is using Proprietary Software 2.0, and especially writing significant code that ends up inextricably wed to it. That can include the Oracle Cloud API, but it also includes every other lock-in-hopeful proprietary service API, including all of these "easy" and "free tier" offerings from not-yet-openly-associated-with-evil SaaS "startups".

So in short if you're choosing between some proprietary solution that offers "free" hosting (eg Heroku, Github pages, anything "serverless", etc) and Oracle free tier that gives you bog standard VMs on which you can run common libre software, choose the Oracle free tier route and don't think twice. If Oracle engages in "altering the deal", then the most you'll be on the hook for is $5/mo at a different provider rather than having to completely redo your setup.


Oracle cloud is suspiciously good. They also claim not to do the AWS thing: if you exceed the free limits, they'll just shut you down rather than bill you absurd amounts of money. I guess that's reserved for the Java and DB billing divisions.

Their free tier gives you quite a lot of disk. The catch is being capped at 10 Mbit, which can be mitigated by... Cloudflare!


Good times. I'm on Oracle too, but now they've decided to charge me for "compute" and nothing changed on my server :(

Time to jump ship


The last time I tried, I couldn't get a VM running for whatever reason. Any issues with OC?


So it cost you everything...


I tried it, and man is it just the worst interface in the world. $50/yr for a cheap VPS from somewhere else was worth it to me.


Exactly the same here, using DigitalOcean's app service. $5 a month, as no backup is needed :-). A CDN does most of the heavy lifting.


Tangential: is there a single provider which offers a (Python) app platform (web, cron, workers) and a hosted Postgres plan for $10 a month? A VPS still seems like the most compelling option to me.


I'm going to blow your mind here.

You can install any SQL variant yourself on any web server. It can be on the SAME MACHINE. Even a VPS! Boom!

Everyone used to do it all the time. For some reason everyone decided to pay more for less power and use the cloud instead.

You still won't go above 20% CPU for even moderately complex CRUD applications.

If you're really crazy you can add a cron job to send a backup each night to S3.

And it'll take you all of an hour to do that.
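
For the skeptical, something like this (a sketch assuming Ubuntu, Postgres, and the AWS CLI with credentials configured for the postgres user; names are illustrative):

  # install the database next to the app
  sudo apt install postgresql

  # /etc/cron.d/db-backup -- nightly dump to S3 at 3am (% must be escaped in cron)
  0 3 * * * postgres pg_dump mydb | gzip | aws s3 cp - s3://my-backups/mydb-$(date +\%F).sql.gz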


Any reason not to use S3 static hosting and Cloudflare? I host at least 4 sites for between $0.03-0.10/month this way.
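
For the curious, the deploy step is just something like this (assuming the AWS CLI and a bucket set up for static website hosting, with Cloudflare in front for HTTPS and caching; the bucket name is illustrative):

  aws s3 sync ./public s3://my-site-bucket --delete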


I did that for years but recently switched to Cloudflare Pages. Costs are negligible either way, but Cloudflare auto-publishing straight from a GitHub webhook on my repo means slightly fewer components.


I do this too! It’s kind of a pain to set all the right headers and such though. I use a deployment tool called s3_website but it seems abandoned…


I think humans are tinkerers. Given a choice between utilitarian productivity and tinkering, unless it's a life or a death situation, people will go ham on the tinkering. Especially for such low risk things as one's personal blogs.

Now what is maybe a bit strange is companies like Vercel having massive valuations because of this. I asked in another comment somewhere: does anyone actually use them beyond the free or low-cost tiers?


Serving static files via nginx is easy on the compute. I'm serving something a tiny bit more complex (instructions at http://funky.nondeterministic.computer) and the $5 DO droplet couldn't keep up. I had to upgrade to a $12/mo server.


Vultr does me cheaper than DO for a given amount of oomph.


It seems like just a practice run to give the latest fancy hype a spin. Bonus points - they got a blog out of it too.


add the extra $1/mo for backups and you're golden.


I am sure he had a hell of a lot of fun though.


I’m very impressed that Vercel is able to sell so little for so much. They do the very bare bones hosting and charge a fortune to run everyone’s inefficient JavaScript framework of the month to replicate the speed and simplicity of a static site. Amazing.


They own React at this point, it seems. More and more hires I'm coming across know Next.js rather than React itself, and Vercel is now a massive part of the core React contributor team...


What does this mean? How do you "know" nextjs without knowing React? Do you mean they've heard of it and list it on their resume?


It's like Django and Flask in Python. You can be good at using Django, but unable to actually build an API with Flask (or program in general).


I had a personal project that was slightly more complex than something like a digital form and I wasn't even able to run it for free (I have zero users, why would I pay?)

At least the Heroku free tier could run all my apps. RIP


On Vercel? How? As long as it's non-commercial you should be able to just run it for free there.


This has been my experience on Netlify, but not with Vercel. The biggest bottleneck is often the limit of 12 serverless functions per site (technically the limit depends on which framework you use, which is even more frustrating).

The function limit is particularly frustrating when you need route splitting to avoid slow cold starts or memory limits. I even hit this in a few Astro projects, which was particularly surprising - when serverless rendering was an all-or-nothing option for Astro, Vercel was effectively useless on Hobby plans.


The limit of 12 functions is only if you are deploying an API-only project without bundling[1]. The majority of the modern frameworks support bundling, so you can write many, many more APIs (100s+) which compile down to a handful of functions.

This bundling also means fewer cold starts. Bundling is the default for Astro[2]. Also worth noting, on paid plans, functions are kept warm automatically[3].

[1]: https://vercel.com/docs/functions/runtimes#functions-created...

[2]: https://vercel.com/docs/frameworks/astro#configuration-optio...

[3]: https://vercel.com/changelog/vercel-functions-now-have-faste...


Thanks Lee. That makes total sense when using SvelteKit or NextJS on Vercel; when Vercel owns the build step, bundling, and infrastructure, you really have a great chance to optimize everything.

It's a bit of a crapshoot with third-party frameworks though. With Astro, unless I'm misremembering the timing, they defaulted to bundling per route originally and only changed that when Vercel users ran into issues with the Hobby plan. More interestingly on the timing, I think that was right around when Vercel took over as Astro's official hosting sponsor. Not sure how big a part that played in the change of defaults.

In general, I'm always hesitant with a build system that I depend on to route split in a way that impacts my actual cost to run. At the end of the day I have little say in how routes are split and little insight into what metrics are used at bundle time to make those decisions. That said, I haven't heard any horror stories with SvelteKit or NextJS on Vercel so the concern may very well be unfounded as long as I stay in the Vercel ecosystem.


1: Vercel is running millions of personal Next.js static sites for free.

2: Inefficient in what sense? In my experience, most of the latest software startups are shipping incredibly quickly with the Next.js / Vercel stack. TS/JS is still a much faster runtime (and the only one with types) than the practical alternatives of Python, Ruby, and PHP. There is a single-digit percentage shipping new startups in Java/C#. Go could make a decent case.

3: IMO the Next.js / Vercel deployment experience is far, far better than what I dealt with wrangling Django templates / non-template integration / deploying anywhere else.

In Django VPS land, you can follow this guide and encounter multiple issues particular to your setup: https://www.digitalocean.com/community/tutorials/how-to-set-...

and then figure out Dokku deployments or GitHub Actions CI issues.

On Next.js / Vercel, you can:

1: Click a button linking github and Vercel.

2: Enter .env on Vercel

3: git push


> I live on the edge, the edge of the network, the browser, the bleeding edge. Everything must be serverless, multi-region, edge delivered, eventually consistent, strongly typed, ACID compliant, point in time recovery, buzzword buzzword, and buzzword bazzword.


The writing style is gold. The technical approach too was quite entertaining.


I did the same and built my own analytics with TinyBird for one of my projects (https://linkycal.com). It ended up costing less than paying for a hosted analytics service.


> I am open to ideas on why this happens but my guess is because bun isn't written in rust.

liked, commented, and subscribed.


The Rust Evangelism Task Force lives on.

(I wonder what the n-gate author is up to? I hope they're happy and doing something fun. I miss their HN summaries...)



This was a fun read, haha.

> I am open to ideas on why this happens but my guess is because bun isn't written in rust.

LOL classic. I love Rust and I enjoy when people take the piss out of us fans.

I do use SQLite every now and then but I'm always surprised by how low-latency and high-throughput it is. I have bad intuition for how efficient it is. Good stuff!


I quite liked the blog. Minus all the bleeding-edge stuff, I built an analytics website for myself a few months ago and it was quite fun. Later I extended it to include some real-time insights on the performance of my sites.


I like the writing! I enjoyed reading it.


Another saga from the gypity chronicles!

I really like this author's writing style. It feels like I'm reading my own inner monologue.


Some of that style/humour reminds me of Primeagen.


Pretty sure it's "Squeeh" and "Gypity" giving it those vibes (I know he's the one that led to me calling it "Gypity", and he always pronounces SQL "Squeal"). Solid bet that the author is a consumer of Primeagen content.


Clearly the author has never heard of tinypng.com


pngcrush !


or webp


Somewhat disappointed they could not save an extra dime (10¢; $0.10):

* https://en.wikipedia.org/wiki/Leet


I think he'd have hit that if he'd used Rust.



