Looks very similar to my project https://github.com/kkuchta/scarr from a while back. It even uses the same acronym (I assume that's just a coincidence, since we both just picked a cool-sounding English-language word using the initials from S3, CloudFront, ACM, and Route 53).
At a glance:
- Mine handles domain registration + ACM verification automatically
- This one wisely uses CloudFormation instead of API calls
- This one does apex->www redirects, whereas mine uses the apex and has no redirect
Wow, that is a fun coincidence! Indeed, I was going for a catchy four-letter acronym in the same vein as popular stacks like LAMP or MEAN. Perhaps the fact that we both landed on the same components and permutation of components means that there's something there :)
I also started off in the same manner of implementation - bash scripts wrapping AWS CLI calls - then stumbled upon the more straightforward, template-based approach.
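For anyone curious, the template-driven deploy boils down to roughly this (a sketch - the template file name, stack name, and parameter here are placeholders, not the project's actual ones):

    # create or update the whole stack from one template file
    aws cloudformation deploy \
      --template-file scar-template.yml \
      --stack-name my-static-site \
      --parameter-overrides DomainName=example.com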
GitHub pages [0] gives you static sites with HTTPS and a custom domain without nearly as much complexity as this if you're looking for an alternative to Netlify.
I had honestly never heard of Netlify. I thought GitHub Pages was the standard, with S3 static hosting a second (more involved) option.
EDIT: Googling suggests Netlify offers a build, deploy, hosting pipeline all-in-one box. Which is substantially more than any of the projects mentioned here. These serve a single purpose - simple hosting of static websites.
That's a bummer. I thought that now that we have free private repos, websites could be hosted from them. Still a nice feature. I guess the public nature generates some trust, and it's also a good way of showing your work, since GitHub works like a project showcase platform too.
You only have to put the static content in the public repo which is no different, visibility-wise, than it is going to be anyway. So who cares?
I keep my "uncompiled" site in a private repo that builds and automatically "deploys" by replacing the contents of the public repo and pushing that up. Source is private, final result is public.
I have always and only used GitLab Pages, except for one test site on Netlify, and even that was from a GitLab repo. GitLab Pages is super easy: static, needs one .yml file plus one CNAME and one TXT DNS record.
GitLab introduced their Pages relatively recently, maybe three years ago. The other two (Netlify and GitHub pages) are a few years older. I also vaguely remember something about it first being available in GitLab's enterprise edition, and later on being ported to the community version.
Not saying that there's anything wrong with it in particular, but it arrived a bit too late to set the standard for anything. People compared it to Netlify even when it was first announced.
Netlify is a great tool, my biggest issue with it (and why I continue to use GitHub Pages) is that the "Global CDN" is a cluster of DigitalOcean nodes, which I don't trust as much as e.g. Fastly in terms of performance and reliability.
(Note: that info is from anecdotally looking at Netlify site IPs, I could be wrong)
I tried using GitHub Pages with Fastly; however, it appears that when a new site deployment is done, GitHub does not invalidate the Fastly cache. On top of Fastly independently caching resources on the site, this can cause broken deployments for several minutes while the site serves a mix of cached resources from old and new deployments. I opened a ticket with GitHub support, and they said it was expected behavior. That makes it partially unusable for me.
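A possible workaround (just a sketch, assuming the standard Fastly purge-all endpoint and an API key / service ID you supply yourself) is to purge the whole Fastly service as an extra step after each deploy:

    # blow away Fastly's cache right after GitHub Pages finishes deploying
    curl -X POST \
      -H "Fastly-Key: $FASTLY_API_KEY" \
      "https://api.fastly.com/service/$FASTLY_SERVICE_ID/purge_all"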
"frontended" doesn't make sense as a word. "Fronted" has many somewhat applicable definitions, including:
* provide (something) with a front or facing of a particular type or material.
* act as a front or cover for someone or something acting illegally or wishing to conceal something.
"he fronted for them in illegal property deals"
* stand face to face with; confront.
As a non-native speaker, I came to the conclusion that in English it is possible to turn any noun into a verb. (Not the parent, but "frontended" made sense to me.)
Maintenance is my primary concern. I deal with software for a living. I want my blog to just work without me having to worry about maintaining the VM. Netlify makes this dead simple.
I used to host Wordpress sites for myself and family members. I've now moved nearly all of those sites to Netlify (for hosting) and Forestry (for editing/CMS). I no longer have to worry about malicious hacking attempts, Wordpress updates, or anything else outside of the site content.
Maybe I'm just a security nut, but I would probably also relegate ssh to a non-default port, allow key-only authentication, narrow ciphers, close all other ports (except 80, 443, and 53). Also fail2ban, sysctl tweaks (networking, disable coredumps), and a whole bunch of other things I have in a script.
I've seen way too many people get their boxes trashed to leave an internet-accessible one exposed and unsecured.
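A rough sketch of just the ssh/fail2ban bits, assuming Debian/Ubuntu (the port number is an example, and make sure your key works before disabling passwords):

    # move ssh off 22, disable passwords and root login
    sudo sed -i \
      -e 's/^#\?Port .*/Port 2222/' \
      -e 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' \
      -e 's/^#\?PermitRootLogin .*/PermitRootLogin no/' \
      /etc/ssh/sshd_config
    sudo systemctl restart ssh
    # ban repeated failed logins
    sudo apt-get install -y fail2ban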
What are your thoughts on sharing your script? I have a few VPS and would love some new tools / proper setup. I have been learning as I go, learned a few day 1 things not to do, but would like to learn more about networking/coredumps. Cheers!
I'd have to clean it up first. I wrote it for a competition, and it does its job well; I may clean it up and improve it soon. Right now, it's a mess of a monolithic script.
Excellent. Well, if you get around to it, I would love to scope it out. I'm an autodidact: after getting fed up with shared hosts like GoDaddy/HostGator/InMotion (they were easy to use since I had no idea what I was doing), I moved to DigitalOcean and it's been a fun learning experience. I love using the command line and solving problems. Would love to be as tight on security as you are! Cheers
That's great that you have enough time and experience to consider all of this easy. As someone who works a bit higher up the stack, I rarely go as deep as configuring Nginx. This setup may take you a few minutes, but I usually end up spending an entire Saturday on stuff like this. Having done this for a few years, I would rather spend my free time on other things.
I'd say continuous maintenance, with responses to specific issues. Also, Debian updates don't restart services that rely on updated shared libraries, which means you need to restart your nginx after OpenSSL updates. Also reboots when the kernel is updated. Also...
There's really more to it than just an annual upgrade. You're likely not going to be affected if you ignore this, but why risk it?
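For what it's worth, on Debian/Ubuntu a quick routine that catches most of this looks roughly like the following (assumes the needrestart package is installed):

    sudo apt-get update && sudo apt-get -y upgrade
    sudo needrestart -r l          # list services still running against old libraries
    sudo systemctl restart nginx   # e.g. after an openssl update
    # on Ubuntu-style setups a pending kernel update shows up here
    [ -f /var/run/reboot-required ] && echo "reboot needed"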
Ok, I forgot to add 'reboot' to yearly maintenance :). And change the SSH port, or consider a private key. But if it's just for a personal static website, I wouldn't get overly concerned about being hacked. Assuming you have backed up your page, it's another handful of simple commands to rebuild the whole thing anyway. They are also quite fun for other uses, like setting up a Squid proxy, messing with an email server or IRC server, or just having a personal mini-cloud you can easily access from anywhere.
It's not about rebuilding if your website is defaced. It's the possibility of someone (for example) adding a client side exploit / throttled miner to your existing website. Without more monitoring, you won't know it happened, and neither will most of your visitors.
Yes. I can't remember the details of entry since it was decades ago, but the end result was JavaScript snippets targeting browsers appended to the end of the index page.
Adding extra servers like your own cloud storage, email, IRC, etc. just expands your risk to more services (unless you internally separate them into namespaces/VMs, but then we're really far away from "simple static hosting" territory).
Lucky for me I don't use JavaScript. But that was decades ago, right? Well... relax! I think you are letting these fears get in the way of actually enjoying something quite fun. Perhaps the NSA has some lovely nginx exploits, but the script kiddies that trawl the web these days are laughable. (Knock on wood.)
It was decades ago because that was before I started working in IT security, stopped using a single VM for mixed purposes, and started treating patching seriously. It's literally part of my job to not relax about those things, to keep bringing them up, and to remind people that they're not easy, annual apt-get updates.
You're right that there are fewer wormable issues these days. But the question is: does your usual approach to security allow you to stay safe when (not if) the next one happens? And feel free to continue in a not-super-secure way for personal, fun things. Just keep in mind that there's more to the story, and the more moving parts, the more you need to work to keep things reasonably secure.
Your story is almost identical to mine - years of hosting on a VPS a bunch of small family/project mostly-Wordpress sites. I simply exported them and uploaded to Netlify+Github. I haven't really bothered keeping the connection from the back-end to a dynamic export but have kept those pieces in place for another wet weekend.
You make a good point clearly. Thanks for taking the time to do it.
I guess I feel like the maintenance cost is worth the knowledge I gain from automating my own infrastructure, but I realize not everyone is interested in devops. I'll also note it costs me very little time - I don't remember the last time I had to do anything actively with it.
Elsewhere in the thread I mentioned vendor lockin, which does concern me. I also worry about vendor monoculture - if everyone just uses AWS, they gain undue influence over the market, so in some ways I guess my stubborn self-hosting is a small gesture against that.
I see a lot of people complain about how the internet has become a drab, uniform machine that treats people as eyeballs or wallets to be sacrificed to Moloch [1], little like the wild, free-spirited collection of small sites it was back in the late 90s.
I think a lot of that is the price paid for centralization and funding, so again, self-hosting is a small way to fight back just a bit against that.
I use Netlify and can vouch for its simplicity. I have a few sites on it, some are deployed via bitbucket and some are simply drag-and-drop.
I never used Forestry, but by the looks of it, it's more of an actual CMS and far more sophisticated than Netlify. That said, it looks over-engineered to me for hosting static websites. But if I wanted a CMS for client websites where I have to hand over control, I would definitely give Forestry a try.
I disagree with encouraging people to do this. You are not accounting for a CDN here, which the post does. A website on the HN front page went down yesterday on a $5 VM.
And S3 just holds your HTML files, for super cheap. There’s no lock-in concern there. You can easily migrate to nginx in the future if you really want, but start with S3
HN won't take a static website on a $5 VM down if it's set up even remotely correctly. Traffic to a popular link on HN is likely to get on the order of ~100rps max (more likely 1-10rps). Nginx will handle that with no problem.
CDNs may make a site a bit faster, but for a static site it's unlikely to make much difference if you're on a good host in US/EU or central Asia. If you're hosting in Australia or Japan, maybe it might be a little slower than expected, but still totally usable.
Completely agree. I think many people here regularly work on larger web applications in dynamic languages with heavy JS front-ends piling on dependencies.
Nginx is unbelievably fast by itself, not to mention the optimizations that are completely unnecessary for a static blog. It's not going to be your blocker.
If you're serving up 20MB of JS and inlined images on each page load, yeah, you may want to rethink that. But we don't need to get wild. My homepage is 9.2KB. Longer blog posts (e.g. [1]) can clock in at 20KB. HN won't take that down.
Not to mention that most VPS providers worth even speaking about nowadays use SSDs.
For a personal site, who the heck even needs a CDN? The only reason I might use one is if I put up a photography website with huge shots, or if there's a bunch of videos as well.
Yep this doesn't surprise me at all. A stock install of nginx with no tuning at all was reaching 26k rps on my 2013 MacBook Pro when I tested it years ago.
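Easy enough to sanity-check on your own machine with something like wrk (a sketch; the flags are just reasonable defaults, and the numbers will vary by hardware):

    # 4 threads, 100 open connections, 30 seconds against a local nginx
    wrk -t4 -c100 -d30s http://127.0.0.1/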
I have front-paged on HN and Reddit several times, often 'only' using $5 VMs. However, I was using Cloudflare, or at least nginx with proper caching settings.
I run several hundred dollars monthly of infrastructure but my websites are nearly all on a simple VM for about 20€/month on Vultr right now.
Web hosting is only expensive when people run badly optimized infrastructure.
There are some low-end VPS providers too that go as low as $1/mo. I usually stick to $2/mo or higher just for stability and reliability. You've even got hosts like Hetzner Cloud and Scaleway (see European hosts) that provide great service, bandwidth, and VPS. I don't know why people use Amazon... I don't find their value proposition very good unless you need dynamic scaling for unpredictable demand.
That last 'unless' is exactly what the value prop is. A personal site is indeed just fine on a cheap VPS (and I can also put up the occasional file), but AWS has better reliability and much better scaling. When you consider the opportunity cost of time, AWS can come out cheaper.
I host my site on Apache running on an ARM board in my garage.
I'll consider moving to a VM if/when the ARM board eventually fails, but it's been running for 6 years so far. I have 6TB of storage, which mostly serves as a NAS but includes about 200GB of photos for the website.
There is no deployment process; the web root is mounted by NFS on my desktop. I can share large files with people just with "mv" or "ln -s".
> how many 9s do my personal websites actually need?
My router seems to crash every 3-4 months, and I need to reset it. There's around 15-30 minutes of power failure every year. I don't worry about this.
Sorry about the ignorance, but how do you run it from your garage? What about bandwidth? Could you share the URL? Also, would you recommend any guide to get started?
The usual roadblock in this process is getting ports exposed to the internet. In the best case this can just be done on your router configuration. In the unfortunately common case the ISP blocks you from doing this and the only solution is to change ISP.
The only thing you have to google for is "Port Forwarding", and it's usually a few clicks in your router interface. Then you just run the service you want on your computer / NAS / Raspberry Pi and tell your router to forward the port to your service IP / port. If you have a dynamic IP at home, you probably also have to get a script or something to update your domain records if you want to point a domain at your home service.
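The dynamic-IP part can be as simple as a small cron job along these lines (a sketch; the actual record update depends entirely on your DNS provider's API, so that line is only a placeholder comment):

    #!/bin/sh
    # run every few minutes from cron; only act when the public IP changes
    NEW_IP=$(curl -s https://ifconfig.me)
    OLD_IP=$(cat /var/tmp/last_ip 2>/dev/null)
    if [ -n "$NEW_IP" ] && [ "$NEW_IP" != "$OLD_IP" ]; then
        echo "$NEW_IP" > /var/tmp/last_ip
        # call your DNS provider's update API here with $NEW_IP
    fi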
yes. I host my personal website (maddo.xxx) on a single EC2 instance with just nginx. It's easy. It's fast. When I want to over-engineer the shit out of it for fun, it's ready.
I served a static website off nginx from a docker container for a while. At some point there was a breaking change and it would have taken 3 minutes to fix, but I didn't bother. Static hosting is a solved problem and there's not really a reason to do it yourself unless you just want to learn.
Yep, you certainly can, which is part of the beauty.
Last I looked, though, you couldn't deploy to S3 without using tools that work specifically with it.
I guess it's really not that big a deal, but I prefer the genericness of "I'm configuring a webserver and pushing my files to it."
That process can be just about fully automated, even including HTTPS setup if you want that, and then you can use it with whatever server provider you like.
Depends on the tools! If you're manually copying files, there are clients (e.g. Transmit, which is what I use) that just treat it like any (S)FTP server. If you're using the command line, yeah, you need to use Amazon's CLI, although it's still basically a one-liner to sync the directory you want to publish.
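The one-liner in question is roughly this (the bucket name is a placeholder):

    # upload changed files and delete ones removed locally
    aws s3 sync ./public s3://my-site-bucket --delete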
If it's a static site and you own the domain, then vendor lock-in isn't really a problem regardless of whether you use cloud services or not, because you can just dump those files on a different provider and change your DNS entry. It's not even remotely the same level of complexity as the services people normally have in mind when they talk about vendor lock-in.
I don't know why anyone cares about vendor lock-in. It's either trivial to move an AWS Lambda to a Google Cloud Function because you don't have a lot going on, or it's not trivial to move stuff even from your own servers to other servers because it's under huge load and you have a considerable amount of data you'd have to migrate under complex conditions.
Moving around is either hard or easy based on things that don't really have anything to do with vendor lock in.
No, vendor lock-in can mean a lot. Take even a simple, plain API implementation, where one vendor might implement something (storage, for example) in a way that's not possible with another vendor.
I recently moved one of my k8s clusters from GCP to AWS; even the terminology change can introduce a lot of awkwardness.
As an aside, I genuinely wonder under which circumstances a CDN will be useful for a static website nowadays.
I have a static website that has been on the HN homepage a few times and got picked up by the Chrome mobile recommendations and a nginx/https with slightly tweaked configuration never had a problem handling the traffic even on the smallest DO droplet.
The CDN makes the site load faster by caching the content on edge nodes close to the client. It’s not for taking load off the origin, but purely for network latency.
What I like about static sites is that you can serve the site in its entirety from a CDN. So you can literally just CNAME www.yoursite.com to yoursite.gitlab.io (or w/e static site host you use). This dramatically cuts down on latency worldwide. It also removes your web server as a single point of failure for short-term outages.
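And once the record is in place, verifying it is trivial (sketch using dig):

    dig +short CNAME www.yoursite.com
    # expected output: yoursite.gitlab.io.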
> you can literally just CNAME www.yoursite.com to yoursite.gitlab.io
After so many years I still can't really understand how easily people hand over almost complete control over their site to someone else, just because everyone else does. It's like handing over your e-mail account passwords when LinkedIn started. Yes, CloudFlare, Google and others are helping you, but there is a price to pay that might not be immediately visible.
It seems pretty different from a password because you're not giving control of your domain: if they broke their contract, you could take it back at any time.
That's the other odd part about this complaint: you're trusting a company like GitLab not to break their terms of service, which is a potential factor to consider but also one where they'd have severe negative outcomes to their business if they went rogue. Since you're already trusting a number of other parties, why is this one so much scarier?
> It seems pretty different from a password because you're not giving control of your domain: if they broke their contract, you could take it back at any time.
You are giving them everything they'd need to obtain a DV certificate for your domain, though. You can stop them from using it at any time just by changing the DNS records, but you'd need to wait at least two years (825 days for maximum TLS certificate duration) before you could be certain any certificates they had been issued before that point had expired.
The first hit is brutal. I won't say it's the CDN, since I'm not an expert, but it doesn't take long to go cold (minutes), and once it's cold even the cached hits are 400ms.
The technical reason to use a CDN with AWS S3 is so that you can have a custom domain name with HTTPS. S3 will do HTTP custom domains, but to get HTTPS you proxy it. In this case, you can think of CloudFront as the proxy.
We use a combination of Netlify + Webflow + Hugo for our website (www.facetdev.com). With that we get a global CDN and our website will never go down.
Netlify has been awesome and it made it stupid easy to combine our www site on Webflow with a hugo static blog in a subfolder (/blog). This might be my favorite web publishing workflow ever.
If you haven't tried Netlify yet, definitely give it a look.
Yes. It also provides a really nice UI for building our www site which we like to rev frequently. Webflow is the bomb if you are familiar with HTML and CSS. Super clean HTML, total control over all the css attributes, drag and drop builder.
Is it a completely hosted service? It looks cool, but I'd be reluctant to use it if it's a subscription to an online tool where I have to pay forever. Is there a standalone version of that editor?
It is hosted, but you can use it free forever if you have Netlify in front of it and use your free sitename.webflow.io URL as your origin server. You can also export your site as static html if you want.
How much does this cost? I put in some more effort to set up my HAProxy and nginx containers on a Vultr node, but I get Let's Encrypt for free, so I'm just paying for a Vultr node (or DO droplet) and the price of the domain name.
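For reference, the Let's Encrypt part really is only a couple of commands these days (a sketch assuming certbot on a plain Debian/Ubuntu nginx install, not the containerized setup above):

    sudo apt-get install -y certbot python3-certbot-nginx
    sudo certbot --nginx -d example.com -d www.example.com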
It costs at least $0.50/month but probably not much more than that for most small to medium sites.
The $0.50 is the monthly cost of the Route 53 hosted zone; the CloudFront and S3 costs typically amount to pennies, but of course it depends on traffic.
I have a two-line Makefile with one target that syncs my website to an S3 bucket. Deploys are instant. The rest is handled by Cloudflare and AWS. The sheer number of moving parts in this system is outrageous for a static website. A fun project for sure, though.
I think the complexity for this setup is about the same. Once the different AWS services are provisioned during the initial setup, subsequent deploys are quite straightforward. For example, I have a three-line Makefile target for Jekyll sites that looks something like this (using Docker with a local `aws-cli` image wrapping the CLI):
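It's roughly along these lines (a sketch of the idea rather than the exact file; paths and the bucket name are placeholders):

    # build the Jekyll site, then sync _site/ to the bucket via the aws-cli image
    bundle exec jekyll build
    docker run --rm -v "$PWD:/work" -w /work amazon/aws-cli \
        s3 sync _site/ s3://my-site-bucket --delete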
Bundling service config and launch makes the whole process easier, for sure. There's also more than one way to configure this depending on what your needs are, so it'd be cool to have a few different versions of SCAR.
I started with a setup similar to your diagram and tweaked it when I realized S3 didn't serve index.html when the URL was just the parent "directory", i.e. example.com/foo/ doesn't resolve to s3://example.com/foo/index.html. To get this working I had to write a bit of JS in a Lambda function and deploy it at the edge of my CloudFront distribution to do some URL rewriting.
Given that's the behavior most people expect, might be worth considering?
That should indeed be the default behavior out of the box with the way the S3 buckets are configured. I have a couple Jekyll sites deployed this way, and a request to the parent directory does get served by the contents in `index.html`. Are you not seeing that behavior?
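One quick way to check whether a bucket is configured as a website endpoint, which is what provides the index-document behavior (sketch; the bucket name is a placeholder):

    # prints the index/error document config if website hosting is enabled,
    # or fails with NoSuchWebsiteConfiguration if it isn't
    aws s3api get-bucket-website --bucket example.com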
I'd definitely like to add more variants of the default stack. At the minimum, I'm sure there are folks that prefer `www` redirects to the apex domain, or removing the `www` subdomain altogether.
Recently moved some static sites from S3 to AWS Amplify Console. Super easy setup and even easier maintenance with the Git-based workflow: https://aws.amazon.com/amplify/console/
Anyone have an average monthly fee for using these as a hosting solution? Last time I ran the numbers, using all those services came to 5 to 10 USD per month, and it was better to use Amazon Lightsail ($3.50 per month) or other cheaper alternatives from LowEndBox.
For anyone looking for a hosted solution, https://surge.sh/ is super nice and simple without any of the complexity of managing the stack yourself. Deploying uses one simple command, and you get hosting and custom domains for free, though I believe SSL is paid for custom domains. (I'm not affiliated with Surge at all, just a happy user.)
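For the curious, a deploy there really is about this minimal (a sketch; the domain is whatever you choose):

    npm install --global surge
    surge ./public my-site.surge.sh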
I was actually wondering that myself: Is there interest in a hosted service? It'd be quite similar to (as many comments have suggested) Netlify and the one you linked to.
I was mostly going for a DIY solution since I wanted to "own" the bits being deployed while remaining as close to the infrastructure as possible. Providing a hosted service somewhat moves away from the DIY spirit; I suppose additional tools/UIs could be offered to simplify setup and deployment and still run everything directly on AWS, but at that point one might be inclined to just move to one of the other hosted solutions for the simplicity.
A what? In the majority of the world, copyright has been automatic for about 140 years.
How you get copyright is you make a work; no need to put anything else on it. IIRC there are about 3 countries that aren't signatories to the Berne Convention.
In the USA you can file a notice in order to get better treatment in court, but it hasn't been required for 40 years or so; is that what you're referring to?
Sorry to nitpick, but copyright declarations are a thing of the past in many nations: copyright protection is automatically conveyed upon creation, registration is only necessary within a short time after infringement is detected, and registration serves only to maximize the monetary sanction the government will levy on your behalf.
And regarding the license, they have the MIT license added to the repository.
That's fine, but this is purporting to be a copyright declaration. I know they're unnecessary, but if you are going to add one, you should do it properly.
Great job! I wish more projects had 1-click deploy to Heroku, AWS, GCP, or Azure. This is a good habit more people should get into.
Running this project on aws can give a cloud beginner an interesting way to expose them to many concepts. Now I just have to figure out what static website I want to run in this!
Please do the same for running your own scalable wordpress install!
The technology is awesome, but I won't use CloudFormation, Azure Resource Manager templates, etc. until AWS, Azure, etc. support spending limits. Getting into the habit of clicking "Deploy Stack" when your credit card is attached to an account that allows unlimited spending seems risky to me.
I just built my first static page since middle school this last weekend using netlify and a static site generator [Publii]. I was amazed at how simple and fast netlify is.
I'm confident I could figure out how to do something much more complicated. But I want to focus on other things, and it's nice to not have to think about it.
The AWS CloudFormation console has a "Designer" tool that allows drag-and-drop creation of template files, and also visualizes existing JSON or YAML template files with these diagrams.
You've only taken care of the surface-level complexity with AWS. Want to do something more, like add a header to the response? Well then, create a Lambda, deploy it to the edge, and pay per page view. This is something Firebase is much more elegant at - the initial deploy, and then the evolution and addition of features geared to static site deployment.
Try out https://freepage.io - it's much easier to use than GitHub Pages. You don't even have to create an account, verify email, and all that nonsense to use it. And it has social media built in to get your page out into the world.
I'm usually paranoid about vendor lock in, but I can't join you on this one.
Netlify assumes a version control repository that you can pull from, run a build step, and then host static files from. The build tools are open source, the output is static and trivial to download and rehost, and the repository is git meaning one clone is all you need to port to any other service.
It's not so much my code that is locked in, as that netlify has spoiled me by making deployment so streamlined that it would be hard to go back to manual deployment. This gives me another option, which I appreciate. That's all I meant.
Why do you worry about vendor lock-in with netlify? They host static sites, so you are free to go anywhere. Unless you happen to be using their other services which this doesn't address anyway.
This is my concern in a nutshell, unless someone has a tool that spits out a properly formatted .htaccess so I can migrate my HTTP headers and redirect rules.
Netlify's playground is easy to use for setting this up, but I'd also like to have this available in a standard format - just as an escape hatch in case I need it.
Of course that depends on what kinds of assets you serve and how many hits you get, but to provide a ballpark, I had a small personal site with few media assets on S3, and it consistently cost me US$0.12 per month. I think once it cost me 14 cents and I thought, "Wow, I must've been popular last month!"
I didn't run analytics so I can't say how many hits it got, but traffic was probably fairly average for a personal site.
This seems like a nightmare to set up and maintain for a newcomer. Netlify lets us set things up in a jiffy. This is a nice project, but not for anyone below intermediate.
There are plenty of reasons even beyond privacy and MITM content changes. Supposedly HTTPS is better for SEO, and there are also browser APIs [1] that only work in an HTTPS context.
Totally agree! Only thing is GH Pages might limit your size at some point... Wish they would introduce per-GB pricing and allow you to scale. That would make it a permanent solution.
Seems pretty cool!