I deploy my React apps and landing pages there and don't need to worry about the underlying compute, or load balancing, or anything else. Thankfully I didn't do any frontend work before Netlify was available!
I don't really get all the hype around the Jamstack (https://jamstack.org/), though.
Netlify's hypothesis seems to be that if you deploy websites on CDNs, instead of on web servers, then you'll get better performance.
Also, because it's just a static site, there's no backend or database, so you get better security.
I accept this hypothesis, but it seems like you still need a backend. Netlify is promoting functions as a service [1] so you can avoid having web servers for the backend too, but I'm a little skeptical you get the flexibility to build applications with dependencies or non-trivial business logic or design with this approach.
If anyone's tried a FaaS and this is incorrect, please describe your experience!

[1] https://www.netlify.com/products/functions/
In my experience you can achieve a lot with FaaS - but you still need an API, database and storage - which Netlify Functions doesn’t offer out of the box. I have a couple of sites deployed on Netlify with a ‘serverless’ backend deployed to AWS separately.
> Their functions are also limited to triggers relating only to Netlify events
To be clear, the functions are also exposed to the public internet as HTTP endpoints, so you can call them from anywhere else, including, for example, your frontend app, Zapier [1], GitHub Actions, or a cron service of your choice.
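For example, a minimal function (names made up here, but the handler shape is the standard Lambda-style one Netlify's current TypeScript typings describe) ends up as a plain HTTP endpoint under /.netlify/functions/:

    // netlify/functions/hello.ts, served at /.netlify/functions/hello
    import type { Handler } from "@netlify/functions";

    export const handler: Handler = async (event) => {
      const name = event.queryStringParameters?.name ?? "world";
      return {
        statusCode: 200,
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ greeting: `hello ${name}` }),
      };
    };

    // Call it from your frontend, a GitHub Action, a cron service, etc.:
    //   fetch("/.netlify/functions/hello?name=hn").then((res) => res.json())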
It's "glorified" in the sense that it's much cheaper. I've had small sites on S3 plus CloudFront and gotten a bill that boggles the mind, given that I could be hosting the same content for free on Netlify.
As someone who is currently hosting a static site with S3 + CloudFront, I'd be interested to hear more about this. Were you hosting something more heavyweight than mostly text?
Make sure that you've set the correct Cache-Control headers on the S3 objects: max-age should be far in the future for static assets and near-future (short) for HTML files.
If you forget that, then CloudFront will retrieve the objects from S3 every time there is any activity on your site and your bill will go up really fast.
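For example (a sketch assuming the AWS SDK for JavaScript v3; the bucket and file names are made up), the upload step would set the headers like this:

    import { readFileSync } from "node:fs";
    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

    const s3 = new S3Client({ region: "us-east-1" });

    // Fingerprinted assets: far-future TTL, safe because the filename changes on deploy.
    await s3.send(new PutObjectCommand({
      Bucket: "my-site-bucket",
      Key: "assets/app.3f9a1c.js",
      Body: readFileSync("dist/assets/app.3f9a1c.js"),
      ContentType: "application/javascript",
      CacheControl: "public, max-age=31536000, immutable",
    }));

    // HTML: near-future TTL so new deploys show up quickly.
    await s3.send(new PutObjectCommand({
      Bucket: "my-site-bucket",
      Key: "index.html",
      Body: readFileSync("dist/index.html"),
      ContentType: "text/html; charset=utf-8",
      CacheControl: "public, max-age=300",
    }));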
This is precisely why Netlify's offering is valuable. It works out of the box with zero config. If you're using a common static site generator, you don't even need to set up the build process: just give it your Git repo and push when you want to deploy.
That's not really a compelling reason when it's literally a single input field on a form to set up. Also, if you're in the business of web development, HTTP cache control headers are something you should know about regardless.
As with a lot of AWS, it's easy if you know you need to do it, but there are a lot of gotchas which can be very expensive if hit (and there's no way to limit expenditure).
Any devs out there with a Visual Studio/MSDN subscription should also be aware of the free Azure credit that comes with it. I get $50/mo, which is more than enough to host a static site with a CDN (my blog typically runs at around $1-2).
For how long? I remember getting something similar from AWS for a year. I hosted my WordPress site there, the year passed, and then the prices jumped. I migrated to Netlify, rebuilt my site as a static one, and it's hosted for free now.
If you regularly serve bigger files but have low traffic, DigitalOcean's S3-compatible Spaces plus a custom Cloudflare Workers CDN might be an interesting solution for you. Cloudflare Workers are billed purely per request, priced at $5 per 10M, and if you cache everything heavily, DO's $5 Spaces plan should be enough for most use cases. (I'm not affiliated with either company.)
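A minimal sketch of that Worker, assuming the module syntax and a made-up Spaces endpoint (a real setup would also want cache-key normalization and error handling):

    // Serve files from a DigitalOcean Space, caching aggressively at Cloudflare's
    // edge so that most requests never reach Spaces at all.
    const ORIGIN = "https://my-space.nyc3.digitaloceanspaces.com"; // hypothetical

    export default {
      async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
        if (request.method !== "GET") return new Response("Method not allowed", { status: 405 });

        const cache = caches.default;
        const cached = await cache.match(request);
        if (cached) return cached;

        const url = new URL(request.url);
        const upstream = await fetch(ORIGIN + url.pathname);

        // Re-wrap the response so we can set our own TTL header.
        const response = new Response(upstream.body, upstream);
        response.headers.set("Cache-Control", "public, max-age=86400");
        if (upstream.ok) ctx.waitUntil(cache.put(request, response.clone()));
        return response;
      },
    };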
Check out Zeit Now. You get cloud functions but you can almost architect your code as if you were writing a monolith. Probably better as each function can be in its own language.
> Probably better as each function can be in its own language.
That sounds awful. How do you share any of your common logic? Do you have to write versions of your important routines once in each language, each with its own set of quirks and bugs?
It's bad enough in microservice land, where every time you want any shared code you have to extract it to, e.g., a "utils" library and massage the versioning to get it into the right apps.
End result: most people don't and you end up with many different little versions of your routines that do important things.
I don’t think anyone is advocating for choosing a different language per function, but rather you can use the best language for the job. For example, we have a Python monolith that has one route that is CPU-bound and would otherwise be an ideal use case for a language with real multithreading. The Python version often times out (60s) while a native Go (or perhaps Java) version would probably be on the order of a single second.
In our case the limitation is less about the monolith architecture and more about our team’s unwillingness to learn a new language.
> a Python monolith that has one route that is CPU-bound and would otherwise be an ideal use case for a language with real multithreading
Sounds nasty. The classic python answer is to try rewriting the innermost loops in cython/c/cppyy etc or try using numba/pypy.
> best language for the job
> The problem with this "best ... for the job" phrasing is that it implies there's an objectively, indisputably right answer. I've seen atrocious choices inflicted on a team under that guise, resulting in a component that nobody wants to maintain because it's foreign to everyone.
> Sounds nasty. The classic python answer is to try rewriting the innermost loops in cython/c/cppyy etc or try using numba/pypy
Yep, we’re doing this. The problem is that we’re still calling back and forth between C and Python very frequently and rewriting that next layer in C or similar is prohibitively unmaintainable.
> The problem with this "best ... for the job" phrasing is that it implies there's an objectively, indisputably right answer. I've seen atrocious choices inflicted on a team under that guise, resulting in a component that nobody wants to maintain because it's foreign to everyone.
Fine, then let's at least not use the wrong tools for the job; in this case, Python and languages that impose similar tradeoffs.
Yes - the trick really is to move your accelerated code away from python's object model, because then you can drop a lot of overhead and also release the GIL. Tricky, though. Out of interest, did you see any improvement from pypy or numba?
> prohibitively unmaintainable
I don't think well written cython should be any more unmaintainable than Go IMHO.
We didn't try numba, and we couldn't get pypy working because some of our dependencies didn't support it. I'm curious about numba though.
> I don't think well written cython should be any more unmaintainable than Go IMHO.
I've written a lot of Go; it's always struck me as about as maintainable as Python (I'd put it at a bit more maintainable, given Go's mature static typing story; mypy still has lots of problems). I haven't given Cython a fair shake, but it doesn't seem very well invested in, and this is generally what has kept me away from it.
Of course. It's just managed by someone else and you often are paying per usage. And you'll find yourself needing to wrangle numerous serverless platforms to actually build out relevant business logic if you want to do something meaningful, one third-party microservice at a time.
> Also, because it's just a static site, there's no backend or database, so you get better security.
It's not as trendy to think of it this way, but "static sites" are just caches with manual invalidation.
It might be more expensive in the long run, but it's an operational expense rather than a capital expense, which I would argue is true of most software projects anyway.
You typically hook changes in your data up to static site rebuilds.
Thinking of it as a cache has some merit, but it's more interesting to think of the whole thing as an event-driven system, where instead of triggering a render on demand (when a user wants to look at something) you trigger it when a change happens.
This pattern reduces the need for ad hoc caching and eliminates some accidental complexity as well, because it decouples content management from presentation.
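As a sketch of that wiring, assuming Netlify build hooks (which trigger a rebuild on a plain POST; the env var and handler below are made up), the CMS-facing side can be as small as:

    // Tiny webhook receiver: the CMS calls this whenever content changes,
    // and we turn that event into a full static rebuild via a Netlify build hook.
    import type { Handler } from "@netlify/functions";

    // Something like https://api.netlify.com/build_hooks/<hook-id>, kept in config.
    const BUILD_HOOK_URL = process.env.BUILD_HOOK_URL ?? "";

    export const handler: Handler = async () => {
      await fetch(BUILD_HOOK_URL, { method: "POST" });
      return { statusCode: 202, body: "rebuild triggered" };
    };

(Most headless CMSes can also hit the build hook URL directly; the function in the middle is only there to make the event-driven shape explicit.)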
I am an experienced user and avid customer of Netlify. Netlify is amazing for small sites, blogs and whatnot. But as you go up in traffic, that's when you start to realize how overpriced their charges are, at least for bandwidth. I'm not saying they're bad; they still need to make back all the money they lose on their free users. But for enterprise customers, it's really bad.
I'll give you an example: my client has a high-traffic website pushing over 300 TB of bandwidth a month, which is being served via Cloudflare on a $20/mo plan (the free plan would probably also work, I guess).
Migrating this to Netlify would cost us roughly $6000 per month. (Netlify charges $20/1000GB). To me that's insane if you really want to adopt JAMstack.
For 6 grand, I can host an elaborate GCP setup with load balancers on compute engine. Or I can have a dozen instances on AppEngine all automatically handling my load for me. What am I missing?
This is not to say I don't like static sites - it's a different thing. I am 100% proponent of static sites, maybe just not Netlify.
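For anyone checking that figure, the arithmetic behind it, using the rate quoted above, is just:

    // 300 TB/month at the quoted $20 per 1,000 GB of Netlify bandwidth.
    const transferGB = 300 * 1000;         // 300 TB is roughly 300,000 GB
    const ratePerGB = 20 / 1000;           // $0.02 per GB
    console.log(transferGB * ratePerGB);   // 6000, i.e. roughly $6,000/month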
They don’t recommend it, but you can front a Netlify site with Cloudflare—including putting the cf origin cert into Netlify. I do this for my website.
You just add the domain to the site in Netlify, but then set up the hosts as CNAMEs in CF pointing at the auto-generated Netlify hostname for the site.
I’m on the cf free plan, and the whole setup works great.
Cloudflare could probably take the whole stack though if they gave me a simple tool to upload a directory of files to them.
(I stopped using Netlify for builds when I started moving my repos off of GitHub due to GitHub’s collaboration with ICE, and Netlify only supports the major git services for autobuilds. I self-host my repos with Gitea now, so I have to build on my Drone server and just use the Netlify CLI in the last CI step to upload the built site.)
Technically there is not a limit of 30 sites, but rather of 30 deployed worker scripts. With a little JavaScript editing, you can create a single worker script that is able to serve an unlimited number of sites (by including the hostname in the KV key).
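A rough sketch of that pattern (the KV binding name and content-type handling here are assumptions, not Cloudflare's actual Workers Sites implementation):

    // One Worker script serving many static sites: every file lives in Workers KV
    // under a key prefixed with the hostname it belongs to.
    interface Env { SITES_KV: KVNamespace }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        const url = new URL(request.url);
        const path = url.pathname === "/" ? "/index.html" : url.pathname;
        const key = `${url.hostname}${path}`; // e.g. "example.com/index.html"

        const body = await env.SITES_KV.get(key, "arrayBuffer");
        if (body === null) return new Response("Not found", { status: 404 });

        const type =
          path.endsWith(".html") ? "text/html; charset=utf-8" :
          path.endsWith(".css")  ? "text/css" :
          path.endsWith(".js")   ? "application/javascript" :
          "application/octet-stream";
        return new Response(body, { headers: { "content-type": type } });
      },
    };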
Kentonv, how does Cloudflare manage to offer a free CDN when Google Cloud and other providers charge a ton for bandwidth? What are your limits? How are you still making money on the millions of sites that use the free Cloudflare CDN?
Because internet architecture is priced around transit capacity, not usage. You buy capacity, say 10 Gbit/s interconnect lines, and Cloudflare has enough of it to easily support their free customers and upsell on other features.
Remember they are also a security service with DDoS protection, and that requires high bandwidth anyway. Other CDNs charge per byte transferred rather than for capacity, which is simply more profitable, especially if you don't have a lot of other features to charge for.
I'm surprised that more CDNs don't try to compete with CF on this but apparently that's just how they want to operate.
The product and engineering challenges involved in building out our network such that it is both performant and cost-effective to operate are fascinating.
The easiest way to learn the answer to this sort of question is to come work at Cloudflare :)
I run a SaaS on Google Cloud Functions and the bandwidth cost from somebody linking our homepage in an HN comment is often 2x the cost of running the actual service.
Which product? Cloud Storage??
Is there Cloud CDN (or some other CDN) on top? If not, it's not surprising. With a proper CDN setup (say with Cloudflare) that number should be much lower than that of Netlify. Particularly with Cloudflare since your CDN cost is almost 0.
FYI Netlify doesn't work well with Cloudflare. That's the reason why it's so expensive as a standalone option.
I like Netlify, but it's weird how little they seem to do. They released analytics like a year ago and it's had almost no improvement or progress since; lots of people on the forums are clamoring for more and it just seems to fall on deaf ears.
I paid for the per-site analytics for a few months. Being server-side, it captures much more of the traffic than client-side, GA-style tracking, but the metrics available were just too basic for any real commercial use.
This has been my issue as well; I started using their analytics when it launched. From the state it's in, I assumed they must still be working on it, and it is absolutely not worth the cost currently. You cannot even see the data from previous months, despite having paid $9 just for them to store page view totals!
> Netlify’s co-founder Chris Bach says they weren’t looking for new funding, but felt with the company growing rapidly, it would be prudent to take the money to help continue that growth.
Never understood this. Is there an example of a company that took funding "just in case" and it was worth it?
I love Netlify (my business currently depends on it), but I'm getting the idea they are feeling the pressure of "we got something big, we have to make it as big as possible", which is not inherently bad, but in Netlify's case could easily take the product in the wrong direction.
Their focus on enterprises especially worries me, as their current feature set works amazingly for "indie hackers" and small businesses.
It's smart to raise money when money is cheap (right before a recession), especially if your business is particularly dependent on business from other startups. The startups-serving-startups ecosystem is incredibly fragile to downturns.
If you don't need the money you're in a much stronger negotiating position and can get better deals vs being forced to raise and the investors having the upper hand.
I've always been an indie hacker, more adept at building applications/front ends than infrastructure and devops, so things like Netlify, Zeit Now, Firebase etc make a lot of sense to me because they abstract those parts away.
I've recently started at my first enterprise contract (10+ years into my career) in the cloud team (as a front-end engineer building the interface to a cloud console) and am amazed at the level of resources poured into infrastructure and devops.
I'm obviously new to enterprise - and there are probably reasons I'm not aware of that prevent them from going all in on things like static site hosts and managed infrastructure - but I do feel that there's a lot of room for enterprise to shift in this direction.
Enterprises have infrastructure teams because they have applications that need to solve business logic that cannot be solved by static sites alone. Even the simplest businesses invest in data processing pipelines to extract value from their data. When the scale of data processing becomes very large, they need to rearchitect their systems in different ways to better handle that kind of traffic.
So it helps to have in house infrastructure teams that can somewhat consolidate and invest in long term planning and architecture for computer systems and services required by these enterprises.
When you say "shift in this direction", do you reckon your client would adopt a managed solution like Netlify/Zeit, or build some custom solution in-house like PayPal[1] did?
I've hosted a few things on Netlify's free tier. Very, very convenient, I must say.
A bit like Heroku also.
And just like with Heroku, you get the nice free offering, and the second you outgrow that, costs explode like crazy (and leave you wishing you'd just rolled your own anyway).
For example, you have to pay $19/month/site for 1,000 form submissions on Netlify.
Say it with me: PER site, $19/month, 1,000 form submissions. I am not sure how they can possibly justify charging that much.
I think Netlify's success has nothing to do with "microservices". The new (not so new now, actually) trend that allowed it to prosper is people splitting up backend-rendered webapps (where the backend effectively rendered most of the HTML, classic MVC/Rails style) into a static frontend / single-page app making API calls to an API backend (or multiple backends, but it doesn't really matter here).
Can anyone who's used both Netlify and Zeit comment on how the two compare? I personally use Zeit and I really like their interface. The work they've been doing with nextjs has also been very interesting.
Netlify's core USP appears to be making the process of deploying and operating a static site easier: auto-deploys from GitHub with a built-in build process, HTTPS cert management, simple CDN config, etc.
Claimed 'performance' gains come from static hosting closer to visitors but AFAIK their CDN runs on AWS, and relies on nginx which means HTTP/2 prioritisation is broken - they've got a lot of work to catch up with Akamai, Fastly or Cloudflare on this front.
The whole JAMStack approach seems to result in companies re-inventing the wheel in non-performant ways - Contentful CMS delivers sites via Netlify but all the content is requested as JSON which JS components then render to the page.
Taking away the deploy pain is definitely a win for many of us, but it's something I could imagine Cloudflare launching, which, coupled with their other features (image optimisation, edge workers, etc.), could probably blow Netlify out of the water.
Doesn't matter if it's multi-cloud or not; the HTTP server Netlify are using still has HTTP/2 issues.
Suggest you look at a few more Contentful sites in DevTools: the Contentful site itself has no content in its HTML and requests all content as JSON, which is then rendered via JS components.
Throwing "microservices" in there seems like a reach. I'm not familiar with Netlify other than the static content generator CMS thing, but I don't see how it's any more beneficial to a microservice design than to any other API backend.
Can someone clue me in on the benefit of Netlify over, say, an S3 bucket fronted by CloudFront? Seems like the CloudFront solution is much cheaper without much more dev time investment.
Yeah, but does your S3 bucket talk about something called JAMStack the whole day? \s
There isn't really an advantage. It is static page hosting. I remember the days when you would just FTP those sites onto a server, serve with Apache and call it a day, but I guess everything needs to be behind a CDN these days.
You don't have to bother with S3 setup (let alone AWS setup), static access rules, CloudFront setup and config, domain binding, and SSL setup. You also don't have to deal with CloudFront cache invalidation.
Netlify abstracts all of these things for you. It also stores every version you have uploaded or built and gives you a quick option to roll back to a specific version.
I'm kind of an AWS power user and have no problem with the corresponding Terraform templates for S3 hosting, but I prefer Netlify when possible.
The overwhelming reason is easier deployments since it'll build on a git/repo push and roll out automatically with cache invalidation and proper cache headers. Very little config beyond your custom domain name.
Netlify makes it easy to deploy static sites. I’ve used it and it’s been a decent experience. Their UI is quite complex and the notifications are annoying (they don’t turn themselves off after you read them).
GitHub can roll out something like this very easily. They already have GitHub Actions and GitHub Pages. Netlify's core proposition is: when you commit to a branch, a build command is run that outputs assets to a directory. Netlify will put those assets on a CDN. You can route your custom domain to it and they'll provision the Let's Encrypt cert for you, and you have an HTTPS site. Boom!
They have other value adds like forms which seem quite overpriced. That’s when you need a db.
Overall my experience is that it's good for getting something out the door, but you'll probably need to move to a cloud provider if you need full control over things like caching headers or any form of backend.
Basically, as a dev all I want is something simple. Here's my script that builds the frontend. Here are the Dockerfiles for the various services. Here are my requirements for a database. Lastly, here are my routing rules mapping URLs to the different Docker services and how to scale them. Go host it and give me a good log analyzer so I can monitor how things are operating.
Render.com has some nice things here, but there's definitely value in simplifying most webapp deployments while still making the whole thing dirt cheap to operate.
I just switched to Netlify from GitHub Pages a little under two weeks ago. The process of setting up DNS and switching everything over was seamless, I'm very pleased with the site under the free plan. I was choosing between Netlify and AWS and chose Netlify due to simplicity and a better free tier, even though it costs a bit more (or theoretically would in the long run).
A bit of economics:
The free plan offers 300 build minutes and 100 GB of bandwidth. To compare, AWS S3 offers 20,000 GET requests for free. My pages average about 1 megabyte (other assets are served from third-party CDNs, which are free), so 1 GB of bandwidth is approximately 1,000 page views. In other words, Netlify offers about 5x as many free page views per month as AWS. I'm under absolutely no danger of getting to 100K page views; in a good month I get a couple of thousand. However, let's assume I start writing better content and more people read it, say more than 100,000 page views per month. In that case, Netlify's overage costs 20 cents per GB while AWS costs 9 cents. Overall, "at scale" Netlify is about twice as expensive as AWS S3 for static sites, but it's better at modest scale.
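The same comparison as a quick back-of-the-envelope script, using only the figures above (roughly 1 MB per page view, the two free tiers, and the quoted overage rates):

    const gbPerView = 1 / 1000;                     // ~1 MB per page view

    // Free tiers: Netlify's 100 GB of bandwidth vs. roughly 20,000 S3 GETs.
    const netlifyFreeViews = 100 / gbPerView;       // ~100,000 views
    const s3FreeViews = 20_000;
    console.log(netlifyFreeViews / s3FreeViews);    // ~5x as many free views

    // At scale, per-GB overage pricing dominates.
    const netlifyOverage = 0.20;                    // $/GB, as quoted
    const s3Overage = 0.09;                         // $/GB, as quoted
    console.log(netlifyOverage / s3Overage);        // ~2.2x as expensive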
AFAIK GitHub Pages requires that the built files are stored in the repo.
Netlify checks out the repo, runs the build command, and then takes files from your dist directory. I'd love for GitHub Pages to behave like this. A much simpler setup.
Well, you can technically push the source changes to the `master` branch, have a GitHub Action build and update a `deploy` branch, and then have GitHub Pages read from there.
As a user they've never sold anything to, I can't even imagine a valuation of $53m. I am not calling BS, I'm wondering about the business and the moat that justifies these numbers.
I don't know about all those extra feature, but to ship a static create-react-app, netlify has made my life super easy. Happy customer here. I hope they keep their core offerings solid and don't go crazy with new features to justify the Series C.
I really wish Netlify would start charging money for their core service, which is SUPER slick. A pretty big portion of their offering is that they don't just do hosting, but also do CI/CD for static sites, which is incredibly convenient.
I currently pay $18 for analytics for my podcast page (https://kompilator.se) and my agency's page (https://yoisho.se), just to pay _something_, but the quality of the "analytics" is unfortunately extremely sub-par compared to the rest of their service.