
> Spinning up a dedicated server in a single datacenter somewhere isn't going to give the same results, especially if your users are geographically distributed like in this case.

Maybe not, but is the target audience that shells out $20/month really the type that has optimized their site to the point where shaving 50ms off request latency with a geolocated edge cache is what makes the difference? Most of that group could probably find other optimizations that count for more.




> Maybe not, but is the target audience that shells out $20/month really the type that has optimized their site to the point where shaving 50ms off request latency with a geolocated edge cache is what makes the difference?

The common mistake is to pick a server geographically close to yourself, only access it from low-latency connections, and then assume that everyone in the world is seeing the same thing.

Or to only visit your own site with everything already in the browser cache. If you're not seeing cold start loads, you're not seeing what every new visitor to your website is seeing.

Consider the Photopea.com website. The author explained in a comment below that he spends $60/month to host the site without a CDN. Several of us loaded the site and it took 2.5-5.0 seconds to load. He could sign up for a cheap Cloudflare account, downsize his server (since the CDN would absorb most of the cacheable traffic), and load times for everyone would drop significantly.

If you're hosting simple, static content like a blog for an audience that doesn't care about load times, then of course none of this matters. But for modern, content-rich websites (photos especially), adding a CDN can be a substantial improvement even if you have a single fast server. You may not see it yourself, but visitors in distant locations definitely will.


Even with a browser security policy blocking part of the download, the www.photopea.com homepage clocks in at 3.80MB (so it should be even higher in practice). In this case it's mostly JS, so designing the website properly (without JS, especially if the app itself is wasm rather than JS) would yield much bigger savings than moving to Cloudflare's CDN.

A CDN is more often than not the wrong answer to a real problem. Slim down your website and consider content-addressed protocols for big static asset downloads (like the textures from the article). If you run your website as a lightweight, glorified BitTorrent index, you'll notice your costs drop dramatically, and you can still offer a smaller "Download over the web" button as a fallback.


> Consider the Photopea.com website. The author explained in a comment below that he spends $60/month to host the site without a CDN. Several of us loaded the site and it took 2.5-5.0 seconds to load

This is a conclusion I'm extremely doubtful of.

Ping time New York <-> Tokyo is about 180ms. So let's say, as a worst case, the ping time to the single server is 180ms (it's probably not that bad), and let's say the latency to the Cloudflare edge server is 20ms.

So using Cloudflare, on a cache hit (best case) you save something like 160ms per round trip.

Which, don't get me wrong, is a huge saving and worth it (although this scenario is hugely exaggerated).

However, say you want to load the page in under 1 second instead of 5. In this scenario you would need roughly 25 round trips (4,000ms saved / 160ms per round trip) to bring the site from 5 seconds to 1 second on the RTT savings of a geolocated edge server alone. If your site needs 25 round trips to load, something else is clearly wrong. (And this is an exaggerated case; in the real world the benefit would probably be much smaller.)
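Back-of-the-envelope version of that arithmetic, as a rough sketch in Python (the 180ms and 20ms figures are the assumed worst-case numbers from above):

  # Rough sketch of the round-trip arithmetic above.
  # Assumptions: 180ms RTT to a distant origin, 20ms RTT to a nearby edge.
  origin_rtt_ms = 180
  edge_rtt_ms = 20
  saving_per_rtt_ms = origin_rtt_ms - edge_rtt_ms  # 160ms saved per round trip

  target_saving_ms = 5000 - 1000  # going from a 5s load to a 1s load
  print(target_saving_ms / saving_per_rtt_ms)  # 25.0 round trips needed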

To be clear, I'm not saying that geolocated edge caches are bad or useless. They are clearly a very beneficial thing. They're just not the be-all and end-all of web performance, and most people in the demographic we're talking about probably have more important things to optimize (OTOH, using Cloudflare is cheap and doesn't require a lot of skill, so it's very low-hanging fruit).


> So using Cloudflare, on a cache hit (best case) you save something like 160ms per round trip.

Per round trip, and on a cold start you pay that cost several times over before any content arrives: the TCP handshake takes one round trip, and the TLS handshake takes one or two more depending on the TLS version. That's roughly 320-480ms of extra latency before you even get to sending the first HTTP request.

Cold start latency matters a lot.
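For a rough sense of that cost, here's a small Python sketch (example.com is just a placeholder host) that times the TCP and TLS handshakes separately on a cold connection:

  # Sketch: time the TCP and TLS handshakes separately on a cold connection.
  # The host is a placeholder; any HTTPS host works.
  import socket, ssl, time

  host = "example.com"

  t0 = time.monotonic()
  raw = socket.create_connection((host, 443), timeout=10)  # TCP handshake (1 RTT)
  t1 = time.monotonic()

  ctx = ssl.create_default_context()
  tls = ctx.wrap_socket(raw, server_hostname=host)  # TLS handshake (1-2 RTTs)
  t2 = time.monotonic()
  tls.close()

  print(f"TCP handshake: {(t1 - t0) * 1000:.0f}ms")
  print(f"TLS handshake: {(t2 - t1) * 1000:.0f}ms")

Run it from somewhere far from the server and the handshake times track the round-trip time, which is exactly the cost a nearby edge server shortens.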


> In this scenario you would need roughly 25 round trips to bring the site from 5 seconds to 1 second

You're forgetting that TCP itself is bidirectional: the sender has to wait for ACKs before it can grow its congestion window. High-latency connections therefore get lower throughput, especially at the beginning of a transfer during slow start, because the data isn't literally just streaming in one direction.
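A rough illustration of why the start of the transfer is the expensive part: under slow start the sender can only push about one congestion window per round trip, and the window starts small. A sketch, assuming a 10-segment (~14KB) initial window that doubles each RTT, ignoring loss and receive-window limits:

  # Sketch: round trips needed to push a payload through TCP slow start,
  # assuming a 10-segment (~14KB) initial window that doubles every RTT.
  def slow_start_round_trips(payload_bytes, init_window_bytes=10 * 1460):
      sent, window, rtts = 0, init_window_bytes, 0
      while sent < payload_bytes:
          sent += window
          window *= 2
          rtts += 1
      return rtts

  # The ~3.8MB homepage mentioned above, treated as a single transfer:
  print(slow_start_round_trips(int(3.8 * 1024 * 1024)))  # ~9 round trips

At 180ms per round trip that's roughly 1.6s just to drain the send window, versus roughly 0.2s at 20ms, before counting the handshakes or any server time.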


Anything over 100ms [1] is perceived as not-instant by the user. If you spend 2 RTTs at 50ms each, you've already hit that threshold before the server has done any work.

[1]: https://stackoverflow.com/questions/536300/what-is-the-short...


To clarify, I don't disagree.

I just think: if your site takes 700ms, is there really a difference between that and 650ms?


If that additional 50ms hits your user's bail-out threshold, yeah.


1.13 seconds to load https://www.google.co.uk/ (even after preloading it and agreeing to their popup)

3.44 seconds to do a search for "donkey"



