The Future of the Web Is on the Edge (deno.com)
230 points by 0xedb on Oct 6, 2022 | 194 comments



As much as I love Deno, I think this is the wrong way to frame it. For a number of reasons.

1) Sure, big web apps for a global audience would benefit from a distributed application and distributed data. But, honestly, most web apps I've worked on in my 20+ years in web dev have been for an audience in a single country or even city.

2) It's easy for Deno to say "get a distributed app running at the edge!" when the hard part is having distributed data which they don't solve. If you don't need distributed consistent data then you're probably more than fine with a monolith with a CDN.

3) Those latency numbers don't seem right to me. My production server in AMS returns a response in 200ms to here (central Mexico).

4) Performance evangelists and salesmen will try to convince everyone they need to get a response in 50ms, but for most use cases that's just ridiculous. Most people are fine getting a response in under a second. Just use a CDN for your static assets and your monolith will be fine.

The edge is cool, but it's not "the future". It's just another tool with pros and cons.


As far as I know Cloudflare Pages Functions with Durable Objects or D1 is the only edge infrastructure that can do this currently. D1 uses SQL as it is built on SQLite, and Durable Objects is used for coordination, but also has a transactional storage API.

D1 is in closed beta, Pages Functions is in open beta but limited to 100,000 requests. You don't need Pages Functions if you have an SPA; it's more of an MPA thing where Workers is used as a server for SSR. It lets you push more work to the server like in the good old days, except the server is a JavaScript runtime running on the edge. Durable Objects is the interesting part, because it gives you data locality but also strong consistency.
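If you haven't seen the API, a Durable Object is roughly a class with a fetch handler plus transactional storage. A minimal counter sketch (class and binding names here are made up, and the wrangler binding config is omitted; the state.storage get/put calls and idFromName routing are the real API):

```typescript
// Sketch of a Durable Object counter (names are placeholders).
export class Counter {
  state: DurableObjectState;

  constructor(state: DurableObjectState) {
    this.state = state;
  }

  async fetch(request: Request): Promise<Response> {
    // All requests for a given object ID hit the same instance, so reads
    // and writes through the storage API are strongly consistent.
    let count = (await this.state.storage.get<number>("count")) ?? 0;
    count += 1;
    await this.state.storage.put("count", count);
    return new Response(String(count));
  }
}

// Worker that routes each URL path to its own counter object.
export default {
  async fetch(request: Request, env: { COUNTER: DurableObjectNamespace }) {
    const id = env.COUNTER.idFromName(new URL(request.url).pathname);
    return env.COUNTER.get(id).fetch(request);
  },
};
```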

Not shilling for Cloudflare, I just think that their stuff is cool and the rest of the industry is definitely playing catch-up to them. You can argue that most of us don't need the scale and it's true, but I would also argue that we can use the performance and the developer convenience, and by that I also mean time, which is by far the most important resource. Their runtime is also open-source, which makes you feel less tethered to them.

https://blog.cloudflare.com/workerd-open-source-workers-runt...


Check out fly.io as well. They've been adding some SQLite stuff. They also have the added benefit that you can run just about any docker image, not just functions.


I'm building an app using fly.io and SQLite and it's pretty interesting.

This indie-stack template shows how to use the Remix framework, fly.io, SQLite and Prisma together.

https://github.com/remix-run/indie-stack

I think fly.io will only expand their SQLite features with the creator of Litestream on board. Exciting stuff!

https://fly.io/blog/all-in-on-sqlite-litestream/


There are a number of distributed databases on the market: Fauna, Google Cloud Spanner, Cockroach, etc.

I love the CF stuff. I've been using Workers for a couple of years, even right now in production for streaming audio and other duties. But Workers are not a generalist solution for building complete applications.


It looks like Cloudflare is moving towards that with functions in Pages, which allows you to call Workers within your Pages app.

https://blog.cloudflare.com/cloudflare-pages-goes-full-stack...


Exactly my point, and I'm working on a complicated web app on top of Pages Functions and Durable Objects. The idea is that you can provide dynamic functionality with little or no client-side JavaScript, which is what SSR is about, and at the same time you can also build things like an authentication server, which traditionally requires an actual server but can be done with just Workers and Durable Objects.


Too bad distributed DBs are very expensive atm. Their multi-region offerings are only good for companies who can spend thousands on a database each month.

I’m actually thinking that with GDPR and similar regional lock-in laws, the future might well be “distributed but independent” DBs, and then both the compliance and the performance problems are solved using more traditional architectures.


yugabyte looks quite interesting in that space. From what I can tell their OSS product allows for "Row-level geo-partitioning" https://docs.yugabyte.com/preview/explore/multi-region-deplo...

I haven't used it for anything yet, so I don't know how well it works.


Cloudflare made a blog post about this. I tried to find it but I couldn’t. The general idea was that in a post-GDPR world, as more and more countries put legal limits on the movement of data about their citizens, it gets harder and harder to serve these users with classic web tech. However, they say, with Cloudflare Workers and Durable Objects the user data is kept in-region, and thus you have a clean, scalable architecture without legal worries about data jurisdiction.


I agree. It is the HN curse: what is posted here is often very interesting but too highfalutin for most people's needs. But it is good to know about.

I do like web hosting services that make using CDNs etc. so simple that it is a no-brainer to use them and benefit.

When you have to work a lot harder to get your thing done in order to use the “webscale” tech, though, this is where you might be burning complexity tokens.

I never really see this happen at work though. Often at work it is erring on the side of being too conservative.


I disagree. If you look at what Cloudflare is building on top of Workers it definitely seems like the future.

KV storage, Durable Objects and Queues (Message passing). You can build complex apps using these tools.

E: and D1 and functions in Pages.

Everything being on the edge feels like the natural evolution of the current approach.

On the frontend there is a focus on more server side rendering and hydration/progressive enhancement.

On the backend there is a focus on globally replicating/distributing data for quicker access.

Edge is kind of the best of both worlds. For the SSR/hydration/progressive enhancement it's extremely fast because of much lower latency and for backend operations you can cache and/or store data at the edge so it's much quicker to access.

Abstractions over the edge like Cloudflare's products also give you distributed computation and storage for free, which is pretty big. You don't have to care about scaling or coordination the same way you do with a monolith.

It seems likely that in 5yrs edge will be the default way to write and distribute new apps.
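To make the SSR-at-the-edge part concrete: a minimal handler on something like Deno Deploy just renders HTML per request, close to the user. This is only a sketch; DENO_REGION is, as far as I recall, the env var Deploy exposes, and the page content is a placeholder.

```typescript
// Minimal SSR-at-the-edge sketch for Deno Deploy: render HTML per request.
Deno.serve((req: Request) => {
  const region = Deno.env.get("DENO_REGION") ?? "unknown";
  const html = `<!doctype html>
<html>
  <body>
    <h1>Hello from the edge</h1>
    <p>Rendered in ${region} for ${new URL(req.url).pathname}</p>
  </body>
</html>`;
  return new Response(html, {
    headers: { "content-type": "text/html; charset=utf-8" },
  });
});
```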


They are complex, expensive, locked in, partially closed source, and not nearly close to complete solutions.

It’s a shiny thing, but at the cost of so much DX and optionality which is a worrying general trend with edge stuff.


They have open sourced the workers runtime to alleviate lock-in fears: https://blog.cloudflare.com/workerd-open-source-workers-runt...


Still though:

> As hinted above, the full Cloudflare Workers service involves a lot of technology beyond workerd itself, including additional security, deployment mechanisms, orchestration, and so much more. workerd itself is a portion of our runtime codebase, which is itself a small (albeit critical) piece of the overall Cloudflare Workers service.


The parts we released are the parts you'd need to host your app on other typical cloud hosting services in a typical way (e.g. using kubernetes), which should address lock-in concerns.

The parts we did not release are the parts that would only really be useful for building your own hosting service to compete with Cloudflare. These parts would not be particularly useful to someone who merely wants to run their own code -- in fact, they'd be excessively difficult to operate for that use case. This is on par with other services, e.g. my understanding is Deno Deploy has not released this part of their code either.


I get it, but it means you end up with the arbitrary and limited API that you’d mostly only use for the benefits at the edge, except now you aren’t at the edge and still have all the limitations.


In what way are they more complex than doing it yourself and not complete solutions? Platform risk always exists, but in this case it seems pretty minimal. Cloudflare's solutions are not anything that can't be easily replaced, the value of them is that they are integrated with the rest of Cloudflare's platform and run on the edge.

What do you mean by loss of control? Cloudflare has local dev in their Wrangler tool. I'm not sure how testing/scripting is affected here since the main difference is the deployment target.


DurableObjects? It's a totally arbitrary API that's non portable and definitely not easily replaced. And once you opt out of db there's a lot less reason for the whole thing.


Counterpoint - every API is non-portable in the beginning. All it takes is a killer app - for example, S3.

I'm on the apprehensive side, but optimistic. Let the early adopters give it a shot so the rest of us can see how worth it it is.


My guess is more conservative - 10 years. But yes I concur with trajectory.


These are all valid points, though for me, one thing I like about the "edge" model for application servers (workers) is the abstraction that you don't have to think about where your servers live, how many there are, when they start and stop, etc. The fact that they basically have to be stateless also simplifies things

Even if your storage is centralized (and it may not have to be; some companies are offering edge storage solutions that handle synchronization for you), it seems nice to be able to just ship code and let the provider worry about when and where and how much

That said, I've never used edge workers in production, so I could be over-idealizing the reality.


You don't need the edge to have the same abstractions for your application. It's commonly called the 12-factor app and there are many ways to achieve it

https://12factor.net/


I’m confused. I don’t know how long it’s been around, but I was introduced to this document when I got into Web programming 5-6 years ago. It explicitly states as one of the main goals, ”- (Is) suitable for deployment on modern cloud platforms, obviating the need for servers and systems administration;”.

I’m not familiar with all the authors, but they also say they came to these conclusions while developing on Heroku/ developing the platform itself. One of the earliest & biggest players in the ‘edge’ software market.

Can you elaborate on why you said “you don’t need edge”, when these ideas were intuited by folks creating apps on the edge? Because the way I’m reading it, it seems that this is a methodology perfectly suited to (if not designed for) the edge?

I tend to agree with the article, that edge is the ‘future’. It might not be next month, or next year. But 5 years from now? It certainly seems to be trending in that direction. And for good reason I think.

Don’t get me wrong, I love nerding out with an Ubuntu server, configuring everything, and making my app run on the internet by hand. But dang if it isn’t a lot of work. Most hobby projects I start nowadays, I start from a simple static site approach, keeping in mind I’ll need to progress to more complexity/layers as need be, and throw it up on Netlify or Cloudflare Pages. It’s so efficient, it’s hard for me to imagine starting off a personal project any other way now.


Heroku was never touted as an edge platform; where are you getting this idea from?

There have been many trends in software, some stay, some go, some evolve

Maybe you are conflating "edge" with "serverless" and over applying the latest buzzword? None of the technologies you have mentioned are "edge" and telecoms have been doing edge since before it had a buzzword.

It's all about deployment and running infra. The "edge" has a lot of complications, and while closer in spirit to the early internet, we gravitated towards centrally controlled compute because it is much easier to manage and maintain at scale. See other comments for why


Heroku was not a player in the 'edge' market. I just googled to confirm. They actually had a "Heroku Edge" CDN that they added in 2020 for the first time[0], which to me is their entry into Edge style hosting.

I think it's fair to call Heroku (and 12-factor apps) a precursor to edge computing, but certainly not an early player.

[0] https://web.archive.org/web/20201126133614/https://devcenter...


You often need to have your code close to the database or api servers for requests/queries that depend on each other.


A lot of the time the code runs in the browser, which is probably further from the database, and you still need an API in between.


To elaborate... how should Deno market their stuff?

IMO developer productivity/experience.

Deno are in an extremely privileged position that AFAIK no one has. They have control of the runtime, the cloud platform, and the framework with Fresh.

Compare Deno with, say, Vercel, which really only has control over Next. They depend on AWS, Node, and React, over which they have no control. No wonder they're investing in Svelte/SvelteKit and other projects.

Or Cloudflare who are developing their cloud infra with mediocre DX (although improving) and no framework of their own to be able to sell the complete experience.


Pretty sure under the hood Deno's cloud is running on Cloudflare.


It is not. It is a custom runtime on custom infrastructure. Why do you assume we use Cloudflare under the hood?


How could that be?

CF Workers have a custom runtime on which Deno cannot run.


Believe they've called it their own proprietary runtime since they've announced their Deploy service. Their runtime supports TCP connections, so I can actually make db connections, unlike cf workers.


Isn't the "runtime" mostly a wrapper around V8?


Your comment regarding performance is on point. 50ms response times are reasonable targets for certain niche cases, e.g. high scale (which most aren't at) or super time-critical applications (e.g. certain sectors of finance, where sometimes 50ms is even too long). Almost every other application or platform would be fine with 100ms or 200ms (or longer!) response times.


> 4) Performance evangelists and salesmen will try to convince everyone they need to get a response in 50ms, but for most use cases that's just ridiculous. Most people are fine getting a response in under a second. Just use a CDN for your static assets and your monolith will be fine.

That would be true if loading a website were just a single request. But today web apps make multiple requests, and probably even more in the background since so many use microservices. When you say 200ms it sounds fast enough, but usually the end result is between 3 and 5 seconds combined. If every request went from 200ms to 50ms, total load time would be closer to 1 second, which is fast enough.


What you described is the shitty world we live in, where a web "app" that is nothing more than a glorified static web page needs a few megabytes of JavaScript and "multiple requests because microservices".

This isn't solved by moving the apps to the edge.


Static assets should be loaded from a CDN. You can render the initial HTML in 200ms and load the microservices client-side later without making the user wait. Or even better, don't use microservices. Caching can also help.


Sorry I'm in the <50ms gang. B)


50ms? I'm happy to get 50ms ping when I play quake/unreal tournament.

200ms is fine for a website.


> 4) Performance evangelists and salesmen will try to convince everyone they need to get a response in 50ms, but for most use cases that's just ridiculous. Most people are fine getting a response in under a second. Just use a CDN for your static assets and your monolith will be fine.

---

Disagree with this: performance optimization is a statistical issue, and considering only the time spent on a single request is very one-sided. You should also consider the latency of all requests (how long the user waits in total in the application) over the lifetime of the application. The difference between 50 milliseconds and one second has a huge impact on the experience.


> 2) It's easy for Deno to say "get a distributed app running at the edge!" when the hard part is having distributed data which they don't solve. If you don't need distributed consistent data then you're probably more than fine with a monolith with a CDN.

This is not targeted at you but at naive JS developers. JS has millions of users, if they can deceive a good chunk of them with their marketing, then they can make money.


Edge isn't only JS - it's potentially any language with WASM. Deno? sure. But surely you aren't conflating JS with Edge.


Surely you aren't conflating WASM/JS with edge, it's also any edge service that can run docker :)


I enjoyed reading your level-headed perception of this.


Thank you kind sir.


I agree, I think this might be another side effect of clickbait, framing the edge as something that "replaces" traditional architecture, rather than an _evolution_ of the existing architecture where both paradigms can continue to exist side-by-side. As always.


It's a tool with pros and cons, but it's still "the future" because it's in its early stages and a lot of the possibilities have not been realized yet.


one of the driving patterns behind the whole internet (in concept) has always been to have dumb pipes and very smart edges/endpoints

so this is merely re-asserting one of the (now taken for granted) ideas that make the internet what it's become.


Instead of "edge", a lot of websites should just have 3 locations (US, EU, APAC) with a non-geo-replicated serverless database in each region. At least that's what we're building at WunderGraph (https://wundergraph.com/). Edge sounds super cool, but if you take state and consistency into consideration, you just can't have servers across the globe that also replicate their state consistently with low latency. TTFB doesn't matter as much as correctness. And if stale content is acceptable, then we can also just push it to a CDN.

Most importantly, you'd want low latency between server and storage. So if your servers are on the "edge", they are close to the user, but (randomly) further away from the database. Durable Objects might solve this, but they are nowhere near a Postgres database. I think the "edge" is good for some stateless use cases, like validating auth and inputs, etc., but it won't make "boring" services, even serverless in "non-edge" locations, obsolete. You can see this on Vercel: serverless for functions, server-side rendering, etc., and Cloudflare Workers for edge middleware. But they explicitly say that your serverless functions should be close to a database if you're using one.


I would even say that for 99% of existing websites a free Oracle Cloud instance with 4 cores and 24GB RAM + a CDN is more than enough.


Here is my golden setup: Cloudflare Tunnels + all ports closed (except SSH) + bare metal server. You can scale to the moon with like a million active users on a couple of 16-core servers (1 primary, 1 hot failover). You don't need AWS. You don't need Terraform. You don't need Kubernetes. Hell, you don't need Docker, because you know a priori what the deployment environment is. Just run systemd services.

99% of startups will never need anything more. They'll fail before that. The ones that succeed have a good problem on hand to actually Scale™.

What we're seeing is premature-optimi...errr scaling.

Edit more context:

For Postgres, set up streaming replication between the primary and a hot standby. You need a remote server somewhere to check the health of your primary and promote your hot standby if it fails. It is not that difficult. In addition, have cron jobs to back up your database with pg_dumpall somewhere like Backblaze or S3. Use your hot standby to run the Grafana/Prometheus/Loki stack. For extra safety, run both servers on ZFS raid (mirror or raidz2) on NVMe drives. You'll get like 100k IOPS, which would be 300x a base RDS instance on AWS. Ridiculous savings, and the performance would be just astonishing. Have your app call Postgres on localhost; it will be the fastest web experience your customers will ever experience, on edge or not.
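The health-check-and-promote piece can be a small script. A rough sketch in TypeScript (Deno), where hosts, data directory, and thresholds are placeholders, and where a real setup also needs fencing so you never end up with two primaries:

```typescript
// Rough watchdog sketch: if the primary stops answering on 5432 a few checks
// in a row, promote the hot standby over SSH. All names/paths are placeholders.
const PRIMARY = "primary.internal";
const STANDBY = "standby.internal";
let failures = 0;

async function primaryIsUp(): Promise<boolean> {
  try {
    const conn = await Deno.connect({ hostname: PRIMARY, port: 5432 });
    conn.close();
    return true;
  } catch {
    return false;
  }
}

async function promoteStandby(): Promise<void> {
  // pg_ctl promote turns the standby into a writable primary.
  const cmd = new Deno.Command("ssh", {
    args: [STANDBY, "pg_ctl", "promote", "-D", "/var/lib/postgresql/data"],
  });
  const { code } = await cmd.output();
  console.log(`promotion exited with code ${code}`);
}

setInterval(async () => {
  if (await primaryIsUp()) {
    failures = 0;
    return;
  }
  failures += 1;
  if (failures >= 3) {
    await promoteStandby();
    failures = 0; // in reality you'd stop checking and page a human here
  }
}, 10_000);
```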


Delightful. Only change: I'd add Tailscale for your SSH access (and to access your dashboards/logs) so you don't have ANY ports open.


If you use Tailscale, make sure to keep spare VGA cables or hook up an ethernet cable to the IPMI port. Tailscale has not been reliable in my experience. There is nothing like a straight SSH connection, and you can set up `ufw limit OpenSSH` to rate-limit attempts on the SSH port.


You still pay the latency cost, even though your data travels mostly inside CF's network. It's very noticeable when you're far away from the server, which you will be for most of the world if you sell to everyone. Perfectly fine if e.g. you're only targeting anyone from your region.


This setup is good if you are only serving people on a single continent. The TCP handshake with someone halfway across the world is still going to eat all of your latency. You can’t beat the speed of light.


Most startups are going to be operating in one country. And most requests would be handled by the Cloudflare edge, except for dynamic requests to the origin.

You might be surprised how fast it would be. And most companies blow their latency budget with 7 second redirects and touching 28 different microservices before returning a response.

All I am saying is don't get fixated on geo-latency issues. There are bigger fish to fry.

But after all fish have been fried, you’re right. Servers on the edge would help.


What happens if your remote server thinks the primary is down when it isn't really, and you end up with two hot primaries? Is this just not an issue in practice?


In that case, the Postgres new-primary would just be detached and would not stream/pull additional new data from the old-primary after promotion. You should also make sure to divert all traffic to the new-primary.

This problem happens in AWS RDS as well: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_...

Actually, this might be much simpler with Cloudflare Tunnels. So the failover scenario would be something like this:

1. Primary and Hot standby are active. CF tunnels are routing all traffic to primary.

2. Primary health check fails, use CF health alerts to promote Hot standby to primary (we'll call this new-primary).

3. Postgres promotion completes and new primary starts receiving traffic on CF tunnels automatically.

4. No traffic goes to old-primary. Back up its data and use old-primary as a new hot standby; configure replication again from new-primary.

An even better strategy would be to use the hot standby as a read-only replica so traffic is split dynamically depending on the app's read or write needs. Take a look at Stack Overflow's infra architecture: https://stackexchange.com/performance


You don't even need Cloudflare if the server is in the same locale you are serving.


Saving this


Nice


I wish the thinking that leads to articles like the one in the OP would take physical form so I could take a baseball bat and release my frustrations by beating the hell out of it. So many of my current pains in the ass at work stem from adding complexity to get some benefit that does nothing to make our product better.


Nothing from Oracle is ever free.


Nothing from anyone is ever free.


Yeah, but especially not Oracle.


Never trust the lawnmower to not mow your budget.


Is this a Bryan Cantrill reference? :)


For those not in the know, you learn a piece of software developer lore today:

https://news.ycombinator.com/item?id=15886728


99% of websites can run from the cheapest Digital Ocean droplet.


Some percentage of websites are actually Atlassian Confluence. Those might fall over.


Atlassian needs at least 100 cores and 1 TB of RAM to get anything resembling performance.


Only 1TB? You must have the lite version!


Agreed, except without the Oracle part.


We use the cloud free tier for https://prose.sh

We aren’t even close to hitting any bottlenecks at this point


The point is that the developer audience just coming out of college will not be familiar with Postgres. They would build most of their applications saving data via "Durable Objects" on the edge. Someone just built Kafka using Durable Objects.

Not arguing that DO is better than PostgreSQL. I'm arguing that a lot of developers won't realise that, because the DX of Durable Objects is superior.


*groans* is this another new name for an old thing? I know I could just Google it, but wtf is a durable object?

It sounds like a json file… is it a json file? C’mon, tell me it’s NOT just a json file…


It's a sharded key-value store with stored procedures written in JavaScript.


C'mon - let's not wrap it up in all that, be real. It's a JSON object and a few JS functions passed as arguments to a class constructor.

I'm not saying it isn't an elegant design, but can we pls not talk about proprietary implementations of a particular design pattern as if they're some kind of industry standard?


You're getting the data format confused with the database engine. Yes, a database might just be storing JSON, but how it's stored and replicated matters.


I agree. It's probably good enough for a lot of use cases.


Agree, except that amount of RAM ain’t free.


The free x86 VMs are capped at 1GB each, but the ARM ones go up to 24GB.

> Arm-based Ampere A1 cores and 24 GB of memory usable as 1 VM or up to 4 VMs with 3,000 OCPU hours and 18,000 GB hours per month

https://www.oracle.com/cloud/free/#always-free


The architecture I eventually ended up with for my product (https://reflame.app) involves:

1. A strongly consistent globally replicated DB for most data that needs fast reads (<100ms) but not necessarily fast writes (>200ms). I've been using Fauna, but there are other options too such as CockroachDB and Spanner, and more in the works.

2. An eventually consistent globally replicated DB for the subset of data that does also need fast writes. I eventually settled on Dynamo for this, but there are even more options here.

I think for all but the most latency-sensitive products, 1. will be all they need. IMHO the strongly consistently replicated database is a strictly superior product compared to databases that are single-region by default and only support replication through read-replicas.

In a read-replica system, we have to account for stale reads due to replication delays, and redirect writes to the primary, resulting in inconsistent latencies across regions. This is an extremely expensive complexity tax that will significantly increase the cognitive load on every engineer, lead to a ton of bugs around stale reads, and cause edge case handling code to seep into every corner of our codebase.

Strongly consistently replicated databases on the other hand, offer the exact same mental model as a database that lives in a single region with a single source of truth, while offering consistent, fast, up-to-date reads everywhere, at the cost of consistently slower writes everywhere. I actually consider the consistently slower writes also a benefit since it doesn't allow us to fool ourselves into thinking our app is fast for everybody, when it's only fast for us because we placed the primary db right next to us, and forces us to actually solve for the higher write latency using other technologies if our use case truly requires it (see 2.).

In the super long term, I don't think the future is on what's currently referred to as "the edge", as this "edge" doesn't extend nearly far enough. The true edge is client devices: reading from and writing to client devices is the only way to truly eliminate speed-of-light induced latency.

For a long time, most truly client-first apps have been relegated to single-user experiences due to how most popular client-first architectures have not had an answer for collaboration and authorization, but with this new wave of client-first architectures solving for collaboration and authorization with client-side reads and optimistic client-side writes with server-side validation (see Replicache), I've never been more optimistic about the future (an open source alternative to Replicache would do wonders to accelerate us to this future. clientdb looks promising).


FYI your light/dark mode icon is off-center for me. Fedora 36/Firefox.

Also at some window heights, the "Deployed with Reflame in x ms" box obscures the "Have questions? Let's chat!" text without generating a scrollbar.


Thank you so much for catching these! Just deployed a fix for both.


Good post. Do you have any resources describing the 'new wave of client-first architectures' you mention? I'm struggling to understand how you can do client-side authorization securely.


Replicache (https://replicache.dev/) and clientdb (https://clientdb.dev/) are the only productized versions of this architecture I'm aware of (please do let me know if anyone is aware of others!).

But the architecture itself has been used successfully in a bunch of apps, most notable of which is probably Linear (https://linear.app/docs/offline-mode, I remember watching an early video of their founder explaining the architecture in more detail but I can't seem to find it anymore (edit: found it! https://youtu.be/WxK11RsLqp4?t=2175)).

Basically the way authorization works is you define specific mutations that are supported (no arbitrary writes to client state, so write semantics are constrained for ease of authorization and conflict handling), with a client-side and server-side implementation for each mutation. The client side gets applied optimistically and then synced and run on the server eventually, which applies authorization rules and detects and handles conflicts, which can result in client state getting rolled back if authorization rules are violated or if unresolvable conflicts are present. Replicache has a good writeup here: https://doc.replicache.dev/how-it-works#the-big-picture
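A heavily simplified sketch of the shape of that pattern (not Replicache's actual API, just illustrative names): each mutation has a client implementation that applies optimistically and a server implementation that re-runs it with authorization checks.

```typescript
// Simplified sketch of the named-mutations pattern (not Replicache's real API).
type Todo = { id: string; text: string; ownerId: string };

// Client side: apply the mutation to local state immediately, queue it for sync.
const localTodos = new Map<string, Todo>();
const pending: Array<{ name: string; args: unknown }> = [];

function createTodoOptimistic(args: { id: string; text: string; userId: string }) {
  localTodos.set(args.id, { id: args.id, text: args.text, ownerId: args.userId });
  pending.push({ name: "createTodo", args });
}

// Server side: the same mutation re-run with authorization; if it throws, the
// client rolls back its optimistic copy on the next sync.
async function createTodoServer(
  args: { id: string; text: string; userId: string },
  session: { userId: string },
  db: { insertTodo(todo: Todo): Promise<void> },
): Promise<void> {
  if (session.userId !== args.userId) {
    throw new Error("not authorized to create todos for another user");
  }
  await db.insertTodo({ id: args.id, text: args.text, ownerId: args.userId });
}
```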


Great, thanks. Are they open-source (or are you aware of anything open-source that does something similar)?


Replicache has a commercial license but is source-available. Clientdb is open-source, but doesn't seem as mature yet. I'd love to see more open source solutions in this space too.


Right there with you. We chose Cockroach and Fly to achieve our global needs in the simplest manner. But to use their cloud multi-region offering, it costs ~$3,400/mo at their recommended specs and 3 regions. Pricey, depending on how you see it.

We hope their serverless tier meets feature parity soon.


Yep, Cockroach's dedicated offering was pretty cost prohibitive when I last looked too, and I really didn't want to have to operate my own globally replicated database, so Fauna seemed like the best option at the time.

Really looking forward to Cockroach's serverless options too. More competition in this space is very welcome.


Anyone used Yugabyte before for a globally replicated DB? I use Hasura a lot, which depends on Postgres, and Yugabyte seems like a possible drop in candidate for Postgres, but wondering if others are using it in prod.


I haven't but it's definitely an interesting option! Cockroach is another option if you're looking for Postgres compatibility.


How are you finding working with distributed databases? I've worked with Fauna and Dynamo and they are a nightmare from the DX and iteration speed point of view compared to Postgres.

I'm gonna try CockroachDB next.


Dynamo has definitely been a bit of a nightmare to work with, but I actually find Fauna reasonably pleasant. You're not going to get the vast ecosystem of SQL based tooling, but the query language itself is designed to be composable, which reduces the need for ORMs (though I do still miss the type generation from Prisma sometimes), and there's a pretty nice declarative schema migration tool that handles that aspect pretty well: https://github.com/fauna-labs/fauna-schema-migrate

Haven't found myself needing much else from a DB.


Yeah, I mean, apart from missing IntelliSense, Fauna forces you to set everything in stone, which is not good for fast iteration and prototyping. I understand it's for performance, but things, especially in startups, continuously change.


> But they explicitly say that your serverless functions should be close to a database if you're using one.

Or else?


Let's say the user is in FRA and the database in SF. If the "server" is on the edge, you'll end up with 100-200ms between server and database, while the user has less than 10ms latency to the "edge". If the server does multiple round trips to the database, it can take seconds until the first byte. If the server and database are both in SF, TTFB will probably be less than one second, as round trips between database and server are almost zero. One thing to mention is that it would be beneficial if the TLS handshake could be made on the edge, as it's a multi-roundtrip transaction. Ideally, we could combine a server close to the DB with a stateless edge service.

So, the future of the web might not be on the edge. It's rather: the future of the web will leverage the edge.


You'll get the latencies noted for Heroku in the article.


Wait till we get to the final stage, where we install apps on people's computers directly. That way, we get ZERO latency. Can't wait! What should we call it?


Sarcasm aside, this is actually a good point.

Most apps are either unable to work without a semi-centralized database.

Or they can work with a local database and slow syncs to the cloud (a la Dropbox).

Very few apps fit between those two categories. Edge looks like a solution looking for a problem.


At long last.. "Year of the linux desktop".


Distributed client-side edge cache.


Oooh I know.

Cacheless


Applets ?


Yeah, we just gonna need 5 devs to build and support 5 different native apps. /s


Local first!


"localedge"


Native app, or desktop app… it’s been there for decades….


Edgy PC!


a web browser?


picoservices


IDK, I think this is getting overstated.

Don't forget - there's very typically a runtime environment available that's a lot closer to the user... their browser.

The edge, as a place to run code, is a bit of a tweener... farther from the user than the browser, farther from the data than the database environment.

If the data the edge needs is also on or near the edge, you can really start to do something with it. But that means your data is distributed. You want to have a really solid plan on how your edge data is kept valid.

That's not so hard with static assets, but edge processing on static assets seems like a relatively narrow case, because it needs to make more sense than pre-computing all the cases and just having more static files.

Edge processing on dynamic data is pretty interesting, but having coherent distributed dynamic data tends to be app specific and hard to get right and keep right. There are certainly cases, but I don't think they usually tend to comprise the whole app. I think it will usually be a partial solution and add a bunch of complexity, so apps will want to use it sparingly, where it's really needed.

I think this will be more of a tool in the toolbox, not the general future of the web.


OK, but what about data synchronization?

One of the tremendously simplifying aspects of traditional web applications was that they were, to a first order of approximation, stateless. The state lived on the server in a centralized location. When an update occurred, it was done via an HTTP request that failed or succeeded.

If shared state between users is stored on the edge it needs to be synchronized between edge nodes, leading to collisions that may appear at a point significantly after a user has "clicked save". I can imagine this becoming a nightmare as a user accretes dependent local changes, all of which eventually have to be rolled back as edge data stores synchronize with one another.

Now, there are advanced technologies for this sort of thing, but they are relatively complex, hard to program against, and often don't (and can't) offer a great end-user experience.

I am not saying that the edge isn't going to be useful for some applications, but it is throwing out one of the main simplifications that the original, REST-ful model of the web gave us.


One option is that you can abstract it away like Cloudflare does, using DBs or KV stores that live natively on the edge. Look up their IRC or Doom demos on the edge sometime. There is no central server needed; the edge nodes just figure that out with each other behind the scenes, and you don't have to worry about the low-level sync models. Less control, easier to use.


I like deno.com, but I am deeply suspicious of their speed test methodology. I hope I am wrong, but it really looks like they are comparing time-to-first-byte (TTFB) including TLS, i.e. a 'cold start'. It should not be almost 400 ms from Amsterdam to Virginia on an existing connection.

If your app makes one and only one connection, then fair enough, that is a real penalty. Otherwise, this is just the benefit of literally every single CDN. With enough traffic, their edge servers will keep connections open to origin.

For serverless, there is no origin -- even better for performance, assuming it has no need for a common data store.


> With enough traffic, their edge servers will keep connections open to origin.

Except AWS CloudFront when used with a custom origin server (whole site acceleration) - their "clever" load distribution algorithm requires inordinate amounts of traffic for effective origin connection re-use.

I run multiple sites in the top 100k websites globally and am getting effectively zero origin connection re-use. Badgered them about it but all I got was a wontfix :-/

Sigh.


I'm glad it's not just me - I remember trying to improve site response times for a site using CloudFront and I thought I'd done the right thing as far as settings go, but got no improvement.


> It should not be almost 400 ms from Amsterdam to Virginia on an existing connection.

Exactly, it should be much closer to 100-150ms.

My production server is in AMS and I'm in central Mexico (QRO). The response is around 200ms, often less than that.


Frankly - who cares unless there's a persistence mechanism to go with this?

The devil isn't in getting a static resource close to users (we've been able to do this for decades with CDNs)

The devil is in getting application state pushed close to those users.

"Eventually consistent" is a real bitch of a thing to deal with.


My conclusion from trying to get Erlang distribution working across regions is: just don't. Distributed PostgreSQL on fly.io is cool, though only for read-heavy workloads.


> When people say “the edge,” they mean that your site or app is going to be hosted simultaneously on multiple servers around the globe, always close to a user.

This is the first time I've seen such a simple description of this. Often you get the feeling of "magicians" using technobabble to make things look much more difficult or newer than they are.


Yes - but even then... it's not really true unless your site is completely static resources.

If you're pushing a blog out - Great!

If you're pushing out anything that relies on application state... much less great.

At best, you then end up in a world of trouble dealing with eventually consistent data, sharding/tenanting, CRDTs, "edge" KV stores (that are really just hiding the eventually consistent nature from you) and all sorts of other trade offs.

If I'm directly collaborating with someone halfway around the globe in a web application, there is literally no magic way to wave a wand and make that latency go away.


One thing I don't understand about the CRDT hype is that it doesn't solve the semantic conflict problem, and solving it at the app level, bypassing CRDTs completely, would arguably be better. And one more thing: real-time collaboration just means a road without signs, lines, or traffic lights. Having a process in place, in an async way, is considerably more efficient.


I'm currently looking into using CRDTs for a side project, and would love to pick your brain:

CRDTs don't solve the semantic issue of a conflict, but nothing ever will, because the semantics are defined by business requirements, right?

Isn't the idea behind CRDTs to develop a set of "primitives" upon which one can build conflict free data structures?

And isn't all the hype about CRDTs a result of the fact that providing those primitives in a serverless way (ie, without requiring a central authority) was a hitherto unsolved problem?
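To check my understanding, a "primitive" here would be something like a toy grow-only counter, where each replica only increments its own slot and merge is an element-wise max, so concurrent updates never conflict. Something along these lines (illustrative sketch, not any particular library):

```typescript
// Toy G-Counter CRDT: increments from different replicas commute under merge.
type GCounter = Record<string, number>;

function increment(counter: GCounter, nodeId: string): GCounter {
  return { ...counter, [nodeId]: (counter[nodeId] ?? 0) + 1 };
}

function merge(a: GCounter, b: GCounter): GCounter {
  const out: GCounter = { ...a };
  for (const [node, count] of Object.entries(b)) {
    out[node] = Math.max(out[node] ?? 0, count);
  }
  return out;
}

function value(counter: GCounter): number {
  return Object.values(counter).reduce((sum, n) => sum + n, 0);
}

// Two replicas update independently and converge regardless of merge order.
const a = increment({}, "node-a");
const b = increment(increment({}, "node-b"), "node-b");
console.log(value(merge(a, b))); // 3
```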


> Often you've got the feeling of "magicians" using technobabble to let things look much more difficult or new than they are.

Just one of the ways people work to keep salaries up ;)


> Just one of the ways people work to keep salaries up ;)

I get a bigger paycheck by explaining tech to business in business terms so I'm not sure that's a valid approach after a particular level of salary.


Right. I think we did global deployment, multi-region with geo-based DNS and anycast, like 10 years ago. I guess the difference here is it's now a product. I'm still not convinced. Why? 3 years ago I started building it as a product, and you know, it just didn't matter. It makes a cool story and blog post, but the technical details just don't matter. Tell me you can deliver sub-100ms response times globally and give me a framework to build for that, but the rest is details I just don't care about. I used to care, I used to think developers should know these details; the reality is it's not important. Delivering developer experience means the user not having to care about those details.

Delivering sub-100ms is definitely important, but mostly only if it's just static pages; if there's database IO or calls to external third-party APIs during that process, then it's not relevant. The large majority of software now is not just serving static assets but running a lot of complex logic, which ends up dictating most of the page load time, not the traversal of light across the globe.


Can someone explain how this can actually work for a medium sized CRUD app? I’m a dev and don’t understand. I get hosting static or cached files at the edge and serving them. And I get how you could run PHP or another server-side language on edge servers too. But you want a single source of truth and if there’s a database involved, you still need to host that somewhere and the true latency will be at this step.

Are there simple modern solutions for maintaining the same database on servers all over? (This doesn’t even sound like a good idea, or at least like it would either be impossible or would have tradeoffs, and sounds like a minor optimization anyway)

Or are Deno etc only used for database-less sites?

Or is everyone just obsessed with TTFP (screen painting) load times because users hate waiting but like pretty loading spinner gifs?


There are a few threads here that discussed options for distributed databases. If you don’t need a single logical cluster for the entire system, it’s totally fine to deploy primary DBs in each region you need. You still gain some of the edge deployment benefits by virtue of no work required for multi-region app deployment. Otherwise, you’re configuring Route 53 to route requests to the servers closest to users. But if you need a single logical database shared globally, any of the new distributed “NewSQL” products work: Fauna, Cockroach, TiDB, and I’m sure many others. These are just the few I know that are used heavily and publicly.


I've yet to come across a non-trivial website or web app that runs on the edge in the way they have described. While it may or may not be the future it is commonly touted as, who exactly is testing edge computing in the real world today? What database are they using? How are they synchronizing application state across all these regions?

From practical industry experience I can say that deploying to 1-3 data centers is still the way to go, and that isn't going to change for the ~50-100ms of latency this approach will save.


For which types of websites would this make a difference? I think for this to matter, the audience should be almost global, in which case it will cater to a small percentage.

Some examples:

- News sites: these heavily use CDNs and caching; it wouldn't make much difference.

- Most CRUD apps, which target a small number of users: probably no significant difference from being on the edge.

- Games: this is one area where it might make a difference due to the latency advantage.

Could someone give some real examples from the net that would make switching to this edge architecture a difference? For example, something like, it would be good if HackerNews/CNN/Intuit did this so that...?


E-commerce. Significant static content, but also significant interactivity, and performance is important as your users are almost by definition "shopping around".


> Games: this is one area this might make a difference due to latency advantage

But then players from Singapore can't play with players from New York.

So single player games. In the browser. Yay.


> The benefits of serverless are two-fold:
> You only pay for what you use—just those 10 seconds if that’s all that’s happening on your app.
> You don’t have to worry about all the DevOps side of servers. No planning, no management, no maintenance.

The second benefit can be had without serverless. Anything that runs containers offers that. The first one is a nice to have for side projects, but pretty irrelevant in the cost of a business building and shipping a product. If they're referring to autoscaling then, again, anything running containers can do that.

And no mention of where the data is? As far as I can tell, this is buzzword soup with no broadly applicable use case.


Curse you Title Case!!

With the convention that articles and prepositions are not capitalised, if you have a title that is mostly articles and prepositions, then the remaining few words which are capitalised can easily be confused with proper nouns, especially if those nouns are relevant to the title's subject. e.g.

https://en.wikipedia.org/wiki/Microsoft_Edge

I thought that the article was going to be about how the author thought Edge was going to (somehow) dominate the browser space sometime in the foreseeable future.


Proper nouns don't take articles. If Edge was intended as a proper noun the title would have been "The Future of the Web Is on Edge".


Sorry, seen too many ungrammatical headlines in my time for that consideration to have swayed how I parsed that one.


Glad I'm not the only one who also thought the title was some sort of MS marketing slogan.

...or it could also be a pun about Edge becoming another Chrome-clone.


944.14ms - that's the worst case stated for the time to first byte with a conventional architecture. That's not bad. Will your user notice it? Probably not for most applications, and for the ones where they do, you can probably cache that locally.

There are performance limited applications - e.g. stock trading - but in those you're talking about choosing specific processors and disabling certain caching approaches to increase the performance that you want, choosing certain network switches, using microwave transmitters where existing physical infrastructure doesn't serve your needs... like fast is _fast_... and people pay for that in expertise and infrastructure.

Will your users pay for cutting your response time from 944.14ms to 45ms? And the additional complexity that comes with?

In some cases the answer is, in all honesty, yes - yes they will. And they'll pay you for every additional fraction of a second it takes light to go from the top of the Empire State building to the bottom, if it gets their trade in first.

More generally, however, your users probably aren't interested in delays measured in less than the time it takes them to blink. How fast is your ballpoint pen? Do you care? To your user's use case, it's all either categorised as instant or something you have to wait for.


Um 944.14ms is almost a second, much longer than the time it takes them to blink.


Granted. It is, however, the worst case scenario as quoted from a marketing piece that's going to render itself in the best light.

According to DDG the average time for a blink is between 100-400 ms. Okay, maybe they blinked a few times. You're taking on a large complexity problem for the difference between one and a few blinks - and that's in the worst case scenario, which for most applications can probably be ameliorated with pre-loading assets.

I'm just not seeing the user-value here. At least not compared to the investment required to adopt a more complex architecture and the problems that involves. If your IO really does occur on those sorts of timeframes (e.g. if you're doing meaningful things at sub-900ms), how are you dealing with race conditions in this distributed architecture? And if you're not dealing with race conditions, and have an effectively static set of data that you're just pushing, why aren't you dealing with that via pre-loading the assets?

That stuff's gotta be paid for. Do you think the difference between one blink or... I dunno, let's be generous and say four - I think if I blinked four times it might take longer than a second, mostly due to the delay between blinks - constitutes a competitive edge sufficient to offset the cost?


I’m not arguing that edge computing (vs more tried and true approaches like regional replication) is the way to go per se, but if every action you’re taking on a site with a significant amount of interactivity takes a second, it will feel fairly sluggish; and if your internet connection sucks then it could feel even worse.

For some apps that matters, for others it doesn’t. There are other ways to achieve snappiness even with sluggish network conditions, like optimistic UI, but those come with significant complexity too. But yeah if your site doesn’t need it then don’t do it.


Seems like we’re headed full circle. At some point in the future someone will rediscover the insanely overpowered machines in everyone’s homes and hands, with almost no latency.


Many definitions of edge computing are already this: Just running software on local hardware, but of course, still billing it like it's a cloud service with a SaaS model.

Throw it all out and run everything off a PC in your basement, you'll thank me later.


The Future of the Web is Peddling Endless Amounts of Cloud Frameworks

The number of web sites that need global edge computing is vanishingly small. More realistically you need a 20+ year old technology called a CDN to run your mostly static site.

Edge computing doesn't even solve any of your real problems here. I18n, l10n, international taxation/currency/payments, data-at-rest/GDPR/privacy laws. And then it introduces new problems, such as data partitioning. Are you going to partition at the edge and deal with tricky sync issues (CAP, anyone??) or are you just going to call back to your centralized DB server a thousand miles away from that edge? You know what's even faster than running that code on an edge device? Running that code on the user's phone or laptop. And you can get there. With a CDN.


You say that until you travel somewhere in Eastern Europe, South America, or Southeast Asia, and realize that 90% of the Western Internet is completely and utterly broken. Everything besides Google, Facebook, etc. (Because they’ve invested in solving the problem) are practically unusable.

The problem isn’t just TTFB or latency, as some people are implying, it’s poor interconnectivity at the transit provider level. Those links are frequently congested and experience fiber cuts, and unless you’re peered in that country your application is going to perform poorly. The companies who understand this have a first mover advantage in some of the fastest growing economies in the world.


Meh. My future is a linux laptop with emacs and ssh.

But yeah. I feel you. The Edge is where all the cool kids are hanging out.

And as much as it sounds like a buzzword, terminating TLS at the edge and calling back to central services via a proxy with warm connections to the backend is pretty easy to deploy and does wonders for perceived latency.


That's perfectly sufficient for a large number of use cases, keeping things simple.

The edge isn't just where the cool kids hang out, though. There is a very noticeable latency hit over long distances, amplified by the number of round trips needed. If you have a local business, this isn't a problem.

Making it super simple to deploy things to the edge, and developing systems that make it easier to push cacheable data to the edge to avoid trips to a central database, is awesome even for hobbyist programmers, just like Heroku made it easy to deploy applications without worrying about VMs.


Yet another web dev article assuming that every web app is massive scale, for a global audience of consumers that have alternative choices.

I have worked on dozens of web sites for paying clients, and none were in that category.

Also, mostly I just want to know, why is Singapore's connectivity so slow? Some sort of filter?


The first deeper problem is state. Other commenters have talked about distributed databases. But the second deeper problem is data sovereignty. In, what I would argue is most cases, you don't want your data to be replicated around the globe. Local laws forbid it, and the marginal utility of having better latency when your customer goes on vacation somewhere that needed a 12 hour flight is pretty close to zero anyway. So why play with fire?

99% of the time that you want geo-distributed databases, I'd argue that building out a directory kind of service, where you use something like Cloudflare Workers KV to map the customer's ID to find which regional API endpoint to use (US/EU/APAC), is what you actually want.
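Something like the following sketch (the binding and regional hostnames are made up; the KV get and fetch calls are the real Workers APIs):

```typescript
// Sketch of a "directory" Worker: look up which region owns the customer in KV,
// then proxy to that region's API. Binding and hostnames are placeholders.
interface Env {
  CUSTOMER_REGIONS: KVNamespace; // e.g. "cust_123" -> "eu"
}

const ORIGINS: Record<string, string> = {
  us: "https://us.api.example.com",
  eu: "https://eu.api.example.com",
  apac: "https://apac.api.example.com",
};

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const customerId = request.headers.get("x-customer-id");
    if (!customerId) return new Response("missing customer id", { status: 400 });

    // KV reads are eventually consistent but cached at the edge, which is fine
    // for a mapping that almost never changes.
    const region = (await env.CUSTOMER_REGIONS.get(customerId)) ?? "us";
    const origin = ORIGINS[region] ?? ORIGINS.us;

    const url = new URL(request.url);
    return fetch(new Request(origin + url.pathname + url.search, request));
  },
};
```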


The curl command mentioned in the article that's supposed to show the region being used for a request `curl -ls https://deno.land` doesn't seem to work as described. Does anyone know if this is a typo?


curl -I worked for me on a Mac.

The -I flag returns only the headers.


Thanks, this worked for me, too. I wonder why the command used in the article is so different.


At the edge, so your app is at the edge but the DB is 80ms away. Too much focus on latency, trying to fix a non-existent problem.


The future of the money-web is on the edge. But the money web is for corporate persons, not human persons. Humans will continue to use a single server somewhere on earth. Let's not get cargo-cult'y here.


Counterexample: Kiwi Farms, which transitioned to a distributed hosting setup across a variety of providers after the recent kerfuffles. But that was more to avoid a single point of failure than to mitigate latency concerns.

And yes, efforts are ongoing to identify and punish the hosting providers involved, their principals, and their principals' families.


Proves the rule.

I think we can all agree that what kiwi farms has to do is not what 99.999% of people will have to do. Due to their current high world profile they have many of the same needs as a corporation.


I'm working on a framework in Ruby that's designed to be deployed on the edge. It's got a virtual DOM on the server and it streams updates to the browser, so having low latency is very important. It doesn't matter if the database is far away; it's fine if it takes time to load data as long as the UI is fast.

I'm running the example app on fly.io, and if each instance can have 30 concurrent sessions, that means I can have 90 concurrent sessions on their free plan. 30 more sessions for another $1.94... I haven't done any benchmarks yet, but it will be interesting to check the performance on different instance types.

Deploying apps like this will certainly reduce costs, because you can deploy your app to where your users are. If you have a lot of users during the daytime and few users at night, you only pay for the users you get during the daytime. You could have more servers in regions with active users during the daytime and fewer servers in regions where users are sleeping...


If we push everything to the edge we might end up with an ecosystem where it's not affordable to run your own VM, and we'll be stuck building only on the technologies offered by edge providers. Even if it's not literally the case, for the working dev it might end up being the practical reality if edge hosting ends up being the trendy default. The equivalent to "no one got fired for buying IBM" and now "no one got fired for choosing React"

Cloud functions are cool, when you need them, and when you want them. But if they were the only option I would go do something else for a living.

I am already imagining the day when some young developer arrives at the bold new idea that we could write better software if only we hosted our own runtimes! Much like frontend architecture broke exciting new ground when developers realized they could render their pages on the server before sending them. Wow!


This vision will become real when we can distribute indexes & query engines to the edge. A simplified model would be a lightweight CDN deployment of SQLite with distributed updates.

Querying large indexes is the most common use case that needs code to run. SQLite would standardize the storage & query implementation.
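
As a rough sketch of what querying such an edge-replicated SQLite index could look like from a worker, here's one using the shape of Cloudflare D1's API; the DB binding name and the articles table are assumptions for illustration:

```ts
// Sketch: read-only query against a local SQLite replica at the edge.
// D1Database comes from @cloudflare/workers-types; the "DB" binding
// and the "articles" schema are hypothetical.

interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const q = new URL(request.url).searchParams.get("q") ?? "";

    // Reads can be served from the nearby replica; writes would still
    // need to funnel through a primary and be synced out.
    const { results } = await env.DB
      .prepare("SELECT id, title FROM articles WHERE title LIKE ? LIMIT 20")
      .bind(`%${q}%`)
      .all();

    return Response.json(results);
  },
};
```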


The future of the web could also be back in the browser. Python can run in the browser, along with SQLite. I've had ideas of taking Django or FastAPI and either running it in the browser or putting it in an app that runs it in WebKit or Blink, syncing to servers only if the app even needs that.


That's neat, but it's not the web. You're merely using the web as a deployment mechanism, and the "browser" as a hypervisor. You might as well build an Electron app at that point if you want the desktop UX.


Out of curiosity - what do you believe the web is, exactly?

Because if you're ruling out his approach - you're also mostly ruling out the entire article here (since this approach does not in any way solve your datastore needs).


Right, I was a bit harsh with the "it's not the web" remark. Ultimately, the web is what we make of it, and it's not up to anyone to claim it's one thing or the other.

Initially, the "web" had a clear definition: a client-server architecture, a markup language to render data from the server, a protocol for the client to request data from the server, and a universal format to reference data. Then we added more technologies to improve styling, interactivity, P2P protocols, etc., and it's been evolving ever since. Instead of documents, we served apps. Thin clients were deemed unsuitable, until we realized performance might be an issue, and now we can choose whichever approach makes sense for the product.

So I'd say I'm a bit of a traditionalist in this sense, and think that a web app should still involve frequent communication with a server. If you're building a product using web technologies, but it's running entirely (or mostly) offline, that's great, but it's not part of the World Wide Web. You could use any number of technologies to build a desktop app at that point, and using web frameworks and a web browser is a choice made out of convenience, rather than suitability.

After all, if you download an installer over HTTP for an offline desktop app written in a non-web language, would that still count as a "web app"? We have to draw the line somewhere, and I suppose it's a matter of preference where that is done. Or maybe the line is forever blurred and web apps are all that we have now...

> Because if you're ruling out his approach - you're also mostly ruling out the entire article here (since this approach does not in any way solve your datastore needs).

I don't think that's the case. I may disagree with the assertion that "the future of the web is on the edge", but the article still suggests a traditional client-server model. It's just that the server is now distributed and closer to the client, which in general is a good idea.


The web is a deployment mechanism, and the browser does essentially manage sandboxed processes as a kind of virtual operating system.

The web is evolving. Python with SQLite running in the browser is part of the web.


Can you please explain more? How would a local SQLite database work with a typical app, say a CRUD blogging app as an example? Do individual users have their own database in the browser?


It probably wouldn't be suitable for a blog or classic CRUD app. The use cases I've heard about are more for education and online editor/IDE.

For example: interactive documentation that runs code in the browser with an ephemeral/temporary database, possibly imported from the user's desktop. Or teaching programming languages in the browser, with the compiler compiled to WASM, so the student doesn't have to set up a local development environment. Or a database explorer that's a purely static site with no server, neither remote nor directly on the user's computer - just virtually in the browser, which is a more secure sandbox. You could have a folder of HTML, CSS, and JS that's a full-stack application with client UI and server (theoretically).
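
For a concrete sense of the ephemeral in-browser database idea, a minimal sketch using sql.js (SQLite compiled to WASM); loading the .wasm from sql.js.org is just one common way to do it, and the notes table is made up:

```ts
// Ephemeral in-browser SQLite via sql.js. Everything lives in memory;
// closing the tab throws it away. Assumes an ESM context (top-level await).
import initSqlJs from "sql.js";

const SQL = await initSqlJs({
  // Tell sql.js where to fetch its .wasm file from.
  locateFile: (file) => `https://sql.js.org/dist/${file}`,
});

const db = new SQL.Database();
db.run("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)");
db.run("INSERT INTO notes (body) VALUES (?)", ["hello"]);
db.run("INSERT INTO notes (body) VALUES (?)", ["world"]);

const result = db.exec("SELECT id, body FROM notes");
console.log(result[0].values); // [[1, "hello"], [2, "world"]]
```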

Further reading:

• SQL Databases in the Browser, via WASM: SQLite and DuckDB - https://blog.ouseful.info/2022/02/11/sql-databases-in-the-br...

• Pyodide - https://pyodide.org/en/stable/ - Pyodide is a Python distribution for the browser and Node.js based on WebAssembly.

• Client-side WebAssembly WordPress with no server - https://make.wordpress.org/core/2022/09/23/client-side-webas...

• Stackblitz - Instant development environments - https://stackblitz.com


It's just that browsers don't have a generic VM; they can't run more than one programming language (OK, WASM...). Other VMs have a bunch of language dialects that run on them.


Or just drastically shrink how much data has to be pushed. Static content can be hosted via a CDN to get it close.

Globally distributed serverless workers running nanoservices backed by SQLite on the edge: that gets complicated fast.

My favorite sanity test is "distributed transactions". Do you need them? If so, how complicated will it be to synchronize all edges when changes come in from all edges?

The usual answer is "we will build that ourselves", and then over time the team discovers why it is a genuinely hard problem to solve.

You end up reimplementing parts of a database server, with an added level of complexity.

Most architectures should start with the question of distributed transactions: do we need them, and if so, how will we do them?

That is just my obsession.


Is our current DNS on the edge? Decentralized all the way down to one's hosts file, fully centralized all the way up to the 13 root servers. It has the issue of keeping state in sync, yet almost the entire internet rests on its shoulders and it works.


Companies that are banking on edge computing definitely seem to agree with this headline :)


I have been trying to find what algorithms or techniques are used to actually accomplish the routing whereby the edge server closest to the user is chosen. Is this accomplished by pinging multiple servers and choosing the one with the lowest TTFB?


That's one way, yes, but it relies on DNSLB, which necessitates shorter TTLs, which means more round trips to DNS.

The smooth way to do this (how Google and the other big players do it) is via anycast routing, where you are IP-routed to the nearest available node (i.e., all nodes globally share an external IP identity, so DNS is not driving load balancing or routing).

https://en.m.wikipedia.org/wiki/Anycast

Edit: I should say, DNSLB needs shorter TTLs and lots of monitoring, and you're already in reactive mode, but otherwise it is a great solution.


Thank you. DNSLB = DNS-based load balancer or..?


Yes, with NLB = network load balancer, at least where I have encountered the terms.


Are there any relational systems which store persistent shared state at the edge, with some kind of automatic geopartitioning?


I know you're not asking for this, but distributed consistency can get pretty hairy in anything but the simplest of cases. You've probably heard all this, but for people unfamiliar with the CAP theorem who are asking themselves the same questions and haven't spent much time researching it, it's worth getting an overview. The Wikipedia page is pretty simple, but has links to decent references:

https://en.wikipedia.org/wiki/CAP_theorem

And if you want to get your math on, check out TLA:

https://en.wikipedia.org/wiki/Temporal_logic_of_actions

(Again, the Wikipedia page is laughably incomplete, but has references worth clicking on.)


A relational system with the guarantees they usually provide seems really difficult to do at the edge. Like CockroachDB, but considerably harder, or with considerably worse performance.

I always thought this was more commonly done with things like KV stores and looser guarantees.


You could probably run a CockroachDB edge cluster


> Better developer experience

That's some wishful thinking right there.

The author does admit that DX is worse right now, but then points to frameworks that abstract away the edge overhead.

OK, but abstracted overhead is still worse than no overhead.

And then there's modeling your data without a centralized database. There's no world where that leads to better DX.


HN itself is hosted on a single US-based server. Does this actually bother people outside the US?


I'm in France and HN is almost always very responsive for me. I think one of the reasons HN doesn't need to be "on the edge" as much as other applications is that server roundtrips are relatively rare; it's more of a website than a web application. To reply to you I clicked "reply", and I'll click it again to post, but that's two interactions with the server in the time it takes me to write this comment. If you're in an application that has to do a roundtrip to the server for each interaction, being on the edge could be useful, I think.


No, and here's why:

HN's page makes so few requests I can count them on one hand.

Compare that to a typical website and you'll see a different story. For example:

The main website of my employer makes 86 requests and takes 7.13s to load. But with some clever resource deferring/ordering I've been able to get the DOMContentLoaded event to fire (and the page to appear loaded) in 2.81s.

Unfortunately I won't be able to get that number any lower without several months of investment in optimising that website, and that developer time is always prioritised elsewhere.
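
For what it's worth, one way to check numbers like these is the Navigation Timing Level 2 API, run in the page (e.g. from the DevTools console) after the load event; a minimal sketch:

```ts
// Both timestamps are milliseconds since navigation start.
const [nav] = performance.getEntriesByType(
  "navigation",
) as PerformanceNavigationTiming[];

console.log("DOMContentLoaded:", Math.round(nav.domContentLoadedEventEnd), "ms");
console.log("Full load:", Math.round(nav.loadEventEnd), "ms");
```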


Wow, that really fell apart in the last couple of paragraphs.

It just glides over important topics like security and completely avoids others, like data and application design. That hurts them and the edge space, IMO.

This is just an ad for Deno Deploy dressed up as an article.


The future of the web, and the only way it grows in a healthy way, is for us to have a VM that can run many languages everywhere, not something that runs a single language everywhere.


That's a nice goal for a generic cloud provider, but within a company, having many languages causes a lot of problems. You really want two, or ideally one, language across your stack. Maybe three if you have some really specific workload.

If I'm a TypeScript company or a Python company, I really don't care whether my cloud provider can run the JVM. And vice versa.


I guess some nice WASM runtimes would suffice.


I'm looking for a blog post that combines the best elements of the future of the web with the future of work.


Are there other frameworks designed with edge deployment in mind?


I'm making one in Ruby. It's kind of like React, but it streams all the DOM patches to the browser (about 3-4 kB of JS in total).

Clicking stuff feels instant due to being so close to the edge node.

I've been working on this full time for three months now, and I'm going to keep going because I think it's a good way to make web apps. Hot reloading is pretty awesome. Event handlers/callbacks are just RPC calls, so there's no need to make a REST API or anything like that.


https://remix.run/ and https://qwik.builder.io/ are two others.

I believe that React Server Components are also heading in that direction.


How does Fresh compare to SvelteKit?



