Fresh – Next-gen web framework (fresh.deno.dev)
791 points by exists on June 13, 2022 | 430 comments



Ooh, some competition for Next.js?

Vercel is doing a really good job with Next, but it's good to see some competition. Of course, that means there are now 65,535 + 1 more ways of serving a web page using JavaScript (sigh).

Rehydration is a really big deal. Sounds dorky but it dramatically speeds up load times and such by serving flat HTML and injecting JS afterward, like the old days, except you can write code like it's not the old days.


> Rehydration is a really big deal. Sounds dorky but it dramatically speeds up load times and such by serving flat HTML and injecting JS afterward, like the old days, except you can write code like it's not the old days.

Hydration is actually a compromise, and not a great one for UX. It’s in fact been said to be “pure overhead”, which I think is an overstatement but only slightly.

What you’re describing in the abstract is spot on though. Serializing server state to HTML and sprinkling in interactivity to pick up where it left off is exactly where we should be headed.

And yes it is like the old days, and yes all of HN will rapidly say so. The big difference now is the convergence of code written for both server and client, and compilers which help strip down and optimize what happens in the client.

Hydration in the current sense is re-running most of what the server already did, to recreate the runtime state it already had. That may be perceptibly faster in terms of metrics like first paint, but it’s a huge barrier for time to interactive. All the more so when most content is static and has to load twice—fast first as HTML, then slower and redundantly as JS.

The best way to solve this is to not serve or hydrate anything at all unless you need to. The “islands” approach is a very good, but coarse, way to solve this: isolate components which are actually interactive, treat the rest as static. A more granular approach—termed resumability by Qwik and as I understand it the forthcoming version of Marko—works by treating the server-generated HTML as the initial state. The code executed from there is much more isolated than a full component.
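To make the islands idea concrete, here's a minimal sketch in Fresh-style conventions, since that's the framework at hand (the islands/ directory split is Fresh's; the component names are made up):

  // routes/index.tsx -- server-rendered; ships no JS of its own
  import Counter from "../islands/Counter.tsx";

  export default function Home() {
    return (
      <main>
        <h1>Mostly static page</h1>
        {/* only this subtree gets hydrated in the browser */}
        <Counter start={3} />
      </main>
    );
  }

Everything outside the island renders once on the server and ships as plain HTML.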


> A more granular approach—termed resumability by Qwik and as I understand it the forthcoming version of Marko—works by treating the server-generated HTML as the initial state. The code executed from there is much more isolated than a full component.

Is that sort of what Phoenix LiveView does? Return a fully server-rendered page on initial load, then set up a "template" on the client side that can receive any values that change server-side over a websocket and patch them into the DOM?


This sounds conceptually similar, albeit maybe more similar to React Server Components? I’m really not familiar enough with Phoenix to get that specific though. Even in the JS ecosystem there’s a lot of nuance between seemingly similar approaches (hence the confusion over what even commonly referenced concepts like hydration actually do).


I think what Fresh is doing now is more similar to Server Components, shipping full components to the client. With LiveView, once it's set up on the client, the updates it sends over the wire are much smaller. https://fly.io/blog/how-we-got-to-liveview/ shows some examples of the idea.


Yeah this looks much more similar to Qwik. It even looks like they have a similar view serialization approach.


> The big difference now is the convergence of code written for both server and client, and compilers which help strip down and optimize what happens in the client.

I understand the spirit of your comment, but this was/is also true of Google Web Toolkit (GWT).


I’m going to have to take that on faith; their site doesn’t appear to ship the JS necessary to open the nav menu. But clicking through a few links confirmed what I recall: the major difference (apart from language) is the component/templating approach. Not that one is inherently better than the other (though I do personally prefer JSX), but bringing this concept to a dev environment which thus far mostly lacks it is a good thing for users. And with more variety to choose from, there are better odds users will get that experience.


> Hydration in the current sense is re-running most of what the server already did, to recreate the runtime state it already had.

It’s interesting that the problems with hydration are in some sense caused by the insistence on one-way data binding (deriving the view from the state). I imagine that with two-way data binding you’d just need to attach event handlers, and then the state would be derived from the view on the next interaction. Maybe.


It's really easy to make the states diverge this way. That's why one-way binding is popular.

OTOH if there were a way to produce a two-way implementation from a one-way description, that could be great for performance. But this is already more CS than engineering.


How do the states diverge?

I think you can have a two-way binding with a controlled update procedure, akin to one-way data-binding. I’ve read some people claim to do it by producing a kind of one-way directionality under-the-hood (?) from two-way bindings. The goal being that views can update state (and other views).

I think it was here I read it:

“‘2-way’ shouldn't be a problem if a component reports user events (perhaps transformed/mapped) to the domain model without changing its own core state (disregarding throttling etc.) and only changes its core state in response to events from the domain model.” — peerreynders @ https://dev.to/peerreynders/comment/1objn


I’m not sure that gets you much. You still have to know which parts of the code are implicated in executing those event handlers, which very likely close over other state and call into other logic. To an extent you can get that with static analysis (as Qwik does, with one-way binding), but highly dynamic code is tricky no matter what.


For all the improvements pioneered by Marko, why is eBay still so slow (compared to equivalent ecommerce sites in Next.js and similar)?


My guess: the bottleneck is in the data layer.

Honestly it's unfair to compare a small e-commerce site made with Next to one of the biggest websites on the internet.

If eBay were made with Next it would probably be much slower. React is the slowest of the modern frameworks at SSR. Amazon considered using it, but it was too slow for them, so they stuck with Java + sprinkled JS.


What's the story behind Amazon's SSR?



I’m not sure why you’re asking me, but I just did a quick “how slow does eBay feel” on my really spotty mobile connection and it didn’t feel slow at all. Faster than HN, which is usually my fast baseline that responds even when I’m not coaxing my network settings.


> Serializing server state to HTML and sprinkling in interactivity to pick up where it left off is exactly where we should be headed.

Ironically we were already doing that 15 years ago with PHP/ASP/Java/Rails + jQuery.


This is neither ironic nor correct. But I will say I’ve described my mental model of Qwik as effectively that UX, but the compiler writes the jQuery for you.


if the code is the same on the server, one could serialise the state and transfer it with the html, right?


The way I read it, you might have lots of JavaScript to compute and render your e.g. counter control, but you can keep that on the server side and only return the (much smaller) piece of HTML code and ONLY the JS needed to make the control interactive.

I don't know whether this means it no longer works as an SPA; if you want to keep the best of both, you will end up creating more complexity and a new framework to learn, and probably get marginal improvements at best.


That's how it works.


that’s how it works, but you also need to attach event handlers and set up the state for the framework on the client-side (for subsequent interactions).


I meant the state for the framework, as I understood it the same framework was used on the serverside.


You still have to transfer the JS.


Right, but not all of it. The way Qwik serializes it is (this is from memory and probably overly simplistic, but conceptually approximate):

- Primitives already present from the server render are serialized directly into the HTML, and the compiled code reads those values from the DOM

- Everything else is split into fine-grained chunks, each assigned a special identifier (Qwik uses URLs) which is serialized into the HTML, used to fetch (eagerly or on demand) and activate interactivity as needed

My understanding is that currently Qwik serializes more than necessary—i.e. can be optimized further to eliminate non-interactive chunks from consideration—but that they’re focused on reducing JS cost first.
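As a rough sketch of what that serialized output looks like (the on:click QRL attribute is how I understand Qwik annotates the HTML; the chunk and symbol names here are made up):

  <button on:click="./chunk-abc123.js#Counter_increment">
    Count: 5
  </button>

The browser-side runtime only fetches chunk-abc123.js when the button is actually clicked.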


ITT, it was finally proven that JS is not code.


I don't think anyone believes JS is a great language. It's just what we're stuck with.

How I wish something, anything better could've taken over. .NET was beautiful to work in. Maybe the native apps for iOS or Android are cleaner (I dunno, never tried).

But browsers are what the world uses, and they only speak HTML and JavaScript (sadly). So we're stuck unless something else manages to create a sea change in how the world uses the internet.


> except you can write code like it's not the old days.

I imagine you mean that writing (frontend) code nowadays is better than it was in the old days. Well, at least from my perspective that's not the case. "Modern" frontend code requires:

- a package manager (npm)

- node (or deno or whatever)

- transpilers (or is it plugins?)

- TS

- 10K+ dependencies

And to be honest, what is all that good for? To be able to "hydrate" some server-side rendered template? Not a good enough reason.


If you think it's bad then why do you use it? Browsers still support plain HTML. You can shove just a script tag in the page.

Eventually, you'll have enough devs working on this that you'll start to run into issues with coordinating your work. Once your site becomes large enough, the code gets more complicated and needs organising so you can work on it without slowing you down. You'll start to reinvent the above tools to solve these problems.

Frontend/JS tooling isn't perfect, but don't pretend other languages/systems don't (or can't) have a complex pipeline of build tooling.


> If you think it's bad then why do you use it?

I didn't see GP make the claim of using it?


Taking it a little personal, eh?


I recently wrote a small web app.

I started with https://alpinejs.dev/ linked via CDN, and OpenJSCAD, also linked via CDN - I wrote basic html, marked it up with alpine `x-model` and `x-data` tags, and sprinkled a little vanilla js on top. Everything worked well and I got 80% of the way through the project.
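For anyone who hasn't used Alpine, the markup looked roughly like this (x-data, x-model, and x-text are real Alpine directives; the model itself is made up):

  <div x-data="{ radius: 10 }">
    <input type="number" x-model.number="radius">
    <p x-text="'Diameter: ' + radius * 2"></p>
  </div>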

In the final 20% I ended up adding a bundler (Parcel) so I could bring in an SCSS framework and override its variables. While it added a fair bit of complexity to the project (dev dependencies, Parcel config files), I gained lighter files via Parcel's tree-shaking and minification, auto-recompilation/reload during development, and reusable HTML partials via `posthtml-include`. I'm also set up to swap to TypeScript quickly, if the project gets more complex and I start to get annoyed by the lack of compile-time type checking.

So, it's 2022, and you can write a web app without any of the things you mentioned (transpilers, package managers, TypeScript). Yes, adding even one of them nets you a 100+ dependency node_modules directory - but the reason we keep adding them to our projects is that the things they give us are NICE, and the cost (complexity) is mostly worth it.


You made a web site that displays a document, not a web app. Web apps, like native apps, have a lot more UI state that needs certain patterns and tech to manage it properly. Many devs can discern the difference and use the correct tools. Some do reach for the wrong tool, but it's not the tool's fault.


I actually am aware of the difference between a web app and a document, and I used the correct word. Thanks


We've been able to replace a couple of client applications that previously used WPF with a React frontend (and a small native application that uses WebSockets to create a bridge between the browser and a particular piece of local hardware).

Updates are now easy-breezy (update the server and you don't have to touch clients at all).

There is a place for SPAs. I just don't think your typical grocery store website should use something like that.


What SHOULD grocery store websites use, then? I feel like they're some of the most complex websites around (if they do any ecommerce/online pickups at all)... between product reviews, indexing, filtering, sorting, checkouts, SMS/push notifications, geolocation, real-time inventory, etc. It's not the sort of site that says "easily built in plain HTML" to me.


> "Modern" frontend code requires:

Not necessarily, at least in the case of React and Vue.

Rather than using NPM, you can self-host react/vue (or preact if you need something even smaller), or serve it from Unpkg.

You don't need node/transpiler/bundlers if you're only targeting modern browsers. You can use modern syntax, async/await, ES6 import and a bunch of other features with static .js files.

JSX is a tough one, but there are solutions to that; packages like domz and HTM are made to allow using React/Preact without NPM/transpilers.

TypeScript is also a tough one. But it's also optional, although a good idea to have. I hope the optional type annotations proposal lands in ECMAScript soon so typechecking can happen without anything resembling a compile phase.
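To sketch what that no-build setup looks like in practice (Preact + HTM as ES modules straight from a CDN; the URLs here are just one option):

  <script type="module">
    import { h, render } from "https://esm.sh/preact";
    import htm from "https://esm.sh/htm";

    const html = htm.bind(h);

    function App({ name }) {
      // tagged template literals instead of JSX, so no transpiler needed
      return html`<h1>Hello, ${name}!</h1>`;
    }

    render(html`<${App} name="world" />`, document.body);
  </script>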

-

> To be able to "hydrate" some server-side rendered template? Not a good enough reason.

The "hydrating" part is entirely optional in "modern frontend". It also needs some stuff on the server that is IMO significantly more complex than what I described above. But apart from some very specific cases, one doesn't necessarily need it.


Believe me, I hate the toolchain/buildchain as much as anyone. Especially TypeScript (see below). Luckily, Next.js takes care of all that too! Out of the box it preconfigures all of it with sane defaults. `npm start` and you have a working server with transpilers all seamlessly configured, and when you change a line of code it just hot refreshes in the browser. On a `git push` to Vercel or a similarly capable platform, the server transparently does all that and gives you a sandboxed preview environment for that specific build. It is magical.

But really, what I meant about "writing code not like the old days" isn't so much about the shitty toolchain (which Next helps with, but I agree it's shitty). Rather, it's the ability to write code like:

  <Header loggedIn={isLoggedIn}/>
  <Sidebar options={isLoggedIn ? navOptions.loggedIn : navOptions.loggedOut}/>
  <Dashboard>
    {widgets ? widgets.map(widget => <WidgetContainer>{widget}</WidgetContainer>) : <Spinner/>}
  </Dashboard>

Basically, the ability to compose pages & apps out of components (which different developers can work on), and the ability to manage state in a central controller via Redux or useContext to avoid race conditions and such. That sort of stuff is REALLY hard to do with plain HTML and JS, especially where there are multiple developers involved. React isn't a magical cure-all; it just makes it easy to componentize large apps into smaller areas of concern.

The shitty buildchain isn't a feature, it's an unfortunate side effect of browsers being limited to JS. Essentially it's a "compile" step that became necessary as the vanilla-JS developer experience wasn't able to keep pace with the complexity of desired business apps (and as devs of various skill levels flooded the market). So the developer tools kept growing, but they still had to be compiled/built into JS for the browsers. Next.js makes it relatively painless, compared to how it was just 3-4 years ago. But I agree, I hate that this is a step at all.

Ugh, as for TypeScript... I get that it's a necessary evil, but I spend more time fighting it than actual bugs... coming from PHP, it was already common practice to manually typecheck and coerce everything when needed, anyway. TypeScript often felt redundant and overly sensitive, especially when it came to async nullables, causing false alarms that React could just've silently handled with {isLoaded ? <Component/> : <Spinner/>}

But TypeScript is optional anyway. It's a superset/addon on top of JS, so you never have to use it if you don't want to. If it's scaffolded for you in someone else's project, usually it's just a matter of either using a .js extension instead of .ts, or adding a // @ts-nocheck or similar to that file. Other devs may hate you for that though, when your object or props ends up breaking theirs... so it's definitely a conversation worth having first :)


The htmx library is a good way to do that: https://htmx.org


HTMX is revolutionary in its bang-for-buck simplicity.


Unpoly.js is also good.


https://Remix.run is the real competition for (/successor to) Next.js. It can target a Deno runtime so I guess it's competition for "Fresh", too.


That’s one of the best designed sales pitches I’ve seen for a web framework; you could use this approach to sell any idea.

Just scrolling list of small blocks of text with small/interactive screencasts/animations that communicate the idea via succinct bullet points.

Much more fluid than the usual approach of paragraphs or breaking the page up into large blocks/sections.

It’s closer to older HTML where you just have text and a scrollbar.


Opinion only: I found their front page difficult to skim. It took me several scrolls-to-end-and-back to even realize they were trying to explain concepts on that page, not just showing screenshots. Normally I just page-down quickly to get to the features list or comparison table, but doing so here bypasses most of the explanatory animations. I kept wondering, "where's the section that tells you what this does and how it compares to other frameworks". Then I kept looking for a Features page on their top or bottom nav, but didn't find one.

Maybe it's just years of bad habits trained by seeing too many bad marketing sites, where the typical signal to noise ratio is really bad. I guess even when I see a good sales pitch, I don't recognize it as such anymore and try to skip through it... sigh. Sorry, Remix.


I viewed it on mobile which I found much nicer than desktop FWIW


I think Astro is more likely the successor to Next.js, and probably not Remix. I think Next will steal the best parts of Remix, but Astro was built on simpler foundations and has integrations for Deno via Netlify edge functions, among others. And unlike Remix it is not tied to React; you can choose your framework.


Remix is working on a Vue and Preact adapter, so won't be tied to React for long. I see the community is looking at a SolidJS adapter too. I'd say it'll be just fine


I didn't know Remix supported Deno, that's interesting


Have you seen Remix yet? It’s pretty compelling in terms of competition for Next.js.

It makes different trade offs and isn’t strictly better by every metric, but overall I’m very happy with it for the two use cases I’ve tried it with. It’s a very low overhead framework once the simple conventions click.

I’d still like to check this out, then Redwood and a couple of others too. I’m not huge on these frameworks in general, but they tend to have some excellent ideas and smart people behind them, so there's plenty to learn by experimenting with them.


I've heard really good things about Remix, especially the nested routes. But I think Next is trying to copy that in Layouts? https://nextjs.org/blog/layouts-rfc

I use Next not just for the routing and composition and hydration, but for all the other quality-of-life improvements (image resizing, buildchain configs, hot reload), especially when it's paired with Vercel (per-push sandbox builds, stale-while-revalidate, seamless CDN, access to serverless, etc.)

I'm really excited to see how Remix and other Next competitors evolve, but for now, I think it's still the most "full" stack of the React frameworks? Is that correct?


That’s correct about Layouts, and I agree, if you want something with all the quality of life tooling ready to go, Next seems to be the way to go.

It isn’t too hard to get the same/similar tooling outside of the Vercel ecosystem, but it does take know-how and a bit of extra time. It isn’t obviously worth it unless Vercel isn’t meeting your needs, in my opinion/experience.

I mostly play around with other frameworks in order to learn, but for any client where a frontend framework made sense I’d almost certainly choose Next.

Edit: people also get bent out of shape about perfect hydration and shaving milliseconds here and there, but like you mention, Next offers such a comprehensive solution in a situation where routing and hydration are only a part of the big picture. Next gets you really far without any effort up front, which is crazy. I really like it!


It's funny to me to see how many people say Next is copying something from Remix when the entire Remix thing is a copy of Next.

Heck, I've heard in a podcast that they're even working on a hosting platform copying Vercel.


For sure, Vercel was working on nested routes before Remix was a thing, as far as I know. A lot of people suggested it was lifted from Remix. It’s not a new idea at all and people have been asking for it for years.

Arguably there is nothing all that novel in either framework. We are constantly iterating and rebuilding wheels, doing it a little better each time.

I’m glad to see the Remix take on things gaining steam though. It’s quite a bit less cognitive overhead than Next without any magical trade-offs. Nothing groundbreaking, but definitely better at the things I care about.

Like most things, a blend of solutions would be ideal


Eventually React is just going to become Angular3, lol


All of tech is just people borrowing from each other. It's a good thing! I wish IP patents weren't a thing.


Image resizing can’t be done statically. That’s a big issue if you want to serve things from a CDN properly.


> that means there are now 65,535 + 1 more ways of serving a web page using JavaScript (sigh).

You only need one (or none). I personally recommend Next.js for just about anything.


Next is lovely to work with but it has a flaw. It loses client state between pages if you use getServerSideProps. For any app that needs to load some up to date data on every page if the user is hitting it for the first time, but doesn't need to load it if the client already has it, Next doesn't have a solution. You end up using a persist gateway pattern which is a massive amount of work that you shouldn't really need the client to do.

The problem is that that's pretty much every web app. Every time I've used Next I've ended up abandoning SSR and building a plain client-side rendered app.


Also, next/image only works with a CDN, i.e. Vercel; it doesn't work with static site generation. It's been an open issue for years, and I honestly now feel like Vercel doesn't fix the problem on purpose, to push more people to using their service rather than simply exporting to a static host, of which there are plenty.


Been saying it for a while... Next.js is an ad for their services.

Take a look at their middleware. It's designed to be used solely with their serverless cloud BS. It uses a janky JS sandbox which means you can't use node APIs. It's just horrible for no good reason at all. I've never seen middleware so intentionally crippled anywhere before.

And the whole reason you need middleware is to maneuver around the flaws that the prior person mentioned with getServerSideProps.

Next.js probably seems great if you're writing greenfield code in a dead simple web app. Routing/URL handling on Next.js is just broken. This is all glaringly obvious if you've been around the SPA/SSR world for more than a day.


AFAIK, Vercel middlewares use Cloudflare Workers, which are V8 isolates. Describing that as a “janky JS sandbox” is pretty funny.


What is the alternative for SSR?


You can set next/image to work with Cloudinary or Imgix or a custom provider. It gets the job done that way, but yeah, the experience isn't as smooth as with Vercel.

Same with incremental static regeneration and some other features... Next is obviously built not just by, but also for, Vercel. Vendor lock-in is already a small issue and may become a bigger one if they keep going down that route.
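For reference, the next/image provider switch is just config; a sketch along the lines of the Next 12-era images.loader option (the account path is made up):

  // next.config.js
  module.exports = {
    images: {
      loader: "cloudinary",
      path: "https://res.cloudinary.com/your-account/image/upload/",
    },
  };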


They're working on that with Server Components: https://nextjs.org/blog/layouts-rfc?utm_source=next-site&utm...


Can you elaborate a little more, or point to some GH issue where this is discussed, please?


Choosing the tool that fits best with the requirements saves a bunch of time during a project lifecycle.


Depends on your SPA framework:

https://hackernews-csr.ryansolid.workers.dev/

No need for rehydration, I'd say. Combine this with code splitting for large apps.


Maybe beside the point, but this falls back behind modern HTML[0] in that it reintroduces flashes of blank pages.

It's not a nice experience. Remix (and, most likely, Fresh as well) prevents this.

[0]: https://developer.chrome.com/blog/paint-holding/


Check out Qwik (by Misko Hevery, the author of Angular), which does (much) smarter hydration than Next (or this Fresh thing).


Yeah, I too have been meaning to try it out. I guess the "barrier" for me is the ecosystem surrounding it.


"No build step" is a weird pitch when what actually happens is that, in production, they'll fetch a WASM module on the fly to do bundling at request time.

That build step was there to avoid doing this work over and over!

https://github.com/lucacasonato/fresh/blob/458fe2ca3c12508a6...

https://github.com/lucacasonato/fresh/blob/458fe2ca3c12508a6...


I thought this too. Isn’t it a better user experience if the bundling happens ahead of time rather than at request time?


I'm not familiar with their implementation, but it could be that the bundling is only a temporary solution for browsers that don't support import maps. Or a light layer on top of it.

Import maps are pretty cool; you don't need to bundle in modern browsers, and it plays to HTTP/2's parallel request strengths.
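A minimal import map sketch, so bare specifiers resolve in the browser without a bundler (the CDN URL is just one option):

  <script type="importmap">
    {
      "imports": {
        "preact": "https://esm.sh/preact"
      }
    }
  </script>
  <script type="module">
    import { render } from "preact"; // resolved via the map above
  </script>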


Maybe it has a cache and this only happens the first time?

It would be a bit ridiculous to re-compile JSX into JS on every request.


Yeah, it doesn't do this. The transpiled output is cached indefinitely for a given deployment.


Hey Luca, any chance you might add other front end frameworks in addition to Preact?


I didn't say every request.

It'll still happen on every cold serverless request, every first request to every scale-up VM, ...


The next-gen SPA frameworks/libs like SolidJS or Svelte are already very fast and more importantly very small in bundle size. At least much faster and smaller than React or Angular. Therefore the advantages of SSR frameworks like this new one are much smaller when compared to e.g. SolidJS.

The performance claims made for this new framework need to be proven by benchmarks. Check out this SolidJS Hacker News clone (client-side rendered): https://hackernews-csr.ryansolid.workers.dev (by the way, there are other implementations like Remix or Svelte as well)

IMO very hard to beat. For larger apps we can use component-based code splitting.

The new SSR frameworks are very complex. So there's a downside to it.

Of course there's a big market for the cloud providers as you need to run and pay for a server for your SSR instead of simply serving static JS! That's why there's such a hype lately. I'm not convinced that the performance gains are worth the complexity, costs or vendor lock in.

Again, check out the SolidJS example app I've linked above and measure for yourself if you really need the additional cost and complexity of a server pre-rendering, hydrating, etc..


This feels like a strange argument to make when server-rendered HTML has been the norm for decades, and it's only been recently that SPAs have become popular.


It may feel strange, but as Javascript has become very, very fast in the last decade and the new SPA frameworks are highly optimised, I'm not surprised by the performance of SolidJS. Obviously for dynamic content.

That's the reason I was asking for benchmarks, by the way. The new kind of SSR frameworks are much more complex than the older template-based server-rendered HTML.


The problem is accessing backend data sources. Fetching that data in the first request and responding with server-side rendered HTML > serving JS that then initiates network calls to get that data (while showing a spinner in the UI).


Meanwhile, the Svelte people are working on SvelteKit, their integrated SSR+SPA.


The Solid team is working on Start, which is their full-stack framework.


Ryan Dahl talks a bit about it in this talk at Remix Conf 2022: https://www.youtube.com/watch?v=4_nxvVTNY9s&t=10781s

He describes it as a post-Unix web framework (i.e. built on serverless primitives like Cloudflare Workers/Deno Deploy) with the goal of <10s deployment (which he says requires JIT compilation on first request).


He really is the JS server-side sect leader. Plain wrong about so many things you lose count while he talks. Glad that the JS community, not that I am a fan, left this dude behind.


What makes you believe the community left him behind? And he's only wrong from your perspective, especially as you say you aren't working in this domain.


Well, NodeJS is the default for server-side JS. Node > Deno.

React is the main driver for JS, not some server-side BS.

The classic scripting argument: JS is a poor choice for scripting and hence not used for it that much anymore.


> Well, NodeJS is the default for server-side JS. Node > Deno.

Are you aware that Deno is very new? I wouldn't say that Ryan Dahl got left behind because everyone hasn't switched to Deno yet. There is a large amount of interest in Deno, exemplified by how frequently Deno projects make it to the front page of HN.

> React is the main driver for JS, not some server-side BS.

Haha. Just because you hate JS doesn't change reality.

> The classic scripting argument: JS is a poor choice for scripting and hence not used for it that much anymore.

This statement doesn't make sense. It never was very popular as a Bash replacement, if that's what you mean. And otherwise, it is the only option for browser interactivity. So what you're saying doesn't make sense.


I am aware of what Deno is. But it is simply not yet the major leap forward that will get rid of NodeJS and its large ecosystem.

React was the first framework that required the coder to really understand and utilize the modern features of JS.

Scripting something simple or spinning up a simple endpoint have always been arguments for NodeJS. Ryan always talks about this. Terrible grounds for decision making.


So my reading of that is that Fresh as it stands now is more of a demo and challenge to the Remix community to step up.


Not really; Fresh already powers several websites (e.g. https://deno.land/), so it definitely has production use.


Ryan Dahl described it that way. He said it’s not something they’re really promoting or planning to utilize long term.


Is there a chance he pitched it this way to manage expectations / because he was pitching this at Remix Conf and didn’t want to upset the hosts?


I don't fully understand the difference between this (and something like Remix, which seems similar) and other frameworks like Next.js (React) and Nuxt.js (Vue). Can someone explain a bit about the differences, and pros/cons to each?


The end result might be the same, but all of these frameworks/libraries/tools have some tricks up their sleeves that make it easier for developers to implement certain functionality.

The major difference with Fresh is that it runs everything just-in-time when it is needed, hence doesn't require building or shipping anything to the client by default (but you can still ship some JS for client-side interactivity).

The key here is no building (packing, bundling, transpiling). This doesn't just save time but actually removes complexity, as what you see is what you get. The only thing that ships to users visiting your site is around 0-3 kB (plus any client-side JS you decide to ship), not 10 MB of prebundled, transpiled, polyfilled JavaScript.

Since it is server-side rendering, the performance comes down to design decisions.


> The key here is no building (packing, bundling, transpiling).

How is that possible? In the documentation (https://fresh.deno.dev/docs/getting-started/create-a-route) I see .tsx files... so I imagine that at least one needs to compile TS to JS and then JSX to JS. Perhaps I got that wrong, though, and browsers nowadays support TSX out of the box.


It's using the Deno runtime, not Node. Deno has a TS compiler built-in, thus not requiring you to set anything up in that regard.


Thanks. I didn't know that.


Nit-pick: It does have a build step to generate the manifest file, at least currently. This is needed because Deno Deploy still lacks dynamic imports. So, their claims of no build step are as of now still aspirational.


It sounds like it only needs to be regenerated when you add/remove components though, not when you simply edit them.


How does the counter demo work? I saw no network requests so it must be supporting a client side computational model. There must be TS to JS compilation of some sort.


> The only thing that ships to users visiting your site is around 0-3 kB (plus any client-side JS you decide to ship)

Does this 0-3 kB include HTML or some JavaScript runtime?


How is this better than Astro?


It’s in the same space as Astro. I can say with certainty without even looking at my Twitter feed that the Astro team will welcome more work in the space. Even if it’s not “better”, everyone leading FE web projects who isn’t a dilettante is learning from and inspired by each other’s work.


I see, so the primary difference is what you ship to the browser?


The big standout feature that sets it apart for UX is partial hydration. DX like Next (or similar) with UX like plain HTML plus some isolated interactivity is becoming a focal point for a lot of the current crop of FE tools. Astro has been a big player in this area, Marko has been doing it for years, and Qwik is another really compelling option. But the more the merrier, where devs can dev how they want and users aren’t getting gigantic globs of JS they don’t need or want.


(I am not affiliated with any of these technologies, but am a Next/Vercel customer. I am also not super familiar with anything except Next, but this is my attempt at an explanation.)

I think they all try to solve the same problem: how to get a modern interactive app to run on (and be performant) what is essentially a hacked-together ecosystem, HTML + Javascript, with decades of backward compatibility baggage. The essential problem is that browsers work on the ancient and really poorly designed DOM, but developing against the raw DOM sucks. It's fine if you have a simple webpage with headers and some text, but once you get into stateful UIs, it gets hard to maintain pretty quickly. So there's a mismatch between user experience (in HTML) and developer experience (terrible in HTML, better in other frameworks). So developers of complex apps end up abstracting it away with something like a JAMstack.

So you have things like React, which is essentially a UI library (vs a more fully-featured framework like Angular or even the older Rails stuff, or something like Laravel/Symfony for PHP or whatever the .NET equivalent is). React lets you compose apps not out of DOM primitives but components you define yourself, which in turn are reusable and composable.

But there's a lot that React doesn't handle out of the box: page routing, state persistence, static builds, image optimization, hot reloads, CDN caching and invalidation, etc. A lot of teams end up reinventing all those wheels, or else cobbling together 80 different open-source solutions and 10 vendors. It gets hard to maintain very quickly.

Enter Next.js, one of the earlier successful React-based frameworks. It turns a React app from a quirky UI library into something almost beautiful, because you can now make an entire app, not just a UI, using React and some easy to learn JS config objects.

For example, to make a blog with React, you'd first need a CMS (let's assume you have that part figured out) and an API (also figured out). You can write it as a single-page app, using fetch() or whatever to query the API every time. But then the client has to download that and then render the page. If the CMS is on a different host than your webpages are, it can take quite a while. That whole time your user is waiting, seeing a blank page. And if your CMS goes down, your website goes down, even if the content's been the same for days.

Anyway, you could try to statically bake all that into HTML, but then every time you add a new blog entry or update an existing one, you have to rebuild your project. And then if you want it to be fast, you have to invalidate all your CDN caches.

Next.js essentially takes care of all of that for you, in one easy to use and well documented package. Combined with Vercel (the company behind Next.js, who provides hosting) it also abstracts away all the complexities of the buildchain, CDNs, invalidations, etc.

As a duo, their most powerful feature is rehydration. You can code your app as though it were a single-page app, using React to compose components and pages, combined with file-system routing, to create a whole site. But then you push your changes and that's where the magic starts: Your Next.js server (like Vercel) picks it up, builds it with data fetched server-to-server from the CMS, bakes everything into flat HTML + CSS, and invalidates it across the CDN within seconds. At this point, any user who visits your site will be able to download the HTML + CSS, even with Javascript disabled -- the client does not ever speak to your backend directly. To them, your page is just a static HTML page, served straight from the CDN edge. This means the client doesn't need to load React to see your page. They can have JS disabled and it still shows up normally, it just won't be interactive.

Seconds after the HTML has loaded, some "bootloader" JS then downloads all the other JS that enables interactivity and dynamic data fetches (comments, etc.)... all invisibly to the user. That is the "rehydration": the server buildchain "dehydrated" the React app you wrote (baked it into HTML + CSS), and the client then rehydrates it to add interactivity back. Yes, you could do all that manually, but Next.js makes it magically trivial... you never have to think about it, it just works. And it's lightning fast.
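In Next.js terms, that build-time "dehydration" is driven by getStaticProps/getStaticPaths; a minimal sketch (the CMS endpoint and field names are made up):

  // pages/posts/[slug].js
  export async function getStaticPaths() {
    const posts = await fetch("https://cms.example.com/posts").then((r) => r.json());
    return {
      paths: posts.map((p) => ({ params: { slug: p.slug } })),
      fallback: "blocking", // render unknown slugs on first request
    };
  }

  export async function getStaticProps({ params }) {
    const post = await fetch(`https://cms.example.com/posts/${params.slug}`)
      .then((r) => r.json());
    return { props: { post }, revalidate: 60 }; // re-bake at most once a minute
  }

  export default function Post({ post }) {
    return <article><h1>{post.title}</h1></article>;
  }

At request time the visitor just gets the pre-baked HTML; the fetches above ran at build (or revalidation) time.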

So take that rehydration stuff and add on a bunch of other quality-of-life developer experience improvements. For example, images are traditionally another headache, needing something like Imgix or Cloudinary to be able to dynamically resize them on the server (so the el cheapo 320p Android doesn't get served the same 4k retina image). Same with script updates... if you change your SPA, you have to figure out how to invalidate current caches, how to make sure visitors who've cached the old version still work with your backend API, etc. Or hot refreshes, or page-by-page invalidations, or whatever. It does all of this in one framework, so you can get rid of ReactRouter, Redux (useContext can handle many use cases), image processors, Preact, Express, etc. Really the only thing you need to provide is a CMS of some sort, typically a headless one.

That's Next.js (the open-source framework). Behind them is Vercel (the hosting/PaaS company which maintains Next.js). Next is the one I'm most familiar with, but I believe the others are similar (but someone correct me if I'm wrong):

Nuxt.js is to Vue what Next is to React. I believe it's a little less featureful than Next, and it's maintained by a different company.

Remix takes some of the Next.js principles but implements them differently; instead of having a server rebuild and bake your project at push, you can do a similar thing on edge compute/serverless (like Cloudflare Workers directly). It has a really neat feature: nested routes (really more like UI layouts), which are composable UI units that are hydrated serverside, similar to Next.js, and then sent to the client as HTML wholesale. So like your <Dashboard> can include <Widget> and <Chart> and <Comments>, but each one can be individually hydrated, composed, and reused -- all invisibly to the client. My understanding (again, not familiar with Remix) is that this was such an amazing feature that Next.js straight up copied it last month with their Layouts RFC (https://nextjs.org/blog/layouts-rfc?utm_source=next-site&utm...).

Fresh looks like Deno's attempt to produce something similar, but it's still early and not quite as powerful.

If you're a web dev and you've never tried this stuff, I strongly recommend taking a look. I've been coding webpages since Netscape, before CSS was invented. Next.js was the single biggest improvement to my professional life in decades, especially coming from the hell that was Drupal + jQuery. I coupled Next/Vercel with a vendor-supported headless CMS, and the overall developer experience made me fall in love with my job for the first time ever. So much so, I actually switched careers from being full-stack to solely frontend/React focused, because Next.js just made it so enjoyable. No more infra and DevOps hell, it all just works, it's all in JS/Typescript, and I can just focus on coding for UX. What used to take weeks to do in the old PHP + jQuery framework would only take minutes to prototype, a day or two to finish in Next. I hope the other frameworks bring you as much joy!


I've actually worked with things like Create React App, Vue CLI, etc. a lot, but never any of the "meta frameworks". Based on what you are saying, it seems like the main difference is where things are evaluated? So, if I do my filtering on the client (with React) I need to make a request, get all the data, filter, render. For something like Remix or Fresh, you can do it on the server first [0]. Either way, the user has to wait; it's just a different kind of waiting:

1. Pure front-end solution (React): they wait on the front-end to handle it all.

2. Remix or Fresh: they are still waiting, it just happens on the server.

It seems like there isn't a significant difference either way - ultimately, you are still waiting, as a user. If the payload is huge, maybe the server model is faster - unless the server is getting smashed with requests, then it'll actually be slower?

[0] https://remix.run/docs/en/v1/pages/philosophy#serverclient-m...


For responding to user interactions (like filtering or searching) there isn't really a significant difference; like you said, it's just moving the "wait" elsewhere.

But many sites are more heavily read than interacted with: blogs, news, documentation, even to some extent HackerNews and comments. These are all "write rarely, read often" sites. In those cases, prerendering can be way faster both for the end-user (it's just HTML being downloaded from one single source) and also for the origin server (you build once, CDN caches it everywhere and takes over from there). Typically, the tradeoff is that it's also a PITA to manage these buildchains, especially once you get into obscure webpack or babel configs. Next.js handles it really elegantly.

Next.js has other benefits too, especially when coupled with Vercel. It is more than CRA + static builds, and even if you never end up using the rehydration system, the routing/image optimization/per-commit preview sandboxes may still be helpful, though not life-changing. For me the killer feature was being able to detach the data layer from the (write-rarely, read-often) frontend, such that the frontend could always just assume it would have access to the latest data from the API (because Next.js takes care of that).

To give you a before-and-after comparison... I worked on this page previously: https://www.fieldmuseum.org/exhibitions

That version is running on Drupal. Some of it was in a Drupal template, some of it was in-house PHP. To fetch data, we had to use a mix of Drupal built-ins and some raw SQL, mixed into some ugly templating language. Then through custom modules we had to add jQuery and React, sprinkled on top. Drupal had to "build" the page into HTML whenever we saved, and then a separate buildchain would add back the JavaScript on top of that and try to bundle it all. The developer experience was ugly and required at least four languages (Drupal, PHP, jQuery, React). Filtering is done serverside, so filtering by e.g. Type = 3D movie requires an API call and takes several seconds. If you turn off JavaScript the whole page breaks and you can't access any of the links anymore. This version is pretty fast thanks to in-house optimizations and extensive caching by Pantheon (a specialist PHP host), otherwise it would be really, really slow. Our dev and staging machines were hell to use because every page took like 10-20 seconds to load.

The new version (WIP, and I don't work there anymore): https://nextfield.vercel.app/exhibitions

That is 100% Next.js/React and only that, no more PHP or jQuery needed and no other frameworks. Data was moved to a headless CMS (DatoCMS in our case, which returned convenient GraphQL responses). It is slightly smaller over the network. All filtering is clientside and instant. If you go to a different exhibition page, it just has to load the JSON data for the new exhibition (text and image URLs + thumbnails), in like a 25kB JSON instead of the whole HTML page (headers and footers and all) all over again. The images are resized on the server for your viewport needs before they're sent to you (though TBH I am not a big fan of that feature because it's not preloading images right now). Even if you had JS disabled, the page would still load and the images and links would still work, you'd just lose the interactive filters.

So for users, the new version is hopefully a bit faster. The real improvement was in the developer experience, being able to code everything in React and not have to think about how it's going to get rendered into HTML or how we're going to balance our caching strategy (invalidations vs not overloading origin). And devs didn't need to use PHP at all anymore.

CRA wouldn't handle much of that, it'd just serve up a SPA run by a single server.


So I looked at https://nextfield.vercel.app/exhibitions to understand how Next.js's static page rendering works alongside React.

The HTML it returns is really big! 374kb before compression. I turned off Javascript to see how the page loaded without it, and it still downloaded the whole 374kb HTML (I guess that's to be expected).

I dug into the HTML and realised that the reason for the size is that Next.js includes a <script id="__NEXT_DATA__"> element that contains the entire data of the page's components as JSON for React to hydrate. This JSON takes up 60% of the whole HTML file! I suppose it's prioritising smooth rendering over reducing data use, but it doesn't seem very optimal, tbh. And it's really inefficient for browsers operating without JS, since every navigation click requires downloading a bloated HTML file.
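For reference, the tag looks roughly like this (the shape mirrors the page's props; the values here are made up):

  <script id="__NEXT_DATA__" type="application/json">
    {"props":{"pageProps":{"exhibitions":[...]}},"page":"/exhibitions","buildId":"abc123"}
  </script>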

I found GitHub issues, SO questions, and blog posts about this drawback, and it seems like this is common practice for even the largest websites. This massive duplication seems to be a problem that newer frameworks are trying to solve. I wonder if non-dynamic sites like this aren't better off just being static HTML served from CDNs, since they aren't interactive.

Also, the main stylesheet is 1.12mb uncompressed! Bootstrap is part of it, but it seems like there's a lot of other CSS?

(Now I understand more about why my web browsing on mobile consumes so much data even on low-media sites, ouch.)


So I don't want to take away from your main point: That these frameworks add bloat (even more on top of React). It's an important consideration. Like you said, it's a drawback/trade-off. In our case, we decided it wasn't a dealbreaker and would still be worth it because:

1) That 374kB file is only like 50kB gzipped. We had bigger fish to fry/optimize, like that massive CSS. (Which is a holdover from the PHP days. Phase 2 of the project would eventually have refactored that and tree-shaken it, but it was out of scope for the prototype.) A lot of the bloat you see is because that's still a work in progress and they haven't had a chance to do any optimizations yet. A future version would probably get rid of much of that CSS and maybe even Bootstrap, then think about webfonts, oversized images, etc. There's a lot of work on that front.

2) Our target audience wasn't expected to have JavaScript disabled, and with it enabled, Next's hybrid rendering model actually makes it really fast to navigate between pages after an initial page load. In an earlier test, we had the individual exhibition pages side-by-side, the Drupal version next to the Next one in iframes, plus back/forward buttons to navigate through each exhibition in sequence. Originally that was meant just as an easy way to catch visual differences, but we realized that the Next version was loading almost instantly (couldn't even see it load), whereas the Drupal version took 4-5 seconds per navigation. It turned out that Next was able to just download the tiny JSON (~20kB) for each exhibition and inject that into the React virtual DOM for updates; it didn't have to download anything else while navigating between pages. That was an unexpected, and really powerful, feature that would normally only be available in SPAs.

Realistically, this means that if users had JS on, the initial page load might be a bit bloated, but subsequent navigations between pages should be MUCH faster.

3) The JSON shape was also an artifact of our CMS API (in GraphQL). If we wanted to, we could've optimized it before sending it over the wire.

4) Most importantly... I want to reiterate this... the main benefit of Next.js is NOT page performance, but developer experience. To the extent that there are any improvements at all to page load speed, that is a nice side effect. But even if there weren't, it would still be worth it for us. The big difference is in being able to quickly create new pages/templates, in a single language (JS), with a zero-stack configuration. Especially once we also decided to move the content to a hosted headless CMS. That meant no more LAMP stack to maintain, no more DBs to prune, no more fiddling with CDN caching, Drupal modules, Docker, CI/CD, etc. Previously we were spending 80% of our dev time fighting our own stack and framework (Drupal is really, really hard to work with, even compared to the messy Node/React ecosystem). Next got us out of the DevOps and infrastructure game completely so we could focus solely on the frontend, and Next + Vercel takes care of everything else. It's not just "serverless" but almost stack-less in a way (managed). All we had to do was write React, push a commit, and done. For a small team, that was HUGE... being able to push out a new template in a matter of hours instead of days/weeks, and being able to deploy to production in seconds (and roll back in seconds) too.

In the Drupal world, there are companies like Pantheon and WPEngine and Acquia that try to do the same thing for the PHP landscape (and they do it well), but Next + Vercel is waaaaaay easier and faster. Other competitors in that space (for JS) are Netlify, Gatsby, etc. These Jamstack hosts are gaining popularity because they are so much easier to work with than the traditional backend + pipeline + frontend model. But of course they have their own tradeoffs and aren't right for every use case -- for example, if we anticipated heavy user interactions (logins, profiles, personalizations, etc.) we would've had to consider a lot more factors. But for a write-rarely, read-often informative website, it was the perfect fit for us. YMMV of course!


I hate Drupal as much as the next guy, but I have to defend one thing about it: it doesn't have to be slow, and it doesn't have to have jQuery. Just get rid of all of the bloatware that comes out of the box, and then don't solve every problem with a plugin/module. I know, a 3rd party module for every little thing is exactly how Drupal development looks in most companies. Sorta like jQuery a decade or so ago.

Anyway, with a bit of discipline and some frontend tricks (preconnect, preload, async) you can create a Drupal page that's super fast. And it would actually serve less code to the end user than a page based on JS frameworks (as all things happen on the server).

Example: https://www.magneticpoint.com/


Yeah, but Next gets you the same benefits without needing PHP or a DB (for the frontend). Drupal requires a LEMP stack, which is both hard to maintain and hard to scale/replicate.

You still need a place to store actual content/data, of course. But that could be any store or service that gives you an API endpoint to fetch from.


That first one was way quicker for me. Might also be because I'm in Firefox on a RPi 4.


Sometimes it also depends on whether your local CDN edge has a hot copy. You could try a forced refresh and see if it's faster the second time? Up to you :)

The Drupal version is seeing production traffic, while the Next version only sees a few devs now and then.


Given all the benefits you have listed, would the next big bottleneck to solve be the backend? I think it would be better to reduce/get rid of JS/Node and move on to server side Julia or C++.


For that simple page, the backend is a headless CMS that the vendor maintained. We just put content in and get GraphQL out and did not have to worry about how they hosted at all.


> That is the "rehydration": the server buildchain "dehydrated" the React app you wrote (baked it into HTML + CSS), and the client then rehydrates it to add interactivity back. Yes, you could do all that manually, but Next.js makes it magically trivial... you never have to think about it, it just works. And it's lightning fast.

Actually I think the big trend in JS front-end development is realizing that you do have to think about it! React rehydration is often a very slow step (whether it's Next.js or anything else, I don't think it makes a difference), definitely not "lightning fast" on non-trivial apps with lots of data. Islands architecture goes a long way towards solving that but it's still a bit limited today.


>Seconds after the HTML has loaded, some "bootloader" JS then downloads all the other JS that enables interactivity...

So until that happens the UI looks functional but isn't. The user is left to tap/click furiously on that button but nothing happens?

It sounds to me like this is only suitable for pages where interaction is an exceptional thing that users will only attempt to do after reading some content.


I throttled it down to Chrome's "slow 3G" setting and it still worked fine. The filters were active as soon as the page displayed, though the images took a lot longer to load after that.

There might be cases where the issue you describe occurs, but I haven't actually seen it in testing. If it's a concern, you could of course add throbbers or the like. But generally, in our limited tests, it hasn't been an issue.

Still, it is an issue they (and the React ecosystem) are actively working on improving, with React Suspense (https://17.reactjs.org/docs/concurrent-mode-suspense.html) and Next.js Layouts (which I understand is copied from Remix(?) and can fetch component groups in a hierarchy: https://nextjs.org/blog/layouts-rfc). There are also server components and streaming, which can render HTML snippets/components (instead of pages) and send those back over the wire, similar to the PHP days: https://nextjs.org/docs/advanced-features/react-18/streaming

But again, the benefit is mostly to the developer experience. It's a lot easier to write everything in React than to have to switch between PHP and Drupal (as in the admin UI) and JS. There are some benefits to the user experience if done well, but the same could be said of a statically cached Drupal output.


Heads up on doing ad-hoc/jit-rendering in CF workers though. They bypass the cache.


What happens when the user interacts with the page while it's being rehydrated? Is that click eaten, does the user see some error, or does the page remember the click and run it when it has pulled in the relevant js?


“The page remembers the click” was the original intent [0], but the latest version of React includes a feature called selective hydration [1] which can hydrate a component synchronously in response to an event if possible (i.e. without replaying the event). Naturally React itself has to be loaded for any of that to work.

[0] https://twitter.com/dan_abramov/status/1200118229697486849

[1] https://github.com/reactwg/react-18/discussions/130
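
For illustration, the client side of that is wired up roughly like this (a sketch; ./Comments and ./Sidebar are hypothetical code-split components):

  import { Suspense, lazy } from "react";
  import { hydrateRoot } from "react-dom/client";

  // Each Suspense boundary below can hydrate independently, and React 18
  // prioritizes whichever boundary the user interacts with first.
  const Comments = lazy(() => import("./Comments"));
  const Sidebar = lazy(() => import("./Sidebar"));

  function App() {
    return (
      <main>
        <Suspense fallback={<p>Loading...</p>}>
          <Comments />
        </Suspense>
        <Suspense fallback={<p>Loading...</p>}>
          <Sidebar />
        </Suspense>
      </main>
    );
  }

  hydrateRoot(document.getElementById("root")!, <App />);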


Follow up: How does the HTML "remember" the events to replay without any help from Javascript?


It doesn’t. This only works once React itself (i.e. the JavaScript code for the library) has loaded, which usually happens pretty quickly. The user-written React components might not finish loading until much later.


I am not sure how it works behind the scenes -- honestly, that's one of the drawbacks of Next, in that a lot of it is just "black box magic" -- but it still seems to work fine when we throttle down to 3G. I am not sure how it decides which scripts to hydrate in what order... we were saving that optimization run for the end, but I changed jobs before that could occur (sadly).

There are some ways to work around that, if it actually turns out to be an issue (which it wasn't for us)... discussed it a bit more in my other response: https://news.ycombinator.com/item?id=31727249


> or whatever the .NET equivalent is

That would be Blazor for anyone interested in checking it out.

https://en.wikipedia.org/wiki/Blazor


At the top you mentioned "Rails". Rails handled the backend. You defined database schemas. It built forms to edit them. To a certain level, you instantly got a working front end and backend.

Where does Next fit here?


Deno vs Node appears to be one of them.


Sure - this seems to be an implementation detail, though - eg, Remix and Next.js are both on Node.js but seem to have some difference that's not abstracted away, in terms of how you develop, how concerns are separated, etc.


I would say that Deno vs Node is the biggest difference: with Node you need to set up and maintain 3rd-party tools (bundlers, transpilers, etc), but with Deno all that tooling is first party, so in theory there are fewer things to install and worry about.

Deno has other advantages on paper, like official TS support, and all the tooling was written in Rust (so it's more performant than the defaults the others use).

The only downside right now with Deno is the popularity and maturity of the ecosystem; it is just too new, so you will have a hard time finding things that work out of the box, while a lot of companies have invested in official Node packages.


Remix is not "on node", it can target other runtimes including Deno and Cloudflare Workers.


1. With this approach of sending "only the small JS chunk needed for interactivity", aren't we going to end up with the situation where chunks A and B need common part C? And what if C needs D, etc.? A dynamic module loader should be provided. Is it implemented in all those hydration based frameworks?

2. How is the situation handled when the <button> is delivered to the user, but the "onClick" handler is not, because the network failed?


> Is it implemented in all those hydration based frameworks?

I can't speak for all frameworks, but fresh can dynamically break out shared dependencies so you don't have to download the same code twice.

> How is the situation handled when the <button> is delivered to the user, but the "onClick" handler is not, because the network failed?

Developers need to deal with this in their applications. The counter example on the fresh homepage uses <button disabled> for the server side render, and only enables the button on the client once the counter island hydrates.
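
Roughly like this, going by the homepage demo (a sketch; IS_BROWSER is false during the server render, so the buttons start out disabled in the HTML):

  // islands/Counter.tsx
  import { useState } from "preact/hooks";
  import { IS_BROWSER } from "$fresh/runtime.ts";

  export default function Counter(props: { start: number }) {
    const [count, setCount] = useState(props.start);
    return (
      <div>
        <p>{count}</p>
        {/* Disabled in the server-rendered HTML; enabled once hydrated. */}
        <button onClick={() => setCount(count - 1)} disabled={!IS_BROWSER}>-1</button>
        <button onClick={() => setCount(count + 1)} disabled={!IS_BROWSER}>+1</button>
      </div>
    );
  }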


Ooh, I like that.


Why use '$' as the package namespace prefix/identifier instead of the already agreed upon convention of '@'? E.g. '@fresh/{package}' vs '$fresh/{package}'.

Seems like a departure from the norm for no reason unless there's some Deno particularity about it.


Because the '@' convention is for organizations, not the actual package, e.g. '@company/pkg'. In this case, '$fresh' is the actual package, and there is no organization name.

This just may be the Deno standard for their import mapping functionality, since they also normally do full URLs for imports, like Go (sans scheme).

Deno is also just not exactly like JS ecosystems, and that's exactly the point too. Opinionated defaults, out-of-the-box support for TypeScript, death to NPM.
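
For reference, the '$fresh/' prefix is just an entry in the project's import map, something like this (the version pins are illustrative):

  {
    "imports": {
      "$fresh/": "https://deno.land/x/fresh@1.0.0/",
      "preact": "https://esm.sh/preact@10.8.2",
      "preact/": "https://esm.sh/preact@10.8.2/"
    }
  }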


I think the convention of `~somepackage` is more common than using `@` as a shortcut.


Slowly getting somewhat cynical. This is the only space where "prebuilds everything and therefore saves rendering time and improves caching" and "no build step and so speeds up deployment" are both considered valid feature pitches.


> Slowly getting somewhat cynical

Are you equally cynical about the hundreds (thousands?) of different ways to set up and deploy infrastructure (all with different pros/cons)? FE is timid compared to devops churn.


Over the last decade and a half, the entire group mind of the computing discipline switched from "no build step will speed up development and deployment" to "a heavy build step will speed up development and deployment". So it's not like web developers were going at it alone.

Anyway, the change happened because of real environmental changes. Developers everywhere didn't just wake up and decide their old values were the exact opposite of the truth; both opinions stand on valid models of the world and hard-won empirical information.


Why does that make you cynical?

They're two different spins on how to render content. They both have pros and cons. It's good to know this and make an informed decision about which strategy you take.


To me it reads as more focused on developers than on the users of the sites you build.

I like that the developer experience has improved in some ways in the last couple of years: apps like this, which you could develop "easily" as a monolith, can be served on serverless hosting as microservices (if I understood correctly how serverless works), with each endpoint served as a separate service, all without you as a dev putting too much effort into it.


> fresh also does not have a build step. The code you write is also directly the code that is run on the server, and the code that is executed on the client. Any necessary transpilation of TypeScript or JSX to plain JavaScript is done on the fly, just when it is needed

I find this very interesting. I get that adding a build step can be a pain during development / deployment, but running your TS build once per deploy seems _much_ more efficient than doing it repeatedly, as needed. Or does it get cached, so it's built at most once? I haven't really dug in.


It’s running on Deno, which builds TS on the fly with SWC (and does a bunch of other stuff with Rust-V8 interop). Generally speaking it’s close enough to zero overhead that Deno tends to perform better for TS source files than Node does for JS source. I’m sure there’s plenty of caching involved, but even something other than SWC (like esbuild) is a barely noticeable drop in the bucket.


Even if it isn't cached, caching could be added with a service worker.


I explored using client-side service workers for build-less deployment workflows a while back, but the blocker was the initial visit when the service worker hasn't been installed yet. Ended up using es-module-shim's fetch hook (https://github.com/guybedford/es-module-shims#fetch-hook) instead, which worked quite well.

I kept the demo repo around here, in case it's helpful to anyone: https://github.com/lewisl9029/buildless-hot-reload-demo.

The repo itself is quite out of date at this point, but my current project, Reflame, is essentially the spiritual successor: https://reflame.app/

Reflame has the same ideals of achieving the developer experience I've always wanted for building client rendered React apps:

- instant production deployments (usually <200ms)

- instant preview environments that match production in pretty much every imaginable way (including the URL so we don't have to worry about special whitelisting for CORS and whatnot), that can also be flipped into development mode for fast-refresh (for the seamless feedback loop we're used to in local dev) and dev-mode dependencies (for better error messaging, etc)

- close-to-instant browser tests (1-3 seconds) with image snapshot comparisons that run with maximum parallelism, rerun only when their dependency graphs change, plus automatic flake detection/recovery


Am I the only one that sees the "do not use in production" warning as a challenge? ;)


  > "The framework uses Preact and JSX for rendering and templating on both the server and the client."
Really nice to see Preact used here; it's a much more rational choice than React if you were planning on using React anyway.

I set Next.js up to use Preact as the engine but it takes a bit of config work to do this and isn't an officially/OOTB supported feature.


If you're into React but faster, there's InfernoJS, which is basically React/Preact but /even faster/: https://www.infernojs.org/


Then there's solid js:

https://www.solidjs.com/


No hooks yet though, which limits compatibility and arguably developer experience. Preact has better compatibility, if that's something you're looking for; I suspect that was part of the motivation as well. Also, Preact is 3 KB in full while Inferno is 7.2 KB, if I recall correctly, which may also have been a motivation here.


God, the landing page animation with the lemon is delightful!


Fresh = Deno version of Astro (static-first with SSR) and Isle (vue-focused) and bigger Next.js/Nuxt SSR (with no client js) modes. Remix also does well here with a focus on only SSR.

It's basically the original "isomorphic" JavaScript promise: the same code seamlessly running on server and client with a flexible split on what part runs where, now possible down to an individual tag/component.

Also worth mentioning projects like .NET Blazor, Phoenix Liveview and Rails Hotwire that approach client interactivity through their own backend languages instead of JS, usually with some partial refresh mechanism using AJAX or websockets.


> Deno version of Astro (static-first with SSR) and Isle (vue-focused) and bigger Next.js/Nuxt SSR (with no client js) modes.

I think you have just summoned Cthulhu.


yeah, i was starting to feel queasy...


> code seamlessly running on server and client with a flexible split on where that would be

It's a sound idea. Ideally this would fall back nicely when JS is disabled to good old get/post requests. It'd enforce a stricter mental model of what's a "page" and the boundaries between them. Currently, with JS component libs, this concept is somewhat blurry.


Does that mean JS running server-side? That's a full-stop dealbreaker for me.


Deno is a JS runtime like Node, so if that's not your thing then yeah, this isn't for you.


Many Node-based JS frameworks, such as Gatsby or Next, allow SSG that can be distributed by a CDN. Your comment is weird.


Dynamic responses also can be cached in a CDN you know...


The whole point of static site generation is that you don't have to manage servers; that you can instead simply generate your assets and distribute them via CDN. Sure you can manage servers and put a CDN in front of your servers, but then you still have to manage those servers...

Anyway, you're missing the crux of my point. I was criticizing GP for implying that all NodeJS-based frameworks require a dynamic server to respond to queries at runtime. They don't.


Sorry, this is just funny considering how JS-on-the-server has become a (the?) dominant web paradigm among startups and newer tech teams. It's like a comment from 2014.


Just because it is the dominant paradigm doesn't mean it is a good paradigm.

Remember PHP was a dominant paradigm for a long time. (I actually don't dislike PHP that much myself but it is a good example since many dislike it intensely.)

Edit: JS, until TypeScript, had almost all the disadvantages of PHP (bug-prone syntax, lack of typing, inconsistent ordering of method parameters, etc.) but not PHP's advantages (shared nothing, fast dev cycle). The only advantages JS had were that it looked cleaner and that one could use the same language on frontend and backend.


Yes, that’s part of the point. I’m not sure there is a name for these; I just call it a frontend server. You don’t necessarily have to put your whole application there, but you can focus on client interactions, and these things are really good at that.


Could you elaborate why? I’m finding the concept appealing, but would like to learn more about the tradeoffs.


I suggest you take a look at Phoenix LiveView; it's like that (very little client-side JS) with the added benefits of the BEAM. You can serve thousands if not millions of simultaneous clients with a small machine.


These JS/TS full-stack frameworks need to be banned right along with fruit-flavored vapes. Think of the children.


Does anyone know what software can be used to design the juicy hero animation seen here?


It's an SVG animation. There are a number of programs that can be used to create such an animation.

I use Flow, pretty simple to use and it's included with setapp (an app subscription platform on mac).

https://createwithflow.com/

You can also check out svgator, it's an online based solution. https://www.svgator.com/

Cheers!


These are great, thanks!


Thanks


A JS framework, not a web framework


It’s a JS web framework. It does pretty much what your average web framework does, except adds in first class interactivity, and drops an opinionated ORM layer. I personally really like this balance, and will kick the tires when it’s not experimental.


Does anybody know if the goal is to become a more opinionated framework with an ORM layer and MVC structure, etc.?


I love that the pendulum is swinging back to file-based routing. It reminds me a lot of the simplicity of cgi and php scripts. I'm sure there's a point where it explodes into a monster of complexity with enormous sites, but for everything smaller it's so much simpler and easier.


The next (no pun intended) thing we will rediscover is templating engines (only in JS or so), because we'll realize that mixing state and behavior is a problem. Then we will have gone full circle, but probably with some unreasonable overhead as a result. Maybe the whole business of rendering templates will somehow become part of webpack and everyone will have to configure webpack.

Good that classic web frameworks, which have been rendering templates on the server side for a decade or more, are still around and healthy.


Part of why I like React (especially with Typescript) is basically that I like JSX/TSX as a templating language. I much prefer working in a full language (with JSX as relatively light syntactic sugar on top) that can leverage existing tooling and typechecking, instead of a de novo template language with its own syntax for control flow constructs jammed in, that doesn't usually have great editor support or typechecking for integrating with the rest of the server-side code.

Of course, using React has all the issues of SPAs, and trying to use something like Next seems somewhat fiddly due to keeping client-side and server-side state in sync and managing hydration. I'm entirely in favor of a framework that allows writing a server-rendered app with a decent templating language (and ideally a good path for writing client-side interactivity if I need it); it looks like Fresh might do that. I'd love to hear about other frameworks (JS/TS or otherwise, though I definitely prefer statically-typed languages) that match up with what I want, though.


> Part of why I like React (especially with Typescript) is basically that I like JSX/TSX as a templating language. I much prefer working in a full language (with JSX as relatively light syntactic sugar on top) that can leverage existing tooling and typechecking, instead of a de novo template language with its own syntax for control flow constructs jammed in, that doesn't usually have great editor support or typechecking for integrating with the rest of the server-side code.

I get that. The thing is that you are already using a "new language" when writing TSX/JSX/whatever, and its philosophy does not even try to prevent you from making mistakes. It is like Mako vs Jinja2 in Python. In Mako you can run arbitrary Python code easily, so you need a lot of discipline not to screw up. In Jinja2 you are mostly just handing the data to the template, perhaps applying some filter or so. This helps inexperienced developers keep clear boundaries between the view layer and other layers. But with TSX/JSX/etc. all that is thrown out of the window again, and people will do anything they want in a place where there should be view code only.

Whether I learn which elements can be inside which other elements in TSX/JSX (it is not HTML + JS, and it does not allow mostly (!) arbitrary nesting like HTML does), or I learn a more traditional templating language ... either way I have to learn a new little language (perhaps a DSL, or whatever one could call it).

And with the swing back to server-side rendering, the question arises, whether we really should mix all that state and behavior together again and forget mostly about the lessons of the past.


> that can leverage existing tooling and typechecking, instead of a de novo template language with its own syntax for control flow constructs jammed in, that doesn't usually have great editor support or typechecking for integrating with the rest of the server-side code.

This is exactly what I find frustrating about go templating. I lose autocomplete and type hints.


Makes me think of Scheme and SXML, where you never leave the context of the language. Transfer that to something statically typed, and you would get what you want. Maybe something like that exists in Haskell or a similar language?


TSX is a templating language that is also TypeScript. So the "controller" and the template are the same function.


So, if I throw `tsc` at the `.tsx` file, you are saying it will work, without any other library (only `npm install tsc`, nothing else allowed)? Don't you have to install additional frameworks, that work with tsx/jsx for that?

Afaik `tsc` deals with `.ts` files and nothing else, making TypeScript and `tsx` actually 2 separate languages, as the name suggests ("typescript extended"), but maybe I am wrong.



Hm:

> TypeScript ships with three JSX modes: preserve, react, and react-native. These modes only affect the emit stage - type checking is unaffected. The preserve mode will keep the JSX as part of the output to be further consumed by another transform step (e.g. Babel).

So the only non-React dependent mode would be "preserve", which just keeps it like it is. The other modes of compilation/output would need to be processed by React.

On the other hand: OK, it can be processed by tsc, so technically they have built some React support into TypeScript. Not sure I am a fan of such framework-specific features in a compiler. It would have been nicer to have an actual standard for web components that is not dictated by "this is how React does it" and that can be emitted without being framework-specific. Then each framework could choose to interpret that output however it wants.


Same. I've also found .aspx templates in .NET to have similar issues, although those are more complicated with all the data binding functionality.


Very similar to SvelteKit as well. I hope in the future you can swap out Preact for Svelte.

The deno runtime is very interesting.

Most intriguing: no build step. That's a big difference from sveltekit which takes very readable input files and produces hardly readable files.


Other than helping create websites, this has almost nothing to do with SvelteKit. SvelteKit:

- Requires configuration and a build step

- Doesn't support partial hydration

- Ships JS to the client by default

- Can also be used as a static site generator or SPA framework (no server-side code)


Why this can be great:

- The dev experience is closer to the early days of PHP.

- TypeScript and Preact out of the box. No need to configure build tools / deploys much faster. It's a pain in the ass nowadays to make these work at the same time while targeting both browser and server.

- You can have interactivity without bolt-on client-side scripts that are different from other parts.

- The code can run on the edge.

- I'm not sure what the island based client hydration means, but it sounds like Remix

Many other frameworks can do some of these but not all (Ruby needs JavaScript/Turbolinks, Next.js needs to build then refresh, etc.)


Frameworks like Next or Nuxt often render each page server-side, shipping HTML to the client, but then also send enough javascript and json data to the client to "hydrate" the page back into fully interactive components. The whole site, then, really acts as one large javascript app once fully loaded.

The islands approach is different: pages are server rendered, but you can easily define islands of interactivity (like, say, an auto-complete search bar) where just enough javascript is sent to make those components interactive—and only when it's needed. You can control at what point exactly each component is made interactive: as the page loads, when the component becomes visible, when the user first interacts, etc. It's a great way to balance performance and rich interactivity. If the user never scrolls down to your photo carousel at the bottom of the page, the javascript is never requested.
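
Astro, for instance, expresses those loading strategies as per-component directives (this is Astro's syntax, not Fresh's; the Carousel component is hypothetical):

  ---
  import Carousel from "../components/Carousel.jsx";
  ---
  <!-- The page is static HTML by default; the JS for this one island is
       only fetched once it scrolls into view. -->
  <Carousel client:visible />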

If this sounds like what we used to do with, say, PHP & jQuery, you're not wrong. The difference here is we have the same javascript-based template logic and component model both clientside and serverside.

Some other projects adopting the islands pattern: https://iles.pages.dev - https://astro.build - https://slinkity.dev

More reading: https://jasonformat.com/islands-architecture/


The main dev on Slinkity has moved on to Astro. Just an FYI, and yes, Astro fan here. The mental model feels very productive.


Could SolidJS also be considered to use the islands architecture, or at least a similar one?


Yep. We have come full circle. This is PHP with a better experience.


So can I buy just some random hosting, upload .js file and it is done?


Nope. Deno runs a "fresh" web server that spits out HTML. You would need a VPS for that, until some hosting providers include Deno/Fresh as supported (unlikely in the near future).


I both love it and hate it. Most of the features are similar to SvelteKit/Next.js. I get that the biggest benefit is the Deno Deploy integration for SSR on the edge. But I would have highly preferred a new flavor of SvelteKit where Deno Deploy is a build adapter (like Cloudflare Workers currently are) and the script part of Svelte could be set as "ts-deno". No need to reinvent the wheel yet again and split the ecosystem more.


It uses Preact, just-in-time bundling, and a number of other concepts that would probably be a pain to integrate seamlessly into those. Seems like a justifiable from-scratch prototype.


The counter below was rendered on the server with a starting value of 3, and was then hydrated on the client to provide interactivity. Try out the buttons!

Ooh. We have supported this as a feature in the Qbix Platform since 2014: https://qbix.com/platform/guide/tools

Back then, it was called “progressive enhancement” (anyone remember it?) before they moved to “graceful degradation” where JS was assumed always on (like broadband internet became assumed always on, instead of previous generation stuff like IRC that expected netsplits, now everyone just had SAAS on The Web with online/offline status).

What we always advocated for is to build client-first software that works with JS, but then spend time to make versions that render on the server and work without it. In that order. Flips “progressive enhancement” on its head:

https://qbix.com/blog/2020/01/02/the-case-for-building-clien...

I gave a talk on this when I worked at Lab49, it might be more interesting in video form:

https://youtube.com/watch?v=yKPKuH6YCTc


Here are some of my thoughts:

It is using silly naming conventions in filenames as an alternative to specifying routes. eg /users/[name].tsx for /users/:name

It uses a manifest file. I remember when entity framework in C# had a separate file that represented the mappings of the database. The problem is the database and the manifest would get out of sync. I imagine the same thing would happen here.

There is no documentation about the islands or how they are implemented.

It uses JSX, which I don't think is even very nice.

It seems a lot of this "going back to server-side" is due to the slow and bulky nature of loading react on page load, but libraries built on top of native web components alleviate this issue to some extent. For many people, farming the rendering off to clients is faster if they don't have enough server capacity. After the first page load, client-side is faster. I am not sure how many people benefit from going back to server-side. I do think initial page load is very important though.

It says there is no build step but there has to be one because v8 runs JavaScript not TypeScript. The documentation for "swc" compares it to babel...


> It is using silly naming conventions in filenames as an alternative to specifying routes. eg /users/[name].tsx for /users/:name

It's a convention also used by Next.js, so it is just sticking to existing conventions.

> There is no documentation about the islands or how they are implemented.

Documentation is still partial and incomplete.

> It says there is no build step but there has to be one because v8 runs JavaScript not TypeScript

This is a module for Deno, which does that under the hood; it isn't something a Deno user would have to be concerned with.


The naming convention is no different from Next.js, arguably the most popular frontend framework currently


Wish we could have just skipped the hard part of transitioning to the future.

At least the next generations will start writing JS/TS on both ends (front/back) and hopefully maintain state seamlessly, benefiting both the server side and the front end, writing in the same language and sharing logic around.

Sometimes it is a nightmare for full-stack devs to switch between languages (Python/PHP/Go/anything else) and then JS.


In particular, can we skip to the part that involves no js :)


Hopefully we'll be able to use a language that is more advanced than TS on both frontend and backend one day


Probably something like Gleam lang. That full stack has felt like tasting the future


The future is the past - Meteor.js.


On the part about adding interactivity:

  > "To include this in a page component, one can just use the component normally. Fresh will take care of automatically mounting the island component on the client with the correct props:"
How does a developer know what rendering is going to take place client-side vs server-side, and is there any way to control this?

Or is it all fully "magical"


If I understood correctly, it all renders server side and then adds interactivity where necessary, and in a React app it's kinda easy to spot which components are interactive. The biggest difference here, vs a traditional React framework, is that fresh is more similar to Astro[0] than to Next.js/Remix, so it ships less JS.

What I read from other comments is that fresh does code splitting, so for example if the interactivity is outside your viewport and you scroll down to that element, it will then load the JS required to make it interactive. While I find that idea cool, what worries me is that it could take a few ms to load the JS and then another few ms to boot that component and make it interactive. But I don't have experience with any of these frameworks (besides Next.js, where I built a small demo project to try it out).

[0] https://astro.build


Well, you know by seeing that the component is located in the special `islands` folder.


Yeah this seems like a pain. I don’t like magic. Magic is hard to debug.


It's not magic. There is an `islands/` folder that you must place all of your client components in. 1 component per file. See https://fresh.deno.dev/docs/getting-started/adding-interacti...
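
A sketch of how that split looks in practice (file and component names are illustrative):

  // islands/LikeButton.tsx: hydrated on the client because it lives in islands/
  import { useState } from "preact/hooks";

  export default function LikeButton() {
    const [likes, setLikes] = useState(0);
    return <button onClick={() => setLikes(likes + 1)}>Likes: {likes}</button>;
  }

  // routes/post.tsx: rendered to plain HTML on the server, ships no JS itself
  import LikeButton from "../islands/LikeButton.tsx";

  export default function Post() {
    return (
      <article>
        <h1>My post</h1>
        <LikeButton />
      </article>
    );
  }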


Reading some of the more eager comments, I think it needs to be said that I don't always want my application indexed, so having an actual SPA makes a ton of sense. What good is any of this when your session is in the browser and all you can manage is to serve the login page "very fast"?


From https://fresh.deno.dev/docs/getting-started/create-a-route :

> Routes are defined as files in the routes directory. [...] If the file name is contact.js and is placed inside of the routes/about/ folder, the route will handle requests to /about/contact.

I can't say I'm particularly keen on being told what directory to put files in, or being told that I must code exactly one endpoint in each file. Why aren't I allowed to code multiple endpoints (e.g. for related functionality) in the same file?

Also, in what directory do I put my files if the url is of the form /user/{username}/page/{pagename} ?


You put square brackets around parameters. e.g., routes/greet/[name].tsx

https://fresh.deno.dev/docs/getting-started/dynamic-routes


To add to this: looks like you can do that for directories as well. Here's how it works: https://github.com/lucacasonato/fresh/blob/4bb07f4bcebfc2056...

So the example above could be:

  user/[username]/page/[pagename].ts
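
Inside the route file, the captured segments then show up as params (a sketch following the Fresh docs' PageProps shape):

  // routes/user/[username]/page/[pagename].tsx
  import { PageProps } from "$fresh/server.ts";

  export default function UserPage(props: PageProps) {
    const { username, pagename } = props.params;
    return <p>Viewing {pagename} for user {username}</p>;
  }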


Hmmm. I suspect there is some software out there that doesn't like directory names with square brackets in them.


This is the same thing I dislike about NextJS. What is the argument for using the filesystem as part of a framework's API?


In the case of Nuxt (which by default uses basically the same paradigm) I could circumvent this by using the package @nuxtjs/router.

I have no idea why this is the default behaviour; it sure feels like something a lot of people put a lot of work into, and by then they didn't realise their fancy new feature doesn't make things better.


I think it’s a part of their convention over configuration philosophy. Say what you will about it, there are obviously drawbacks.

But I’ve noticed that it makes it easier to quickly understand new codebases in my organization. Everyone is using Next.js and all the apps have about the same file layout.


And it makes the routes easily greppable in VSCode/Ag/Telescope/CtrlP, which I think is the thing motivating it.

I think it’s neat that having extremely fast project dir fuzzy finders has influenced framework design. Like how Django’s design is heavily influenced by how importlib works and would be completely different if Python modules worked differently.


I asked about this recently and got some interesting answers: https://twitter.com/SachaGreif/status/1534079292774445056


What is a good argument against it? It makes understanding the layout of the website easy and saves you from having to come up with your own <custom> conventions around it.


I generally prefer keeping all the configuration in as few languages as possible and preferably in a single language. Adding filesystem-based config where a config option object in the main language of javascript would suffice goes against that.

Also, given a filesystem config, now I'm forced to have many very small files around for each route where each file is most likely just a call to another service handler. I'd prefer to mash most into bigger files that handle related but distinct routes.

Less important, but it comes up: it's nice to be able to match routes based on code and not just string equality, e.g. everyone seems to like having routes for usernames start with '@'.
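
For instance, a code-based router can match on patterns directly; a sketch with Express (illustrative only):

  import express from "express";

  const app = express();

  // Match /@alice, /@bob_42, etc.: something a filename can't easily express.
  app.get(/^\/@([a-z0-9_]+)$/i, (req, res) => {
    const username = req.params[0]; // first capture group of the regex
    res.send(`Profile for @${username}`);
  });

  app.listen(3000);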


If you need something a bit less esoteric and want a traditional, straightforward approach to building web apps with JavaScript: https://github.com/cheatcode/joystick.

Add an Express route -> tell it to render a component/HTML via a res.render() function -> it auto-hydrates everything on the client for you. Components are written in plain HTML, CSS, and JavaScript (no JSX or other funky syntax) with a quick-to-grok API.

No new concepts to learn and keeps you as close to the core technology (HTML, CSS, and JavaScript) as possible.


The most exciting thing for me here is that it uses Deno. Deno's ecosystem has been its biggest weakness so far (understandable, compared with the vibrancy of Node's), so it's exciting to see it progress


I've been doing all of this with PHP and a little JS for decades. But sure, next gen..

- Just-in-time rendering on the edge.

- Island based client hydration for maximum interactivity.

- Zero runtime overhead: no JS is shipped to the client by default.

- No build step.

- No configuration necessary.


What is hydrating the client?


> Island based client hydration for maximum interactivity.

This honestly reads like satire. It sounds like something on the sarcastic VanillaJS homepage.

This gibberish being the second bullet point in a list of core features is a huge turn off. I know it’s tongue in cheek, but still


It’s actually not gibberish.

* Island refers to this https://jasonformat.com/islands-architecture/

* Island based hydration is a form of partial hydration where the boundary is the component islands.


Makes sense. Thanks!


People downvoted you, but I completely agree. It looks like this is NOT satire, but I hope you turn out to be right. Maybe someone has some sense of humor and enough time on their hands to pull such a stunt.


My bet is on someone forgetting to take it out of the template when the rest of the copy was added, and it will be gone in a few days with a realistic list item there.

I think it's mostly a tongue-in-cheek acknowledgement of how everyone in big tech is fighting so hard for a promotion that they aggressively brand the heck out of every library, exploit, and "new tech" they scheme up, even if the thing they're mentioning is using weird language for a concept that isn't new.


Hydration is a technique where the content is rendered server side, then client-side JS attaches the necessary event handlers to make the UI interactive.

https://en.wikipedia.org/wiki/Hydration_(web_development)


Thanks! makes sense.


It's filling in the initial data in a front end app after the logic is sent and built up on the client side.


Why am I not surprised. Yet another javascript framework... I'm tired


Is it your job to test every new JS framework? Didn't think so. Then this "new framework" situation is just funny.


The ever crushing wheel of time seems to repeat itself in many cases: wars, text editors and web frameworks.


This obviously has some level of Deno officialness. But from looking at the TODO-ridden state of the docs, it doesn't seem like it's ready for its '#1 on HN' public launch moment.


It has nothing to do with the official Deno project except for being hosted on Deno Deploy which gives you a <projectname>.deno.dev domain. I find deno.dev misleading in the sense that it gives any random project instant credibility.

[1] https://deno.com/deploy/docs/projects


the repo is under a Deno core team member tho. https://github.com/lucacasonato/fresh


I totally assumed it was an official Deno launch until I read your comment. I think it's the combination of the domain and the Deno mascot that makes it misleading.

EDIT: oh, if it's made by a Deno employee that changes it.


Thank you. I actually realised/remembered about the domain and already edited my comment. But the author of the project is a Deno employee.


I've heard the word "hydration" so many times. I sort of get it, but I always have a question mark in mind: what's different between hydration and the jQuery approach?


Hydration only makes sense when you have a vdom that you have to "synchronize" with the initial state of the page (the html you got from the server). The act of performing this synchronization is what hydration is about (notice that this includes setting up event handlers too). What you do here is to reconcile the in-memory representation (vdom) with the actual contents of the page (the dom).

In the jQuery approach the in-memory model is built straight from what's on the page (the DOM). This removes an entire class of problems (there's no mis-synchronization possible) but has its own set of issues (the model you build may not end up being what you expected while coding).

More importantly, the use of "hydration" implies that the HTML that gets sent to the client is a server-side rendered version generated from the same JS code that runs in the page. The very big advantage here is that there's a single place where the page's content is controlled from (the js/jsx/whatever code) instead of having to manually maintain the html/js relationship by hand like in the jQuery case.

Furthermore, the jquery approach is global in nature (you can manually scope jquery-fiddling, but if you make a mistake you may end up modifying stuff elsewhere on the page) whereas this cannot happen in the vdom approach.

In general, the vdom approach makes code more manageable at the cost of runtime performance. That's why many people recommend react and/or other vdom-based frameworks for the complicated cases (lots of dynamic stuff on the page) but vanilla/jquery/etc. when the in-page interactivity is lower.

Today's crop of "frontend frameworks" (such as Next.js, fresh here, etc.) are trying to achieve the benefits of the vdom approach while minimizing its drawbacks (and full-page hydration is a big drawback because it takes "a lot" of time, hurting the page's Time-To-Interactive metric).


So it's the same approach to generating HTML, except the part that belongs to the client side (event/UI state) is irrelevant to the backend _anyway_. If you separate the JavaScript code into 2 main parts, 1) what's only related to generating HTML and 2) what's only related to the DOM API, you get the jQuery approach, where on the back end JavaScript is just yet another backend language?


> If you separate javascript code into 2 main parts

The point is that you don't do this. You do:

  import { useState, useEffect } from 'react';

  const App = () => {
    // Client-side counter: re-arms a 1s timer on every count change
    const [count, setCount] = useState(0);
    useEffect(() => {
      const id = setTimeout(() => setCount(count + 1), 1000);
      return () => clearTimeout(id); // clean up the pending timer
    }, [count]);

    // HTML-rendering part
    return (
      <div>Count: {count}</div>
    );
  };
The framework (next, fresh, whatever) grabs this and generates:

- Some sort of "bundle.js" that contains the component (as well as the framework to render it).

- An "index.html" that contains "<div>Count: 0</div>" (and references the above bundle).

- When the "bundle.js" is loaded, it runs the App component in the browser and reconciles the in-memory vdom with the browser's dom (that has the div because it came in the .html)

- From then on, the component runs normally (i.e.: the counter keeps increasing and you see it increase on the screen).

In vanilla-land, you would:

- Write "<div>Count: <span id="count">0</span></div>" in an index.html that also loads a "counter.js"

- Write the counter.js file with something like:

  // 'DOMContentLoaded' is the vanilla equivalent of jQuery's $(document).ready
  document.addEventListener('DOMContentLoaded', () => {
    const count = document.getElementById('count');
    let value = 0;
    setInterval(() => {
      count.innerText = ++value; // pre-increment so the first tick shows 1
    }, 1000);
  });
Less code: you don't need frameworks, hydration, or anything similar, and it's probably much more performant.

However:

- If you want to change the counter's initial value, you have to update both the html and the js file

- If another dev adds some element with id="count" to the html the whole thing breaks.

- If you want to modify the generated html structure, you have to edit both the html and js file

- When the logic gets more complex, you will have to implement it both in your backend-code-that-generates the html as well as in the js

That is, even though it doesn't seem like it in a simple example such as this one, the developer ergonomics of having a vdom are much better when things get hairy. The million dollar question then becomes "how can we achieve a performance and code-size similar to the vanilla approach while keeping the ergonomics of the vdom".


So the JS framework separates the render function (which renders HTML) from the client-side-only part (the count event) for you. This is akin to what I said earlier. The missing piece here is still the effect (e.g. HTTP) that actually transfers state from client to server and persists it to storage. The business logic for the client-only part is not going to be exactly the business logic for the behind-the-wall rules. Some may be shared, but that's not really significant; it's going to be things like optimistic UI.

vdom keeps being mentioned, but it doesn't matter here because it's an implementation detail; other frameworks will do it differently.

I don't see how it ends up being different from JS as yet another backend language, with separate JS for volatile UI state.


Appending my conclusion here: JS frameworks aiming at this SSR + hydration + islands whatever are not going to obsolete client-server architecture, despite how it sounds when they use the word "isomorphic". No, I don't think it's isomorphic or any one-codebase-to-rule-them-all. It's still server code + client code, written in the same file or module. Its actual role and operation as a component in the architecture is still the same.


With jQuery, your rendering is 100% dynamic, meaning, you load your JS on the client/browser and when that code executes, it fetches some data and "injects" it into the DOM (technically a form of hydration).

Hydration in context here means taking some HTML that was server-side rendered with JavaScript and then, in the browser, handing off subsequent rendering (in response to user interaction) to JavaScript running on the client.

Think of it like those little dinosaur sponge toys that would inflate when you poured water on them. The dry sponge is your server-side rendered HTML and the interactive JavaScript is the water being poured on it.


If you `s/Hydration/jQuery/` on your hydration description, I find it's still true.


Add it to the pile. It looks nice and clean, but am I crazy in thinking that FE is so fractured now? How can a developer choose the right direction/framework to focus on?


It's funny how many smart people understand that the web is a giant pile of hacks on top of hacks, and how incredibly important the web is for modern app distribution (I bet you buy tickets, pay invoices, do your medical check-ins, etc. on web apps), and yet the leaders of this ecosystem do not even ask whether the foundation of the web (HTML/CSS/JS) is even a good solution.

Instead they create new hacks.

Island based client hydration, right.


Looks nice. A lot of ideas shared with remix but native to deno. Will def check it out when it’s production ready. Also, beautifully juicy hero animation.


Have other frameworks had the concept of ["interactive islands"]? I'd love to know more, but the docs aren't fleshed out yet. Does anyone know of other (documented) frameworks that use this concept?

["interactive islands"]: https://fresh.deno.dev/docs/concepts/islands



React has server components coming out that will also be zero-runtime with only JS hydration for interactive parts. https://reactjs.org/blog/2020/12/21/data-fetching-with-react...

This will be great for mostly static pages.


Excited about this!

Docs should include obvious link to github repo: https://github.com/lucacasonato/fresh (edit: it's already in the footer)

Also, deno/x needs an update: https://deno.land/x/deno_fresh


This looks good and has potential and all, but do we really need so many new web frameworks? Is all this effort spent on slightly improving the current tooling really worth it? I think web frameworks have had enough incremental changes; we should either stick to improving the current frameworks or create ones that bring revolutionary, not incremental, changes.


Why not both?

Certainly everyone who creates a (somewhat serious) web framework is at least aware that they could also participate in the development of an existing one. If they feel there is something to gain by starting from scratch, why not? I don't think the popular existing frameworks exactly suffer from under-contribution.


> Why not both?

My point was that it's fine if someone starts working on a new framework, but personally I would rather have all that effort be used for developing services or tools that are actually needed and do bring value to the community.

EDIT: To make it more clear, the issue is when the creators of such frameworks do believe they are going to change the world by recreating in a slightly different way something that already exists. If they are doing this for fun or to get experience, it's all good, but very often those frameworks do seem to have the ambition of becoming the next big thing and revolutionizing the way people are developing web apps.


The incentives here seem strongly skewed towards having your name on a new framework for career purposes rather than improving the ecosystem at large


That seems wrong, intuitively. If anything, having meaningful contributions to a long standing project go through should relatively advance your cause, not hinder it.


Intuitively maybe. In practice definitely not. Creator of xxx with 100 stars has obvious and easily understood impact where "contributor" could mean anything


> The framework uses Preact and JSX for rendering and templating on both the server and the client.

Why not tagged template literals, like Worker Tools[1] and lit[2]?

[1]: https://workers.tools/html/

[2]: https://lit.dev/


Fresh looks inspired by Remix. Is that right? Not that there is anything wrong with that. But given that Remix does claim to be production ready, what makes Fresh better? I have been playing with Remix in a side project. I do like their simplicity vis-a-vis Nextjs. And the fact that it is all server rendered by design and not as a special case.


I don't think that fresh is better or worse; besides being pretty early in development, it is somewhat different, and it doesn't run on Node.js, but on Deno.


A bit off topic here, but can anyone identify which documentation tool this is? https://fresh.deno.dev/docs/getting-started/adding-interacti...

It's very clean and simple, I like it.


Looks like they're doing it bespoke:

https://github.com/lucacasonato/fresh/pull/108/files#diff-27...

Although the way it's implemented seems similar to mkdocs.


Hmm. I don't see enough of an advantage over Next.js to make the leap, but good to see some innovation in this area.


Did they copy the React API or the client side stuff is a wrapper on top of React?

https://fresh.deno.dev/docs/getting-started/adding-interacti...

It looks identical to React.


> The framework uses Preact and JSX for rendering and templating on both the server and the client.

On https://fresh.deno.dev/docs/introduction


Thanks I missed that!


There is a 6-KB, zero-build-step web framework, Petite Vue, by Evan You: https://github.com/vuejs/petite-vue. How is Fresh better than this?


Because it is newer. Obviously.


Petite Vue is less than one year old.


Deno was always cool. It seems like a revolution in webdev is happening right under our noses.


That animation stole my heart


Never use immature technologies. They just lead to frustration.


"Island based client hydration"

Is this lingo that I'm missing? Or a lark?


Replace island with widget. It means each part can render/rehydrate independently, I suppose


You can do this with multiple React roots on the page. It’s been done by many websites for years.
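
A minimal sketch of that pattern with the React 18 API (the element IDs and components are hypothetical):

  import { hydrateRoot } from "react-dom/client";
  import SearchBar from "./SearchBar";
  import Comments from "./Comments";

  // Two independent roots instead of one app-wide root: only these two
  // regions of the server-rendered page ever get hydrated.
  hydrateRoot(document.getElementById("search-bar")!, <SearchBar />);
  hydrateRoot(document.getElementById("comments")!, <Comments />);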



Thanks that's it.

I'm just balking a bit at the word 'hydration' entering into some kind of normative lexicon.

I think there is probably a better word for that.


Total nonsense for a new user, because the so-called documentation is very vague, with no step-by-step configuration guide nor instructions on how to use this "wonderful" thing.


Seems really fast loading, and easy to use for devs. Pretty nice!


This feels exactly the same as Astro SSR (which I’ve been using recently and is great by the way). I guess this validates their direction. Happy to see more frameworks like this.


How does this compare to Vue.js? Why is it better than existing platforms (Nuxt)? Really keen to know; not trying to be negative, and not sure this is even the right place to ask.


Server side rendering and pushing HTML to the client?


Yes


Finally, things are happening again on the frontend stack.


Is it an official Deno project? The GitHub repo isn't in Deno's organisation.

or is it like anyone can host anything on deno.dev, similar to js.org?


a deno.dev subdomain is provided to all deno deploy projects (https://deno.com/deploy)


The project will migrate to the denoland org soon.


Doesn't Remix also only ship JS for the interactive bits? I.e., you send over some HTML and then hydrate it with JavaScript, right?


If so, how does it play out in practice? Do people really make most things in a way that does not require hydration, or do they make simple text and forms a hydrated thing, that does not work without JS?


Everything I see around deno makes me really excited. I really have to rewrite one of my pet Typescript apps in it.


I tried six pages of getting started and still didn’t see a code snippet so I guess it’s still wip


Where does Fresh perform better than Rails?


Are React components compatible with fresh? That's the most important thing, isn't it?


Why were my comments removed?


How do you get the content from the headless CMS into the static sites if there is no build step?


Is fresh dependent on Deno? Can it also be run on vanilla Node?


It wouldn't run on Node; Deno loads dependencies directly from a URL (no need to install them), and a convention is to use a `deps.ts`[0] file. Also, with Node you would need quite a setup to run this project (if it could run at all): ts-node, webpack or similar, etc., while Deno comes with all that tooling included by default.

[0] https://github.com/lucacasonato/fresh/blob/main/src/server/d...
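
The deps.ts convention is just a single module that re-exports external dependencies, so the version pins live in one place (versions here are illustrative):

  // deps.ts
  export { serve } from "https://deno.land/std@0.140.0/http/server.ts";
  export { extname } from "https://deno.land/std@0.140.0/path/mod.ts";

  // elsewhere in the project:
  // import { serve } from "./deps.ts";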


How widely used is Deno?


It’s something to keep an eye on.

There are companies that bet on Deno/Deno-like runtimes. It’s a move toward having more options/control for sandboxing/embedding JS in server runtimes.

Think of how Lua is used. Heavy lifting in a compiled language with Lua on top to cover app specific logic.

Node is in a sense that, but with a stronger emphasis on being a general purpose runtime for JS.

Deno (and similar), gives you more options for embedding and restricting the JS side.

I think it’s worth keeping an eye on, because I think the JS world is soon entering a refinement/optimization phase. It’s settling and stabilizing towards a set of core ideas, and Deno might be part of that.


It looks like an amazing solution; how much does it cost?


It's open source, and runnable on any server that you can run Deno on (so any reasonably modern Linux, macOS, or Windows system). You can also host it on our managed edge runtime offering, Deno Deploy.


> Island based client hydration

Software development already has its own vocabulary, but I feel quite ignorant now.


Knowing what it is now, describing it that way makes total sense... but it only works for people after knowing what it is... kind of like an inside joke.

For anyone wondering what it means: it ships only the JS for select components (usually ones that have some sort of client-side interactivity, such as the incrementing counter on their demo page), so it only has to hydrate that part (where hydration is reconciling the client bundle execution result with what is actually on the page).

This is opposed to the standard way React works, where the entire JS used to render the page (even the static, non-interactive bits that are plain HTML) is shipped to the client and all your function components are run (just without actually inserting DOM elements again, if SSR is used).

React is currently developing server components that work with the streaming SSR introduced in React 18 and deliver this same functionality. Basically your app can then be composed of server components and client components (essentially what all current React components are), and only the JS for client components is actually sent to the browser and hydrated.


I'm not an Uncle Bob fan by any stretch of the imagination, but he wrote a somewhat famous article about this phenomenon: http://blog.cleancoder.com/uncle-bob/2014/06/20/MyLawn.html

With each new wave, we get people who either lack the time, the willpower, or the conditions to understand what came before them (the ground they're standing on), and this is how we end up rediscovering things every 2-3 years. It's much more tempting to just discard old knowledge and reinvent from scratch.

People were doing "SSR" with PHP on the server-side 30 years ago, and still do today (can you imagine just how much progress that platform has made, and how much collective knowledge has been developed around it?). It's just not hip anymore, because it wasn't invented within the past 36 months.


> People were doing "SSR" with PHP on the server-side 30 years ago, and still do today (can you imagine just how much progress that platform has made, and how much collective knowledge has been developed around it?). It's just not hip anymore, because it wasn't invented within the past 36 months.

Well, kind of. The thing is that the natural progression from server-side rendering/content-generation with only HTML and CSS, via Asynchronous JavaScript and XML (AJAX), to the current JS-first paradigm was a transition from a hypertext document system to "mobile code" (in the terms of Fielding's REST thesis)[1].

Keeping state/cache consistent works differently in the two paradigms (and they have different benefits/trade-offs).

There's a real tension between "application" and "document system". And a lot of what seems insane about contemporary web dev is when people take something that is clearly well solved as a "document system" (e.g. a blog/homepage) and implement it as an application (essentially writing half of a web browser in JS, with custom routing/addressing and widgets).

It's the reverse of the problem react et al tries to solve: writing an application using a document system (ie: a complex php app).

[1] Ed: In particular "Mobile Agent" - the last section in https://www.ics.uci.edu/~fielding/pubs/dissertation/net_arch...


> There's a real tension between "application" and "document system".

Not really. The web implements a "document as application" or "living document" model. Since the most rudimentary software is just printing static text (and static graphics), you get quite a bit of mileage with just HTML & CSS. The web scales nicely from this to fully interactive applications, and I think some of the tension you are perceiving comes from the fact that it's easy to disable or override features built into the browser, and often developers find a way to do something novel and, in doing so, break things like the clipboard.


I think you misread my comment. Obviously hypertext systems are "applications" for some definition of "application", but there are distinct types, constraints and architectures. MySQL and PHP are not my first choice for creating a multiplayer driving game, and even for an IRC client they're not my first choice. However, for an asynchronous message board, news site or mail client they might be fine.


I’ve been developing for the web since, well not quite 30 years ago (PHP isn’t even 30!), but let’s say 25 years ago. I’ve used so much different tech in those years that I have a pretty good sense of it, and I even contributed to the development of both PHP and React. And you are totally off base. Modern frameworks are far superior for building modern web apps than older serverside tech. Demands have changed and so we need new tools to address this.

Does this mean every single “new hotness” is truly a sea change? No, of course not; a lot of things are just fads. Actually I would put SSR slightly in that camp, for while it’s certainly beneficial it’s not really a game changer. It’s just a modest performance optimisation (in some cases!).


While I completely agree with you (and also think that the grandparent post was a bit inflammatory), the argument here is not that newer technologies aren't better. The argument is that the techniques themselves were "forgotten" and are now being rediscovered.

This is easily verifiable by people claiming that the term "Server Side Rendering" can't be retroactively applied, even though it unambiguously means what PHP used to do.

I personally enjoy both the old and new techniques, and I think it's a natural progression to use those new frameworks for also rendering on the server. Not only because of code-sharing, but because I think they're better than old templating engines.


> argument is that the techniques themselves were "forgotten" and are now being rediscovered

Which is a silly, lazy argument.

> by people claiming that the term "Server Side Rendering" can't be retroactively applied

Show me one such claim. The only people I've seen use this argument are the detractors. The ability for JS to run both client/server means we can do things with it that are not possible with PHP or other server-only languages, and frameworks like React are pushing that boundary. Pointing out that JS has these unique opportunities is not even remotely the same as "forgetting" and "rediscovering" SSR.


> "The ability for JS to run both client/server means we can do things with it that are not possible with PHP or other server-only languages, and frameworks like React are pushing that boundary. Pointing out that JS has these unique opportunities is not even remotely the same as "forgetting" and "rediscovering" SSR."

Can you provide even a single example for something we can do with JS running on both the client and the server, that was otherwise "impossible" (quoting you here), or very difficult, without this capability?


Doing "new" stuff and being more performant is really in the domain of the web browser, not in the javascript libraries that run on browser engines. Saying React doesn't enable "new" stuff in that regard isn't really fair because React isn't meant to do that. What React does do is allow for the development of maintainable complex applications such as most major web apps you use including Instagram, Facebook.com, Netflix.com, etc.


Single codebase for data object definitions, without an extra data definition language and code generation layer.

Which leads to things like... data validation logic that is exactly the same on client and server.

Same for serialization/deserialization code.
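
For example, something like this (a minimal sketch; the module and field names are made up) can be imported unchanged by both the browser bundle and the server:

    // shared/user.ts: one definition of the data object and its rules
    export interface User {
      email: string;
      age: number;
    }

    // runs identically in the browser (instant feedback) and on the
    // server (the validation you actually trust)
    export function validateUser(u: User): string[] {
      const errors: string[] = [];
      if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(u.email)) errors.push("invalid email");
      if (!Number.isInteger(u.age) || u.age < 0) errors.push("invalid age");
      return errors;
    }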


> Show me one such claim

In this thread: https://news.ycombinator.com/item?id=31723357, https://news.ycombinator.com/item?id=31723746, https://news.ycombinator.com/item?id=31724418

> The ability for JS to run both client/server means we can do things with it that are not possible with PHP or other server-only languages, and frameworks like React are pushing that boundary.

Sure it does, but this doesn't mean that PHP wasn't rendering on the server. The term still applies to what PHP was doing.

> Pointing out that JS has these unique opportunities is not even remotely the same as "forgetting" and "rediscovering" SSR.

Pointing out those unique opportunities is not a problem, and I agree with you on that. But it has absolutely nothing to do with the usage of SSR or the term SSR. Also, using the same language on server and client is older than modern JS frameworks, which definitely counts as something being "rediscovered". And yes, is definitely a cool thing, nobody is claiming the contrary.


SSR isn’t a technique though. Functional UI composition is a technique, and SSR is an optimization being brought back to this technique.


Yes it is. It's a method of achieving something or carrying something out. But not only does it not matter, I also don't think anyone else is interested in a stupid semantics discussion.


I agree with you SSR is an optimization but I wouldn’t call it a fad. Take a page that is static for 99% or even 100% of all users—it wouldn’t make sense to have distributed rendering at scale by each client. Instead SSR allows the server to render once and then just serve from the cache. Not only does it speed up the page for the user it saves energy. So I would classify it as a fundamental optimization here to stay.


> It's just not hip anymore, because it wasn't invented within the past 36 months.

Yes, technology moves in cycles but statements like this grossly generalize.

The amount of Javascript running on the web has exploded over the last decade plus to the point that frameworks like React were released to create more modern, interactive experiences because that's what business demanded. Then the industry started figuring out that the virtual DOM was kind of a scam from a performance standpoint and started looking for ways to achieve better performance and SEO, which brought us back to SSR and more advanced client/server architectures like this.


> Then the industry started figuring out that the virtual DOM was kind of a scam from a performance standpoint

Could you elaborate?

Not a frontend guy but my understanding was that the virtual DOM is what's necessary to enable a simplified model where you can code as if your entire view was rerendered from scratch when there's a change.

Mutating the DOM only where and when you need to in a handcrafted fashion is always going to be faster, it's just not scalable, I guess.

So, in that context the virtual DOM is not really a scam. Not sure in what sense it is? Was it claimed to be performing better than it does?


Your understanding is correct.

Virtual DOM never promised better performance than pre-rendered static HTML. It only achieves better performance (and better UX) than dumbly re-rendering the whole website whenever the underlying data changes.

The reason people are doing SSR with modern JS-frameworks is quite simple: some kinds of content aren't a good fit for SPAs (or maybe there are other constraints that favour rendering it all in the server), but it might still be desirable to use modern JS-frameworks. For example: you wouldn't implement a blog as an SPA, but it might still be desirable to use React to render everything.
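
For the blog example, that can be as little as (a sketch, not tied to any particular framework):

    // server-only React: render the page to a string, ship zero JS
    import { renderToString } from "react-dom/server";

    function Post(props: { title: string; body: string }) {
      return (
        <article>
          <h1>{props.title}</h1>
          <p>{props.body}</p>
        </article>
      );
    }

    // any HTTP handler can send this; the browser never downloads React
    const html = "<!doctype html>" + renderToString(<Post title="Hi" body="Not an SPA." />);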


Yeah, he should know. There's so much irony in this criticism coming from a guy who contributed as much as anyone to the entropy of dubious, cargo-cult terms and "principles" in the field.


Others have responded to the technical deficiencies in your comment.

I’ll just say it’s a bit rich to be telling the developers of React - “all this can be done with a server side rendered PHP”. The developers of React are well aware of what PHP can do, because they work at a company with one of the largest PHP (-ish) codebases in the world. Almost all pages on Facebook were completely server rendered, but they’re gradually moving towards React based web pages.

Please consider that the people making technical decisions might know what they’re doing. Please don’t condescend.


The funny thing is as far as I know they still use PHP to render React (via embedded javascript interpreter) on the server.


You have a source for that? The React code I've seen at Facebook is client rendered.


> Whenever they want to run JavaScript on server (e.g. rendering React on server) they just use V8 directly rather than using Node.

I forgot where I read it, but it was someone on the internet where they embed V8 and call from PHP (Hack).

https://hashnode.com/post/10-things-you-probably-didnt-know-...

It's true they don't really use SSR that widely, as Facebook.com doesn't really need it, but I'm pretty sure they do use streaming SSR, which offers UX benefits.


From your link

> Yes, Facebook uses SSR heavily. However, according to Lee, there are very few areas where they use React to render components on server. This was primarily a decision based on their server environment which is Hack.

Yeah this is my whole point. That person was attempting to school devs at Facebook on the benefits of server side rendering in PHP.

And whatever you said was quite niche.


I mean, sure, there were "web pages being rendered on the server" back in the day.

But that was the rule, and there was no name for it. Wikipedia's SSR page used to be called "Server Side Scripting", which is kind of a misnomer, and is very generic. That would also include serving JSON from the backend... whereas "rendering" is not only limited in scope but also unambiguous.

So, someone had to invent a term. The name "Server Side Rendering" is quite good actually, it describes what's actually happening rather than being some random buzzword.

(Of course, there will be people claiming "ackshually SSR is only when a Next.js-like-framework does it", but that was never agreed upon by the majority of devs)


You’re only looking at the outputs or end result and from that perspective React is just going to put out what is very similar to the “DHTML” stuff back then. At the end of the day everything is still just pushing HTML to browsers.

What React introduces is functional composition to application design (if you do it right at least), and the main benefit of this is in maintainability, scalability, and reusability. For most simple apps this benefit can be completely moot, which is why some can feel the setup or learning of a new paradigm for seemingly no benefit can be a regression.

In fact if you wanted to use React in the same way that PHP was used a decade ago, you still can, and it isn’t any more difficult to do so. You just end up with much less maintainable code. You can render React server side just like PHP, and either mount dynamic client side interactive components purely on the client side just like the “good old days,” or you can also render them server side and take advantage of client side hydration at the component level (like the previous method but you also get an initial server render). While this sounds complex, it really is just about as complex as making a server rendered PHP with some interactive JS bits, but just adding some new words to describe the process.

The goal of new React features like server components is to bring the maintainability of functional composition and get this optimization for free while still being able to define your UI in terms of reusable functions.


You can't compare JS SSR with PHP; SSR implies there is CSR, which is not the case with PHP.

Then having one language, no, one _shared code base_ for both client and server, which can automatically send only the minimum necessary over the wire? That's a _huge_ step forward.

And I say this as someone who was developing CGI scripts before PHP came along.


> "Then having one language, no, one _shared code base_ for both client and server, which automatically can only send the minimum necessary over the wire? That's a _huge_ step forward."

This is exactly the kind of non-nuanced, buzzwordy and handwavy advertising I was ranting about.

1. Why should having the client and server share the same codebase even be a goal in the first place? This should be a nuanced conversation with different trade-offs, there's no one-size-fits-all here. It's disingenuous to paint this as an ideal we all have to work towards on a whim.

2. I don't even know where's the innovation in being able to "send the minimum necessary over the wire". This is a pillar of decent software engineering. We've been severely regressing here due to the pervasive use of bloated abstraction layers, combined with a deep lack of understanding of how things work (ref. the Uncle Bob post I've linked earlier).


> 1. Why should having the client and server share the same codebase even be a goal in the first place?

Because, in building complex and dynamic web apps, the alternative is to repeat a lot of the same logic in both frontend and backend.

Of course if you are building a blog or a simple static page this is not useful, most of the new techniques are a response to more advanced requirements.

For instance, right now I'm building a "buy form" for internal use (by employees only) in a shop. There is a ton of domain logic used to select which fields to display, what validation logic to use, how to autocomplete some fields, etc. The system is a PHP/Laravel server with server-side rendering and Vue used only for some specific advanced components. Most of the logic inside the form has to be written two times: in PHP and in Vue. Most enum types are repeated (and must be kept in sync). Having one shared codebase would simplify the development A LOT.
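
To illustrate, in a shared codebase the enum and its rules could live in one hypothetical module that both the Vue form and the server import (a sketch; the names are invented):

    // shared/payment.ts: single source of truth for the enum
    export enum PaymentMethod {
      Card = "card",
      Invoice = "invoice",
      Cash = "cash",
    }

    // one place for the rule that decides which extra fields the form
    // shows and which fields the server requires on submit
    export function requiresBillingAddress(m: PaymentMethod): boolean {
      return m === PaymentMethod.Card || m === PaymentMethod.Invoice;
    }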


> "Most of the logic inside the form has to be written two times: in PHP and in vue"

Why?

That's just your choice of how to build your app, right? You could've avoided this by rendering templates on the server and sending static HTML to the client, keeping the business logic on the server.

> "Most enum types are repeated"

Here's just one of ten-thousand other battle-tested options you can use: https://github.com/apache/thrift/


> That's just your choice of how to build your app, right? You could've avoided this by rendering templates on the server and sending static HTML to the client, keeping the business logic on the server.

No, that's a requirement in most business cases, my comment stated 'complex and dynamic web apps'. Re-rendering the whole page every time the user checks a box or clicks a button just to replace some fields in a whole page is (a) terrible UX, (b) hard to track the state between page refreshes, (c) wrong practice and (d) bad performance.

> Here's just one of ten-thousand other battle-tested options you can use: https://github.com/apache/thrift/

Sure, I should set up a complex and huge dependency for just one of the many problems I highlighted. What a great idea.


> Re-rendering the whole page every time the user checks a box or clicks a button just to replace some fields in a whole page is (a) terrible UX, (b) hard to track the state between page refreshes, (c) wrong practice and (d) bad performance.

In practice, the whole page re-render-from-server is often much faster. Compare how long loading indicators last on full-fat GMail versus Basic HTML Gmail and its full-page reloads. Fastest "web app" I've seen in the past five years was pretty complex, and it re-rendered on every action, even menu navigation. The backend? PHP. I'm in the center of the US and it was served from somewhere in Asia (Singapore, IIRC?). Still the fastest thing I've seen in a long time.

I'm pretty sure the "it's for performance" argument has been dead since we (the industry) stopped sending XML and HTML snippets for direct injection, and started sending JSON and then doing a bunch of processing on it before finally generating some DOM nodes and rendering something. In the wild, what we're doing is killing performance, not aiding it. At least two of your points are simply wrong (c, d), another is highly debatable (a), leaving only one (b) and I'm not sure that's worth the performance cost, at least in many cases.


> No, that's a requirement in most business cases, my comment stated 'complex and dynamic web apps'. Re-rendering the whole page every time the user checks a box or clicks a button just to replace some fields in a whole page is (a) terrible UX, (b) hard to track the state between page refreshes, (c) wrong practice and (d) bad performance.

You can have the backend render partials and only send the affected part. This has been widely in use and battle-tested for about two decades in .NET WebForms, PJAX, Rails Turbolinks and other technologies.
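
The client-side glue for that approach is tiny; something like this (a sketch, with a made-up endpoint and selector):

    // PJAX-style partial update: the server renders the HTML fragment,
    // the client only swaps it into place
    async function refreshCart(): Promise<void> {
      const res = await fetch("/cart/partial"); // returns rendered HTML, not JSON
      const fragment = await res.text();
      document.querySelector("#cart")!.innerHTML = fragment;
    }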

Also, whether the app does this or that is completely orthogonal to how you build it. You don't need to share code with the backend.

> Sure, I should set up a complex and huge dependency for just one of the many problems I highlighted. What a great idea.

Except for cases where the team doesn't know anything other than JS, using this is significantly simpler than forcing the whole backend to be in JS. Also, there are several other options.


> You can have the backend render partials and only send the affected part. This has been widely in use and battle-tested for about two decades in .NET WebForms, PJAX, Rails Turbolinks and other technologies.

No, rendering partials is not a solution once you have a moderately complex app.

Example: The user submits a form to change an entity, you need to send a partial back for the successfully submitted form, but you also need to send partials back for potential 2-3 other places that the entity is displayed on the page, even if they are not displayed on certain pages.

Just tracking and updating them whenever they change is a pain in the ass, not to mention the increased processing/bandwidth for no reason.


The solution to this completely hypothetical and unrealistic problem is to just not have too many partials to begin with. Which is the reality of 100% of the apps made with the libraries/frameworks I mentioned: WebForms, PJAX and Turbolinks.


Ok then what benefit is there over a regular page load?


Faster response to user actions, smaller payload, not reloading the whole page, keeping scrolling position, keeping focus and keyboard cursor position when possible, allowing transitions and animations, no need to change browser history unless necessary, very minimal javascript necessary, allowing user to disable javascript in some cases.

Just because this is a counterexample to something that you deem good, it doesn't mean it has to be absolute shit. The world is not black and white. I would suggest at least doing some research about how things work before criticising them. They might surprise you.


> Except for cases where the team doesn't know anything other than JS, using this is significantly simpler than forcing the whole backend to be in JS. Also, there are several other options.

Of course you shouldn't use a full-js stack if your team of developers doesn't know JS, no one is arguing that because it doesn't make sense.


I never claimed you argued for that. I'm simply stating the cost of each solution. Thrift is definitely much simpler in practice in all cases, except the case I mentioned as being an exception.


> "Sure, I should setup a complex and huge dependency for just one of the many problems I highlighted. What a great idea"

This too, demonstrates the point I raised earlier.

Instead of using a mature, widely adopted and battle-tested system which allows you to efficiently share code across multiple languages without introducing runtime hits - you instead discard it as some "huge dependency" and insist on having JS everywhere, using the latest hype of the week, consequences be damned.

Why actually spend the time to solve a problem in a mature way, when I can just add this week's shiniest NPM package to imports.json?


> Why?

To take just one of many examples, so that you can do the same validation client side and the server side. You have to do the validation on the server because the client can't be trusted, but if you only do it on the server you have to do a full page load to validate any of the inputs, give feedback, or vary the form fields displayed.

e.g. how many times have you filled out a long and complicated form, pressed submit, waited several seconds, and then get dropped back at the same form where you have to hunt for the error message, change the requested field and try again. And heaven help you if you got multiple fields wrong, where the data from one informs the validation of another. Client-side logic can make this process much lower friction.
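
A sketch of the client half (assuming a shared validateUser module that the server also imports; the two helpers are stand-ins):

    // same rules as the server, but feedback happens before submit
    import { validateUser, User } from "./shared/user";

    declare function readFormFields(): User;                  // stub: collect values
    declare function showInlineErrors(errs: string[]): void;  // stub: mark bad fields

    document.querySelector("form")!.addEventListener("submit", (e) => {
      const errors = validateUser(readFormFields());
      if (errors.length > 0) {
        e.preventDefault();       // keep the user on the filled-in form
        showInlineErrors(errors); // no hunting for a server error message
      }
    });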


> That's just your choice of how to build your app, right? You could've avoided this by rendering templates on the server and sending static HTML to the client, keeping the business logic on the server.

Exactly. This way of working is such a breeze. PHP does the logic, the state is firmly in the database, and I'm from a time when people talking about "frontend" meant HTML and CSS. Occasionally some plain JS, and I'm good.

Years ago I was out of the webdev field for some time, and I must admit that HTML and CSS alone have made big strides.


> and I'm from a time when people talking about "frontend" meant HTML and CSS

When that is the case, the stack you are describing is just perfect even nowadays.

The problem is, in most cases that is just not the case anymore.


I guess that's it. It's the difference between the "application" and "document system" that some other commenter talked about. I guess the wisdom comes down to knowing which one to choose in what situation.

I myself am looking to find the edges of building a web application with the "document system". Clicks and requests for a full page load don't matter that much if you're able to keep your app simple. Which is also defined by context, not only by programming skills. Certain situations or applications are just not suited for the "document system".


> This is exactly the kind of non-nuanced, buzzwordy and handwavy advertising I was ranting about.

I spent four years working on a Meteor.js app. Meteor's main appeal is isomorphic code and a high degree of reactivity. As the app evolved we replaced the in-built MongoDB with GraphQL. It was nice to have one language and GraphQL was very useful. Eventually this kind of model may be the future, but the next project I did, instead of Meteor, I went with a more conventional Vue front end with REST & websocket api for the backend. The reason for the change is that while Node is really nice, Django and Go get predictable outcomes with less time spend on tooling, patching and updates. With a small team, losing lots of hours to "our build broke because an upstream dependency changed a function signature" is not a good thing.


>SSR implies there is CSR, which is not the case with PHP.

Wait a minute, so now SSR is specific to JS, and implies there's JS CSR?


That definition is definitely not widely accepted, though some people seem to believe it is. So I guess it's open for debate.


I've been doing both, or rather different kinds of combinations, from almost full PHP, to PHP with jQuery/Backbone/Angular, and for a while now React SPAs and Node-based stacks including Next.js. Also plain dependency-free JS and htmx. All depending on the scope and requirements of the particular projects; this is in a professional context. For side projects I've built stuff with quite a few other languages.

My conclusion so far is that most of the criticism towards JS based solutions is largely outdated if you pick your tools and libs well. The leverage of something like Nextjs and similar is pretty significant over traditional SSR, even for cases where the latter made more sense a while ago, increasingly so. I don't think PHP is going anywhere in the near future, but I see fewer and fewer reasons to use it at all.

1. When people talk about isomorphic code for frontend related stuff, they often pick form validation as the example. But this is just one of many things. It's also the tooling, testing, types and other integrations that you miss if your frontend logic is spread across language boundaries.

2. The unoptimized performance difference between a PHP and a Node/Next application is quite significant, as PHP needs to recreate the whole application state with every request.

3. A frontend without JS, or with "minimal" JS, is a pure luxury that you almost never get to have, and if you do, it comes with its own complexities.

4. Websockets and other features are a pain to use (if you can use them at all) in PHP.

5. Development is faster and more responsive with a React/Nextjs/etc. based implementation. The feedback loops are faster, tooling and libraries are more integrated.

6. There are quite a few good libraries popping up in recent years for JS that give you more leverage than what I'm used to from PHP.

7. Commoditized hosting has been one of PHP's strengths, but even that is being outcompeted slowly but steadily.

8. By default there are things you cannot do, or can do only with additional (unnecessary) effort, if you split your frontend logic into two places rather than having a single, comprehensive codebase. Frameworks and libraries like Next, Remix, this one and others are leveraging that. Yes, it gets more complex, but you also get more optimizations.

Please note that I don't particularly like either JS or PHP. I think both languages and ecosystems are quite terrible in their own ways and have to be tamed by pragmatic developers. So no emotional attachment there.


Cost is a big reason in many countries. PHP is still the cheaper option for dynamic websites. Shared webhosting costs peanuts.


That's true, but Node.js hosting is slowly putting out more and more competition that is free or similarly cheap. This is still one area where PHP shines (indirectly). The other one is legacy/ecosystem: there are still quite a few hairy things implemented in PHP that the JS ecosystem misses and that you cannot possibly amortize in a single, typical web project.


[flagged]


While vanilla PHP was definitely mostly just doing text substitution and had injection issues, there were other languages and template engines that fixed this with more sensible defaults, where raw-HTML interpolation is opt-in.

Also, keep in mind that injection attacks are definitely still possible with modern frameworks. The same old techniques apply when you're dealing with data originating from users. Sure, the raw-html interpolation is not a default anymore, but like I said that's also present in old backend tech.


The comparison I am responding to is original PHP ("30 years ago", which itself is garbage of course, given the language did not exist 30 years ago), not the new goalposts you have planted.


I'm not disputing anywhere that the original PHP has issues, and in fact I'm acknowledging that in the beginning of the first paragraph. Adding additional information and context is not moving goalposts.

What I'm actually reacting to is the rude provocation at the second paragraph that grandparent must "Look at the actual tech here. Do a tutorial. Discover what the difference is".

They probably don't have to, because this specific advantage is not novel.

...not that this deserved any answer, considering the original reply is now flagged/dead. Maybe someone should tell you to check the website guidelines.


My reply was not intended to be rude or provocative.

"We did this ages ago" is said so often, rudely and in ignorance, as it was here. And while there are often parallels to support the claim, the important distinctions are missed in ways that are best taught with hands on experience. Hence the suggestion to actually learn the new tools. If course, this is arguing in the internet with a curmudgeon, which I should not have bothered doing. I just don't want to see hn become slashdot, where this poo pooing of new tech happened in every thread.


Reading about SSR feels like we went full circle.


We did… in arguably the dumbest way possible… it’s getting slightly better now as frameworks innovate on various ways to improve SSR… but when SSR first started being a thing it was literally stuff like “how do I run an entire desktop browser, headless, on my web server (or wedged in between with weird reverse-proxy setups…) to pre-render the page HTML using the client-side JavaScript of my single-page app, because I want to do things that my client-side-only framework wasn’t designed to do”.

It wasn’t universally that bad but at its worst it was totally this bad.


My understanding is that you're using a single codebase for rendering both on client and on server. And, of course, after loading server-rendered page, client code continues to work in the browser as expected.

So it's kind of best of both worlds and it makes perfect sense.


Full circle makes it sound like we’re doing it only once. :)

I like to think of it as a Pendulum! Back and forth it goes.


it would be interesting to understand what is different now. Because IMHO it's never 100% full circle. It looks like we're back at the same point, but there are usually crucial differences in the implementation details that makes it quite different.


>Because IMHO it's never 100% full circle. It looks like we're back at the same point, but there are usually crucial differences in the implementation details that makes it quite different.

Your intuition is correct.

The "SSR" looks confusing because people are using that in 2 different ways:

(1) Server-Side-Rendering means using Client-Side frameworks like React on the server : https://en.wikipedia.org/wiki/Server-side_scripting#Server-s...

...or...

(2) Server-Side-Rendering means _any_ dynamic page generation on the server side regardless of programming language or framework. That has been done since the beginning of the web with Perl+CGI, PHP, ASP.NET, etc. For this mindset, SSR means "we've gone full circle" because it ignores the Javascript evolution from clients to servers.

The 2 groups are talking past each other. For people using "SSR" definition (1), it doesn't look "full circle" because a language like PHP never ran in browsers, so there was never a client-side-to-server-side re-use of the rendering code. "SSR" is probably a bad name to describe that trend.


Correct, but it’s even more than that: (1) is more like using the client-side code on the server AND the client at the same time, significantly reducing duplication of effort and offering the best of both worlds.

So you have one view library (and routing/models etc) and the innovation is that a) it very intelligently only delivers the bare minimum JS of what’s needed to make a fully interactive front end app (which is why you needed React/Vue in the first place due to limitations of pure server side code) plus b) using the same patterns and awesome libraries everywhere on your site, not just specific interactive components but blog, about pages, admin panels, etc.

So you’re not doing python jinja or Ruby ERB templates/helpers/routes/etc on the server mixed with tons of duplication with React/Vue on the client.


> The 2 groups are talking past each other. For people using "SSR" definition (1), it doesn't look "full circle" because a language like PHP never ran in browsers, so there was never a client-side-to-server-side re-use of the rendering code.

Exactly. As a PHP guy I'm not thinking in terms of SSR or CSR, but backend and frontend. PHP never had to do things on the client side, that is what HTML and CSS are for (and some JS). JS apparently has to go from client side to now also server side.

I don't think it's necessarily an advantage to have one and the same language doing both the backend and frontend. I also don't know of any disadvantages; I have no experience with JS frameworks.

But everybody is hitting some very interesting points in this whole comment thread. You have some smart remarks about talking past each other. And I also think your observation of PHP never having to run in the browser is a sharp observation. Thx.


Yeah, I guess the tooling around it got way better. No need to download scripts from hundreds of websites. Just npm install and off you go.

I'm just not a huge fan of SPA in most cases. I had to commute a few years ago everyday and it was near impossible to read any news website, because their stupid React or Angular SPA toy just decided to stop loading. Mobile reception isn't that great in Germany, when you're sitting in a train. The weirdest thing was to watch the content load and then like 10 seconds later going blank, because the connection timed out.


> React is currently developing server rendered components that works with streaming SSR introduced in React 18 that delivers this same functionality. Basically your app can then be composed of server components and client components (essentially what all current react components are), and only the JS for client components is actually sent to the browser and hydrated.

Isn’t the difference here that with React Server Components you’re still fully rendering client-side, but you can ship the data alongside the JS? Whereas with Astro / islands of interactivity / partial hydration you ship actual HTML and then, after it renders, you make interactive (hydrate) only the relevant parts?


> it ships only the JS for select components

So, even if the page has a single React component, does it ship the entire react + react-dom bundle?


Instead of sending a rendering routine which then fetches data and renders everything on an empty page, they prerender a part of a template (the constant part) and send it along with a fetch-data routine which only fills in missing values later. That’s why you sometimes see a form whose values are shaded for a while. This makes them think that you’re less annoyed because at least something is visible quicker.

Island-based probably means that hydration is not whole-page but granular, making parts unshade at different times, to your enjoyment.
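
In Fresh's islands model that looks roughly like this (paraphrasing their docs; details may be off):

    // islands/Counter.tsx: the only code shipped to and hydrated in the browser
    import { useState } from "preact/hooks";

    export default function Counter() {
      const [n, setN] = useState(0);
      return <button onClick={() => setN(n + 1)}>clicked {n} times</button>;
    }

    // routes/index.tsx: rendered once on the server, never re-run client-side
    import Counter from "../islands/Counter.tsx";

    export default function Home() {
      return (
        <main>
          <h1>Mostly static page</h1>
          <Counter /> {/* only this part "unshades" into interactivity */}
        </main>
      );
    }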


Isn't this how webpages used to work with jquery? Are you telling me everything just went full circle back?


Yes! But with jQuery you had to manually separate out the interactive elements and ship those to the client. With newer frameworks (Svelte, Fresh), the framework itself automatically parses out the interactive elements and ships them. There's no context switch between interactive and non-interactive either, since all rendering is done with the same language and framework, and even the same file.

I think this is great.

I didn't hate jQuery. I just hated the context switch between regular HTML templates, and creating a jQuery component. If the entire frontend can be treated uniformly, that's a huge plus.


I guess the lack of context switch would be easier on the brain. And the actual split into static + dynamic bits, unlike today's everything-dynamic approach, is more of a "how it works" similarity, while the coding stays coherent, unlike with jQuery.


In the same way that C was "full circle" from assembly because it had loops.


Yep, we’re reinventing all the classic server side templating tech, but with “front end frameworks”… which I suppose is actually making them “full stack frameworks” but that’s beside the point.


I wouldn't call it reinventing.

We're just using newer client-side frameworks to also render things on the server, because for lots of people there is a clear advantage in using such frameworks rather than old templating engines. Advantages often include more ergonomic APIs and code-sharing.


> which I suppose is actually making them “full stack frameworks” but that’s beside the point.

No that _is_ the point.


> This makes them think that you’re less annoyed because at least something is visible quicker.

Cynicism detected!


This page does a good job of explaining it: https://www.patterns.dev/posts/islands-architecture/


Stands to reason that you’d need some hydration on an island.


Some of the "modern" terminology in software development makes me want to puke. "Hydration" is one of those examples. And of course ingress / egress rate sounds way more important than read / write speed / rate.


Why? This is just what language does - it's like a constantly evolving entropy code. Jargon emerges to concisely describe unique concepts. Hydration is a great example: it's important in the process, so it gets talked about a lot, but it's distinct from rendering, so it gets its own term.

Likewise, ingress/egress is not quite the same as read/write. It gets its own terminology because it's distinct.


Egress/ingress rate - rate of the traffic that exits / enters an entity or a network.

I see zero difference with entity/network read/write rate.

As for hydration - sounds highly inappropriate to me but whatever warms the cockles of their hearts.


Often read/write are only used when talking about storage.

Ingress/egress are also important in networking because they communicate direction -- lots of pipes are asymmetric, and a firewall allowing all egress is very different from allowing all ingress.

Ingress/egress also measure total data, when the gadget might only "read" a fraction of the traffic. For example, if a hardware-accelerated router makes decisions based on just a few fields of an IP packet, did it "read" the whole packet?


I bet it is some old stuff re-branded. Common theme in IT.


It only makes sense to the con artist that put it on their resume first.

Let me try:

Hair dry volcano hydration off the cliff only, no added sugar.


I thought by now enough people have framework fatigue that no one bothers with yet another way to develop a website.

Is there a reason someone like me should care? Does it improve anything by at least an order of magnitude?


> Island based client hydration for maximum interactivity.

> Zero runtime overhead: no JS is shipped to the client by default.

Translation: the UX most people on HN complaining about JS want. It’s just the interactive parts, none of the treating a web page like it’s an app, but devs familiar with developing sites that way can use it that way without jamming MBs of JS down users’ browsers.


I think the reason to be interested in this is because it’s deno specific. I’m not up to date with the deno landscape but I believe this is pretty novel for deno.


to me it seems to be promising a more substantial improvement than most frameworks + the deno factor definitely makes it stand out. minimum js needed for interactivity being sent to the client is much better than what we do currently


Sometimes I wonder how much time humanity as a species has collectively spent first inventing and then trying to solve the problem of "making a website".


If it weren't websites it would have been native apps.

Microsoft alone has made more proprietary native app frameworks for Windows in the last 15 years than hipster JavaScript developers have had to learn new frameworks for work.


Far less than humanity has spent on "convince people to buy product X" when X has many alternatives that are perfectly serviceable...or even superior.


The funny thing to me is that "we" went from server-rendering to client-rendering to server-rendering again.


Until technology becomes transparent in our interaction with reality xD


[flagged]


Even if you wanna be cynical, this is a really boring and overplayed take in my opinion. Most frameworks are indeed kind of bloated for running useless hello world demos. Most C compilers give you some kilobytes of code that isn’t necessary for hello world either, even worse for other respected and modern languages like Rust and Go. It can be forgiven if you consider that most of these things are not tuned for optimal hello world.


This comment is the overplayed take that because a lot of frameworks are heavy, this should be acceptable. As soon as something is classified as a framework, it seems OK for it to be >200KB.

If you take Tailwind CSS for example, when correctly using their CLI tool, it only includes the size of the css classes actually used, keeping it to a minimum, when compared to people just doing a standard import of the entire library. I like this mentality because it's offering the ability to be very lightweight, or as large as the 'framework' it offers. NextJS offers this as part of their build process, but not sure how big their assets are with it for a simple usecase.


The word framework doesn't really actually mean anything. People have a feel for it, but there is no concrete "this is a framework, not a library." However, I think that for most people, the criteria isn't actually related to how large the software is, but rather the feeling of using it. When you use a non-framework library, it feels like using a wrench or a drill; it's a tool. When you use a framework, it feels like you're writing code inside it, not using it. Frameworks can be small. The term "microframework" exists for this exact reason.

Semantics aside, the existence of things with different philosophies doesn't immediately invalidate everything that doesn't give you the same tradeoffs. For one thing, Tailwind deals with declarative CSS output, not imperative modular code. I'm not saying that makes it stupid or anything, but it's very apples and oranges. There are very few JS libraries or frameworks that can offer starting-from-zero KiB JS; maybe Svelte comes close? Ironically, if we're talking about client side bundles, it seems as though Fresh actually does start with 0 KiB, as it does not default to shipping JS code to the client at all.

This doesn't feel like a rational discussion at all. It feels like it's just necessary to come up with a cynical take because there's a new JavaScript thing. In a few weeks there could be some Rust FRP webassembly UI thing that has a 1.2 MiB hello world and hardly anyone will care.


Is this satire? I honestly can't tell, but it made me laugh.

If not, why in the world would a hello-world demo need optimization? By definition it's supposed to be the simplest thing you can build to showcase the features of what you're using. If the simplest project has the properties GP mentions, then it's not a good demo.


[flagged]


The difference is that it doesn't take 600ms for a Rust binary to render "Hello world!" to stdout. In the context of CLI apps, binary size is irrelevant to the functionality of the demo. In the context of web apps, bundle size and speed _does_ matter, so 108KiB and 600ms are very relevant data points.

Besides, I'm pretty sure that in this framework's case the final bundle can also be trivially stripped, but would that also reduce the render time? It's difficult to tell, and maybe those things should be mentioned in the demo. (Sidenote: I haven't confirmed whether what 0des posted is true or not; I'm just going by the fact that you're defending it.)


I know this comparison isn’t exact, but I didn’t attempt to engage with the supposed latency measure. Those kinds of measurements are really variable and hard to pin down. You’d need percentiles to make even an uneducated judgement, not a single random sample.

Still… a 100KiB bundle is not that bad of a starting point. It objectively isn’t, if you measure it against other popular frameworks.


I'm in Australia, and I see 18ms (with cache disabled) for the main page, then another 14 to 30ms for the js bundles (looks like some are done side by side).

The image of the dinosaur drinking takes 386ms, and the favicon is 132ms.

So for text and a button to increment/decrement specifically, it seems quite quick.


If you want to render random text and a button you shouldn't be using such frameworks at all. For a full-fledged web application a hundred kilobytes overhead isn't as crazy as you are making it seem.


I find it funny that the real egregious stat is the 600ms render time, but every retort to GP's comment is about the size.


[flagged]


I'm really sad that this sort of attitude is common in Hacker News. I am not a huge fan of JS, but what I hate about this is not that it's critical of JS, but that it's clearly just a knee-jerk cynical reaction. This framework in question is not doing what PHP/Ruby frameworks were doing. Whether what it's doing is a good idea or not is neither here nor there, it's just simply that you aren't understanding what it is that it's doing, and yet still criticizing it as if you do.

We don't need this circus every time a new JS thing is released.


The fact that you think it’s materially different is amusing. The only “magic” here is injecting JS shims to handle bidirectional syncs for certain components (“islands” in their parlance). That is also something we’ve had for a very long time, though admittedly it was kludgy as hell two decades ago (ajax polling, SSE, long-polling, comet, etc).


Early PHP and Ruby frameworks didn't provide anything to help you write client-side interactive code, so you were on your own there and had to bring your own tools.


Yeah, I remember how I used to have my users manually dll inject a plugin into Internet Explorer so I could make the buttons go brrr.

I always wished there was something convenient, like an html tag that made it easier to add interactivity, but I never found it.

I can’t believe we got anything done back in those days.


Take me down with the ship too, this cracked me up.


Did Ruby run on CDNs? Didn’t think so. That’s what “on the edge” means.


Well, we didn’t have CDNs, but we did have geolocated servers. And yeah, we’d run Ruby on ‘em sometimes.


If it's in this direction, at least add jquery.


[flagged]


That’s because it literally is not conventional templates. It is, from what I gather, Preact components, which can be rendered on the server and client isomorphically. The same component code runs on both sides. That’s not templates.
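
Roughly like this (a sketch of the pattern, not Fresh’s exact internals):

    // Greeting.tsx: one Preact component, two environments
    export function Greeting(props: { name: string }) {
      return <p>Hello, {props.name}!</p>;
    }

    // on the server, preact-render-to-string produces the initial HTML:
    //   render(<Greeting name="HN" />)  // -> "<p>Hello, HN!</p>"
    // in the browser, preact's hydrate() attaches to that same markup:
    //   hydrate(<Greeting name="HN" />, document.getElementById("root")!)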


> which can be rendered on the server and client isomorphically. The same component code runs on both sides.

You mean portably. If the same code runs in multiple environments, it's running portably.


Isomorphic is the accepted term for this pattern.

https://en.wikipedia.org/wiki/Isomorphic_JavaScript


That is a weird term for two reasons. First, isomorphism is an invertible structure-preserving mapping between two structures. While the trivial case of that mapping being an identity is technically also isomorphism, it renders the relationship between the two structures (here code bases for different environments) into an identity as well. At that point the multiple code bases you're talking about are identical, not just isomorphic. It's like calling humans "vertebrates". While technically correct, if you're talking for example about the consequences of wars, do you talk about the loss of human lives, or the loss of vertebrate lives? I imagine it's not the latter.

And second of course, "portable code" had been the accepted term for this "pattern" (if you can even call it this way) for decades already. I'm not quite sure why one would feel the need to randomly rename things that have already had perfectly functional names.


The word portable is too general to be meaningful here. If I said my JS library was “portable” most would assume it meant that it ran across multiple OSes under Node.JS. If I said it was isomorphic, I need no further context, because it’s a well-understood bit of jargon in that context.

Personally, I don’t really think it’s useful to dissect the semantics of every bit of jargon; plenty of it is semantically imperfect, like the word “factoid” used to describe factual information, or figurative uses of “literally”. It’s just a word being used to describe a somewhat specific concept in the context of a niche. You would still need to be initiated into what it means in context even if it were semantically correct. The helpful thing here is that a simple search like “isomorphic js” gets you up to speed almost immediately. OTOH, if I search for “portable” libraries on NPM, it’s all “cross platform” stuff. If this term didn't exist, that would make it hard to find libraries and frameworks satisfying this niche.

It’s neither here nor there, my chief annoyance with this general thread was people claiming that “this is just like PHP” or “this is just like Rails” and the problem is, it’s not really like that at all. It’s not like older attempts at “live” server code, nor is preact components really much like templating. The different approaches have their merits, and achieve similar end goals, but the developer experience is starkly different in ways that are maybe difficult to understand from simple examples, but absolutely definitive in real world apps. Don’t look at me, though. I write all of my backends in Go.


This is needlessly condescending.


It is true though. These were exactly my thoughts when I was reading the page. Well, not exactly, because I instead said "Duh".


Ryan Dahl is at it again?


From what I can tell, Ryan Dahl doesn't have anything to do with this (other than it using Deno). At the very least, he isn't a contributor to the repo.


Deno is repeating a lot of the same things from Node, but it's in TypeScript now, and there's some Rust involved, so that makes it good. Wait till you're this deep into your career and people are still hammering square pegs into the same well-worn round holes and you'll be the same way.


Did you respond to the wrong comment? I don't see any path from what I said that leads to what you said.


[flagged]


Gotcha. So you did respond to the wrong comment. Thanks for clarifying.

edit: 0des edited his previous comment to be much less hostile after I wrote this one (without indicating he did so), and then told me to "settle down sport" now that my comment seems a little aggressive. Really bad etiquette.



