This could be the killer app that Deno needs. As of now there is no specific reason for most people to try Deno, since Node with its plethora of frameworks works and everyone can choose a stack to their taste. But if they can make Fresh sufficiently better (more cohesive and/or performant on both the client and server side) and more feature-rich on the backend (ORM + workers/queuing), with instant deploys, then more people might consider switching.
I had no joy trying to get a Node JIRA library to run under Deno with the various platforms. So there's still some way to go, though it didn't seem far off. IIRC axios was the problem.
In 2007 I was writing websites that were mostly rendered on a server, with JavaScript used selectively in some places for interactivity.
I remember writing at the time that moving everything to the client was solving one problem (poor encapsulation of JavaScript-powered front-end components) with another (needlessly rendering everything on the client).
The industry moved toward CSR anyway, and I had to learn it to continue having a job. And now, here we are.
Because this isn't the same as what you were writing in 2007. This is an improvement on that. There is some degree of chasing what is new, but I think it is more so that all of this is a process of learning which causes the focus to oscillate between different extremes.
I'm guessing it's a bit of a pendulum, this is seen in so many areas of human society. We move in a direction and someone says "we ought to move in the complete opposite way" and so we do, then after some time it is clear this new approach isn't working out and we come back to the other side of the pendulum.
With time, of course, it is likely that we end up in an equilibrium at the middle. In reality, we don't know what we like and what we don't like until we experience it.
While I know it's not trendy, I'm still building plenty of both informational websites using Umbraco/.NET as well as web applications, using progressive enhancement. On occasion a client will have a concern that requires doing CSR (such as adding an unlimited number of "rows" in a virtual table or similar where a round-trip would prevent them from getting work done in a timely fashion), but for the most part the websites and applications I build server-rendered are very fast and performant. I still use front-end tools for minification/bundling/transpilation, along with caching, to get fast load and execution times.
As posted in a similar HN thread: thesis, antithesis, synthesis.
An idea comes up. It solves problems, but it also has shortcomings. An antithesis that fixes the shortcomings comes up and is adopted. Turns out it also has shortcomings of its own. The synthesis merges them together.
And repeat.
(these are the ideas from some thinker whose name I forget)
I just thought of a possible reason. Maybe because the internet is generally much faster and lower latency these days? So rendering on a server can provide a good UX today, but it couldn't in 2007? Was "the edge" such a prominent concept back then? I honestly don't know if this is a good explanation or not, I have no data at hand to prove it, but I think it makes sense.
The "edge" didn't really exist at the time, along with concepts like cloud, serverless. I seem to remember that even CDN was an evolving architecture idea at the time.
AWS et al had not yet turned web servers into a commodity, so it wasn't feasible to "just deploy the software to multiple regions" to improve latency.
Indeed, I love how the front-end world is following the wheel back to server-side. It's like how all the cool kids are dressing like it's the 1980s/90s.
EDIT: I'm actually a real fan of Deno, but that was never because of its security promises. Security is both technical and cultural, and I think cases like this suggest that while the technical side was always shaky, the cultural side is just as weak. If first-party material promotes the idea of running scripts with `-A`, then that's the direction the community will be led.
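For what it's worth, the granular flags do exist; the cultural problem is that `-A` is what gets promoted. A sketch of scoping permissions to exactly what a script touches (file and host names made up):

    // fetch_report.ts: reads one local file, talks to one host.
    const config = JSON.parse(await Deno.readTextFile("./config.json"));
    const res = await fetch(`https://api.example.com/report?key=${config.key}`);
    console.log(await res.text());

    // Broad (what first-party material often shows):
    //   deno run -A fetch_report.ts
    // Scoped (what the security model is actually for):
    //   deno run --allow-read=./config.json --allow-net=api.example.com fetch_report.ts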
Security makes more sense for Deno when you host your app in serverless V8 hosting environments like Deno Deploy. They don't even have a way to specify all permissions. V8 isolates are the new cool thing in serverless hosting, especially with WASM providing a way to run C/Rust code on V8. We have to see how this will pan out compared to containers.
Isn't the hosting environment providing the security in that case? I can run Nodejs code on AWS Lambda, and CloudFlare also has their own edge runtime environment which is based on v8 isolates but isn't Deno.
What I meant was that you could emulate the locked-down environment that Deno or Cloudflare provide in your local setup during development, with the same granularity of permissions. I thought Lambda was a full container running the app; did they add an isolate environment to it?
Oh I see what you're saying. Yes, while I'm not an expert I suppose Lambdas are less isolated than CF workers or Deno Deploy, while being pragmatically more isolated than e.g. a regular VM.
> But client-side rendering is expensive; the framework often will ship hundreds of kilobytes of client-side JavaScript to users on each request. These JS bundles often do little more than rendering static content that could just as well have been served as plain HTML.
server-side rendering for each request in V8 might be more expensive to host than client side rendering in V8?
for a properly-chunked app, hopefully most requests for the larger vendor bundles are cached/JITed, and each request doesn't actually download much extra JS.
there are also many frameworks faster than React (Solid, Marko, etc)
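to make the "properly-chunked" point concrete, a sketch (module and element names made up): the heavy chunk is only fetched on first use, then cached like any other asset.

    // The rare/heavy feature lives in its own chunk via dynamic import.
    const button = document.querySelector<HTMLButtonElement>("#open-editor")!;
    button.addEventListener("click", async () => {
      // Fetched once on first click, served from cache afterwards.
      const { mountEditor } = await import("./editor.js");
      mountEditor(document.querySelector("#editor-root")!);
    });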
To add to the list: Svelte is also faster and lighter for most cases. And with SvelteKit the server sends only the bare minimum JS to make the page dynamic while being SEO friendly depending on the adapter of choice.
The cost of server-side rendering feels pretty trivial to me these days.
Fly.io will sell you a 256MB of RAM container for $1.94/month, which is perfectly capable of server-side rendering dozens (maybe even hundreds if you write efficient code) of requests per second.
I'm sure you can get even better deals if you shop around.
Is that kind of a container less capable than just about any client (except embedded of course)? Sorry if it's a stupid question. I'm not familiar with web SSR.
Yes, and if you've got a fairly complex website it's unlikely it could serve more than a couple dozen rendered requests per second. If you're not rendering and instead serving them from a cache, that would be different of course.
The question makes no sense and your answer is wildly misleading and inaccurate.
A server is not 'rendering' a website the same way a client does.
A couple of dozen requests a second? In Drupal in dev mode, maybe, but even then... I feel like we need a bit of a knowledge reset before spouting supposed info about CSR vs. SSR.
> A server is not 'rendering' a website the same way a client does.
While this is technically true, there is still "rendering" happening on the server when you JIT-compile JSX/TSX templates and transpile everything from your ES6 modules and includes to HTML/CSS/JS that a browser can parse.
In a sense you're splitting the load between client and server because you're right some stuff always has to run client side (like the actual layout engine work).
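Concretely, the server-side half is just turning a component tree into a string; a minimal sketch, assuming Preact on Deno (import URLs illustrative):

    import { h } from "https://esm.sh/preact";
    import { render } from "https://esm.sh/preact-render-to-string";

    function Greeting({ name }: { name: string }) {
      return h("p", null, `Hello, ${name}!`);
    }

    // The server "renders" to a string; the browser still does the actual
    // layout and paint work when it parses this HTML.
    console.log(render(h(Greeting, { name: "world" })));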
Very much depends on the code you are running. If you're running something like React server-side rendering it may not be enough resources - in which case you'll probably want an instance that costs $10-$50/month instead.
Most server-side frameworks I've worked with will perform just fine in 256MB of RAM though. These days I'd expect people to get the best results from Go or Rust, if they want to be able to run on as little RAM or CPU as possible.
I know it's not the same use case, but nginx will serve nearly 100K requests per second of a static site on a moderately powerful server. A server-side-generated SPA empowered by a K8s solution drops that to a dozen. Those number discrepancies are comical.
I must be too old to enjoy recent resume building architectures....
they just use the Docker API, hence I don't think there's the resource waste typical of k8s
I haven't run benchmarks but speed wise containers shouldn't slow down app requests (except in a few cases with very specific kernels, which I unfortunately experienced in production - but I was told it was just bugs)
fly.io is simply a VM provider. You can achieve the same requests per second there as any VM host. The person you're replying to just has a comically low expectation of performance.
This was my thought, too. In reality, client side rendering today is “cheap” because you typically put a CDN in front of your application to serve static resources and scripts, and your backend only serves the API requests (if needed). Rendering everything on the backend means by contrast that you’re limited to the capacity of your server and the latency based on where it’s located.
Counter-point to this is that edge computing addresses the latency for compute in the same way that a CDN does for static assets.
SSR definitely has drawbacks in the old "run your app from a single VPS somewhere" model, but in a globally distributed edge computing model, you remove a lot (but not all) of those drawbacks.
modern JS runtimes are extremely fast on all hardware that's not a 2000-era phone or an embedded micro-controller with 4MB ram. if your CSR is slow on a modern client, then that same JS code will bring your server to its knees when you simply migrate it to an equivalent V8-powered JS backend. scaling the server will not solve this; you just have to write fast JS code and pay attention when you write it, not merely optimize as an afterthought. test on crappy hardware, test with huge datasets, test on slow connections, test with asset caching disabled, target 60fps+.
The other point is that those API responses need to be rendered with JS, which means downloading the JS upfront plus rendering time. JS rendering will always be slower than pure HTML rendering.
I will agree for small apps this is negligible, but for bigger apps this means downloading, parsing, and executing MBs of JS. See the Google Cloud console for example or the Spotify web app.
And yet there is another point which is that with SPAs, ignorant developers can cause more "damage". The other day I opened a simple password recovery form, and it rendered dozens and dozens of divs and downloaded (I shit you not) 2MBs of JS.
Also it's not like CSR is mostly used just to render out a flat document. It's used mainly for interactive stuff, often just banging on internal client-side state and not even talking to an API. And even when the client is talking to an API to update a small part of the document, that is more straightforward than doing a full client-server round trip just to update that small part.
Nowadays I prefer static site generation and, if interactivity is needed, additional client-side hydration.
The only scenario I can think of that does not match this model is highly frequently updated dynamic content sites. Am I wrong?
Even for moderately frequently updated dynamic content sites, statically rendering say 10 times a day might suffice. When the hydration kicks in, you can always fetch the info that changed between the 10x/day updates and still have a low time to interactive.
SSR with caching may be comparable in terms of time to interactive, but it feels less elegant IMO.
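A rough sketch of that "fetch what changed since the last build" idea (the data attribute and endpoint are hypothetical):

    // The static build stamps the page with its build time; on hydration we
    // ask the API only for items newer than that.
    const builtAt = document.documentElement.dataset.builtAt ?? "0";
    const newItems: { title: string }[] = await (
      await fetch(`/api/items?since=${builtAt}`)
    ).json();
    for (const item of newItems) {
      document.querySelector("#items")?.insertAdjacentHTML(
        "beforeend",
        `<li>${item.title}</li>`,
      );
    }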
> server-side rendering for each request in V8 might be more expensive to host than client side rendering in V8?
Considering mobile, embedded devices, or even that my grandmother still owns a 4th-gen i3, SSR might be better there, but for modern machines I'm not sold on SSR; it still feels a little clunky to me.
> for a properly-chunked app, hopefully most requests for the larger vendor bundles are cached/JITed, and each request doesn't actually download much extra JS.
Yes, cache helps a lot here, but saving mobile data would still be pretty good, considering some SPAs don't chunk their script files very well.
> server-side rendering for each request in V8 might be more expensive to host than client side rendering in V8?
Expensive not in money terms, but in user experience.
Also: if you are doing e-commerce, 100ms added latency can cost you 7-8% in conversions. Spending 5% more on hosting to do SSR just makes economic sense.
> 100ms added latency can cost you 7-8% in conversions
i hear this metric (or something equally absurd) cited frequently and have never seen it to be true in my own experience.
i guess if you have to load a product page with 100 images (or assets) and each has 100ms network latency, then it will add up to much more than that. but 100ms for a single interaction or network request (e.g. process payment POST) is not going to move the needle on conversions. 100ms will feel instantaneous to 97% of users, and more than satisfactory for the remaining 3%.
(i say this as someone who profiles aggressively and strives to optimize every stray 5ms in JS and every 1kb over the wire)
I did a study on this (N>100m) at a previous employer who had quite a lot of e-commerce traffic.
It's sort of true but also a massive over simplification. The relationship between conversion and speed is not linear. Some people are beyond help, and others already have it so good they only notice the most extreme degradation in performance.
It's also hard to isolate confounding factors like users who have fast infrastructure tend to be rich and rich people buy more stuff. Making pages load faster doesn't give them more money to buy stuff.
Overall faster is definitely better but the specific magnitude will depend on your customer demographics. In our case the Amazon 100ms saved = 1% more sales was close enough for a rule of thumb.
I could see the 100ms/7% impact coming into play for a super impulsive shopper going from page to page compulsively until they pull the trigger on a product. I wouldn't be surprised if that was a lot of Amazon's business—people buying things they had not set out to buy.
Makes less sense the smaller an e-shop gets since the consumer already needed intent to shop there. If I know I already want product X on Shop Y today, 10 second load times are rather inconsequential.
I've been thinking about that a lot lately. I have a medium-sized e-commerce. We sell hyper-customized products and, to be honest, the site's overall performance is not good. But when I looked into other stores in the same niche as ours, who are making a lot more money than us, I saw that their performance wasn't better than ours. I don't think people shop impulsively in our store because they have to customize the product and then make sure the customization is correct (it's a pretty complex customization). I believe that because the journey to checkout is so long and requires so much attention, the customers just don't give up due to high latency or slow responsiveness overall.
I've worked in e-commerce for a while and have heard similar stated from time to time and seemed like someone was trying to fit a linear line over what's almost certainly an s-curve distribution.
At lower latencies no one is going to leave your site because it takes 1200ms instead of 1000ms to fully load a page. But at some point almost everyone is going to leave your site rather than wait.
I also find these stats really confusing because whenever someone talks to me about site performance they're always talking about a different metric (server response time, first paint, time to interactive, etc). If your first paint is quick then users aren't going to care if content halfway off the page doesn't load instantly.
I've always tried to focus on how fast things feel rather than worrying too much about specific metrics. If you can just get something (ideally the important bit(s)) to load really fast, a site can feel extremely fast even if it's mostly just an illusion. Users like to click things and see stuff happening. It's things like reloading the page when adding items to cart and then making them wait on a white screen for multiple seconds that increase bounce rates and abandoned carts in my experience. Making add-to-cart buttons an AJAX request probably helps far more than making the page load 200ms faster.
I guess that's not really performance, but I agree that UX is one of those things that can subtly annoy someone enough that they'll just give up and go elsewhere. That's a pretty extreme example, but even simple things like asking the user for too much information over too many pages at checkout can dramatically increase bounce rate. If a user can't land on your site, find what they're looking for and check out within a few minutes, something is wrong.
On payment failures, an interesting solution an e-commerce company I used to work for came up with was to just place the order anyway on payment failure. They figured it was better to send out an email after the fact asking the customer to try again, and if that failed they could call and take the payment over the phone if need be. We targeted an older demographic and sold fairly pricey products though. I guess that model wouldn't work so well if you're selling $10 t-shirts. I believe Amazon does something similar. I know they've sent me emails in the past letting me know my payment failed after placing an order.
Google did a large study. If your page load is 5s then adding 100ms does not matter. I got the raw data from them, and the extra 100ms mattered when the whole page was closing in on 200ms or so. After that it's level, and then from around 2.5s onward, each 100ms made more and more of a difference.
Yeah I think the main thing for a website to hold on to users it should at least be interactive and responsive quickly (with clear loading indicators) even if it is taking a while to load everything on the page. If a page looks like it is stuck or feels like it is stuck then I am likely to abandon it. But if it has clear indicators and animations to indicate something is happening, I will be more patient with it.
I think if you frame it as "each 100ms you shave off a critical consumer shopping checkout flow page increases conversions by X%", it starts to seem more defensible.
Agreed that in the general case 100ms to full paint (with interactivity not far behind) is really good.
Doesn't really matter if the relationship isn't linear like that. I suspect it's more likely conversions start to drop off after certain thresholds are met e.g. 3 seconds causes conversions to drop by 5%, 20 seconds causes them to drop by 30%.
Just averaging out those numbers could result in engineering time being wasted chasing rapidly diminishing returns, if the site is already below one of those thresholds.
It seems like an inference from correlation that someone extrapolated out in a linear fashion.
In isolation, I don't think that 100ms in delay before paint really causes enough impatience in people that you'd lose 7-8% as a direct result. Humans don't really make that kind of decision within such a minuscule window of time; to a degree, we expect our devices to have delays.
More likely, that loss of conversion is correlated because sites that have longer than 100ms latency have latency that's extreme enough to reach the "I give up" threshold. A site where its pages take 3 or more seconds to load would see a big difference, but that doesn't mean that a site with exactly 100ms latency will see any meaningful loss of conversions outside a margin of error.
People will also be much more patient with a high-value site. For instance, I'm willing to wait a few seconds for each page to load on the McMaster-Carr website because it's an otherwise good experience and a high-value store. But I'm far less willing to tolerate delays on some horseshit Shopify page plastered with ads and reselling crap from Alibaba. And if you've got a blog that's poorly formatted and overall signals low value content, I'll dip out the moment I detect the slightest monkey business slowing things down.
A couple more for folks interested in prior/similar art:
- Marko, made and used by eBay, which had been doing islands/partial hydration for years
- Qwik, made (and I’m pretty sure used) by Builder.io (and one of the original Angular creators), which only loads/executes JS as needed for interaction. Not quite partial hydration, it resumes state serialized into the HTML (sort of conceptually similar client side to Alpine etc)
From what I gather, it seems like this would be a good choice if a portion of clients are on low end devices and it is more of a MPA/SPA hybrid - not a full blown web app. A lot of E-commerce and informational sites do fall into the MPA/SPA hybrid bucket.
If you really are making a web app (think dropbox, cloud based enterprise software) I think you'd reach a point where there is so much interactivity that you are essentially shipping Preact, custom component JS, and just a little bit of static HTML. The latter bucket is the only one which saves the user network and parsing time (and I doubt much of it.) I'd still go with a properly chunked SPA for true web apps.
Yes, but the templates are written in JSX. No quoting nightmares, nesting is a breeze, the server side APIs speak like the browser ones, and it's really fast. With deno.dev hosting on the edge, it's server side like php, but way better. I used to like WordPress, now I love Deno.
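For anyone who hasn't seen it, a Fresh route is roughly this shape (from the Fresh docs; the data is made up): the handler runs on the server and the component renders to plain HTML.

    // routes/greet.tsx
    import { Handlers, PageProps } from "$fresh/server.ts";

    export const handler: Handlers<string> = {
      GET(_req, ctx) {
        return ctx.render("world"); // no client JS shipped for this page
      },
    };

    export default function GreetPage({ data }: PageProps<string>) {
      return <p>Hello, {data}!</p>;
    }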
thx for your comment, as I said I might have it all wrong. I still think PHP template engines like Twig are well designed and make you highly productive. Personally, I feel JSX is a code smell, as it mixes UI snippets with code. But this is maybe my old-school software engineering attitude. I am just waiting for the next OSCommerce package written in JSX...
You can never really avoid it unless you only use HTML. Templates are code. I think JSX is nicer because it's fully featured and uses the same syntax as regular JS, rather than a limited template sub-language.
I haven't used Twig, but have used other PHP templating engines. Scope is the big deal for me. JSX has actual scoping (it's just JavaScript), function calls, etc. PHP templates have one scope, even when you enter an included sub template.
Laravel released template components in version 7, but I haven't been able to try them yet and I'm hoping they'll improve things.
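To make the scoping point concrete: each JSX component is just a function with its own lexical scope, so nothing leaks between "templates". A tiny sketch:

    function Price({ amount }: { amount: number }) {
      const formatted = amount.toFixed(2); // local to this component only
      return <span>${formatted}</span>;
    }

    function Row({ name, amount }: { name: string; amount: number }) {
      // `formatted` from Price is not visible here, unlike a PHP include
      // that shares the parent template's single scope.
      return <li>{name}: <Price amount={amount} /></li>;
    }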
I'm excited for server-side rendering to return to the mainstream so that in a few years I can revolutionize the industry with my ingenious new idea to render things on the client with Javascript!
Yes, because true innovation rarely happens quickly and most things are just cyclical and hype driven in software.
I'm patiently waiting until we either discover Java or reinvent it all over again.. but this time it will be cool because it won't be called Java, it will be rather "JMocha" and really drive that productivity you're looking for when mixed with your Latte's 3.0 strong typing.
I suppose I'll be in my not-so cool corner working in my monorepo, single server and outdated Rails app on Postgresql; If only I could achieve Github scale with that..
"... and the client is only responsible for re-rendering small islands of interactivity. A model where the developer explicitly opts in to client side rendering for specific components. ..."
actually sounds more like ASP.NET (the original Web Forms, not Core) support for AJAX.
And not everything, just many things, was wrong with PHP. No build step was the norm for PHP the last time I wrote much of it. I love things like this, ES modules, etc, that make it easier to have build-step-free apps.
For those websites that can use it (like a blog), it seems like static site generation would be better, to cache the database requests? Then you can run in the free tier of Netlify or using GitHub pages, for example.
Or to do the opposite, for a single-player website where there aren't any server requests to cache, a client-only website can run offline.
Fresh looks good for multi-player websites that are unavoidably making database requests interactively, though.
I similarly wish Fresh did full server side generation. Even Next.JS has a tool to export as a static site. Maybe it will come to Fresh at some point, or maybe it’s there and just not well documented.
I'm using NanoJSX with Deno to render JSX on the server and spit the result into a file, but I'm having to write a lot of SSG stuff myself (like logic to loop over files).
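For reference, the kind of "loop over files" glue I mean, assuming Deno's std walk and NanoJSX's renderSSR (paths and layout are made up):

    import { walk } from "https://deno.land/std/fs/walk.ts";
    import { renderSSR } from "https://deno.land/x/nano_jsx/mod.ts";

    // Render every pages/*.tsx default export into a matching dist/*.html.
    for await (const entry of walk("pages", { exts: [".tsx"] })) {
      const { default: Page } = await import("./" + entry.path);
      const html = "<!DOCTYPE html>" + renderSSR(Page());
      const out = entry.path.replace(/^pages/, "dist").replace(/\.tsx$/, ".html");
      await Deno.mkdir(out.substring(0, out.lastIndexOf("/")), { recursive: true });
      await Deno.writeTextFile(out, html);
    }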
Yeah, for their examples there isn't even a reason to use dynamic rendering instead of static rendering (pre-building a static site), as the response doesn't change for different requests. Dynamic rendering only pays off if you do something more complex, like authentication, etc.
But I guess static rendering would go against their mantra of "no build step", which even currently is actually false, as they need to generate the manifest file.
They're probably planning on going the Vercel route and tying future customers to their platform running V8 isolates.
I understand it from a business point of view, but it's bad for users and for performance. The same thing happened to images in Next: the plugins for optimized images in Next are all broken, and the official one uses a "free" dynamic route offered by Vercel.
React is here to stay and will have jquery like longevity.
Sure, performance-wise it isn't the best, but performance isn't the be-all and end-all.
What separates React from the rest of the group is the maturity of React Native.
If Discord is betting on React Native, it can serve the lion's share of applications out there too.
When you're a startup deciding between technologies, usually you're a 1-5 person shop.
If you have the capital to blow, you can build separate teams for native + web, splitting native even further into Kotlin/Java and Swift, with your choice of framework for the web platform.
(I'm not diving into the Dart ecosystem just to use Flutter, when Flutter Web isn't SEO friendly)
Or you can just use React + RN and essentially have one team and save you anywhere from 500k-2m in hiring.
This is why React is so strong.
It's not because it's "performant", it's because it's practical, and even for performance it's good enough.
Interesting. My company (small startup) went with flutter and vuejs and now we're switching to nextjs for all of our web apps and we're loving using it.
Flutter is awesome for mobile dev. For web? Not so much.
The issue I have is, if I understand correctly, you need to go to the server for everything. Their sauce is of course that server is close, maybe 10ms away. But it needs to be online, and so do you.
If most things interactive on your site need a round trip BEFORE the interactivity (e.g. a Like button) then this is OK.
However if you are writing a web app which does a lot of stuff offline and there is no reason for it to not work offline then this is an issue.
Where it is a bit sucky is something like clicking an expand button, and needing a network request for basically setting a {display:'block'} style on an element.
I might be strawman-ing a bit there because you would probably chuck in a tiny JS script to do that one job and not religiously follow "no JS".
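For the record, that tiny script really is tiny; something like this (attribute name made up) covers the expand button with no round trip:

    // Toggle visibility of the element named in data-expand, entirely client side.
    for (const btn of document.querySelectorAll<HTMLElement>("[data-expand]")) {
      btn.addEventListener("click", () => {
        const target = document.getElementById(btn.dataset.expand ?? "");
        if (target) target.hidden = !target.hidden;
      });
    }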
But I am sure there are good examples of wanting offline capability.
However this is not a showstopper: This model is great for some things, bad for others. It is a welcome new choice in the palette of web dev options.
I imagine it might be premature optimization for many sites though, given that it is a trade off.
If you care about minimal JS to the client, another approach might be to use https://svelte.dev/. I have not tried it, but I am curious.
I really like the idea of Deno and Fresh's philosophy also resonates with me (I'm a Rails guy). I would like to see an integrated ORM (can be an existing one) and testing framework though, before I would consider it a "full-stack" web framework.
Since people are voting for Tailwind support, I have to throw in my vote for CSS modules! Even though Tailwind has a lot of momentum now, there are still a lot of people quietly getting things done with CSS Modules, CSS-in-JS, etc. :)
The JS ecosystem seems a lot more hesitant to bundle features into frameworks à la Rails/Laravel & friends, but Redwood.js sounds like what you're looking for. The ORM it uses is Prisma, which works really well on its own too.
Prisma is a nice product but I wouldn't use it in production yet. It is prone to race conditions, as it does not use native upserts, opting instead for Rails-style check-if-exists, insert-if-it-doesn't.
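To spell out the race (client and table names are hypothetical): two concurrent requests can both pass the "does it exist?" check and then both insert, whereas a native upsert lets the database decide atomically.

    // `db.query` stands in for any SQL client (e.g. node-postgres).
    async function upsertUser(
      db: { query: (sql: string, args: unknown[]) => Promise<unknown> },
      email: string,
      name: string,
    ) {
      // Race-prone version: SELECT, then INSERT if missing. Two concurrent
      // requests can both see "not found" and both INSERT; the loser hits
      // the unique index. A native upsert (Postgres) is one atomic statement:
      await db.query(
        `INSERT INTO users (email, name)
           VALUES ($1, $2)
           ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name`,
        [email, name],
      );
    }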
Redwood and Prisma look nice! But what I like more is a focus on serverside rendering and as few build steps as possible, like Fresh. I also find Deno more appealing than Node (simplicity).
But yes, I guess if I would need to pick a more mature framework with JS, Redwood would be the way to go.
Am I the only one who thinks that nowadays anything can be called "full stack"?
For me "full stack" is something like Rails, Laravel, Django, etc.
Or is it just being able to run code on the server enough to be "full stack"?
I'm missing the translations system, the validations, the background jobs, the authentication system, authorization helpers, email sending, ORM or data access layer, testing framework, CSRF and related security protections, logging, error handling framework, etc, etc, etc.
To me that's a "full stack" framework. If just running code on the backend and on the frontend is enough, then I can make a bash script a full stack framework, right?
Node/JS solutions typically don't come "batteries included." "Fullstack" means different things depending on the ecosystem and for Node devs it means "hey there's a solution for the front-end and back-end."
Totally understand about Django, Laravel etc. but typically if I'm building in Node/JS I'm pulling all of that extra stuff in from npm piecing it together myself.
This is actually part of why I created Nodewood [1], because every new Node project required pulling all that together, and every new SaaS idea I had had the same basic requirements (user management, subscription management, teams support, etc). Then I figured, if I found this useful, surely others would too, so I packaged it up and have had a few happy customers since then, who have helped me refine it, which feeds back into making my projects better, too.
To each his own; personally I stay away from Django and full-fledged frameworks (context: I run a few small SaaS products and launch new products frequently).
I find that having to dig to find out how things are done and trying to hack around the framework is the cause of bugs and frustrations.
I still manage two services based on Django and they're a continual cause of pain.
I'd rather spend a week at the beginning of the project and set things up the way I want them with minimal dependencies.
Then evolving things is fairly easy, just a matter of skimming through the code.
> I'd rather spend a week at the beginning of the project and set things up the way I want them with minimal dependencies.
Same.
The only time that I've gone away from this strategy is when I need to get other developers to buy-in to a framework (or hell just a "system of work"). In leadership roles, I've found that a highly-opinionated framework with lots of batteries included really helps keep people coloring inside the lines. For me anecdotally that's more important than having a low-dependency codebase that's very appropriately sized to the problem it's solving.
I've found it's way easier to dodge the inevitable "X way sucks" from my colleagues/subordinates by using "full framework" tools that I can pitch to non-technical management vs. a bespoke solution.
Batteries included frameworks solve people problems more than they do software problems.
---
Just want to reiterate that I absolutely agree with you and the only time I find tools like Django appropriate is when other developers need tooling that keeps them "in the lines," and/or when I'm expecting to have to pitch my solution to non-technical peers/leadership.
> I'd rather spend a week at the beginning of the project and set things up the way I want them with minimal dependencies.
Me too. I don't use create-react-app or anything else to set up my Node.js/React/TypeScript/SWC projects. I install each package I need manually, and configure each item precisely how I want it (tsconfig.json, webpack.config.ts, .swcrc, even .eslintrc.json). Makes for a nice, lean project every time.
Yeah, I do the same. Most of what I do is Next.js + a good bunch of npm packages to get the "full stack". But I wouldn't call Next.js full stack because it can do SSR, or even API routes. But again, that's just me confused about what is "full stack".
> But again, that's just me confused about what is "full stack".
The term has definitely become more amorphous and ambiguous. I spent a few years mostly working with physical hardware, networking, virtualization, scripting deploys for DBs and OSs, and handling DNS, alongside some business-oriented C#. Then I moved over to working with AWS and Azure.
Now I primarily work with Node, Golang and Python.
With modern tooling and a team that supports it, I've found myself mostly working on services, APIs and React front-ends. Even with Next.js, it's hard to know where the line is for what counts as "full stack".
It's sort of come to encompass the frontend and backend of an application, less so the architecture of it. Mentally, however, I still lump it all together when I think of what qualifies for "full stack", as I personally think the term represents an understanding of how everything works together, which definitely helps when troubleshooting the source of a problem.
Yeah - I really hate the term "full stack" as it can mean vastly different things in different contexts, between different companies; hell - even when leading teams I feel like I have to nail down what "full stack" means between the members.
> But again, that's just me confused about what is "full stack"
Me too because it's grown into an amorphous definition =/
> Or is it just being able to run code on the server enough to be "full stack"?
To me, "full stack" would entail having responsibility for logic on both the front-end and back-end, which Fresh does.
Personally, I would not consider Laravel a full stack framework (Although it can work well with front-end frameworks to make a full-stack application). Instead I'd consider it a back-end framework.
In the case of Laravel, I think that having by default a pretty awesome templating system (which even allows for components), a webpack-based build system for frontend assets, an easy way to serve, cache and bust assets, and a trivially easy way to submit forms and validate user input makes it pretty full stack. Same for Rails, and that's not even talking about Hotwire/Livewire, which might not be considered parts of the framework per se.
To me, Laravel is much closer to a "full stack" framework than Fresh, to be honest.
Not here to defend Fresh, it has lots of flaws, but
> I think that having by default a pretty awesome templating system (which even allows for components), a webpack based build system for frontend assets
Fresh literally does that "by default" without having to include any library/dependency.
> easy way to serve, cache and bust assets, and a trivially easy way to submit forms and validate user input.
It's almost the same effort in Fresh.
> makes it pretty full stack
Define fullstack, but even defining it, Fresh totally covers what you said.
To be "fullstack" to me means to be filling in all stacks, being in web? Frontend and Backend.
To me, having a templating system and having things rendered in backend can be "frontend", but you saying "Laravels is pretty more close to a fullstack framework than Fresh" is not something very logic here.
Full stack *is* just running code on the backend and frontend. There is no complexity requirement for something to be considered full stack. You could have a super simple todo list with native JavaScript that persists by writing to a single SQLite table, and that is a complete "full stack" web application.
You could have something incredibly intensive on the front end using a bunch of modern frameworks and visualization tools. Or something incredibly intensive on the back end with some of the features you mentioned in your post. But neither is strictly required and many apps will function with just the basics on either or both ends.
I agree. I see technologies like Next, Remix, etc claiming to be full stack, but it doesn't "feel" full stack to me. They have offerings like backend functions, which is really just an HTTP response.
But when I think of an "all in one" / "full stack" framework, there needs to be an ORM inside the framework that works across the View/Communication/Model layers.
These frameworks have you needing other tools to do this.
I agree, this is becoming really confusing! And then also "center-stack" which I heard somewhere recently re one of the frontend frameworks.
When I hear of a full-stack framework (in a modern setting), I think of stuff like Meteor, Redwood, and Wasp-lang (disclaimer: I am one of the authors of that one). RoR also always comes to mind, although it's more commonly used as an API server nowadays.
I thought full stack means going all the way down to tuning the database, the kernel, building your own CI/CD, and performing your own deployment and scaling.
In practice, I've seen it used to describe someone who is fluent on the backend (DBs, APIs, DevOps, etc.) and competent on the frontend (React). But that could be more of a product of the talent pool, I think there are more backend engineers who dabble in the frontend (maybe because that's where the perceived party is?) than the other way around.
> For me "full stack" is something like Rails, Laravel, Django, etc.
Rails, Laravel, or Django are completely agnostic about the frontend side of the full stack though :-) Well, maybe Rails will have an opinion about Stimulus; but this fresh thing continues the journey after the response has reached the client.
Historically, I don't think you could call something that sits on top of Apache full stack if you didn't also control Apache, so maybe, depending on whether you're just dropping on some webhost?
If you get to ignore threads/forks and number of workers as well as memory management of them because it's someone (or something) else's responsibility, then that's not full stack. The stack is OS (slightly, and can often be ignored), webserver, application, and client (HTML/JS). If you hand off responsibility of any of that then it isn't "full".
The term originally came from referring to people that handled all these things. You could be a back-end developer, dealing with the OS and webserver and maybe application, front-end developer where you dealt with HTML, JS and maybe some application code, or you could do both and be a "full stack" developer.
They're "full stack in the sense that you can render templates and manages static/media files. Back when we only had websites, this was fully full stack for sure.
At least for Django, it handles creating client side views, client routing, etc. You can of course use a different frontend, but that's included by default. See also its wonderful admin interface.
Some of these newer frameworks are certainly using the term "full stack" in a very different sense than how it's been used historically. Could you fault someone for a cursory reading of the homepage and coming away thinking that Fresh/Remix do all the features you listed out of the box?
There's enough confusion here I suppose I'll start using the term "batteries included framework" for the technologies you listed.
I personally believe full-stack is fuzzy, because what if we also provided a UI layer, or a payment interface framework, or a notification handling system, or similar as part of the "full stack" system? It's hard to draw a line as to where something becomes "full stack."
However... it's definitely closer to what I want "full stack" to mean than... this. Or a pile of bash scripts.
well, I agree.... but to me the baseline is, as I said, Rails/Laravel/Django... there's a common subset of things you need in like 99% of web applications that are not just a rest API or a landing page.
I'm just picking out one from the list, but on background jobs, a nice thing about async runtimes like node/deno is that you really don't need background jobs for most apps. You can just use web workers and async/await to do stuff in the background. You might want a more robust system down the line, but it's not needed in the beginning imo.
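i.e. the minimum viable version is just not awaiting the promise; a sketch (names made up):

    // Minimal "background job" in an async runtime: kick off the slow work
    // and respond immediately.
    async function sendWelcomeEmail(userId: string): Promise<void> {
      // imagine a slow SMTP or HTTP call here using userId
    }

    function handleSignup(userId: string) {
      // Fire and forget: don't await, just log failures.
      sendWelcomeEmail(userId).catch((err) =>
        console.error("welcome email failed:", err)
      );
      return { ok: true }; // the client gets its response right away
    }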
Most other frameworks don’t include things like websockets and Web Push notifications, as well as support for WebRTC, payments etc. So where do you draw the line?
I listened to a conference talk[0] where Dylan talks about the Fresh framework, and he says, quoting: "We have this little project at Deno ... it's not a web framework that we're really promoting or you know intending to utilise long term. More of kind of a demo of what this post unix web frameworks might look like"
Is it worth investing in this now? It really put me off.
Fresh has had a lot of development since then. See the "Production Ready" section in the blog post for the latest thoughts:
> Fresh 1.0 is a stable release and can be relied upon for production use. Much of Deno's public web services use Fresh (for example the site you are reading this blog post on now!). This does not mean we are done with Fresh. We have many more ideas to improve user and developer experience.
The way people use modern Linux systems is moving past how most people used Unix systems back in the day. Instead of piles of shell scripts glued together we now use a monolith (systemd) to manage most of the low level services and initialization. No one really cares or tries to setup proper multi-user systems and instead we focus on multiple roots with each application given an exclusive root filesystem, view of the system, etc. (i.e. containerized). Computing is a lot different than it was 50 years ago, and that's not a bad thing.
well many of us think that the excellent ideas of computing in the 1970s (those that produced Unix) are no longer the best ways to do things, and believe that the community should be innovating much more.
a lot of those innovations are over 50 years old, now. do we REALLY believe that the Unix Philosophy is the best thing in the history of computing? past and future? really? I don't. and what about in another 50 years? how about 1000 years? at some point, many much better ideas will emerge.
that's what "post-Unix" means. the things that come after this Unix thing we're hung up on.
This argument is fuzzy. Your goal shouldn't be replacement just because it's old, but rather, because you actually have a more efficient or easy to understand approach.
As far as I can tell, there's a marketing term ("post-unix") but not any concrete ideas backing it. The Ryan Dahl post linked above is equally fuzzy as it seems to just be saying "use an encapsulated runtime for scripting that replaces existing Unix-level dependencies with Web Assembly code that runs on top of Unix."
Not to be rude but it really just doesn't make a lot of sense. Is there a blog post or video explaining the general thesis here?
You're over-thinking things. Post-unix clearly just means "something more modern than 70s Unix design". I don't think anyone ever tries to make something newer but not better so that's never a goal.
I'm not over-thinking, I'm asking for a clear definition of what that means. The original comment that kicked off this thread said "what is post unix."
There is a clear-ish definition (not running JavaScript in a Linux container like Docker but running it directly on top of the OS in its own containerized runtime) explained in the linked video that comment was replying to, but no, it's not just "something more modern than 70s Unix design."
it means simply that there is always something better, waiting to be discovered, and that we should not let ourselves become acclimated to inferior things simply because we've used those things a lot, and know them.
there is always something to improve. find it. improve it.
still fuzzy?
it is our moral obligation to improve ourselves and the world around us.
I agree in general that yes, you should be seeking improvements but is there a specific improvement tied to this term (post-unix)? The term itself implies that there is a concrete idea or set of ideas it involves.
The post seems to claim that there are no drawbacks compared to other approaches with its 0 bytes of JS, and links to a blog post [1] that says absolutely nothing about how to properly manage back/front state and rendering consistency between the server rendering and the «islands» on the client.
"Islands Architecture" branding aside, this looks exactly like a page of old where you serve an entire page, accept form submissions, return the updated page, and make small elements interactive (the islands). These islands contain only minimal local state, and global updates happen through the aforementioned form submissions.
The difference being it's all one isomorphic (?) React-y Deno codebase instead of the tangled PHP+jQuery mess.
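For reference, a Fresh island is roughly the counter from the starter template: only files under islands/ get hydrated on the client, everything else ships as plain HTML.

    // islands/Counter.tsx
    import { useState } from "preact/hooks";

    export default function Counter({ start = 0 }: { start?: number }) {
      const [count, setCount] = useState(start);
      return (
        <div>
          <p>{count}</p>
          <button onClick={() => setCount(count + 1)}>+1</button>
        </div>
      );
    }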
Now they just need to get Deno Deploy to implement dynamic imports, so Fresh can remove its build step.
Yes, they also firmly keep the claim that Fresh doesn't have a build step. Hint: it does [^1]. They've just hidden it from most people by making it part of local testing.
We are using the pre release version of Nuxt v3 in production. I really like Nuxt’s developer experience and it is my go-to for front end development. As a backend, I truly don’t pay much attention to whether it’s Firebase + Workers, Vercel, Cloudflare, etc. The framework basically does it all under the hood for us.
I think as these edge runtime things get more feature-rich, it's going to pay off to be all-in on one that does it perfectly, exactly optimized, with an easy pathway, vs expecting third-party packages to figure out an implementation and stay up to date.
I've recently been getting pretty deep into Hugo for a third project, and while I love how fast the static sites Hugo builds are, I absolutely hate how every tool I love using takes hours to set up, and how it takes hours to debug random tooling integration issues between Hugo and Node (postcss etc.). So much so that, despite knowing Hugo and static sites are more often than not the way to go for a lot of things, I've recently been reconsidering how much perfect optimization is worth if it takes this much effort, when something like Fresh can deliver what looks like a much better developer experience, though perhaps without the perfect optimization.
This looks like something I'd like to try but I don't like JSX (TSX). Is it possible to use Fresh with Vue in place of Preact, and write Vue components that split HTML templates from script and style tags?
Just curious, why don't you like TSX? I find it much much better than HTML templates because you get actual IDE support with proper code completion and compile time type checking.
Yes, same here. Hate JSX/TSX. Would love to have a choice of templating engine so I can stick to pure HTML/CSS as much as possible. I still prefer simple template engines like Mustache which respect HTML.
You might dig Joystick https://github.com/cheatcode/joystick. Takes the compositional aspects of JSX/React but uses a very thin abstraction over vanilla HTML/CSS/JS to do it.
Joystick is full-stack (UI framework directly integrated with a Node.js back-end framework) but the UI part can be used on its own.
How is the experience with this framework in terms of latency? It's good to reduce front-end size, but the wire ought to get hot if the browser has to constantly go back to the server for simple view changes.
Not exactly, because this example from Deno is using Preact, a stripped down React-like framework. Server Components relies on React fiber and the latest async features in newest version of React. They do both aim to limit the JS sent down the wire though.
Maybe/maybe not? Deno is a new-er runtime for JavaScript and TypeScript by the creator of Node.js. It's essentially an effort to correct the perceived shortcomings of Node.js with a new runtime that is: 1. compatible with TypeScript, 2. aligns closer with browser APIs, and 3. comes with a standard library modeled after Go's.
It would be great for a 1 visit per hour site, but so would PHP, or just an index.html file. Hard to really answer that question without knowing what your site is.
I keep hearing and reading about Deno too, but I don't understand why I or my project would need or want it, or why I'd consider switching to it. What does it give me? How is it better? Why is it better? I'm also confused about why I would pay for hosting. You're not alone.
I am very interested in seeing where this goes. I would love to see things get fleshed out a little more à la BlitzJS (auth, simple client/server session management, integrated database tooling), but I don't know if that goes along with Fresh's principles. But this is a very exciting start for this framework and looks like a great addition to the Deno ecosystem.
The only issue I have with it is that it's a pretty niche application. I think most sites will have a single database probably in a single location, in which case you don't gain anything from edge compute as far as I can tell.
Maybe if these things came with some kind of fancy "edge database" it would cover normal use cases, but they never seem to mention that so I assume it doesn't exist.
Single-instance private FaaS (as opposed to multi-instance public FaaS) makes it possible to implement edge databases, assuming there is persistence. Currently only Cloudflare offers this, with Durable Objects, and the industry seems not to have caught on, maybe not even understood it, yet.
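For the curious, a Durable Object is basically a class with persistent storage where Cloudflare guarantees a single live instance per object ID; a sketch based on their documented API (types from @cloudflare/workers-types):

    // One instance per object ID worldwide, so read-modify-write is safe
    // without extra coordination.
    export class Counter {
      constructor(private state: DurableObjectState) {}

      async fetch(_req: Request): Promise<Response> {
        const value = ((await this.state.storage.get<number>("value")) ?? 0) + 1;
        await this.state.storage.put("value", value);
        return new Response(String(value));
      }
    }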
Client-side rendering could in many cases be cheaper not only for the host but for the client too: once the JS is loaded, new content for pages can be fetched faster than new static HTML, not to mention avoiding the ugly experience of full page loads.
I'm typically pretty hard on Deno (mostly surrounding dependencies), but this is probably an extremely good use-case for it. I might have to pick this up for side projects that need fast iteration.
edit: i just realized you probably meant another company... i believe Slack uses deno in prod, not sure if they're currently hiring. netlify, github, and supabase all use deno as well.
deno has been around for a little while, i think some commentators may have taken your responses as satirical because of how often deno shows up in conversations surrounding the web platform. think of deno as a node.js successor, it's a separate javascript runtime; with that, it'll need to compete with node's ecosystem (vast and diverse tooling). the deno company is spearheading that movement by creating things like Deno Deploy and now Fresh. i believe the gist of it is that people were excited about the concepts Deno had been brewing up, and now they're building things that will allow people to actually make use of deno.
so, many people will look at this and say 'how is this different from ___?' and the reality is that for the most part, it's not. that's the point. you're making use of deno to do the same thing you did with node, and hopefully it's a little bit easier and/or more developer friendly.
also fwiw, deno is not really yc-affiliated; the company is backed by Sequoia, Four Rivers Ventures, Rauch Capital, Long Journey Ventures, the Mozilla Corporation, Shasta Ventures, and i think some other angel investors like Nat Friedman (former GitHub CEO).
My advice is to just ask your question and don't overanalyze the tone of the responses as their tone is related to their interpretation of your own tone/honesty. But if the responses seem weird or unfair to you, it's useful to ask yourself how you might have come off the wrong way.
gherkinnn's response sheds light on why your question could have been misinterpreted as, for example, low effort.
Deno is a TS runtime. Deno Deploy is a place on which to deploy a Deno project. Fresh is a framework with which to write web apps on Deno, which then can be deployed to Deno Deploy.
Your question seemingly conflates these three things and adds unrelated elements.
To answer what you might be asking: TS is a nice language and Deno offers excellent tooling, Deno Deploy is easier to use than AWS and Fresh makes it easy to write web apps in a React-ish manner.
Isn't Deno just AWS Lambda@Edge, except you can't use anything other than JS/TS?
edit: whoa, what's with these downvotes? And I'm being accused of being a troll for asking a legitimate question? I am trying to evaluate this and my comments are being flagged for no reason, dang.
This is the second or third time that just asking innocent questions on a YC-linked company post has resulted in mass downvotes and flagging.
I don't see choward's comment as offensive/threatened.
They are legitimately asking how the comparison works. You are comparing a web framework and a Cloud Serverless runtime, which is a bit odd. Maybe you didn't have context on Fresh/Deno, but that doesn't mean they are "threatened by the mere question."
I don't agree with the downvote brigading you're receiving but when someone poses a question like this in neutral language, reacting in this way certainly isn't going to help the situation.
You're guilty of the sins of conflating orthogonal resources, aggravated by the fact one is a hugely overused proprietary, expensive and absolutely unrelated service, so the downvotes were definitely expected. Then even worse, you're editing posts to complain. Whoever may be contacting you though, here's my fuck you to them.
Nothing new under the sun. This is just the latest swing of the pendulum back to the same server-side strategy that PHP/Django use. There's a reason we moved away from that! Client-side apps are an elegant architecture that takes advantage of computing power on edge devices.