What We Need Instead of "Web Components" (carlana.net)
132 points by nalgeon 9 months ago | 142 comments



That's a great writeup.

As someone who's worked primarily on news/media sites for the last 15 years, I can totally relate and agree on all your points.

I would also argue that there are non-code issues that could easily be resolved. Bundling fonts, for example. Decide on the top 100 with support for all or almost all characters, based on usage etc., and bundle them in the browser. That would shave possibly hundreds of KBs off each page's size.

On the flip side, stuff like lazy loading could become standard, with an attribute only to force-load content. Or stuff that is used on every freaking site (e.g. togglers as native HTML tags).

I'm hopeful though.


I agree with the general gist of the article (and please, yes, it seems like a minor thing, but give us resizable iframes), except that "reactivity" does not meet the bar of developers collectively having landed on a solution to a common problem. There are a couple of frameworks with similar, but different, approaches to it, and yet none of them is truly widespread. I'm very sceptical that standardisation would hit the right mark at this point in time.

But yes, that is basically what we saw with Web Components. People drew the wrong conclusions about what problems React solved ("everything is a component"), and then tried to retrofit Web Components as a native solution to the problem - with the added assumption that just because it was built-in, it was better.


> except that "reactivity" does not meet the bar of developers collectively having landed on a solution to a common problem

I think the request for a native API for a Morph DOM/Virtual DOM has a similar problem. He wants a native solution so the frameworks can "focus on differentiating based on the convenience of their APIs", but he's forgetting that any native API will in itself impose a style on the frameworks if the latter want to be performant.

Given the diversity among current solutions, that would be the same as picking one of them to thrive and all the others to be forced to become more like it. Which is the opposite of what he wants.

Also, it's not like you can take any JS-based solution, make it native, and expect it to be faster because now it's made with magic C++ pixie dust.

Also, maybe the different internal data structures among browsers means that they each would prefer different reconciliation approaches, making it hard for them to agree on anything.

I'm not saying that it's impossible to have a well-designed native solution that makes everything nicer, but it's also not exactly a trivial problem to solve. And unlike JavaScript frameworks, once it's standardized the browser vendors are stuck with it, so I can understand if they are conservative about it.
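
For concreteness, the userland shape being discussed looks roughly like this (using the real morphdom library; the markup and `count` variable are made up):

  import morphdom from 'morphdom';

  // morphdom diffs the live node against new markup and mutates only
  // the parts that changed, instead of replacing the whole subtree.
  let count = 1; // hypothetical app state
  const el = document.querySelector('#app');
  morphdom(el, `<div id="app"><h1>Count: ${count}</h1></div>`);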


I think one thing to keep in mind is that just because something is available in the browser, that doesn't mean that all other approaches are suddenly moot and will never be able to displace it - XMLHttpRequest is also still available, for example, but did get replaced by `fetch`.

But in general, you raise a good point - just because something worked well with developer tooling, doesn't mean that it would work well when integrated in the browser. You see the same e.g. when people are calling for runtime type validation in the browser just because they like TypeScript, even though the pros and cons would be completely different.


And XHR is still better than fetch. It supports progress events and SSE. I'm not really sure why fetch exists if it doesn't add new/better functionality. Its handling (or lack of handling) of 302s is crap too. You can set it to "manual" handling but then can't manually handle it.
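
To make the 302 complaint concrete (a sketch; assumes a made-up endpoint '/old-url' that redirects, run inside a module or async function):

  const res = await fetch('/old-url', { redirect: 'manual' });
  console.log(res.type);   // "opaqueredirect"
  console.log(res.status); // 0 - the Location header isn't exposed,
                           // so you can't actually follow it yourself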


Fetch exists because everyone just writes a 10 line adaptor function that looks like fetch anyway, so might as well build it into the browser and save the trouble for people who don't need progress events.
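
Something like this, give or take error handling (untested sketch; `miniFetch` is a made-up name):

  function miniFetch(url, { method = 'GET', body = null } = {}) {
    return new Promise((resolve, reject) => {
      const xhr = new XMLHttpRequest();
      xhr.open(method, url);
      // Resolve with a minimal fetch-like response object:
      xhr.onload = () => resolve({
        ok: xhr.status >= 200 && xhr.status < 300,
        status: xhr.status,
        text: () => Promise.resolve(xhr.responseText),
      });
      xhr.onerror = () => reject(new TypeError('network error'));
      xhr.send(body);
    });
  }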


Eh... the API is a little nicer, but if it's weaker overall I think we might as well keep using the XHR wrappers over bloating the standard API. I end up wrapping fetch with more niceties anyway.


> XMLHttpRequest is also still available, for example, but did get replaced by `fetch`.

True, although that also highlights my "once it's standardized the vendors are stuck with it forever" point


Just a heads up, the author is a woman and uses she/her pronouns.


Thanks for the correction, that was sloppy of me (usually I default to "they" when I don't know an author's gender). Too late to edit my comment now sadly.


> except that "reactivity" does not meet the bar of developers collectively having landed on a solution to a common problem

Now that everyone seems to be in love with signals, there is work going on in the web components community group to prepare a spec for a signal (or observable, not sure what they are trying to call it) primitive [0]. It seems that they are getting ready to bring it to TC39 as a proposal.

(In the meantime, the Observable primitive from rxjs has been given a go-ahead for browser implementation. There is a proposal ready [1], and I think I heard that it may already be in Chrome behind a flag [2]).

So yeah; it's gonna be fun. Especially if both groups call their primitive Observable :-)

0 - https://github.com/webcomponents-cg/community-protocols/issu...

1 - https://github.com/WICG/observable

2 - https://nitter.net/BenLesh/status/1737174784406933599
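
For anyone who hasn't followed this: the primitive being discussed is essentially the classic Observable contract. Here's an rxjs-flavoured sketch (the platform proposal differs in details such as teardown registration and AbortSignal support):

  import { Observable } from 'rxjs';

  // The constructor callback runs once per subscriber and pushes values:
  const ticks = new Observable(subscriber => {
    const id = setInterval(() => subscriber.next(Date.now()), 1000);
    return () => clearInterval(id); // teardown on unsubscribe
  });

  const sub = ticks.subscribe({
    next: t => console.log('tick', t),
    complete: () => console.log('done'),
  });
  sub.unsubscribe(); // runs the teardown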


This proposal makes the same mistake as various stream implementations (including RxJS in the past) of making operators methods on the observable.

This means you can't add custom operators without subclassing or doing weird crap with proxies.

When RxJS stopped using instance methods to implement operators, it opened up the ability to create custom, domain appropriate operators which I found key to writing comprehensible and maintainable code.

We really need a `pipe` operator, at minimum. https://rxjs.dev/guide/operators#creating-custom-operators
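
For anyone who hasn't seen the pipeable style: the point is that a custom operator is just a function from Observable to Observable, so you can name domain concepts (rxjs sketch; `evenHalves` is a made-up example operator):

  import { interval, pipe, map, filter } from 'rxjs';

  // Compose existing operators into a named, domain-appropriate one:
  const evenHalves = () => pipe(
    filter(n => n % 2 === 0),
    map(n => n / 2),
  );

  interval(1000).pipe(evenHalves()).subscribe(console.log);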


> This proposal makes the same mistake as various stream implementations (including RxJS in the past) of making operators methods on the observable.

I don't think they are making a mistake. I am sure Ben knows what he is doing, given that it was he who refactored rxjs 5, where all operators were methods on the Observable, into rxjs 6 with pipeable operators.

But, their objective is not to bring rxjs into the browser, but rather to bring the Observable primitive into the browser. And, like the Array prototype, which has methods, Observable, in order to be even minimally useful, needs some methods, which they modelled after TC39 iterators for the sake of consistency.

They say:

> We expect userland libraries to provide more niche operators that integrate with the Observable API central to this proposal, potentially shipping natively if they get enough momentum to graduate to the platform. But for this initial proposal, we'd like to restrict the set of operators to those that follow the precedent stated above, similar to how web platform APIs that are declared Setlike and Maplike have native properties inspired by TC39's Map and Set objects. Therefore we'd consider most discussion of expanding this set as out-of-scope for the initial proposal, suitable for discussion in an appendix. Any long tail of operators could conceivably follow along if there is support for the native Observable API presented in this explainer.

As to

> We really need a `pipe` operator, at minimum

Maybe we don't. Note that in RxJS version 8, they have introduced a new way of piping observables, which is the rx function [0]. Maybe they are thinking of something similar for the browser. Or maybe they are thinking of using the native pipeline operator if it ever gets approved.

In the meantime, for any complex manipulations on observables, users will probably still import relevant functions from libraries.

0 - https://github.com/ReactiveX/rxjs/issues/7203
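
As I read the linked issue, `rx` is basically `pipe` as a free function whose first argument can be any ObservableInput - roughly (sketch only; `rx` is proposed for rxjs v8 and the final shape may differ):

  import { rx, map, filter } from 'rxjs'; // v8 proposal, not in v7

  // Roughly equivalent to from([1, 2, 3, 4]).pipe(map(...), filter(...)):
  const out$ = rx([1, 2, 3, 4], map(n => n * 2), filter(n => n > 4));
  out$.subscribe(console.log); // 6, 8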


As someone whose first job out of college included a large knockout.js codebase (two-way observables), all I can say is: any developer reaching for Signals today, beware.

This pattern is great in very small apps, but if you grow any larger the debugging story becomes a complete mess, as it can become nearly impossible to know which component modified the state of the observable and when.

Also, this may be my first Old Timey Developer opinion I've written, where I warn developers about reinventing our mistakes of the past.


Wow, I'm actually a pretty big fan of the original Observable, so on the one hand I'm pretty happy about that (assuming it will get standardised, rather than being a Chrome-only API).

But on the other hand, indeed, like signals, I don't really think it meets that bar - especially since Observables have been widely available and actively worked on for a long time, without seeing wide adoption.


> especially since Observables have been widely available and actively worked on for a long time, without seeing wide adoption

Take a look at the "Userland libraries" section [0] of the proposal (almost certainly written by Ben). He argues that observables get reinvented in the userland in various libraries over and over again. It is a primitive, like Promise, only better, and with very sound theoretical underpinnings.

[0] - https://github.com/WICG/observable?tab=readme-ov-file#userla...


I think people have been burned by RxJS, where observables are "cold" by default and the source of many problems: over-subscription, false starts, etc.

The modern swing back to signals (which in a way stem from Knockout's observables) seems to be simpler and easier to reason about.
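
Concretely, the cold-by-default foot-gun looks like this (rxjs sketch; '/api/data' and the `render`/`cache` handlers are made up):

  import { Observable } from 'rxjs';

  // Cold: the producer function runs once per subscriber.
  const data$ = new Observable(sub => {
    console.log('fetching...');
    fetch('/api/data')
      .then(r => r.json())
      .then(d => { sub.next(d); sub.complete(); });
  });

  data$.subscribe(d => render(d)); // logs "fetching...", hits the network
  data$.subscribe(d => cache(d));  // ...and does it all again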


I'm pretty stressed out about the proliferation of standards here. It's unclear to me why we need so many ways of hearing about a new thing; they all seem like the same general idea, but everyone has some insular notion of it, it feels like, and I don't get the value add, don't get why we'd have another.

We already have a streams API that tells us of new things. Adding observables, adding signals, feels like a lot of totally different ideas that do mostly the same thing. What is missing from streams to make it cover these other cases? I suspect not a lot. But these are isolated cultures, each possessed of their own idea, their own mentality, and there's not much glue pulling these disparate worlds together, even though it feels like the overlap is overwhelming. I feel like we are making a huge mistake letting these people chip more shit into the language, and that we gain only more modes that don't work with each other. I love these communities, but not like this.
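
To make the overlap concrete, here's DOM events modelled with the existing streams API (a sketch, in a module, ignoring unsubscription and backpressure):

  // A ReadableStream is already "a source of new things over time":
  const clicks = new ReadableStream({
    start(controller) {
      document.addEventListener('click', e => controller.enqueue(e));
    },
  });

  const reader = clicks.getReader();
  const { value: firstClick } = await reader.read(); // waits for next click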


There are proposals for both Observables and Signals.


Agreed. It's true that reactivity is needed, but I also don't think it's time to standardize yet. There have already been attempts in the past too, such as mutation observers. Too early.


Thanks for this, gives my intuition some words to back it up!

I find especially compelling how the author separates concrete problems like reconciliation (hard to argue against) from the abstract principle of "everything should be a component" (can be argued more easily IMO).

Shamelessly plugging https://github.com/morris/vanilla-todo here; in this try-hard-to-stay-vanilla case study there are similar conclusions: Reconciliation is hard, CSS global namespace is problematic, etc. - I also did not use web components, but could not explain/justify that decision well (until now!).


My biggest beef with web components is actually the lack of "real" isolation.

You still have shadow-piercing props and settings that will bite and break your components (font size is a simple example).

If web components provided the kind of isolation that iframes provide I'd use them more.

But so far my experiments into having a set of components I could easily reuse on different websites has failed.


> This is the ideal scenario for the evolution of the web: developers worked collectively over a series of years

The ideal scenario is when you design something well so that you don't need to waste a series of years of that collective work solving for the lack of design.


That's the unrealistic ideal.


Big Cathedral philosophy. Someone just build it right! Kick ass build big and make it excellent! Just trust that one gal and let them soar!

Versus Bazaar mentality. It takes time. Let there be a competition of ideas. Figure out strengths and weaknesses over time. Adapt and keep building better.

You see this soooo much in the anti-Wayland troll-camps. That the idea isn't fully sprung, meeting every need in every way, is treated as a huge fault, as unforgivable. The slow integration of ideas has been going too slowly, and the Wayland protocol folks have finally recently started admitting it; the critics have a point. But being an official protocol is not the be-all end-all. So much protocol development has been just one camp, then two camps. That organic spread has been so beautiful, has allowed both innovation & deliberation at the same time.

But people expect a world of consumer software, where someone has made all the decisions and now we're there, and we live with it. The Cathedral and its authority & singularness appeals to so many, is closest to the rest of the world & commercial software. But the Bazaar has many advantages, has a cautiousness & explorativeness that hopefully delivers, long term.


With the addition of a JSX syntax to JS, they'd make more sense.

Presently, if I have something to create for the web, I have a set of component files, and,

    const tpl = (id, title) => html`<div id="${id}"><h1>${title}</h1></div>`;

    export default class extends HTMLElement {
        connectedCallback() {
            // attribute names follow tpl's (id, title) signature above
            const t = tpl(this.getAttribute('id'), this.getAttribute('title'));
            t.onclick = () => { /* ... */ }; // t is now a dom node
            this.replaceChildren(t);
        }
    }
with additional simple component-specific helper functions defined in the file, after the class export -- I find this covers all of what I need.

And I think it's what most basic apps need: a way of seamlessly moving between html/js representations, and a way of seamlessly composing templates.

It's an incredibly weird way of solving some strange fraction of the problem. But combining it with es6 modules, with string template processors, and so on -- you can create a really simple system of components.


You just reinvented lit.


Actually, looking at it (https://lit.dev/), I do exactly that.

I also define a `render()` and extend my own parent, which does a `replaceChildren()` with the render. And, strangely, I also call the processor `html`

I'll still stick with mine, however; my 'framework' is half a page of code. I dislike dependencies greatly. I'd need to be saving a thousand-plus lines at least.

Here, I don't want a build system to make a website; that's mad. So I don't want lit. I want the 5 lines it takes to invoke a dom parser, and the 5 lines it takes to define a webcomp parent.

It seems all these frameworks invent abstractions on top of how the browser works ('reactive properties', etc.) -- I'd prefer just writing code that works according to the abstractions inherent in the tool -- not half-baked ones invented by a framework author.
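
Roughly, the dom-parser half looks like this (a simplified sketch of my `html` tag, minus escaping, so don't feed it untrusted input):

  // Tagged template that parses an HTML string into a real DOM node:
  function html(strings, ...values) {
    const t = document.createElement('template');
    t.innerHTML = String.raw(strings, ...values);
    return t.content.firstElementChild;
  }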


Lit's HTML rendering code is mostly just safety and caching so re-renders only run code and mutate DOM nodes they actually need to - I read through the whole thing to get a feel for what it was doing and you might want to do the same to see if there's any ideas to steal.

Reactive properties aren't really an invented abstraction in any meaningful sense - they're mostly just sugar for creating a javascript property whose 'set' handler schedules a re-render if required - think

  get prop () { return this._prop; }
  set prop (value) { this._prop = value; this.scheduleRender(); }
so to my mind that functionality -is- inherent in the tool; how magical it seems is mostly dependent on whether you've already gone to the trouble of learning the relevant corners of javascript.

I've done (smallish) things that use no build tool and just have two script tags, one to pull in a minimal lit, one to pull in the page's code.

I do agree that it's often easier to implement just the subset of functionality you need rather than using a dependency, but it's still IMO worth understanding the existing libraries to see if there's an idea in there you want to add to your subset, and dismissing lit because it -documents- using a build tool since that's the common approach these days strikes me as a mistake (learning how it works and -then- deciding you don't need that is completely fair, mind).


> i'd prefer just writing code that works according to the abstractions inherent in the tool -- not half-baked ones invented by a framework author.

The abstractions in the browser are extremely bad, and you have to be very careful coding around them to get good performance out of them. But by that time you will have invented your own framework.


What shocked me was finding out that web components are starting to replace the input field. And my tooling broke.


I'm new to the webdev game, having done mostly cloud and infra, so the fast-moving pace of JS is dizzying. But recently I've been working with web components in alpinejs, which, coupled with tailwindcss, means I'm writing mostly HTML, breaking out to JS for special cases only.


I created https://notice.studio because web components did not work for me at the time (I needed something that worked for SEO).

Now, we are more a "cross platform page builder" with SSR when possible. But I still love the idea of web components, building a website like you would play LEGOs.


Big fan of any work in this area - just checked out your website, might use it for our company.


> Custom elements aren’t “semantic” because search engines don’t know what they mean.

Lots of good points in this article, but I don't think this is an issue. IMHO custom elements shouldn't be used to define content - we have standard elements for that - they should be used for rich interactive components, for example a datepicker. If there is content in the element then I do not see how search engines would interpret it any differently than a normal div; or, even better, the custom element should mark up content appropriately, such as <my-component><article>text</article></my-component>


>... and you get all the benefits of Shadow DOM with none of the goth language about “light DOM styles piercing the shadow root” and whatnot.

Goth Talk!

https://www.youtube.com/watch?v=wR2dsEZv578

Let's have a funeral to mourn the Loss of the Living Web Components!


Really good points, let browsers implement natively what developers are already doing


I agree with all the points, and I want to add that all the effort and literally hundreds of millions of dollars spent on web components could be spent more wisely by:

- actually listening to developers and looking at how the space develops instead of chasing their own vanity projects.

This is literally the Extensible Web Manifesto that many authors of web components espoused and then completely forgot.

- putting more effort into https://open-ui.org/ (somewhat ironically created and originally driven by Microsoft, of all orgs)


>> actually listening to developers and looking at how the space develops instead of chasing own vanity projects.

This is what WHATWG is. It is also why everything is "worse" and "better" than when it was just the w3c. I don't think we have had a clear, cohesive design intent for the front end/browser in more than a decade, and it isn't getting better.

I spent years working in PHP, and my life is better in Go and C... I would rather work in circa-2004 PHP code than what modern JS looks like. The whole stack of framework, tooling, linting/ts, node/bun, build/transpile... SO. Many. Dependencies... I had a 10 minute conversation on what the fuck a "thunk" is and why I would want to use it yesterday; this is the front end of 2023/4.

And I give credit to front end engineers for keeping it all going; frankly there isn't enough cocaine in Colombia to get me to do it. But I'm not sure any of the people who think that this is OK, or that it's how to build a web page, are the ones we should be letting ADD features to a stack we want to be backwards compatible. It's how we end up with stray stuff like web components and shadow DOM that never live up to the promise, and then we drag them around for the rest of time because Space Jam and whatever comes after still needs to render.

I'm gonna go back to the back end, I would rather face down the expensive foot gun of autoscaling than risk my sanity working in the front end.


> "The whole stack of framework, tooling, linting/ts, node/bun, build/transpile... SO. Many. Dependencys... I had a 10 minute conversation on what the fuck a "thunk" is and why I would want to use it yesterday, this is the front end of 2023/4"

Frontend hate is so prevalent on HN that you can literally cough up word salad and get upvoted as if you've just crafted a cogent technical argument. Thunks predate the invention of JavaScript by the way.


Thunks in javascript are barely related to historical thunks. There's so little overlap that they're borderline tangential.

And to the OP's argument, he's really just recounting his experience. It's a widely shared experience as you've admitted. There's a huge coterie of people who have seen things done well and seen things done badly, and while we ALL would admit modern javascript has brought some improvements over the old ways, we also believe the tradeoffs are rarely worth it and we are still building and searching for ways that are in line with the best paradigms we've seen.


> Thunks in javascript are barely related to historical thunks. There's so little overlap that they're borderline tangential.

Thunks in JS are exactly the same as thunks in any other language. I would love to hear your explanation of the technical differences. You're trying your hardest to salvage some semblance of an argument from OP's rant.

> And to the OP's argument, he's really just recounting his experience.

He hasn't recounted a single experience, besides his frustration with understanding thunks (which as I've said isn't even a JavaScript exclusive topic). He's literally just regurgitated a list of programming terms (many of which aren't even exclusive to frontend programming).

I criticized OP for not formulating a technical argument, and you've somehow responded with an even less technical argument.
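
For the record, the whole concept fits in a few lines (redux-thunk flavour; the endpoint and action names are made up):

  // A thunk is just a computation wrapped in a function to defer it:
  const lazySum = () => 1 + 2; // nothing runs until lazySum()

  // redux-thunk: dispatch a function instead of a plain action object;
  // the middleware calls it with (dispatch, getState):
  const fetchUser = id => async (dispatch, getState) => {
    dispatch({ type: 'user/loading' });
    const user = await fetch(`/users/${id}`).then(r => r.json());
    dispatch({ type: 'user/loaded', payload: user });
  };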


I’ve seen C#/Java projects with just as many dependencies, but much inferior dependency management.

To be honest, this reads like you've only experienced frontend from the outside; it's really not that bad. Besides, all the cruft is still pretty much optional - you don't really need that much in 2023 to create something nice. But as frontend engineers we typically care about the code not becoming a total mess, because of lots of prior burns; that's why I always introduce mandatory linting and formatting as a pre-commit hook wherever I can. It's such a small thing, but boy is it nice when all the code always looks and feels more or less the same. I've tried the same for C# but it seems like the community does not care to the same degree.


I've spent 12 years in the JS ecosystem. It really is that bad, and it got significantly worse in the past couple of years. It was hard to see, however, until I tried working in some of the newer ecosystems like Elixir, Rust, Go.

Things break all the time and a variety of tools and technologies just don't work with each other. Making choices is like stepping on land mines - tomorrow the ecosystem might decide that choice was wrong and abandon all the effort put into it (e.g. Next server components breaking most css-in-js libraries).

It doesn't have to be this way, there are ecosystems where this is not the case.

Despite that there is still a lot to like and enjoy. TypeScript is really excellent, building UI with things like tailwind can be a pure joy when you get into the flow, reactivity libraries like MobX / Solid and frameworks like Svelte largely make you feel like you're only writing business logic when managing state, etc. But overall the ecosystem is extremely "internally incompatible" for lack of a better word


> next server components breaking most css-in-js libraries

This is a legitimate gripe, and React Server Components in general have introduced a lot of complexity. It's worth noting though that you don't have to use server components and Next's new App Router. The Pages Router still works. I'm ok with a framework occasionally (not excessively) introducing a new way of doing things as long as the old way is still supported (much like React still supports class components despite the introduction of hooks).


My point is that this sort of thing doesn't happen so lightly in other ecosystems. And the "you don't have to use it" excuse doesn't hold when the project clearly wants to move in that direction:

- others will jump on the feature and use it. you might not have to use it but libraries you depend on may decide otherwise and rewrite

- you didn't have to use ReactHooks, but try using Apollo client or react-query without them now, or try finding any non-hooks documentation in general.

- you will slowly see bugs being neglected if they're not about the new and shiny.

- there is no equivalent of useContext for class components

So the word "supports" here carries very little weight in practice because the rest of the ecosystem constantly rewrites.


> My point is that this sort of thing doesn't happen so lightly in other ecosystems.

React has been around for 10 years (almost 11, since we're a few days away from 2024). Many of the ecosystems you're holding up as the epitomes of stability have had their fair share of large-scale changes in that same time frame. Remember Go before generics, or Rust before async landed [1]? Elixir's Phoenix didn't even have its first commit when React was released [2].

> there is no equivalent of useContext for class components

Class components can still use context [3]. They obviously can't use the useContext hook, because class components aren't compatible with hooks, but that's kind of a tautology.

> you didn't have to use ReactHooks, but try using Apollo client or react-query without them now, or try finding any non-hooks documentation in general.

Library authors moved to hooks because sharing stateful logic was difficult before hooks. I never bought into the GraphQL hype train, but I remember there being a ton of verbose render props necessary to make GraphQL libraries like Apollo work before hooks. I don't think anyone really complained about data fetching libraries adopting hooks. You could argue that the react devs should have created the perfect library in 2013 complete with hooks, but I can forgive them for not having perfect foresight. Like I said, if an API changes occasionally over a decade I'm ok with it. Technology should be stable, not frozen.

[1] https://areweasyncyet.rs/

[2] https://groups.google.com/g/phoenix-talk/c/l8kIhc_LC7o

[3] https://legacy.reactjs.org/docs/context.html#classcontexttyp...


You can't use more than one context in React classes

"React" has been around for 10 years but in practice its been 3.5 different frameworks

- the original

- the class version (still close enough to original)

- hooks version (largely a whole different concept and "compatible" only superficially while the rest of the ecosystem starts writing incompatible things)

- server components, and `use`, which monkey patch fetch and already broke half of the ecosystem.

And no, hooks are still far from perfect and are in fact very unintuitive and awkward to pretty much anyone who hasn't used React.

Additionally, there were designs possible to evolve from class components; it's just that the React team cared more about

- "innovation",

- "functional" aesthetic preferences

- enabling a DCE compiler

rather than compatibility and continuity

Example design that would prioritise compatibility and continuity while still enabling all the current features of hooks: https://news.ycombinator.com/item?id=35192312

Again, this would be considered _not enough_ attention to backward compatibility and stability in other ecosystems, even older ones like Python.


The problem isn't just React though. Most libraries in JS land just don't prioritise interfaces, compatibility and collaboration, instead opting for an "I'm just gonna build my own thing and not care about anything else out there" mentality.

Let's look at that Rust comparison and see what the equivalent situation looks like for JS. Before async-await, Rust had a variety of libraries implementing something close to Futures. They all largely standardized on the Future trait from the futures crate, which made it possible to write compatible library code that could build on top of any of them. There was a bit of an issue with compatibility as the trait essentially moved into std and some compatibility shims were needed, but after that a lot of the ecosystem adopted it, and it's possible to write runtime-independent code.

In contrast, in JS land we had callbacks which were not even values, Promises (which had a working group that actually cared about compatibility), monadic Futures, observables, thunks, event-based interfaces, a variety of libraries. This came to be dominated by node-style callbacks, and then what got standardized were promises, which instantly made most of the library ecosystem incompatible with the standardized syntax.

Both ecosystem have had these challenges, but to me it seems clear that the Rust community approached it with more care and thought on average.

Going beyond async-await, we can see similar patterns in the rest of the ecosystem. We have:

- half a dozen different component frameworks, and not one has thought to define a compatible component interface that anyone could implement.

- 10 different popular libraries for manipulating standard library collections, but not one common protocol or trait (other than iterable and concatspreadable) that would make them largely work with other custom collections (e.g. MobX reactive collections or similar)

- 5 different popular bundlers and 10 older ones, but not one common trait, interface or standard for writing bundler plugins

- at a certain point, we had 3 different ways to define monorepo workspaces between npm, pnpm and yarn

So for a variety of reasons we have a very "internally incompatible" ecosystem. You can barely get components to work together, and what works together today is almost guaranteed to break tomorrow.

I think that if we recognize this is a problem collectively, there may be a way out. There is very high interest in some maintainers for doing this, especially those that care about compatibility. This is because one-sided care is not enough - the side you want to be compatible with has to care and prioritize it too. Those maintainers could decide to form a sub-community that prioritizes compatibility and continuity amongst them, as well as a promise to their users.

The recognition has to be more widespread than just in maintainers though, as frameworks that have done this before (e.g. Ember) have been largely left behind due to hype propping up the latest shiniest thing.


> "React" has been around for 10 years but in practice its been 3.5 different frameworks

> the class version (still close enough to original)

React.createClass existed because React is so old that not every browser supported native classes, and it was ditched when browser support improved. It was a relatively seamless transition. The rest of your bullet points are basically restating what we've already discussed. There have been two major changes in the past decade: hooks and server components. Again, I'm fine with two changes over the course of more than a decade.

To give you an example of a Go project that is roughly the same age as React and has introduced major breaking changes, take a look at InfluxDB. It went from InfluxQL (its SQL-like query language) in v1, to Flux in v2 as the headlining feature, then back to SQL and InfluxQL in version 3 recently [0].

> You can't use more than one context in React classes

You can, it just requires render props which are verbose (as I've already stated). There's a whole section of the legacy docs on this. This was one of the many motivations for the creation of hooks.

> 10 different popular libraries for manipulating standard library collections

Not really. Lodash is obviously the standard choice, and at one point Underscore had a bit of a following too. We wouldn't need these libraries if TC39 didn't drag its feet implementing the standard library, but browser technology has many stakeholders so it's a slow process.

> In contrast in JS land, we had callbacks which were not even values, Promises (which had a working group that actually cared about compatibility), monadic Futures, observables, thunks, event-based interfaces, a variety of libraries. This came to be dominated with node-style callbacks, and then what got standardized were promises which instantly made most of the library ecosystem incompatible with the standardized syntax.

I disagree with your characterization of async in JS. First of all you're tossing in a bunch of fringe concurrency techniques to distort history. There have been three mainstream ways of doing concurrency in the three or so decades that we've had JavaScript (in the following order). Callbacks (which is the primitive we were originally given from the browser and node emulated, e.g. addEventListener), promises (which have been standardized incredibly well, e.g. Bluebird promises still work after all these years), and async/await syntax which is a nice layer of syntactic sugar on top of promises that every language with promises eventually adopts.

Functional style monadic futures were never a mainstream way of doing things, to the point where people proposing them were told that they were living in fantasy land (which is why they literally created a library called fantasy land). Observables and thunks are not specific to JavaScript. RxJS (the most popular library for handling observables) has analogues in every popular programming language, and Reactive Extensions were actually originally created by Microsoft for C#.

> Example design that would prioritize compatibility and continuity while still enabling all the current features of hooks:

This is an HN comment you wrote with a small snippet of code. I would hesitate to call it a comprehensive design without peer review. You certainly could've submitted this during the RFC on hooks. There was a lengthy discussion before the design was finalized (which another commenter mentioned when you posted that comment to HN).

> Again, this is _not enough_ attention to backward compatibility and stability in other ecosystems, even older ones like Python.

Your definition of stability must be wildly different from mine then. Every time I have to install a python library on a new machine I audibly groan. There's a great article about this, and an associated discussion on HN and lobste.rs that I recommend [1][2][3]. Python is complex enough that it literally drove the adoption of Docker, because it was easier to ship an entire containerized OS than to instruct people on how to install Python packages.

This conversation is getting a bit lengthy, but I'll leave you with this:

- The most popular programming language on earth will inevitably explore the design space a lot, because there are so many people working with the language and thinking about these problems. As Bjarne Stroustrup said "There are only two kinds of languages: the ones people complain about and the ones nobody uses".

- Older languages will inevitably have more cruft than newer ones, because design decisions tend to accrete over time.

- JS is in a particularly difficult position because it runs in both the browser and the server, and because it's built on web standards and TC39 uses design by committee (there's no BDFL supervising JS).

[0] https://news.ycombinator.com/item?id=37614611

[1] https://www.bitecode.dev/p/why-not-tell-people-to-simply-use

[2] https://lobste.rs/s/vtghvu/why_not_tell_people_simply_use_py...

[3] https://news.ycombinator.com/item?id=36308241


One last bit

> You certainly could've submitted this during the RFC on hooks. There was a lengthy discussion before the design was finalized (which another commenter mentioned when you posted that comment to HN).

There were RFCs submitted. My conclusion from that entire discussion was that the React team proceeded the way they had already decided anyway, without providing very convincing explanations, and the priorities they had didn't align with most of the community. I believe this is in line with how other Facebook OSS works. React has largely been built with internal clients and their needs in mind. Additionally there seemed to be a desire to continue to be "iconoclastic" and innovative and power through criticism, in a similar manner to the initial React release.

That's all perfectly fine, but in my opinion it's not what the community needs, and we'd be better off moving on to other solutions (e.g. Solid, Svelte). But more important than that, we need to move on to a different mindset, because without that mindset change we're likely to go through the same problems over and over again.


There are very few projects, in Javascript or elsewhere, that care about backwards compatibility the way React does.

You can still easily put a class-based component inside your modern hook-based codebase, and it will work (I did it this year :) )

But I think they kinda dropped the ball with Suspense and server components


This is not the definition of backward compatibility I have in mind. Backward compatibility also includes "continuity", that is, new features flow from the previous design.

React has been redesigned 2 times, and now with server components it's even less of an extension of previous ways of doing things.

Building 3 different frameworks where the old way to write code still works but nobody builds on top of that in any meaningfully useful way is a very minimal definition of "backward compatibility"


> I disagree with your characterization of async in JS. First of all you're tossing in a bunch of fringe concurrency techniques to distort history. There have been three mainstream ways of doing concurrency in the three or so decades that we've had JavaScript (in the following order). Callbacks (which is the primitive we were originally given from the browser and node emulated, e.g. addEventListener), promises (which have been standardized incredibly well, e.g. Bluebird promises still work after all these years), and async/await syntax which is a nice layer of syntactic sugar on top of promises that every language with promises eventually adopts.

I am (was) one of the maintainers of Bluebird. I'm well aware of the exact state of the node-dominated ecosystem at the time. Promises weren't much more popular than the "fringe" libraries you're describing, and (err, res) style callbacks largely dominated in the ecosystem. And yet it didn't matter - promises got standardized, even though the node community largely disliked them. Very different from the Rust process.

> Your definition of stability must be wildly different from mine then. Every time I have to install a python library on a new machine I audibly groan.

I'm aware that the python packaging story sucks. However there is broad awareness and strong desire to fix this. There is standardization work to get the ecosystem into a state that is largely coherent. There is much less such desire in JS: the thought of coming up with a "standard component interface" for example (i.e. a cross-framework compatible way of writing components) would be met with ridicule. (Note that I don't mean web components - I mean a common interface in the "trait" sense)

The Python article you linked itself admits "expirienced people will know what to do, this is not for them". Well, that's not the case in JS - it's actually really easy to get a combination of tools that simply will never, ever work with each other, just because they weren't designed with any common interface or compatibility in mind - regardless of how experienced you are. For example, at some (relatively short) point in time around 2019/2020, react-native simply didn't work with pnpm, because it assumed that packages installed together will have access to the same node_modules, so some of the packages didn't have the correct dependencies listed; pnpm, being strict, didn't provide the unlisted ones, which meant react-native was broken. (Later pnpm added a mode just for this sort of brokenness, but these sorts of limitations pop up all the time)

> JS is in a particularly difficult position because it runs in both the browser and the server, and because it's built on web standards and TC39 uses design by committee (there's no BDFL supervising JS).

There's been plenty of time to fix things. It's not just about the ability, it's about the general attitude in the community and our priorities. Before things get better, we have to admit they're really bad, and that we need to focus more on compatibility, continuity and collaboration instead of ignoring them in the pursuit of the latest "innovative" idea.


> promises got standardized, even though the node community largely disliked them.

Node is a phenomenal project, but it doesn't dictate the entire direction of the language. I have a hard time chastising TC39 for standardizing promises simply because Node developers didn't like the way it affected their core dumps. Like I said, JS is in a unique position as a language that runs on the server and in the browser. There are lots of stakeholders and not everyone is going to be happy, but I personally prefer promises over the callback hell we had before.

Your comments on Python:

> I'm aware that the python packaging story sucks. However there is broad awareness and strong desire to fix this.

Your comments on JS:

> There's been plenty of time to fix things

Python is one of the oldest programming languages that most engineers will likely ever work with. It was released the same year as Linux. Hasn't Python had "plenty of time to fix things"?

> The Python article you linked itself admits "expirienced people will know what to do, this is not for them".

First of all, the actual quote is "The people that will understand it’s not for them will self-exclude from the recommendations, as they should, since they know what to do anyway." The article also says the following: "I could not tag this as being “for beginners”, because it is not that simple. I know veterans that still struggle with this stuff".

But besides that, why in the world should you have to be an experienced engineer to figure out how to download and run packages of code from the internet? This is table stakes for most languages. You want third party JS frameworks to form a working group to establish interoperability before you'll grant the JS ecosystem your blessing, and yet you'll accept Python generally being bad at packaging code. You'll excuse anything when you're evaluating your favorite languages, but you nitpick when it comes to JS.

> For example, at some (relatively short) point in time around 2019/2020, react-native simply didn't work with pnpm

So you'll accept that the general Python packaging story is bad right now and has been for decades, and your only retort is that for an admittedly short period of time 3 or 4 years ago a single package supposedly had a bug related to pnpm (the least popular package manager)? Meanwhile the Python community is still debating how to install packages locally in a folder [1].

You don't think you're maybe contorting yourself to make some of these arguments work?

[1] https://discuss.python.org/t/pep-582-python-local-packages-d...


> You don't think you're maybe contorting yourself to make some of these arguments work?

First of all, I still prefer TypeScript to Python when it comes to language features. So no, I'm not defending my favorite language here - if anything, I'm "attacking" it.

Secondly, I don't really think in terms of defending favorite languages - they all come with strengths and weaknesses, and I think it's important to talk about and acknowledge both aspects even for languages we really enjoy. That's the only way for languages to move forward and improve.

That said, being embedded in an ecosystem for a long time makes us used to its strengths and desensitised to its weaknesses. This in turn makes it harder for us to really see the ecosystem's standing compared to others.

I'm trying to provide a perspective as someone who has worked in the ecosystem for a long time and has recently started exploring other options.

Are JS package managers better than Python's? Certainly. If you took a sampling of various tools and libraries in Python, would you be far more likely to have them work together than a sampling of various tools in JS? Absolutely.

I'm a relative beginner to Python compared to JS/TS, but I had a chance to work with it more recently. I had no trouble dealing with pip, venvs and poetry. Yes, there were challenges and annoyances, but at least it was possible to overcome them with sufficient knowledge. I agree though - Python has a long way to go to improve this.

However, if I compare this with my previous JS experience, it's a different order of magnitude. No amount of knowledge would've made styled-components work with Next's app router - I simply had to start over and make different choices for my SSR framework and UI library (the alternative was to help with Mantine's migration away from Emotion). So what do I do with my other, already well-developed side project that uses Mantine and the Next pages router? I can continue to use the pages router, but for how long? Who knows - it's not like there is any kind of concept of LTS, or even any kind of thought on that topic, when it comes to Next.js anyway (e.g. https://github.com/vercel/next.js/discussions/41906)

At the end of the day, IMO the main difference is attitude. There is broad acknowledgement in the Python community about their problems and they're considered important to solve. I don't see anything like that in JS - compatibility and providing continuity is just not something most authors want to think about. Innovation is (still) priority #1

I realize its hard to make a convincing statement about frequency/magnitude/probability/attitude though. Similar examples will exist across ecosystems. What kind of data would convince you that the situation in the JS ecosystem is bad enough that it warrants a priority shift in the community?


> what do I do with my other, already well-developed side project that uses Mantine and Next pages router? I can continue to use the pages router, but for how long?

I honestly think css-in-js solutions that inject styles into the document head at runtime like Styled Components and Emotion were never really sustainable or performant, and server components just revealed the inadequacy of these techniques. Generating actual stylesheets at build time is obviously more performant (using Linaria, PandaCSS, Tailwind, CSS Modules, etc).
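
At its core, runtime css-in-js does something like this on the client, per component (a deliberately naive sketch; real libraries add caching, vendor prefixing, SSR extraction, etc.):

  const injected = new Set();
  const hash = s =>
    [...s].reduce((h, c) => (h * 31 + c.charCodeAt(0)) >>> 0, 0).toString(36);

  // Derive a class name from the style text and inject a <style> tag once:
  function css(styleText) {
    const className = 'css-' + hash(styleText);
    if (!injected.has(className)) {
      const style = document.createElement('style');
      style.textContent = `.${className} { ${styleText} }`;
      document.head.appendChild(style);
      injected.add(className);
    }
    return className;
  }

  document.body.className = css('color: red; padding: 4px;');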

Even one of Emotion's maintainers proclaimed he was "breaking up with css-in-js" over a year ago [1]. He had this to say (before the app router was even announced): "With CSS-in-JS, there's a lot more that can go wrong, especially when using SSR and/or component libraries. In the Emotion GitHub repository, we receive tons of issues that go like this: I'm using Emotion with server-side rendering and MUI/Mantine/(another Emotion-powered component library) and it's not working because...".

Granted, even if css-in-js solutions are less than ideal, I still think Next should continue to support them for compatibility reasons. Neither of us can predict the future, but I would bet that Vercel will continue to support the pages router given that they have customers to answer to. Money is a powerful motivator.

> At the end of the day, IMO the main difference is attitude.

> I realize its hard to make a convincing statement about frequency/magnitude/probability/attitude though.

Like you said the problem is that it's hard to make a convincing statement about community attitude. All we can do is go back and forth with anecdotal evidence.

You'll speculate about whether or not Vercel will continue to support its pages router, and I'll provide my own anecdotes like the example I gave of InfluxDB (a Go time series database) deprecating its entire database query language on two different occasions, which fundamentally broke the entire product.

> What kind of data would convince you that the situation in the JS ecosystem is bad enough that it warrants a priority shift in the community?

The only data either of us have is anecdata, which is why we're kind of at a standstill here.

[1] https://dev.to/srmagura/why-were-breaking-up-wiht-css-in-js-...


> I honestly think css-in-js solutions that inject styles into the document head at runtime like Styled Components and Emotion were never really sustainable or performant, and server components just revealed the inadequacy of these techniques.

Sure. But here we have one that happens to be used by most popular UI toolkits. Forget about Mantine, it feels like MUI is in every other project.

Now again, anecdotally, outside of the JS community, the approach and attitude of maintainers would typically be:

- how do I make this extremely widely used package have less issues with minimal backward compatibility breakage

- can I additively introduce features that are not prone to the issues

not "I'm starting over"

Just in the article comments there are 5 other people pointing out a variety of issues with the author's new favorite choice. And they're all kinda right - all choices have tradeoffs. That's precisely why only dedication to compatibility and continuity can grow any one of them into something that is still sensible to use while acknowledging its limitations.

This is IMO a more sensible approach to software engineering

- Start with an approach that is significantly stronger in a certain aspect

- Acknowledge its limitations

- Work to ameliorate those as much as possible

P.S. the article about Emotion is also an example of how React's "no wait, this was always a bad idea because <X>" (where X in this case was concurrent mode) is causing rippling breaking changes. Again, this is their prerogative and at the end of the day, they have their own priorities internally at Facebook. But that's not necessarily healthy for the wider community.

Also FWIW, InfluxDB's approach to breaking changes has been compared to "a random npm library" https://news.ycombinator.com/item?id=37206194

> The only data either of us have is anecdata, which is why we're kind of at a standstill here.

This discussion has also been useful for helping me pinpoint what exactly is different about JS when I'm working with it.

I'm still hoping I was at least somewhat convincing, because I don't think this is a problem that can be tackled without wider acknowledgement from the JS community.


> - can I additively introduce features that are not prone to the issues

> not "I'm starting over"

React and Next DID additively introduce server components. The old way of doing things is not deprecated. Your entire argument is predicated on the idea that the old way of doing things will eventually be supplanted (which is pure speculation).

> P.S. the article about Emotion is also an example of how React's "no wait, this was always a bad idea because <X>" (where X in this case was concurrent mode) is causing rippling breaking changes.

It's not a bad idea because of concurrent mode, it's a bad idea because generating styles at runtime in a user's browser is much slower than building a static css file at compile time and serving it like any other static file. Ironically you want JS to be like every other ecosystem and that's what we're doing (everyone else outside of JS is using Tailwind and static stylesheets), and you're upset about it.

The frontend ecosystem would have done this from the beginning, but analyzing components with dynamic styling and extracting those styles at build time is non-trivial. It's essentially like building a small compiler (Tailwind in fact built a just-in-time compiler in v2.1). The simplest solution (from an implementation perspective) is to just use JS to dynamically generate styles at runtime, and that's what early frameworks like Emotion and Styled Components did. The state of the art has advanced, which, as a technology-oriented person, I'm ok with. Like I said several comments ago, technology should be stable, not frozen. But again, you're allowed to keep using these bloated UI toolkits because server components are optional.

> Also [for what it's worth], InfluxDB's approach to breaking changes has been compared to "a random npm library" https://news.ycombinator.com/item?id=37206194

It's worth very little. I gave you an example of a massive breaking change in another ecosystem and your rejoinder is "well some guy named 'somedude82' in an unpopular HN post compared it to npm". The arguments are getting weaker as this discussion drags on. By the way, I actually read that comment as saying "a paid database product shouldn't be as unstable as a random package". Since we're trotting out supporting HN comments, here's one from this very discussion that disagreed with you about React's stability [1].

> This discussion has been also been useful for helping me pinpoint what exactly is different about JS when I'm working with it.

This discussion has had the exact opposite effect for me. You seem to be willing to excuse any and all flaws in other ecosystems (Python's packaging story, the InfluxDB breaking changes, etc).

[1] https://news.ycombinator.com/item?id=38742422


> well some guy named 'somedude82' in an unpopular HN post compared it to npm

It implies that the JS ecosystem is well known for behavior just like the InfluxDB example - it's the benchmark example of such behavior. That was my point.

> The state of the art has advanced.

Here is where I have the most fundamental disagreement. My position is: "It hasn't advanced significantly because the JS ecosystem wants to restart every time it gets difficult". Well, ok, it's a bit more complex than that (external pressures from React and other driving-force libraries / frameworks, FOMO and all that) - but at the end of the day, the actual progress has been very slow.

> This discussion has had the exact opposite effect for me.

I'm sorry about that, and if I think about it, I guess the discussion didn't do that for me either. In fact, if I look at my hacker news history, I was probably arguing the opposite side quite a bit until 2-3 years ago. For me it was probably trying out Rust. Which isn't to say I think Rust is perfect (I'm still not sure that it makes sense to spend any effort thinking about memory management in the vast majority of cases, even with a compiler helping), but it provided a very interesting high-contrast point of comparison for me.


> It implies that the JS ecosystem is well known for behavior just like the InfluxDB example - its the benchmark example of such behavior. That was my point.

To me it implies that rather than self reflecting and admitting that other ecosystems have their own continuity problems, developers would rather respond with these tired tropes about npm. Sometimes a technology or ecosystem gains a bad initial reputation, and because of the tribalism that's so prevalent in tech, it's hard to dispel these tropes. Before JS it was PHP that was the butt of every joke, despite PHP making massive strides in usability, performance, and security. It took years for developers to stop thinking of PHP as a "fractal of bad design" [1]. JS is struggling with these same reputation problems. But I'm not sure if that random HN comment even warrants this much discussion — it's not like the author provided any actual examples.

> "It hasn't advanced significantly because the JS ecosystem wants to restart every time it gets difficult".

> but at the end of the day, the actual progress has been very slow.

I think you're drastically overestimating the scale of these changes. React hasn't "restarted". You act like the React developers decided to throw out the whole api and add Vue/Svelte style templates or something.

The InfluxDB example I gave you is actually a much better fit for your criticism. A database is nothing if you can't query it, so changing the query language twice is much closer to restarting. And given that they've reintroduced the original query language from version 1 in version 3, not only is actual progress slow, but it's actually a regression rather than a progression. This would be the equivalent of React deprecating hooks, and going back to class components after spending years promoting hooks.

> In fact if I look at my hacker news history, I was probably arguing the opposite side quite a bit until 2-3 years ago. For me it was probably trying out Rust.

Rust is cool and I like that the Rust developers are trying something innovative with the borrow checker, but I don't find myself pining for the Rust development experience after having played with it for a bit.

- The compile times can be horrendous.

- Cross compiling is much more complex compared to Go.

- The standard library is lacking a bit (which says a lot coming from a JS developer). I don't expect every language to ship with an http server like JS and Go do, but I remember needing to pull in third party crates for simple things like making an http request, or calculating the shasum of a file. One of the most popular libraries for making http requests has 49 dependencies [2]. The entire async runtime has been outsourced to a third party package (tokio).

- The 6 week release cadence is a bit fast for me, and despite all your talk of standardization, Rust still doesn't have a formal specification [3].

- The 'unsafe' keyword gets thrown around a lot in Rust crates. As a newcomer, my options are to audit every dependency and hope there isn't a memory safety vulnerability, or cross my fingers and hope the packages are of high quality.

[1] https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/

[2] https://crates.io/crates/reqwest/0.11.23/dependencies

[3] https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...


Here is a sample of well-meaning maintainers frantically trying to deal with situations like these and maintain some semblance of compatibility: https://phryneas.de/react-server-components-controversy


C# has https://github.com/dotnet/format but because C# is, well, not JS, the importance of linting is far less significant. Instead, there are hundreds of out-of-the-box analyzers that highlight problematic patterns or likely mistakes in the code, and there are even more that you can enable through extensions (like Roslynator) or through packages that are a 'dotnet add package' away.

On package management: it couldn't be more different between Java and C#, and it's incorrect to compare the two. .NET has few, if any, of the former's issues.


> I had a 10 minute conversation on what the fuck a "thunk" is and why I would want to use it yesterday, this is the front end of 2023/4

I believe I remember hearing thunks were old-school and nowadays it should be done with sagas... Like 4-5 years ago.
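For what it's worth, a thunk in the Redux sense is just a function that the middleware calls later, so you can dispatch asynchronously. A minimal sketch, assuming the redux-thunk convention; the endpoint and action types here are made up:

    // A plain action creator returns an object; a thunk returns a
    // function, and the redux-thunk middleware calls it with dispatch.
    function fetchUser(id) {
      return async (dispatch) => {
        dispatch({ type: 'user/loading' });
        const res = await fetch(`/api/users/${id}`); // made-up endpoint
        dispatch({ type: 'user/loaded', payload: await res.json() });
      };
    }

You then dispatch it like any other action: `dispatch(fetchUser(42))`.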


It’s really not that complicated.


Move your bundler and the god awful mess happening inside the node_modules folder down to the basement (make sure to put up a "beware of the leopard" sign so nobody accidentally gets hurt) and it's actually pretty nice. Sort of.


Since we are talking about web components, I just drop my web-components-based framework here: https://realm.codes


What great looking documentation! A very nice site.


I really don't understand the hate for Web Components (this article is certainly not the first one to hate on it). To me, WebComponents are great (as far as anything related to the web can be).

Let's say you need a custom favorite/like button in your app. Cool: define a `<myapp-like>` tag with `customElements.define('myapp-like', MyAppLike)`, and `class MyAppLike extends HTMLElement {}` gives you a way to really make it work for you.
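A minimal sketch of how that can look (the heart glyph and the 'liked' event name are just illustrative):

    class MyAppLike extends HTMLElement {
      constructor() {
        super();
        this.addEventListener('click', () => {
          // let the surrounding app react however it wants
          this.dispatchEvent(new CustomEvent('liked', { bubbles: true }));
        });
      }
      connectedCallback() {
        // runs when the element is attached to the document
        this.textContent = '♥';
      }
    }
    customElements.define('myapp-like', MyAppLike);

After that, `<myapp-like></myapp-like>` works in markup like any built-in tag.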

It works in any modern browser (I've been using it natively for 2 years now). It works in React, Vue, Solid, whatever... because it doesn't care for your frontend or backend stack the same way that you use `<div>` anywhere.

In my case, I'm using web components both natively and with the Lit framework. Very happy with the choice.

And yes, web components are not the solution for everything. Semantic tags? Nope, it doesn't work that way.


> I really don't understand the hate for Web Components

The article mentions a number of arguments against them (although I wouldn't describe it as "hate", even aside from how reductive that is). Any reason those arguments don't help you understand, i.e. do you think any of them are wrong, or do you just attach less weight to them?


Also a fan of web components w/ Lit.

Folks may not know that Adobe just shipped Photoshop in the browser using web components and Lit. Honestly, I don't think there's any other way that product could have been built.

Web components don't make you do anything. They allow things that are otherwise messy, especially when used with an "anti-framework" like Lit that doesn't provide any components of its own, just event hooks and bindings.


> Honestly, I don't think there's any other way that product could have been built.

Figma is built with WebGL for its canvas, and with React for its UI components. There's nothing special about Web Components in this setting.

> They allow things that are otherwise messy

Like what?

> especially when used with an "anti-framework" like Lit

Lit isn't anti-framework. Lit has an anti-framework stance while having all the attributes and makings of a framework.


> I really don't understand the hate for Web Components (this article is certainly not the first one to hate on it).

- They completely ignore any and all developments in the user space

- They are bad for any of their stated purposes (of which there have been several)

- They require complex specs that make the platform more and more brittle, and impossible to reason about or implement. Most of those specs exist only because web components are so poorly designed they introduce problems no other tech in the space has.

- After over a decade in development they still need 20 more complex web specs to fix issues that, again, no one else has: https://w3c.github.io/webcomponents-cg/2022.html And yes, this list only appeared in 2022 because it took the people who push them 10 years to finally come together and discuss what is needed to mark them as more-or-less complete.

- They will claim that "web components are 'use the platform'" even though it's trivial to show that web components are the worst citizens on said platform. I can't think of any tech that has been as ill-suited to the platform as web components [1]

- The previous bullet point exists because the entire development has been purely ad-hoc, haphazard, "we'll fix issues later", by people who continuously make a surprised Pikachu face when stuff doesn't work even in the simplest of scenarios that others have pointed out from miles away.

- Don't forget the complete arrogance of most people driving all this. They will deflect and ignore all questions and criticism; they will blame you, users, React, God, or anyone else for not understanding their grand vision (that they themselves cannot explain) and for not using it. They will stoop as low as borderline gaslighting even the people who work with them on developing this (reading through the discussions on Constructible Stylesheets made me lose faith in humanity).

---

[1] https://threadreaderapp.com/thread/1717580502280867847.html


The only good use case for web components that I see is embedding encapsulated 3rd-party widgets to prevent style contamination, kinda like iframes.

Composing them is absolutely horrid, as is sharing data between them.
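For the encapsulation part, though, a rough sketch of what that looks like (names illustrative; note that inherited properties like fonts still cascade into the shadow tree):

    class ThirdPartyWidget extends HTMLElement {
      constructor() {
        super();
        const root = this.attachShadow({ mode: 'open' });
        // page stylesheets can't select into the shadow tree,
        // and this rule can't leak out to the page
        root.innerHTML = `
          <style>p { color: rebeccapurple; }</style>
          <p>Encapsulated widget content</p>
        `;
      }
    }
    customElements.define('third-party-widget', ThirdPartyWidget);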


Another nice use is encapsulating custom form elements since they can natively provide a value to the form instead of needing a surrogate hidden field and all that entails.
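That works through the ElementInternals API; a rough sketch (the element name is illustrative):

    class FancyInput extends HTMLElement {
      static formAssociated = true; // opt in to form participation
      constructor() {
        super();
        this._internals = this.attachInternals();
      }
      set value(v) {
        // the value submits with the surrounding <form>;
        // no surrogate hidden input needed
        this._internals.setFormValue(v);
      }
    }
    customElements.define('fancy-input', FancyInput);

A real implementation would also render an input inside and wire up validation, but the setFormValue call is the part that replaces the hidden field.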

Embedding interactive content into a CMS is, I guess, another use case, but that's more semantic sugar and style encapsulation.


It seems you and the OP are discussing different platform levels. They use a framework like Lit and it abstracts away the complexity, leaving very simple deployment for end users.

By comparison, your complaints seem like those faced by the kind of people making the Lit framework. IMO neither viewpoint is invalid.


Well, the question was "I really don't understand the hate for Web Components".

Once you've reached for a framework that papers over all the issues, and offers you a nice interface, it's no different if you reach for React, Preact, Vue, Svelte, or lit :)


I might be wrong, but the way I see it is that Web Components provide the basic building blocks that make writing those frameworks easier. Of course, most of those frameworks are older than WCs and they made their own choices about how to implement components and probably (I don't really know) don't use WC (or use only parts of it).

But, if someone wanted to make a new framework, today, I see no reason why they wouldn't use WCs. And let's not have an argument about JSX being a nicer developer experience, because sure it is nicer to use JSX and have autocompletion and all, but JSX does not run natively in the browser. WCs do. There are compromises.


> Web Components provide the basic building blocks that make writing those frameworks easier.

No, they don't. From the fact that they need 20 more web specs to fix issues that none of the frameworks have, to the fact that they are actually pretty bad as basic building blocks: you can't control their rendering (once a web component is on the page, it will eagerly render, and will pull in and render whatever imports it has); shadow DOM breaks all sorts of expectations; each is an isolated rendering block, so you can't control for batch rendering; there's no server-side rendering story to speak of. And that's only off the top of my head.
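To make the eager-rendering point concrete, a minimal sketch:

    customElements.define('eager-widget', class extends HTMLElement {
      connectedCallback() {
        // fires as soon as the element is attached during parsing;
        // a framework gets no chance to defer or batch this work
        this.innerHTML = '<p>rendered immediately</p>';
      }
    });

The moment the parser attaches the element (with the definition loaded), that work runs; nothing upstream gets a chance to schedule it alongside sibling updates.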

There are many more technical reasons than I listed, but I'd have to hunt them down. There are many, many, many reasons why even the authors who originally really rooted for web components, and modeled their frameworks after them, have now moved away from them completely (Solid, Svelte, Vue).

There are also many, many, many reasons why even the new frameworks overwhelmingly _don't_ use web components for anything.

Here are a few of their thoughts:

- Ryan Carniato (Solid): https://dev.to/ryansolid/maybe-web-components-are-not-the-fu...

- Ryan Carniato's stream on web components: https://www.youtube.com/watch?v=BEWkLXU1Wlc (you can skip to 1:37:17 for a specific discussion)

- Rich Harris (Svelte): https://dev.to/richharris/why-i-don-t-use-web-components-2ci...

- All these issues are still there, 4 years later: https://twitter.com/Rich_Harris/status/1198332398561353728

and so on and so forth.

> but JSX does not run natively in the browser. WCs do. There are compromises.

JSX is probably the absolute last thing frameworks care about. They do care about things like not jumping through hoops to make things work.


> Web Components provide the basic building blocks that make writing those frameworks easier.

The argument of the post is that they should but don't really, and some new building blocks should be added to help with that.


What kind of functionality is "runs natively in the browser"?

JSX also runs in every browser.

Edit: Just rename "Web Components" into "Web Tags" and it's all good.


I don't know what browser you use, but when I do:

    const el = <p>Hello, world!</p>;
I get `Uncaught SyntaxError: Unexpected token '<'`. I even tried in several browsers, no luck.


You're just missing two things. First, install Lit. Then write it as:

    import { html } from 'lit';

    const el = html`<p>Hello, world!</p>`;

And you're done. These are called "tagged template literals[0]". Lit gives you the html "tag" for the literal that you're looking for.

[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


1) that’s not JSX, it just looks like it

2) it’s still an external library, which is OP's point


I don't think Lit counts as a framework. It's much lower-level plumbing. And tagged template literals are part of the browser.
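The tag really is just a plain function the engine calls with the literal's parts; a trivial sketch, no library involved:

    function tag(strings, ...values) {
      // interleave the static chunks with the interpolated values
      return strings.reduce(
        (out, s, i) => out + s + (i < values.length ? values[i] : ''),
        ''
      );
    }
    tag`1 + 1 = ${1 + 1}`; // "1 + 1 = 2"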


But lit's DSL isn't a part of the browser. Neither is lit's data binding/reactivity. Nor lit's rules on directives. Nor lit's tasks. Nor lit's context. Nor...


Yes, I would agree with that. And that's why "running natively in the browser" is not a remarkable feature of web components, because you will need something like lit anyway for using web components in a productive way. So you might as well use React, or any of the other billion frameworks.


Both are non-standard DSLs that need an external library to work


It seems the problem here is not the browser, but your lack of knowledge.


Do we need text (HTML is text) and Javascript?

Why can't browsers be used only to run native binary apps? Wouldn't that be more efficient? And even easier to develop for?


> And even easier to develop for?

We don't really need to speculate - just look at mobile apps. They're binaries. They're relatively easy to develop, but we need gatekeepers like Google and Apple verifying they're safe, and the quality is horrible at the low end. And devs need to pay an annual fee to distribute them.

Browsers already support something very similar (plugins via app stores), but they get very little love, because making a website achieves the same thing and the benefits a plugin offers are too niche for most things to actually need.


Apps for the web wouldn't need app stores. You just point the browser to a URL, download, and run. The browser would just isolate the app.


> And devs need to pay an annual fee to distribute them.

Only with Apple.


You also have to pay for the Play Store, the Windows Store, PlayStation, Nintendo, and Xbox.


I don't know about the others but the Play Store is a one time $25 fee.


Annual or one-time, it is still a fee.

On the consoles, not only do you have to prove yourself worthy of the platform, you also have to sign the NDAs and then rent the devkits.


Text parsing isn't the bottleneck:

https://mozakai.blogspot.com/2013/05/the-elusive-universal-w...

(this blog post is pre-WASM, but the main advantage of WASM vs asm.js isn't that WASM is a binary format or even better performance than JS, but that Javascript can focus on being a language again, while WASM can focus on being a compile target)


As purely an end user, yes.

I don't want to run another virtual machine that eats up RAM, has worse performance and battery life than something natively compiled and generally serves only the interests of the person servicing it to me.

I want apps that are 100% native and work when occasionally connected. And I want the www to go back to servicing content, not apps and garbage to me.

I don't want a gigabyte RAM footprint Electron app that ships Rust in webassembly to read emails when my current mail client used 77 meg of RAM and just works. I don't want 50 browser engines running on my phone.

I think as an industry we've lost our way and turned to crack smoking crazy.


I also want a www of content, but as well as a www of apps. The web is too good to give up. The idea that I can send you a URL into an application state, or navigate between states of different applications, is really cool.

Somehow having more separation between content and applications sounds ontologically impossible, however desirable.


> than something natively compiled

Yet basically every hot enough method will be run as JIT-compiled native code.


Eventually. At the end user's cost of time, memory, CPU, and laggy-ass performance while the JIT process is running. This is done on potentially millions of machines multiple times a day.

Which is my point about it serving the needs of the publisher, not the end user.


> This is done on potentially millions of machines multiple times a day

And? Your car is working at an average of 20-30% of thermal efficiency, yet the benefit you get greatly outweighs the costs, doesn’t it?


That's not even an analogous argument.


WebAssembly already exists. It's easier for certain use cases, but not most. It gives good performance, but worse than native software outside the browser.


WebAssembly only gives better performance for things that shouldn't be done in the UI main-thread. As long as you don't do number crunching in the mouse move event handler, I doubt WebAssembly can beat vanilla JavaScript for reactive UIs.


Once you work with the DOM, the (small) overhead of calling from WASM into a JS shim or any performance difference between WASM and JS doesn't matter. The internal DOM implementation will dominate the performance profile unless you do really dumb things.


> worse than native software outside the browser

...for typical portable C code the performance difference is surprisingly small though (within 20% or so). The main advantage of native compilation targets is that you have more "manual optimization potential" via non-standard language extensions like builtins and intrinsics (usually at the cost of portability and compiler compatibility).


And after a decade, the tooling still sucks compared with the development experience of native code.


That's why you develop and debug a native target, and then just flip a build system option to compile to WASM, it's really not much different from regular cross-compiling.

(DWARF debugging in VSCode is now actually possible via https://marketplace.visualstudio.com/items?itemName=ms-vscod..., but the native debugging workflow still makes more sense, unless you need to debug the WASM-specific parts of the code)


Like I say, the tooling sucks.

I am well aware of that workflow.

We can walk with a cane, doesn't mean walking without it isn't much better.


The individual parts are all there (but only since very recently); now they just need to be connected through a VSCode extension. I cobbled such a workflow together myself [1] with a couple of Python scripts and existing VSCode extensions, which is good enough for me. In the end it's up to IDE vendors to provide a similar workflow.

[1] https://floooh.github.io/2023/11/11/emscripten-ide.html


Yeah, it only covers C and C++, solves your specific use case, and still doesn't change the fact that the tooling sucks for everyone else, with the exception of something like Blazor or Unity.


Do you really want the thousands of bugs that would occur if we shipped a real binary to each device? And all the security problems?

Sorry, the FooApp 1.1 does not support your iPhone 12. It also crashes on Samsung Galaxy 21 and lower and has a huge security problem on Windows 10 when using an Intel Core i5.


WASM is not more efficient than regular old JS for many use cases. "HTML is text" is pretty meaningless: it is parsed into the DOM, which is efficiently represented in native code by the browser. And this DOM - with all its warts - is the biggest benefit of the web; it's standard, accessible, available everywhere.

WASM can only access it through JS, foregoing its benefits, while drawing on a canvas would kill everything that makes the web introspectable.


Mostly agree with your comment, but:

> WASM can only access it through JS, foregoing its benefits

Is this a fundamental limitation of WASM? I assume browsers will one day expose a native DOM API.


WasmGC is now available, so I guess it can also be solved sometime in the future, and I wouldn’t mind it at all.

Though we would need much better wasm debug tooling, similarly to JS consoles — today’s minified, obfuscated JS is not much better than machine code, but at least with the tooling we can sort of touch into the internals. I would like to keep that property of the web.


WASM can bundle DWARF sections, which enable stack traces with line numbers, step debugging, profiling…


> I assume browsers will one day expose a native DOM API.

TBH, that's extremely unlikely to happen. The entire DOM is built around the JavaScript object model; trying to map the DOM to a C-style API would be possible, but inconvenient to use, and performance-wise it really wouldn't make a difference compared to going through a JS shim (since performance will be dominated by the browser-internal DOM implementation, not by the user code manipulating it).

A 'WASM-native' WebGPU API would make a lot more sense, but again, that's extremely unlikely to happen.

The currently required Javascript shim might still disappear, but only because it might be generated automatically under the hood in the future.


We've tried that already with several technologies. It has always been a disaster.


NaCl worked fine, but was limited to x86.


Flash and Java worked fine too, but no one bothered to write a secure runtime.


I can't help but feel web dev is reinventing the wheel when it comes to UI. It's basically a game: there is a game loop, and all the actors/components may need updating on every frame. I'm not saying websites should be done in Unreal (although it's possible, if probably overkill), but the patterns and abstractions in single-page interactive applications have a lot to learn from patterns in game engines.


That “game loop” exists in the browser too. It’s just the main event loop, and it’s been there since the inception of JavaScript. Not saying that web developers aren’t reinventing the wheel, but I think the reason for it isn’t a lack of tools; it’s a combination of poor ergonomics of the existing tools and (perhaps more importantly) people not understanding those tools and building frameworks for things that are included by default.
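You can drive a classic per-frame loop with nothing but the platform; a minimal sketch:

    let last = performance.now();
    function frame(now) {
      const dt = now - last; // ms since the previous frame
      last = now;
      // ...update your "actors" and mutate the DOM here, scaled by dt...
      requestAnimationFrame(frame); // schedule the next tick
    }
    requestAnimationFrame(frame);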


That's mainly my point. The browser is the game engine and the web app is a game running in it, but UI interactivity, components, etc. are abstracted in such a way that the tools end up creating a lot of problems instead of solving them. It's hard to summarize in short form, but I think the tools would be much better if the underlying abstraction were better. HTML and CSS are a crude way of specifying how elements render in a viewport, and their main usefulness is really in the past, when computers were much less powerful.


That’s hardly the whole picture. There are endless possibilities within that - e.g. the design of a text editor will look nothing like that of an AAA game’s ECS.


Text editor is probably not the best example. I'm thinking of the typical web app with "data lifecycle" as the article calls it, where there is rich interactivity and content is updated frequently based on interactions. In those cases you have components listening on local and global scope, updating of state, and updating of the rendering frame (handled by the browser).


You don't have access to the low level render loop, you have to deal with the already high level DOM api.


There are at least animation libs with methods to access the render loop, and then there is the canvas API and maybe wasm.js? Not sure that is necessary for just exposing better abstractions to use directly instead of simplified languages with limited control like CSS.


Browser vendors are (or should be) managing the abstractions for their own needs, with developer needs expected to be met by framework/library developers.

Who says web components are meant for use directly by the developer? Maybe they're primarily meant for the browser developers (those who build browser features), not for use directly by web app developers.


No. Browser vendors implement specs, defined by the W3C. To quote the W3C‘s tagline:

> The World Wide Web Consortium (W3C) develops standards and guidelines to help everyone build a web based on the principles of accessibility, internationalization, privacy and security.

Web Components are a standard to be used by web devs.


FYI: the last time W3C, Inc. endorsed a WHATWG HTML snapshot as a recommendation was in 2021 ([1]), for the WHATWG snapshot published in January 2020, whereas the 2021 and 2022 snapshots were rejected, with no new review process having started since.

Reasons for rejection include a reporting API seen as privacy-invading, and related disagreement over whether W3C's HTML WG could augment/redact WHATWG spec text like they used to do until 2017, when the previous HTML recommendation was published, or has to go through the WHATWG process for any change according to the W3C/WHATWG "memorandum of understanding", which thus hasn't resulted in common understanding after all ;) Another reason was objection against the so-called HTML5 outlining algorithm, which Steve Faulkner has actually gone to great lengths removing in WHATWG HTML upstream (cf. [2] for details).

Unfortunately, the removal also brought incompatible change to HTML (the content model of hgroup, among other incompatibilities), rendering existing content invalid, which WHATWG set out not to be doing but which they lack the methodology of preventing and for which the spec derived from Ian Hickson's work frankly lacks formal qualities to support. Even more unfortunate is that this change has already spilled to derived standards such as EPUB3 which hence makes existing EPUB3 content using compound headings going back to 2011 invalid, and EPUB3 writers lacking a tool for actually verifying what readers can support (epubcheck was blindly updated without consideration for the installed base). Technically, Review Draft January 2022 and newer should then already be called HTML 6. Since nobody gives a rat's ass (including W3C, Inc.'s dormant HTML WG) anyway, and gross misconceptions about HTML specs prevail, like in your post, I'm not sure whether we should call it a day with WHATWG/W3C's HTML specs already.

[1]: https://www.w3.org/blog/2021/whatwg-review-drafts-of-html-an...

[2]: https://sgmljs.net/blog/blog2303.html


And who defines the W3C specs? Browser vendors! And if they don't agree with the non browser vendors within W3C, they create another standard org (WHATWG).


Anyone not under a rock knows W3C simply copies whatever WHATWG comes up with and then stamps W3C on it.


Kind of the point. WHATWG is descriptive not prescriptive. It describes things as they are. W3C is a standards body, sometimes they use the existing solutions and standardize them.


> with developer needs expected to be met by framework/library developers.

You can't implement stuff efficiently if there's no required functionality in the platform

> Who says web components are meant for use directly by the developer?

The people who develop them


This is why I abandoned my career writing JavaScript. It’s an industry of people hopelessly in need of frameworks and tools to do their job for them because they cannot program.


I think that's unfair. There are a lot of Javascript programmers that are perfectly competent.

Unfortunately, the trash "become a programmer in three days" bootcamps and scam courses all target web development, flooding the market with people who were taught one or two tricks and told they're the cream of the crop and should definitely not ask for their money back if nobody wants to hire them.

It's the same problem PHP and Python have suffered from: when your programming language and API is accessible and easy to use, you'll attract a lot of beginners and people who skipped the hard parts.

For some reason, the frontend world seems intent on reinventing itself every five years or so. The backend world works in cycles of 10 to 20 years, but it's going through the same motions. Everything became C, then C++ and Delphi came along, then everything became Java and .NET, now everything is becoming Go and Rust, and every iteration brings about new design concepts and paradigms.


Find me common JavaScript employment that doesn't use a framework, or that does more than merely put text on a screen, and you will prove me wrong.


What business doesn't use some kind of framework, even for the backend? I've never heard of a successful business building everything from scratch, usually you'll have something like Qt for GUI construction, or Django/Ktor/ASP.NET for a web server.

I believe Wikipedia uses mostly frameworkless Javascript, but, as you might expect from any sufficiently sized project, they have come up with their own frameworks instead.


Wikipedia is migrating to Vue[0]

[0]: https://www.mediawiki.org/wiki/Vue.js


Yes that. Lines up with the meme going around. They call me 007. 0 coding skills. 0 social skills. 7 Udemy certificates.


Looks to me like that would be the ultimate reason to stay in JavaScript, because it seems you hold a competitive advantage over the rest of the developers.



