Introducing Shadow DOM API (webkit.org)
315 points by twsted on Oct 26, 2015 | 149 comments



The DOM is such a crufty old API/data structure to work with, and combined with CSS it's computationally a pig. Anything we can do to isolate bits and bobs on the page from one another and make the DOM easier for the browser to render and emit events for, the better.

I just think the Shadow DOM and Web Components are too complex to work with as designed. The componenty composability of React definitely seems real to me, whereas I don't have such optimism about Web Components.

All I would want/need is a lightweight version of iframe that functions as a javascript security and css style sandbox; call it a subwindow. A subwindow could load html/css/js independently from the parent page like a Web Worker does, or it could borrow assets the parent page has loaded, and its HTML could be inlined as innerHTML statically/dynamically from the parent page (at creation time).

A subwindow would have an independent global/window object from the parent window. And, you could postMessage/onMessage communicate with the parent window if you needed. It could be constructed with its own dedicated DOM thread, or it could schedule on the parent's thread if you didn't want another thread spawned. Paints would have to be synchronized between the DOM threads, which could be a bummer.

I just want an inline iframe, which I know is like saying I want an "inline inline frame". ;)


You don't explain how you think Shadow DOM is too complex.

They're simple to me: basically just an isolated DOM tree attached to the document. The only slight complexity comes from projection, which really is just a way to plug a child element into the shadow. Projection is absolutely necessary for composable widgets; you couldn't create containers without it.
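
For the unfamiliar, here's a minimal sketch of what projection looks like with the slot-based API described in the post (the my-card element name is made up for illustration):

    <my-card>
      <span>This text gets projected into the shadow tree</span>
    </my-card>

    <script>
    // Attach an isolated shadow tree; its <slot> pulls the host's
    // children into the rendered output without moving them in the DOM.
    var card = document.querySelector('my-card');
    var root = card.attachShadow({ mode: 'open' });
    root.innerHTML = '<div class="frame"><slot></slot></div>';
    </script>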


One issue I have with this is that I won’t be able to apply a userstyle to restyle a page anymore, and I won’t be able to use websites without JS anymore.

Just earlier today I had an issue where I was trying to read this article [1] and the page, after loading for around 3 minutes (I was on mobile and on the free plan of my ISP, which gives only 64kbps, but unlimited), had all the content, but neither JS nor CSS.

There are already many websites which I can’t use at all anymore because they make everything invisible until JS and CSS are loaded. Flash of unstyled content is not an issue for me, it’s the whole reason I am able to read pages.

[1] http://www.npr.org/2015/10/22/450583840/in-d-c-and-china-two...


This is not an issue you have with WebComponents specifically, but with the direction the web has been taking for the last 10 years.


> One issue I have with this is that I won’t be able to apply a userstyle to restyle a page anymore

With the caveat that I only sorta understand this stuff -- if it emerges this way after the politics & process, you should be able to use the 'deep combinator' >>> to apply styles even across shadow DOM boundaries.

https://drafts.csswg.org/css-scoping-1/#deep-combinator
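
For illustration, a userstyle rule using the draft's combinator would look something like this (the .title class is hypothetical, and this assumes the combinator survives the spec process):

    /* recolor every .title, even inside shadow trees */
    body >>> .title {
      color: #333 !important;
    }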


> It’s currently disputed whether this combinator should exist.

Also, it breaks about every existing userstyle, ever.

And there should be a more general solution that also handles the shadow DOM of browser-internal elements like <input type="file">.


I think the deep combinator is already deprecated in Chrome and unlikely to make it into the spec.


Solving the problem of composability of visual components on top of the existing DOM leads to chicken-or-egg problems like getComputedStyle/offsetTop mentioned in that Mozilla blog link someone else referenced. DOM is a crufty stateful beast, and enhancing it is hard which scares me away.

Beyond that, the Web Components spec seems geared toward coarse-grained components - like a widget or a panel. I could be wrong about this, I admit; my experiments a year ago with Web Components were short-lived once I saw how much got polyfilled on Firefox. Anyhow, React is a javascript library exercise rather than a retool-the-DOM exercise to me, and that's a big reason I appreciate it. It offers data binding, rendering, templating, and true componentization all in one package - and it supports legacy systems. I think there's an answer in Web Components for all of those things, including legacy support up to a point with polyfills, but it wasn't cohesive, nor was it easy to roll things.

I think with lots of tool support, perhaps a compiled language targeting the Web Components spec, it could be just as easy to roll Web Components as React Components.


(Speaking as someone who worked on the Polymer team for a year, and has been writing React at a separate job for roughly half that)

> The componenty composability of React definitely seems real to me, whereas I don't have such optimism about Web Components.

Web components have pretty much the same level of composability as React, but hold promise for even more. The main reason I say this is that the component specs attempt to formalize a surface area for all web component libraries to interoperate with each other - though we're not at that point yet.

Many of the Polymer core components are a great example of composed elements (sometimes overzealously so). Shadow DOM and insertion points (<content> or <slot>) do wonders for this. (Yes, Polymer 1.0+ backs off from this a little bit)

React really only composes well with elements written in React. It can talk out to native components (and custom elements), but it's awkward enough that you tend to build wrappers to expose a more React-y interface (value, onChange, etc).
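
To illustrate, here's a rough sketch of such a wrapper - the <x-toggle> element and its 'change' event are hypothetical:

    // Expose a React-y interface (value/onChange) around a custom element.
    var Toggle = React.createClass({
      componentDidMount: function() {
        // bridge the element's DOM event back into React's data flow
        this.refs.el.addEventListener('change', this.handleChange);
      },
      componentWillUnmount: function() {
        this.refs.el.removeEventListener('change', this.handleChange);
      },
      handleChange: function(e) {
        this.props.onChange(e.target.value);
      },
      render: function() {
        return <x-toggle ref="el" value={this.props.value} />;
      }
    });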

---

> I just want an inline iframe, which I know is like saying I want an "inline inline frame". ;)

Assuming that you want these inline frames on a per component basis, we're talking about a ton of overhead per component:

* A new JS VM context (for the sandboxed global and security mechanisms)

* A new render layer (and the synchronized painting you mention)

* All the other browser bits that would need to be built up

---

DOM is a problem, but it's not the biggest problem, and performance-wise, there have been tremendous improvements in the last few years (Try running your benchmarks again).

However, imagine a world where all elements are defined with a set of public APIs like web components. No more crazy special cased properties, magic methods, etc. Imagine how much cruft the browser vendors could remove from the DOM. Imagine how much faster things would be.

Imagine all of HTML implemented purely in JS, and the browser only exposes the primitives needed to do that.


A very interesting perspective -- that I happen to agree with. There's also nothing preventing you from using web components from React.

> Imagine all of HTML implemented purely in JS, and the browser only exposes the primitives needed to do that.

It would be interesting to see if that's even remotely feasible with the current "limitations" of Shadow CSS and/or how Shadow CSS would have to be extended to support this.


> (Yes, Polymer 1.0+ backs off from this a little bit)

I didn't know about this; any guidance on why?


Polymer 1.0 has support for shadow DOM, but also what they call "Shady DOM": https://www.polymer-project.org/1.0/articles/shadydom.html

TL;DR: Shady DOM provides an API that is similar to shadow DOM, but without all the crazy overhead when polyfilling it. The downside is that it's not quite transparent to someone outside of a Polymer element


Isn't that how normal frames used to work?

Before they were deprecated for iframes?


I don't want to partition the page the way frames/frameset did, I want to sandbox code and content the way iframe does, just in a more lightweight fashion. If I could couple that sandboxing with a component factory library like React, that could really be something else.


Frames like frameset? It sounds more like Netscape 4 layers.


My thoughts exactly. What's old is new again.


I think you're talking about a "seamless" iframe.

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/if...


Note that this is not supported in any browser, and Chrome support was first implemented and then removed over two years ago, so it seems unlikely to ever be supported. Also a seamless iframe is not CSS-isolated, something that the GP said they wanted.

http://caniuse.com/#feat=iframe-seamless


I have sometimes thought about experimenting with rolling Blob URLs for HTML, CSS, and JS and using those to initialize an iframe. I think that would work. One of the things I'd like to do is be able to build a frame sandbox using assets already loaded by the parent page.
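
A rough sketch of the Blob URL approach - this works for same-origin content, though note that a blob: URL inherits the creating page's origin, so it isn't a true cross-origin sandbox:

    // Assemble markup at runtime and hand it to an iframe via a Blob URL.
    var html = '<!doctype html><html><body><h1>Hello from a blob</h1>' +
               '</body></html>';
    var blob = new Blob([html], { type: 'text/html' });
    var frame = document.createElement('iframe');
    frame.src = URL.createObjectURL(blob);
    document.body.appendChild(frame);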

I believe iframes can be very costly for the browser to manage, but I don't understand all the reasons why that would be - perhaps something to do with cross-domain security concerns.

It'd just be nice to have a stripped-down, same-domain version of iframe that could lay out nicely as a rectangle in the parent page, but not inherit styles or the Javascript namespace from the parent page.

The idea of having multiple threads contributing to the DOM that builds the page really intrigues me.

There's a proposed thing for HTML5+ called CanvasProxy, for example. The idea would be that you could detach a Canvas from the DOM thread and give it to a Web Worker. It could get a Canvas context in the worker from the proxy, and all draw commands would execute against the Canvas' pixel buffer in the worker. Meanwhile, back in the DOM thread, the actual canvas tag still lives on, just in detached mode, but you can still set CSS styles on that tag. You just can't get access to the Canvas context.

It sounds rad, and would make a big difference for canvases that represent heavy computations or frequent draws. No browser vendor has attempted to implement CanvasProxy, however; it sounds like it is a challenge to keep it actually performant, and/or there are GPU threading issues?


You and me and a lot of other people too. Get in line! ;)

It was so nice to finally get inline SVG, instead of having it confined inside Adobe's plug-in. Crazy days.


The problem is that the HTML standard is being written for regular computer users instead of for seasoned developers.


At first I was gung ho about web components. I was so excited I used Mozilla's X-Tag for one project and Polymer in another project. I am now no longer in favor of using them and it makes me sad, actually.

Web components are simply too heavy. They add download latency (yes, you can combine them, but I've never found tooling, including Vulcanize, that worked very well). The shadow DOM's separation also makes styling across multiple components more difficult, requiring awkward shadow DOM hacks in the CSS and/or duplicating or adding more CSS to the components themselves. The same goes for JavaScript and third-party libraries that don't expect the shadow DOM, though this isn't as bad.

When I'm working on something that needs to be optimized for low bandwidth / high latency I go as lightweight as possible and I concat as much stuff together as I can. I want web components to work so badly but they're way too slow for me to use anytime soon.


"Too heavy" seems like it should be qualified—maybe you mean they are "too heavy to use for widget-level components in a web app with the current implementations"?

That's a weaker claim, and leads to the question of what they might be good for right now. I imagine for example Stripe's payment widget could be a pretty nice use case? In other words, higher-level widgets that are substantially encapsulated already, and are already often loaded as remote resources.

But I've only looked a little at the W3 specs and haven't tried using them in reality.


> "Too heavy" seems like it should be qualified—maybe you mean they are "too heavy to use for widget-level components in a web app with the current implementations"?

With current implementations, yes, but when browsers have 100% native support? I want to say yes but would love to say no. My biggest issues (latency, trying to concat and minify down, and duplicative CSS) are not really addressed anywhere, but I'll admit I can't say for sure until all browsers have the full implementation.

> I imagine for example Stripe's payment widget could be a pretty nice use case? In other words, higher-level widgets that are substantially encapsulated already, and are already often loaded as remote resources.

That seems like a great use case. I'd prefer, as probably would most others, to host my own components rather than reference them externally, but in some cases you can't really get around it (I'm not familiar with Stripe's payment widget, but Google Maps is certainly like this), and there I could see it being a good use case. Honestly, that in and of itself may justify its existence, but I'm just not convinced of its usefulness outside of that just yet.


Polymer dude here. Some points:

- What's wrong with vulcanize (combines all your components into one single html file)?

- Polymer now offers Shared styles, so that you don't have to duplicate your css. And you can use BEM or whatever you like.

- Get ready for HTTP2 in a few years.


> What's wrong with vulcanize (combines all your components into one single html file)?

Most of the issues I ran into were related to inconsistent pathing. It was difficult to get every type of thing that could use a path (image tags, css, html inclusions, etc) to resolve correctly both at the component's own path and at the newly created path once combined. Perhaps this has changed since I last used Vulcanize, but it was very painful at the time.

I also had a hell of a time getting the Google Maps component, and a few other Google components, to Vulcanize with the rest of my components, but I honestly can't remember the issues now.

> Polymer now offers Shared styles, so that you don't have to duplicate your css

That's Polymer specific though; is there a way to do that with native web components? I could be missing something, but it looks specific to Polymer. Regardless, I'm glad this is being addressed to a degree, but the solution seems somewhat unintuitive, as it's not using CSS semantics to drive the styling.

Essentially, the majority of projects I work on have both developers and designers on them. The designers are usually pretty good with CSS, Illustrator, and Photoshop, so anything outside of CSS requires quite a bit of learning for them; that's a hill I have to climb with techniques such as these.

> Get ready for HTTP2 in a few years.

Just like I'm not going to be able to use ES6 for many years I'm not holding my breath here :)


Web Components are defined in script; they add no more download cost than any other script or styles.

Shadow DOM's style scoping is very intentional, and brings sanity to styling. Cross-scope styling is possible via CSS custom properties in a much more principled way than letting styles leak all over the entire document.
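
A quick sketch of how that looks (the property and class names here are made up):

    /* outer page: set a custom property on the component's host;
       custom properties inherit across the shadow boundary */
    my-widget {
      --widget-accent: rebeccapurple;
    }

    /* inside the widget's shadow styles: consume it, with a fallback */
    .header {
      color: var(--widget-accent, black);
    }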


And since a shadow DOM's styles don't have to even look at the rest of the page's styles, they can be computed much more quickly.


"letting styles leak all over the entire document"

CSS selectors allow for a pretty wide breadth of targets -- they can be as narrowly specific or as broadly general ("leaky") as the author likes.

What's the exact problem that another layer/mechanism of scoping solves that being thoughtful about your selectors doesn't?


ShadowDOM removes the need for a component user and a component author to collaborate in order for styles to be encapsulated. This, in turn, enables easier distribution and usage of generalized components.


Exactly. How many times have you wanted to use a component but found it near impossible because it's written with a framework, or uses CSS styles that conflict with your own, or is written for use with a different version of the very framework you're using? This is why we keep re-inventing the same dumb web components over and over again.


It's basically impossible to implement lower bounds for descendant selectors without scoping.

You can kind of fake it with build steps and some luck, but it will always have holes and problems with dynamically updated DOM.

Faking lower bounds also keeps the browser from knowing that it doesn't have to invalidate the isolated scopes when properties above them change.


One-hundred Shadow DOM Elements, each with 10 DOM elements and 10 CSS selectors.

vs

One-thousand `regular` DOM Elements with 1000 CSS selectors.

It's simple math: 10 times faster.


I don't think that's a fair comparison. Why would you need so many selectors in the 'regular' case and what are they doing? I've worked on some pretty complex apps and have never even come close to 200 selectors in total; probably not even 100 but I'm not 100% sure.

Shadow DOM allows for specific targeting within a component; is that necessarily faster than a query using another element / class name for targeting a component in "regular" usage? I'm curious about the numbers here, as I don't know that you're necessarily right or wrong.


I had the exact same thoughts when I first read about web components, or rather HTML imports in particular. The extra requests and latency introduced are not worth the benefits of having pluggable components. I can already imagine people writing server side code to include these "web components" inline into the static markup sent to browser, which somewhat defeats the purpose of having these components be dynamically interpreted by the browser.

Or maybe the correct way of using web components would be to create component "rollups" on server side, to reduce the latency effect of the extra requests. Or the browser might start caching common components between page requests.

Maybe the current web component spec is just the first step to get the reusable web started...


HTTP2. In the HTTP2 world, extra requests don't add any additional latency. The browser multiplexes them over a single connection. I think I'd heard that it even uses a common compression dictionary for all requests, which means that boilerplate code that's duplicated between components will get compressed away.


>In the HTTP2 world, extra requests don't add any additional latency.

They don't add any additional TCP connection latency, sure. I think it's a bit unrealistic to claim there won't be any congestion of the multiplexed stream which causes additional latency.

>I think I'd heard that it even uses a common compression dictionary for all requests, which means that boilerplate code that's duplicated between components will get compressed away.

I may be misremembering, but I thought that was just for request headers?


>I may be misremembering, but I thought that was just for request headers?

That's correct: https://http2.github.io/http2-spec/compression.html.


HTTP2 also supports server push—the web server sends resources that the browser is going to need without waiting for them to be requested: http://blog.xebia.com/2015/08/23/http2-server-push/


What extra requests? Are there any requests you get with Web Components that you don't with React, Angular, jQuery or plain JavaScript?

Web Components are just a set of APIs and capabilities in the browser. They don't imply anything about the number and type of files that are used to create your app.


I was comparing importing Web Components using HTML imports vs. static inlining the web component code within initial markup sent to browser.

Putting it another way - what's the advantage of HTML imports over server-side imports?

Server-side imports may increase the initial payload, but that will still be quicker than extra fetch requests for the web components.

@nostrademons is right that http2 will level the playing field for this battle... so definitely keeping an eye out for it!


Web component APIs and HTML imports are mostly orthogonal, except that an HTML import is a nice way to package HTML, CSS and JavaScript together.

You can inline your custom element definition just like you can inline anything else.


I use webcomponents (specifically Polymer stuff), and all I do is just roll together all the components I'm going to need for an app in one big concat-and-minified file. You don't have to fetch a gazillion components, any more than you have to fetch a gazillion .js files - cat, minify, gzip works on webcomponents as well. Plus, as other people mentioned, http2 solves this problem as well.


Please file vulcanize bugs or give feedback here on what didn't work for you.

Every componentization system will require concatting to get good performance on today's web (http2 push is close, but not quite here yet).


Isn't the lack of tooling primarily due to the fact that it is very early? I would expect all of that to improve in the future.


http2 may help you with that. Your tooling could also potentially inline the components; you don't have to make an http request for each one. The fact that Vulcanize didn't work too well for you for whatever reason seems like something that shouldn't be terribly difficult to fix.


> The fact that Vulcanize didn't work too well for you for whatever reason seems like something that shouldn't be terribly difficult to fix.

Maybe? I'm not sure. The vast majority of issues I had with Vulcanize stemmed from pathing changes.

So say you're working on a component that exists at /components/mycomponent/mycomponent.html

When you're referencing JavaScript, images, CSS, etc. you typically assume you're at / but it doesn't work like that for everything. So you either have to set up a configuration so you have a universal prefix to use for everything, or do everything relative to root (which, at least in the environments I deployed into, wasn't really doable, as the root could change). So the vulcanizer needs to understand and reroute everything within your component, and since there is no way to override the way the browser fetches content, you're left trying to figure out everything to include and doing it via ajax (if you wanted to be automated about it).

I'm not saying it's impossible, but it seems difficult enough that I don't know that it's the best path for web development to go down.

I honestly can't remember some of the other issues I've run into. Mostly JavaScript errors from third party components and even other Polymer components but it's been about 6 months or so since I've used it. I'm sure Vulcanize has gotten better by now and I know Polymer has.


HTML Imports are going away. I'm not sure the state of Polymer but my guess is that they will be removed eventually, so I wouldn't depend on a tool like Vulcanize being improved; it's a dead feature.

Instead use a normal JavaScript module loader to load components and concat them the usual way. Don't use imports.


HTML imports allow you to bundle HTML, CSS, and Script and import them in one step. ES6 only handles the script part. We very much love HTML Imports on the Polymer team and they aren't going anywhere for us.

I expect that when the ES6 module loader spec is finalized it will be fully compatible with the underlying semantics of HTML imports, and they will work quite nicely together.


There are several competing proposals for asset imports [1] any of which are viable. HTML Imports may persist, be supplanted by ES6 imports, or may otherwise survive in a different but similar form. Hell, I wouldn't be surprised if one day someone makes a web component to implement the current HTML Import spec using whatever the browser happens to support.

[1] https://hacks.mozilla.org/2015/06/the-state-of-web-component... (see: HTML Imports)


The other terrible side effect of web components is that it gives the standards group an excuse to never update HTML again. "Why do we need HTML6 when you can make anything you want with a big, heavy Javascript-based web component that takes expert programmers to make?" Indeed.

Just say no to web components. Say yes to upgrading HTML specs directly.


That's the exact opposite of the reason why browser vendors are implementing web components. It's so that they can get direct, real-world feedback from actual users of a feature before baking it into standards and making it impossible to remove from the web. The intent has always been that the most frequently used webcomponents will end up getting baked into the browser, and then into the HTML spec.

We used to write web specs speculatively, without having feedback from real-world use, and it was a disaster. Take a look at the list of W3C standards:

https://en.wikipedia.org/wiki/World_Wide_Web_Consortium#Stan...

How many of them do you actually use? Remember when everything was XML, XPATH, XSLT, the semantic web, and we were going to replace HTML with XHTML?


The intent has always been that the most frequently used webcomponents will end up getting baked into the browser, and then into the HTML spec.

That's nice in theory, but what's happening in reality is that no component is going to be standardized, and HTML remains stagnant for another 15 years.


It's a little hard to make that argument when web components themselves aren't standardized yet, and are fully supported natively in only one of the major browsers.

There's pretty ample evidence that common webdeveloper behavior does make it into the spec, e.g. jQuery => querySelectorAll, Javascript animations => CSS transitions & animations, long-polling => websockets, common layouts => <header>/<footer>/<main>/<nav> elements, dropdowns => <details>/<summary> elements, the addition of date/datetime/color/number/range/tel input types, etc. If you're still coding HTML like you did in 2000, you are now very obsolete.


Sorry, what? The browser has sorely needed extension and componentization capabilities for many years. Why should the W3C be the gate-keepers of what's allowed in HTML? That only leads to the invention of heavy-weight and balkanized UI frameworks. We can do much better than that.


Ask yourself why the generic UI elements in Polymer, like menus, dialog boxes, etc., aren't part of HTML itself.

If you want to see balkanized UI frameworks, there it is.


> why the generic UI elements in Polymer, like menus, dialog boxes, etc., aren't part of HTML itself

Because they shouldn't be. HTML should not be responsible for providing every possible UI element. In fact, I would argue that it provides _far_ too many UI elements, and should instead provide low-level user input APIs that can be used to build interactive elements.


And so web developers have to write basic UI widgets just to get basic UI elements. Or use a heavyweight component library that needs to be downloaded.

Thanks but no thanks.

HTML should provide a rich, declarative UI API by default. We shouldn't have to hunt down separate components to get that.

If you have to load Javascript, you have already lost.


A well-regarded standard UI library, say one jointly developed by the browser vendors, could be CDN-served and cached, or even bundled with browsers, eliminating the download cost - with the massive advantage that it could be versioned independently of HTML and browsers, and would be optional in case a better library came along.


> A well-regarded standard UI library, say one jointly developed by the browser vendors,

This is commonly known as the web browser.

lol @ browser vendors deploying UI components via Javascript, when they could just as easily deploy them in the web browser itself as part of upgraded HTML specs.


"just as easily" is an interesting phrase to use here. I have to assume that your day job does not involve building browsers.


Again, the moment Javascript is required to read a website, you have already lost.


This is completely inaccurate.


I'm pretty excited about what this will do for web components development. Polyfills have come very close, but deeply nested components can become a tangled mess in no time.

If anyone is interested in getting started with web components, I recommend taking a look at Google's Polymer[1] library. It's a fairly opinionated approach, but has a relatively small learning curve.

[1] https://www.polymer-project.org/1.0/


I rejigged my personal website using Polymer and was thoroughly impressed. It's pretty heavyweight, mostly because I'm too cheap to precompile and minify my Javascript, but the modular approach means that I can now, finally, apply actual software engineering principles to building websites.

The documentation could be better, and you pretty much have to like Material Design, and there could be a richer set of standard widgets available, and it desperately needs a CDN, but other than that I found it really pleasant to use.

What I haven't tried is building any kind of reactive interface. It's got some support for binding elements to data, but I didn't explore that much. Anyone care to comment?


We (tonicdev.com) experimented with both Polymer and React before settling on React. Doing things "the right way" in Polymer/WebComponents was just a mess. It did in fact remind me of the "old way" of doing things in Cocoa: i.e., accounting for every edge case with every appendChild or DOM mutation that could take place on your component. Having since tried React's "API-less" approach of just re-rendering what you want given the params/sub-components, the Polymer way was an absolute time sink that just offered no benefits.

Here is a question I asked back then: http://stackoverflow.com/questions/25856324/iterating-throug...

You'll see in the answer the complexity around dealing with children that come and go (vs. in React, where you'd actually write it declaratively once and be done with it - kind of interesting that Polymer has so much "templaty" stuff and is ultimately less declarative than the pure-JS approach of React).


Polymer has a few baked-in elements to help with various data binding needs: https://www.polymer-project.org/1.0/docs/devguide/templates....

Another classic approach I've seen is to create an "<x-app>" element which contains the entire site, which also handles the various application state and binding tasks. The Polymer Starter Kit[1] follows this approach, and it also includes tools for building/minification using Vulcanize[2] and gulp[3].

[1] https://github.com/polymerelements/polymer-starter-kit

[2] https://github.com/Polymer/vulcanize

[3] http://gulpjs.com/


> finally, apply actual software engineering principles to building websites

I've waited a looong time for that.

It does seem like we're getting close to being able to work on the web like I did on my Mac in 1990!


> actual software engineering principles to building websites

What are the actual software engineering principles Polymer enables developers to apply?


Separation of concerns, and better variable scoping as well. In a web component inside a shadow DOM, you can do something like input id="mainInput" or some other generically named DOM id, without worrying that some other third-party library or component will create another node with the same id.

Similarly for CSS selectors and class names - I mean, BEM and SMACSS and stuff are cool and all, but it's also nice to know that my CSS styles won't leak outside the component, so I can just do div class="main" or something and not worry about how other developers name their classes on other branches of the DOM.
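
A quick sketch of that isolation (element names made up), assuming shadow roots created in open mode:

    // Each shadow root is its own id/query scope, so duplicate ids in
    // different components never collide.
    var a = document.querySelector('widget-a').shadowRoot;
    var b = document.querySelector('widget-b').shadowRoot;
    a.getElementById('mainInput');        // widget A's input
    b.getElementById('mainInput');        // widget B's input, same id, no clash
    document.getElementById('mainInput'); // null - shadow contents are hidden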


Mostly composability. Assembling a web app from small, custom, self-contained components is a big difference.

It's just a library, so it won't do magic but it's a big improvement.


A new and pretty nice set of wc widgets is http://expandjs.com


Did you try React before switching over? How did you enjoy data bindings after working with setState()?


No, I've never touched React (mostly because it seems to require precompilation). I've been meaning to check it out at some point but the barrier of having to set up a Javascript toolchain has, so far, been too high to be worth it.

Before the Polymer rewrite the trivial amount of interactive stuff on my site was raw jquery.


FYI, React doesn't require any precompilation. You only need to include a JavaScript file. The JSX stuff is entirely optional, although almost everyone uses it. I share your fear and loathing of JavaScript compilation toolchains, and I successfully avoided all of them on a medium-sized React project. After we had a good working prototype, we started using some more tooling, but we kept it extremely simple to start with.

1. JSX tags simply compile to calls like React.createElement("a", { href: url }, ["content"]), and if you just create a short alias for that function, you don't need JSX at all (see the sketch after this list).

2. If you're anything like me, things like Flux scare you by being weirdly ideological and requiring unnecessary boilerplate. You can ignore it and just use a single object to store all your state, according to the "keep it simple, stupid" philosophy. That said, I would recommend that you look at the Redux library, which is a simple and rational approach to the nebulous "Flux" concept. Redux is simple—it's just a certain design pattern that makes sense in the React context.

3. I find React.addons.update to be the most obvious and simple way of doing immutable updates to nested structures—which is what most React programs are doing all the time. People have come up with various libraries for more sophisticated immutable data structures, but you're unlikely to need them, and React.addons.update gives you 90% of the bang.
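
Here's the sketch for point 1 - a minimal example of the alias approach (the url variable is just for illustration):

    // A short alias for React.createElement lets you skip JSX entirely.
    var h = React.createElement;
    var url = 'https://example.com';

    var link = h('a', { href: url }, 'content'); // <a href={url}>content</a>
    var list = h('ul', null,
      h('li', null, 'one'),
      h('li', null, 'two'));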

Next time I have a bit of non-stressed free time, I'd like to make a repository with an example React application that's extremely simple, requiring no tooling and as much as possible avoiding "lockin" to various newfangled libraries and paradigms...


I shared your fear of toolchains, but with Webpack, Babel, and Hot Module Replacement, I've been won-over.

Those tools make it painless to sculpt out a UI in real-time, using the latest ES sugars and a sane module system. Webpack has Uglify built-in, so you can target a minified file for production.

With Babel and JSX, you're still writing idiomatic JS, so you don't have the same lock-in problems you might with something like CoffeeScript. In my mind, it's a clear value-add with no observed downside.

What's your hesitation?


Yeah, those things are nice and compelling. I've used them, often happily! Some of the new language features are hard to live without once you're used to them; I especially like async/await and the destructuring syntax.

The flip side is simply that it's more technology, more stuff to learn, more stuff to keep running on your computers, and more potential bugs and strange interactions to run into.

I don't like to talk about it in a negative way, because it can sound like I'm disparaging the technology. But, there are nice things about having your project be written in the actual language that the browser already supports. Like simply never having to think about transpilers.

Of course once you've learned about tools like Webpack and Babel, you can use them to your heart's content if they make you more productive. I just wanted to emphasize to the person I replied to that you don't NEED any of them. Especially not to just get started.

You don't need minifying until there's a problem with your page load time and until you actually know that minifying will help with that -- until then, it's strictly speaking a premature optimization. (Writing less complex code may be more important!) And so on.

And "no observed downside" is not strictly true if you consider the cost of added complexity.

For someone who is overwhelmed by infrastructural proliferation, it should be nice to hear that you can develop actual applications with just plain JavaScript and maybe a small Makefile.


By "no observed downside", I meant once the toolchain is set up. Now, I get Hot Module Replacement and JS Harmony sugars for free, without having to worry about which browsers natively support which new features.

Your point about perceived complexity/intimidation is a good one, but for me, removing cross browser inconsistencies simplifies development more than Webpack complicates it.


There's always a cost to complexity, and it's usually in interop. What happens if you want to share a quick technology demo with a non-technical person over e-mail? What if you want your business cofounder to be able to tweak CSS by herself, and not have to learn what "setting up a toolchain" means? What if you're leading a team (or worse, a department or company) of people who aren't familiar with the latest in the JS world, need to do minor tweaks to the frontend, but can't justify learning a whole stack of technologies for something they spend 20% of their time in? All of these are real scenarios from my professional career.

I like Babel too and the bulk of my new development is in ES6, but ubiquity is a really nice feature for development tools, probably the nicest of all.


I made a quick tutorial with this in mind. Most of it is JSX based, but it has 3 lessons without it. Maybe this helps :)

https://github.com/kay-is/react-from-zero


JSX isn't mandatory, so you can just write plain old JS if you want. And if you want to use JSX but don't want to use a full-blown toolchain for development, there is an in-browser JSX transform library[0] that you can include to do the compilation on the client side.

[0]: https://facebook.github.io/react/docs/tooling-integration.ht...


To be fair React without JSX is pretty bad. I did that for a while with TypeScript before it got JSX support.


I agree with you but to provide some more context, some people seem to prefer the non-JSX approach. Here is a (random) example of what that looks like: https://github.com/littlebits/react-popover/blob/master/exam...


Oh yeah, I remember seeing that with coffee-script without the parentheses.


React with JSX is pretty bad. A library like https://github.com/mlmorg/react-hyperscript solves the verbosity with not using JSX.


You know what, the barrier is fucking shit, that's true. I had issues myself and I'm not sure I am doing it right. However, don't knock it just because of the precompilation stuff. Mostly because I'd like a nice comparison by someone, heh :)

I personally hate bindings now, apart from using them with forms, but React offers a mixin for that. What I am interested in is whether Polymer is worth it or not.


If you like a discrete control flow, then Polymer is a valid approach. TBH, if your application isn't beyond moderately complex, it doesn't matter too much which framework you choose as long as you have relatively clean separation of logic and control flows.

React (with a flux-like control flow library, such as Redux) has a higher cognitive load to start. Where it shines is that as additional features are added, there is very little additional complexity, whereas with Polymer (or Angular) your application will see a curve of additional complexity as features are added.

I started a pretty basic setup for React + Webpack with hot reloading[1]. I'm going to add in Redux next, then Router... At which point I'm planning on adding in material-ui[2]. From there, I'll try to keep it updated or forked to use as a base application.

I started it off by following the SurviveJS book[3], but my direction is a bit different. I'd also recommend reading the full-stack redux tutorial[4].

[1] https://github.com/tracker1/r3k-example

[2] http://material-ui.com/#/

[3] http://survivejs.com/

[4] http://survivejs.com/


I've been using Polymer for a few months now after following the WebComponent spec for quite a while, and I'm really liking what I'm seeing.

While it can admittedly lend itself to a very Java Swing-feeling development flow, that's a breath of fresh air compared to the traditional mess of web code I usually end up with.

WebComponents in general (and Polymer as a particular library on top of them) I feel are going to be a 10x force multiplier for web development.


Yeah Google Play music is extremely slow in Firefox and I'm pretty sure it is because of their insane Web Components polyfill.


Play Music is using an old prerelease version of Polymer. When they update to the latest production Polymer it should be much, much faster.


Are they using it for Inbox, too? The memory use on that page. Yikes. To think I used to browse the web on 64MB of RAM without ever feeling memory-constrained. Sometimes with my e-mail client also open!


Inbox is NOT using Polymer in any way. But the reason it's so resource-heavy is its huge DOM tree.

Web Components to the rescue!


I’ve seen cases where I’ve opened Inbox on the web on Android and all the apps in the background crashed... The memory usage is insane.


I just checked. My Inbox has no mail in it (in the part I have open, anyway, which is the... inbox). It is using 1.01GB of memory, 858.4MB compressed.

Double-u. Tee. Eff.

Docs.google.com, with a spreadsheet open? 617/585MB. Still entirely unacceptable, but lower than an empty Inbox. The difference between the two should be more than enough to run both, even with a comically generous bloat allowance.

(I'd written, then deleted, something snarky about how Google's notoriously rigorous hiring practices really shine when we look at the software that results - but seriously, all those smart people and this is what comes out? All their web software is embarrassingly bloated. Chrome's battery drain versus Safari? Android, which seems to take Do the Worst Thing that could Possibly Work as its guiding principle? What is going on in there? Is it some sort of organizational problem?)


This is exactly the issue.

The products have gotten bloated, slow, unusable on older devices, and the quality of the actual service has gotten worse, too.

This is just... impossible to use.

Google already is the new Microsoft, and Chrome the new IE. With this, they’re also losing any other reputation they had left.

Google products once used to be about the tiniest and fastest solution, working everywhere, and using the least possible resources, without any bullshit features.


Inbox does not use Polymer


It might also be interesting to take a look at Aurelia (http://aurelia.io). It's a fairly new framework (still in active development; it hasn't hit 1.0 yet) which is heavily influenced by web components and seems to implement them quite well. When using their bundling gulp task, I can have a full component-based app that only requires a couple of HTTP requests to load.


I really like webcomponents, but they still statefully update the UI, just like most native UI frameworks.

React solves this by giving a declarative, functional approach to web development.

Would it be possible to combine React with webcomponents? If so, are there any demos out there? I couldn't find anything.


That depends on what you are doing inside of your web components. You can load any JavaScript you want in there, you know. If you want to use React, that's fine. Web Components is just a set of extensions to the DOM standards to allow encapsulated custom elements.


SO has some discussion on React & Web Components: http://programmers.stackexchange.com/questions/225400/pros-a...


I was wondering about that when I read this post this morning.

The interface of a React component is very small: it's effectively just a render function with some lifecycle hooks and an internal setState method. Depending on your data management philosophy, you can even scrap state altogether and leave just the render function:

    var View = (props) => <div>{ props.content } </div>;
You could probably write a Polymer-esque library that lets you create Web Components with a similar API. I haven't looked into the implementation details of react-{dom, canvas, native}, but I imagine that's effectively what they do - bridge from that API to the platform's native one. I expect that if Web Components are ever faster or otherwise better than the legacy DOM, such a shim will emerge to make WC as easy to write as React components.
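
As a hedged sketch of what such a bridge might look like, using the (not-yet-shipping) v1 custom elements API - every name here is made up:

    // Wrap a pure render function in a custom element: attributes in,
    // markup out, re-rendered on every attribute change.
    function defineView(name, render) {
      customElements.define(name, class extends HTMLElement {
        static get observedAttributes() { return ['content']; }
        connectedCallback() { this.update(); }
        attributeChangedCallback() { this.update(); }
        update() {
          var root = this.shadowRoot || this.attachShadow({ mode: 'open' });
          root.innerHTML = render({ content: this.getAttribute('content') });
        }
      });
    }

    defineView('x-view', function(props) {
      return '<div>' + (props.content || '') + '</div>';
    });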


The things I'm curious about with Shadow Dom:

- How does it affect performance, if say, a large application is composed of many shadow dom elements, each containing a large amount of redundant CSS?

- Will there be a way to let certain css styles "leak" through while encapsulating others? Or is any type of native inheritance gone?


Performance will be better with Shadow DOM. The styles are deduplicated (in Chrome at least, and I believe Firefox too), and the style boundary adds opportunities for optimization.

There will be ways for styles to leak. The Polymer team has added a polyfill for CSS Custom Properties and CSS Mixins (the @apply rule [1]).

Custom Properties cross shadow boundaries, and @apply lets you define property sets that are applied later down the cascade. This lets components define very targeted sets of custom properties that they apply to specific elements within their shadow root.

[1]: http://tabatkins.github.io/specs/css-apply-rule/
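
Per that proposal (the syntax may well change before anything ships), the idea looks like this - names invented for illustration:

    /* outer scope: define a whole set of declarations as a custom property */
    html {
      --toolbar-theme: {
        background: olive;
        border-radius: 4px;
      };
    }

    /* inside a component's shadow styles: apply the entire set at once */
    .toolbar {
      @apply --toolbar-theme;
    }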


Because a template defines the styles once but can be stamped multiple times, there should be minimal redundant copies of the styles themselves. How this performs remains to be seen, as it is up to the browsers to implement the spec optimally.

You can "pierce" the shadow DOM with the ">>>" combinator (previously /deep/), allowing you to style elements from outside of the shadow DOM. This is discouraged in the case of web components, however, as a component is supposed to operate as a black box entity. Instead, a component can expose APIs for styling, such as with CSS variables: https://www.polymer-project.org/1.0/docs/devguide/styling.ht...

Edit: This article[1] has details on performance and styling options.

[1] https://hacks.mozilla.org/2015/06/the-state-of-web-component...


The last I had read[1], the intention is to deprecate shadow piercing combinators altogether.

[1] - https://www.w3.org/wiki/Webapps/WebComponentsApril2015Meetin...


I guess that would make it discouraged in all cases, then.


I'm sure Chrome/Opera and Firefox will tweak their implementations to match the updated spec soon, then all that's left will be Edge, and they've expressed intent to implement!! https://dev.modern.ie/platform/status/shadowdom


Well, we need the custom elements spec to be figured out too


This is true. But custom elements are quite easy to polyfill, especially compared to shadow dom.


I'm extremely excited about this. I have been playing with stock Web Components for the last year and a half or so. I've been hot and cold on Polymer; I prefer to learn the standard first. Overall I am very happy with this development.


Agreed. While Polymer is a neat project, I honestly think it's kind of hurt the adoption of web components. I'm sure I'm not the only one who's tried to learn about web components according to the standards, only to run into a ton of Polymer stuff on the web and get turned off by the heavy, opinionated, framework-like nature of it. The underlying web components APIs are much less off-putting to me, once you can dredge them up through the SEO swamp.


Polymer is really not that opinionated. All they do is provide sugar that you would eventually write yourself after you get comfortable with web components and start to notice where the boilerplate lives.


One step closer to XAML...


or XUL ;)


Finally! ;)


I, for one, am not excited about this.

It's a "oh, that's neat" feature visible only to developers, but it is not enabling anything groundbreaking for the end users, so it isn't making webapps more competitive with native. We still need iframes for 3rd party embeddable components, and for 1st party components we have good enough solutions, and the tendency seems to be abstracting the DOM away.

Web Components keep taking a lot of spec and implementation effort that could be spent on something with a bigger impact (ServiceWorkers in iOS? A transactional DOM that doesn't jank/layout-thrash? Activities/Intents/Scopes for native-like sharing?)


A solid platform gets better gradually; it may not excite you, but it's an important step in the right direction.

Indirectly it does impact the users, because polyfills for browsers that do not support shadow dom have performance issues, not counting the time developers spend maintaining them.


Without Microsoft adopting this too it remains in a "sort of kind of useful but not really" status.

Where is Microsoft in this adoption cycle?

And are there any incompatibility across the existing implementations, namely, Chrome, FF and now Webkit?



Microsoft has publicly announced that they're implementing Shadow Dom for Edge.

Chrome is implementing Shadow DOM v1 now; an experimental version is in Canary, I believe. The currently shipping version has a different model for distributed children, but we're actively moving to the newly agreed-upon spec now.


"Stop writing frameworks" by Joe Gregorio [OSCON 2015] is a nice intro to html imports, shadowdom and other aspects of web components, although his knock on frameworks goes too far for me, especially since some frameworks can use web components as well. https://www.youtube.com/watch?v=GMWAHzXQnNM


I have always been skeptical about Shadow DOM. It tries to solve several different problems with one solution.

Say you want a static page without scripts but with CSS encapsulation. You cannot achieve that. Instead you have to use Shadow DOM and get it packaged together with other things you don't need.

CSS encapsulation could be solved with a special attribute, a new HTML tag, or even a new CSS rule, and that would have been more flexible than Shadow DOM is.


You're looking for "<style scoped>", which is in Firefox, used to be in Chrome, but was eventually removed in favor of Shadow DOM.

http://html5doctor.com/the-scoped-attribute/

http://caniuse.com/#feat=style-scoped

https://groups.google.com/a/chromium.org/forum/#!searchin/bl...

I think the main reason webcomponents won out over scoped styles is because usually once you need to scope your styles, you find you also need to scope your querySelector calls, and your DOM traversals, and your IDs....and pretty soon you have shadow DOM. It's unlikely that you need style namespacing if you're just a single dev working on a static page, and it's unlikely that you won't need JS & DOM scoping if your product grows beyond that.


Great to see such progress. I hope within a decade or two HTML will finally allow us to create convenient reusable building blocks.


I've been working with Polymer and it's been a great experience. This is great news, as it will make polyfills less necessary.


Compose all the things! It's definitely reassuring to see that this is the attitude of the people who are building the standards for the future of the web.


Can anybody tell me how this relates to http://trac.webkit.org/changeset/164131 (discussed in https://news.ycombinator.com/item?id=7243122)?

New API, new implementation, or both?



Really like this as a React dev. Recently there's been a trend of using inline CSS for CSS modularization in React, but I feel it's not really a clean approach. With Shadow DOM you can mount the whole React app onto a shadow root and have perfect CSS encapsulation. Hopefully we'll see Shadow DOM supported in major vendors' stable channels soon.
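
A sketch of that mounting (the App component and host id are hypothetical):

    // Mount an entire React tree inside a shadow root: the app's styles
    // can't leak out, and the page's styles can't leak in.
    var host = document.getElementById('app-host');
    var shadowRoot = host.attachShadow({ mode: 'open' });
    var container = document.createElement('div');
    shadowRoot.appendChild(container);
    ReactDOM.render(<App />, container);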


Shadow DOM is not the solution for CSS isolation for components. You'd have to mount every single component into its own shadow DOM which seems unnecessarily complex (and most likely has significant perf implications).

The answer to CSS scoping is much simpler: CSS Modules https://github.com/css-modules/css-modules


No, I wouldn't mount each component into a shadow DOM. For example, if I have a React app with 2 major parts developed by 2 teams, I would just mount them onto 2 shadow roots so each team doesn't need to worry about CSS class collisions or whatever else is on the webpage.


> and most likely has significant perf implications

This is true, CSS performance is better in native Shadow DOM.


The template element looks very useful. Basic support in old browsers should be possible by simply setting it to display:none.
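
A sketch of basic usage, with the display:none fallback mentioned above:

    <style>
      /* old browsers treat <template> as an unknown element and would
         otherwise render its contents */
      template { display: none; }
    </style>

    <ul id="rows"></ul>

    <template id="row-template">
      <li class="row"></li>
    </template>

    <script>
    // Clone the template's inert content and stamp it into the page.
    var tpl = document.getElementById('row-template');
    var clone = document.importNode(tpl.content, true);
    clone.querySelector('.row').textContent = 'Hello';
    document.getElementById('rows').appendChild(clone);
    </script>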


There are some key features of template that hiding it would not accomplish, namely that contents like scripts are guaranteed not to execute until they've been cloned/stamped.

Fortunately there are already polyfills for that: http://webcomponents.org/


What's going to happen to all those engines/frameworks with Shadow-DOM-style view technologies? In short, is this the end of React?


React's virtual DOM is an implementation detail - its real power is in how easy it makes it to write composable components. If Web Components are ever a faster render target than the DOM, you can trust that ReactDOM will be upgraded to comply (in the same way that Ember aped React's virtual DOM as Glimmer).

That said, I have a feeling Web Components add complexity (and weight) to the render path. I doubt they will be faster than DOM diffing in the near term (although with any new web tech, there's always the possibility that they'll make available performance optimizations that can't be supported in the traditional DOM).


I agree there will probably be a performance lag for a while, but long-term I think the WC model wins out because it brings encapsulation and reuse into the formal suite of W3C specs.

That's not a big deal for any one web app, but a huge deal across web development in general.


> ReactDOM will be upgraded to comply

Except, it won't. They've explicitly stated they have no interest in conforming to or adapting the web components standard.


Why do you think it'd be the end of React?


Eventually yes, web components will likely kill off the usefulness of most front end frameworks.


General note to people engaging in technological conversations:

When posting strong statements, please consider including explanations, arguments, and maybe some definitions. This helps other people understand your claims, and is a big part of what makes "reason" such a powerful thing.

More information here: https://en.wikipedia.org/wiki/Reason


They're fantastic for a lot of purposes. I've been experimenting with web components for a couple of months now. They're not really "prime-time" ready, IMHO, but it's cool to see how quickly things are progressing now that Mozilla is more on board [1] and Microsoft's Edge is implementing core features [2a] and supporting WCs generally [2b]. (PS: vote on MS's Edge website for implementing WC features! It seems they actually look at the votes somewhat to determine features.)

One of the biggest hangups for my team has been Firefox/Mozilla opting not to enable HTML imports (it's implemented), which causes quite a bit of annoyance when developing and using piecemeal web components. They list their reasons [3], but their perspective seems to be driven by a "Javascript first" mentality, with HTML being second class. My code is all driven by web components, though. The difficulty comes from getting the WC polyfills to load before Firefox tries to load your dom-modules (in Polymer terms). I want to separate out my HTML into modules and not require loading polyfills before I can include static HTML pages that might not even have any Javascript in them.

There will probably be a lot of resistance, or rather misuse, of web components for a while unless web developers start switching to a "data first" mentality, which is gaining traction from React and ClojureScript. It also feels pretty heavyweight if you're using Polymer currently.

BTW, does anyone know how the shadow DOM affects performance of the entire browser window? Mainly, I'd be curious to know whether updates to a shadow DOM are isolated from causing light DOM updates (e.g. updates to the shadow DOM could be isolated so as generally not to affect the parent window, or could be batched). This would yield much of the benefit of React's virtual DOM. I'm just not sure where to look in the WebKit docs to find where this would be documented.

[1]: https://hacks.mozilla.org/2015/06/the-state-of-web-component...

[2a]: https://dev.modern.ie/platform/status/templateelement/

[2b]: https://blogs.windows.com/msedgedev/2015/07/15/microsoft-edg...

[3]: https://hacks.mozilla.org/2014/12/mozilla-and-web-components...


Isn't this React???

Shadow DOM, in particular, provides a lightweight encapsulation for DOM trees by allowing a creation of a parallel tree on an element called a “shadow tree” that replaces the rendering of the element without modifying the underlying DOM tree.


It's similar in some ways, but not exactly the same. This allows an element on a page to contain a whole tree of elements while still appearing as a single element from the outside (i.e. for the purposes of DOM methods, CSS, etc.).


No, the virtual DOM and the Shadow DOM are distinct concepts.

The virtual DOM is a parallel representation of the DOM which is used to diff against—it's only used for "logic" rather than display.

The Shadow DOM is actually almost the opposite: it replaces the visual/interactive presentation of the tree without changing the exposure of the underlying elements to the rest of the page.

That being said, they both offer mechanisms for writing modularized frontend code effectively.


[deleted]


React does not use Shadow DOM. And React is not related to CSS firsthand, though people are trying to apply the same modularization principles of DOM encapsulation to the CSSOM.

Not only that - there are a lot of implications that come with Shadow DOM that will make it impossible to just plug and play with React. Namely, the fact that it stops the cascading effect of Cascading Style Sheets. Wherever you are styling large swathes of your app with common css code, you'll need to rethink, by either treating css as a module and importing it at the new shadow root, or by rewriting your css to be more redundant.

These tradeoffs require considerable buy-in, which leads me to suspect React will continue to be agnostic towards shadow dom.


So when can we use it?


This is a nod to ReactJS


The ShadowDOM spec has been in development since at least a couple of years before React was announced.



