I disagree that the Shadow DOM is pretty awesome. I think style scoping is valuable, but building components that are exposed as new tags is not appealing, given the vast complexity of the implementation and the limitations of tags.
Markup has a very weak type system (strings and children), which makes building complex UIs more painful than it has to be (this also holds for markup-driven toolkits like Angular and Knockout, where the equivalent of main() is HTML markup and mostly declarative). Markup isn't a real programming language, and it's very weak compared to a true declarative programming language.
JavaScript, however, is a real programming language with all of the constructs you need for building extensible systems. For building anything complex (which is where Shadow DOM should shine) you will need extensive JS, and you will need your Shadow DOM components to expose rich interfaces to JS... At which point, why are you still trying to do markup-first? It's something that's more "in your way" than helpful.
Thank you! I thought I was the only one who felt this way. I truly do not understand why Google feels that application composition should happen at the presentation layer rather than the code layer, particularly when the presentation layer is as weakly typed as HTML. This was tried and failed in the very first round of server-side web frameworks back in the mid-late 90s. More recently, the complexity of Angular's transclusion logic should have clued them in that this is an unwieldy idea.
I agree that some kind of style scoping construct would be a good addition, and far simpler than ShadowDOM. Simple namespacing would be a good start. It would be a more elegant solution to the kludgy class prefixing that has become common (".my-component-foo", ".my-component-bar", etc.)
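For what it's worth, here's a minimal sketch of what scoped styles look like with a shadow root, using the current attachShadow API (the class name and markup are just illustrative):

    // Minimal sketch: a scoped style via a shadow root.
    const host = document.createElement('div');
    document.body.appendChild(host);

    const shadow = host.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>
        /* This .foo can't match anything outside the shadow root,
           and page-level rules can't reach in to match it. */
        .foo { color: red; }
      </style>
      <p class="foo">Only this subtree sees the rule above.</p>
    `;

No ".my-component-foo" prefixing needed: the scoping boundary does the namespacing for you.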
Well, for one thing, div-soups are hard to read, create deeply nested DOMs, and lack semantics or transparency. If you're trying to preserve the nature of the web, which is transparent, indexable data, one where "View Source" actually has some use to you, then having a big ball of JS instantiate everything is very opaque.
The phrase "div-soup" makes me reach for my revolver. It seems to be a straw man that means "Either you're for Web Components or you're for the worst of web engineering."
- How does ShadowDOM (or Web Components more generally) make your DOM shallower? It's still the same box model. Deeply nested DOM structures are usually the result of engineers who don't understand the box model and so over-decorate their DOMs with more markup than is semantically or functionally necessary. Nothing in ShadowDOM (or, again, Web Components) changes this.
- Are custom elements really more transparent than divs? If "View Source" shows <div><div><div>...</div></div></div>, do you really gain much if it shows <custom-element-you've-never-heard-of-with-unknown-semantics><another-custom-element><and-another>...</etc></etc></etc>? Proponents of Web Components seem to imagine that once you can define your own elements, you'll magically become a better engineer, giving your elements nice, clear semantics and cleanly orthogonal functionality. If people didn't do that with the existing HTML, why will custom elements change them? At least with divs, I can be reasonably sure that I'm looking at a block element. Custom elements, I got nuthin'. They're not transparent. They're a black box.
- Finally (and more importantly), we already solved the "div-soup" problem. It was called XHTML. Custom elements in encapsulated namespaces! Composable libraries of semantically-meaningful markup! How's that working out today? It's not.
TL;DR: a common presentation DTD is the strength of the web, not its weakness. Attempts to endow web applications with stronger composition/encapsulation should not be directed at the DOM layer but at the CSS and JS layers above and below it.
1. Shadow DOM scopes down what CSS selectors can match, so deep structures can hide elements from expensive CSS rules.
2. Custom Elements promote a declarative approach to development, as opposed to having JS render everything.
3. XHTML was not the same as Shadow DOM/Custom Elements. XHTML let you produce custom DSL variants of XHTML, but you still ended up having to implement them in native code; trying to polyfill SVG, for example, would be horrendously inefficient.
4. The weakness of the web is the lack of composability due to lack of encapsulation. Shit leaks, and leaks all over the page. Some third-party JS widget can be completely fucked up by CSS in your page, and vice versa.
A further weakness is precisely the move to presentation-style markup. Modern web apps are using the document almost as if it were a PostScript environment, and frankly, that sucks. We are seeing an explosion of "single page apps" that store their data in private data silos, fetch it via XHRs, and render into a div-soup.
The strength of the web was publishing information in a form where a URL represented the knowledge. Now the URL merely represents a <script> tag that then fires off network requests to download data and display it after the fact. Search engines have had to deal with this new world by making crawlers effectively execute URLs. I find this a sad state of affairs, because whether you agree or not, the effect is to diminish the transparency of information.
You give me an HTML page, and I can discover lots of content in the static DOM itself, and I can trace links from that document to other sources of information. You give me a SinglePageApp div-soup app that fetches most of its content via XHR? I can't do jack with that until I execute it. The URL-as-resource has become URL-as-executable-code.
Both are needed!
JavaScript is great for portability of apps that would otherwise be done in a native environment (you wouldn't want to index these anyway). Isn't there a standard MIME type to execute JS directly in browsers? There should be if not.
If you care about being searchable and having designs that are readable on a variety of devices, powerful and degradable markup is very useful.
Or search engines could use URLs with a custom browser that man-in-the-middles XHR and WebSockets to effectively crawl APIs, since the APIs are, in theory, semantic by default.
Execute the URL, index all XHR and WebSocket data, follow the next link, and repeat.
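To make that loop concrete, here's a rough sketch using Puppeteer, a headless-browser library (my assumption; any browser with response interception would do, and the wait condition is just a guess at "done rendering"):

    const puppeteer = require('puppeteer');

    async function crawl(url) {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      const apiData = [];
      page.on('response', async (response) => {
        const type = response.request().resourceType();
        if (type === 'xhr' || type === 'fetch') {
          // Index the raw API payloads the page fetched while rendering.
          apiData.push({ url: response.url(), body: await response.text() });
        }
      });

      await page.goto(url, { waitUntil: 'networkidle0' });

      // Collect links from the rendered DOM to follow next, then repeat.
      const links = await page.$$eval('a[href]', (as) => as.map((a) => a.href));
      await browser.close();
      return { apiData, links };
    }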
> If "View Source" shows <div><div><div>...</div></div></div>, do you really gain much if it shows <custom-element-you've-never-heard-of-with-unknown-semantics><another-custom-element><and-another>...</etc></etc></etc>?
You can extend the semantics of existing elements so you'd actually have <div is="custom-element-with-some-unknown-semantics-but-its-still-mostly-a-div">. Unextended tags are for when nothing in the existing HTML spec mirrors the base semantics you want.
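For illustration, a sketch of such an extended div using the current customElements API (which postdates this thread; the element name and behavior are made up, and browser support for is="" varies):

    // A customized built-in: still a <div> in the DOM, with a bit of
    // extra behavior layered on.
    class FancyDiv extends HTMLDivElement {
      connectedCallback() {
        this.setAttribute('role', 'note'); // it's still mostly a div
      }
    }
    customElements.define('fancy-div', FancyDiv, { extends: 'div' });

    // In markup: <div is="fancy-div">...</div>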
Of course nothing stops people who did bad things before from doing bad things in the future, but it doesn't make tag soup worse.
The Custom Elements[1] and Shadow DOM[2] specifications have little to do with each other. The former is useful for defining new elements in HTML, along with properties and methods. The latter can be used to encapsulate the style/dom of that element's internals. So each technology is useful by itself and can be used standalone. When used together, that's when magic happens :)
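As a rough sketch of the two being used together (written against today's customElements/attachShadow APIs rather than the older registerElement/createShadowRoot forms; the element and its interface are invented for the example):

    // A Custom Element definition whose internals are hidden in a shadow root.
    class UserCard extends HTMLElement {
      constructor() {
        super();
        const shadow = this.attachShadow({ mode: 'open' });
        shadow.innerHTML = `
          <style>h2 { margin: 0; } /* scoped to this element's internals */</style>
          <h2></h2>
        `;
      }
      // A rich JS interface: real values in, not stringified attributes.
      set user(u) {
        this.shadowRoot.querySelector('h2').textContent = u.name;
      }
    }
    customElements.define('user-card', UserCard);

    // document.querySelector('user-card').user = { name: 'Ada' };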
While you're perfectly allowed to disagree, it sounds like what you're saying is this:
"Collections of div-soup activated by jQuery plugins are the way to write maintainable web applications that make sense"
It's not as though JavaScript has no role whatsoever in custom elements, but really, there's a lot to be said for how this way of working will be a huge improvement over the current jQuery + div-soup status quo.
I'm saying that the DOM, through its relationship to HTML, has weaknesses that make it unsuitable for building application components out of. "jQuery-enabled div-soup" is an example of how mixing presentation with model and logic yields unmaintainable results.
I have been interested in React.js recently, since it provides an interface to create reusable components and to use them from a real programming language with full types. I think that's a better example of a competing idea.
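A small sketch of that idea (written against the modern React API, without JSX; the component and mount point are illustrative):

    // A reusable component defined and composed entirely in JS.
    import React from 'react';
    import { createRoot } from 'react-dom/client';

    function Greeting({ name }) {
      // `name` arrives as a real JS value, not a stringified attribute.
      return React.createElement('h1', null, `Hello, ${name}`);
    }

    createRoot(document.getElementById('app'))
      .render(React.createElement(Greeting, { name: 'Ada' }));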
My experience is with building single-page apps from scratch, so maybe there's a common use case (embedding a Twitter widget, or a third-party comment system in a blog) that Shadow DOM and Custom Elements will address that I'm not familiar with.
FB React is a good example because it's living more in the presentation layer. But Custom Elements offer some things that React doesn't (as far as I'm aware).
One is better encapsulation; another is a well-defined styling system (although this article obviously shows that this is not a super simple problem to solve, I'm certain that a good way of doing it will be around before too long). Finally, and most importantly, it's just baked into the platform itself, so interop between different frameworks is less of a pain.
For instance, suppose you want to use a particular Ember component in your Angular app. You probably don't want to include the entire Ember environment, and you want it to play nicely with Angular's idea of data binding and local scopes. Can you even do this? If you can, how much effort does it take, and how much does it degrade the application?
So, we've got: interoperable components/widgets. Easily style-able widgets. Elements with some semantic purpose. Simplified documents. Reusable templates (which, once HTML imports are pref'd on by default, should be easily host-able on CDN hosts).
There are a lot of benefits to baking this into the platform, despite making the platform an even bigger, crazier mess than it already is. It should hopefully give us better (and better designed) tools to work with.
Granted, I'm not saying it's going to solve every (web) problem ever, nothing ever does.
The biggest problem with this weak type system shows up with CSS3 matrix transforms, which are the main bottleneck preventing fast updates to many DOM elements. Without fast updates of many elements across an entire page (window), pulling off the smoothly animated effects found in modern desktop and mobile operating systems is pretty much impossible, especially in a system that implements immediate-mode graphics on top of a retained-mode one (the DOM).
Marshalling matrix3d updates from an array of 16 numbers into a string just to apply them to an element, only to have the browser convert that string back into an array of 16 numbers, is insanity.
If you want performance, you need a more robust type system than just strings and children. I'm an engineer at Famo.us, and we would absolutely love it if we could do something in JavaScript like element.set3DTransform(someLength16Array). We could simultaneously update thousands of DOM elements if arrays and typed arrays were supported instead of stringified values.
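To illustrate the round trip (set3DTransform at the end is the hypothetical API wished for above, not a real DOM method):

    // The round trip as it stands: 16 numbers serialized to a string that
    // the browser immediately re-parses into 16 numbers.
    const m = new Float32Array(16); // column-major 4x4; identity here
    m[0] = m[5] = m[10] = m[15] = 1;

    function applyTransform(element, mat) {
      element.style.transform = 'matrix3d(' + Array.from(mat).join(',') + ')';
    }

    // The wished-for API would skip the string entirely:
    // element.set3DTransform(m); // hypothetical, not a real DOM method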
Yeah, the CSS OM is really horrible too. CSS Animations is another area where you end up feeding huge generated strings into the DOM. In theory Web Animations is meant to improve this, though personally I feel the API is too high-level and ends up being really large because of it :(.
In your example, I think it'd only be a small patch (for WebKit, where my experience is) to optimize "elem.style.transform = new WebKitCSSMatrix(a,b,c,d)" without intermediate stringification. Mozilla doesn't expose a CSSMatrix type unfortunately. I've done some similar things for other CSS properties in WebKit -- have you considered submitting a patch? I've found the WK guys super receptive to small optimizations which don't change observable behavior (i.e.: you can't tell if the WebKitCSSMatrix was stringified or not currently) like that.
We're not über familiar with the internals of the browser or how to go about submitting a patch that fixes this. We did talk to people at Mozilla about this, but we still have to follow up on that.
Do you contribute to this area of Webkit? I'd love to chat more about this with you. Email is in my profile. Use the famo.us one.