> Don’t be ashamed to build 100% JavaScript applications. You may get some incensed priests vituperating you in their blogs. But there will be an army of users (like me) who will fall in love with using your app.
We all want wonderful experiences as users. The crux is almost a question of "how we want things to be" and "how we want to get there".
For me, the 100% JS MV movement is wonderful for a specific genre of app: An app that is:
* Behind an intranet
* Behind a paywall
* Behind a login-wall
* Prototypes / Demos / PoCs / etc.
But for the open web -- Wikipedia, blogs, discussion forums, journalism, etc. -- this movement detracts from the web as a whole, in that it excuses developers from worrying about degraded and/or non-human consumption of their websites' data.
We have to ask ourselves what we, as humanity, want from the web. Do we really want a web of 100% bespoke JavaScript MV web-apps with no publicly consumable APIs nor semantic representations? If that is the intent and desire of the developers, designers and otherwise educated-concerned web-goers, then fine, let's do that and hope it works out okay...
But there is an alternative that has its roots already planted deep in the web -- the idea and virtue of a web where you can:
* Request an HTTP resource and get back a meaningful and semantically enriched representation
* Access and mash up each other's data, so as to further understanding & enlightenment
* Equally access data and insight via any medium, whether the latest Chrome or the oldest Nokia
So, please, go ahead and create a 100% JS front-end but, if you are creating something for the open web, consider exposing alternative representations for degraded/non-human consumption. It doesn't have to be progressively enhanced.
Imagine for a moment if Wikipedia was one massive Ember App... And no, Wikipedia is not an exception from the norm -- it is the embodiment of the open web.
Every Ember.js app, by definition, connects to an API that does all of the things you are asking for.
Seriously, open up any Ember.js app and look at the network traffic. You'll see a series of requests, usually to quite RESTful URLs, that fetch the document.
The only difference is that, instead of HTML, which conflates markup and content, you get a nice, easily consumable version of the document in JSON form.
There is literally no change to the web here, other than that the UIs are faster and better, and it's easier for me to consume JSON documents than to scrape HTML.
That could work -- if the JSON is intended for public consumption, and if it is documented as such. The problem with JSON, I'd argue, is that it does not intentionally facilitate semantic annotations, unlike HTML(5). A properly marked-up HTML5 representation of a piece of data is more useful than a bespoke JSON structure with crude naming that is liable to change without notice. The benefit of an HTML representation is that it is the exact thing intended for the user to read and consume, whereas JSON is awkward to divine meaning from without the crucial app-specific view logic that turns it into DOM.
How would you reconcile the need for an open, semantic web with arbitrary JSON structures that have no governing semantic standard?
EDIT: An example of a potential problem: take a look at how the Bustle app you referenced brings its article content to the front-end (view the page source).
It's not a public REST API (not visibly so); it's awkwardly embedded as literal JS in the HTML document itself... That'd be hell to consume publicly through any kind of automation.
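Concretely, the pattern being objected to looks something like the following sketch. Only `Bustle.pageData.article.title` is attested in this thread; the rest of the payload's shape is a guess, and in the real page this would sit inside an inline `<script>` tag:

```javascript
// Hypothetical sketch of the "data embedded as literal JS in the page" pattern.
// Only Bustle.pageData.article.title is attested in this thread; the rest is a guess.
const Bustle = {};

Bustle.pageData = {
  article: {
    title: "Why We Should Root for Lamar Odom",
    body: "..."
  }
};

// The app's own view logic is then the only thing that knows how to read this:
console.log(Bustle.pageData.article.title);
```

An automated consumer would have to fetch the HTML, find that script, and evaluate or parse it -- which is exactly the problem with consuming it through automation.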
You are building up a strawman against JSON without acknowledging that every problem you outline applies just as much, if not more, to HTML.
Is the HTML of any popular website publicly documented? Is there any guarantee that an XPath to a particular value won't change? Is there any guarantee the data I need is marked up with semantically accurate class names? No.
HTML is intended for public consumption -- by a human, at a particular time. It is not a data interchange format.
Contrast that with things like Twitter or GitHub, which provide a versioned JSON API that is guaranteed not to change. Your web site becomes just another consumer of that API.
JSON contains all of the data you need, in a form designed to be consumed by computers, and you don't have to do any of that awful HTML scraping.
And as for Bustle not having a public JSON API, well, here you go:

    curl -H "Accept: application/json" http://www.bustle.com/api/v1/sections/home.json

A versioned JSON API that is guaranteed not to change. Can any public site on the internet guarantee that about its HTML?
A versioned JSON API is awesome, I am not denying that. I also don't deny that the current state of the HTML markup on most sites is semantically rubbish.
Regardless of this entire PE debate, we would still have a problem, on the web, of data being out of reach due to walled apps that only serve rubbish HTML.
The problem of open + semantic data is very relevant to this discussion but we're pretending that one "side" has all the answers. I want a better web -- more open -- more semantic -- and maybe some shimmer of a truly semantic web[1] will emerge in the next 20 years.
So, yes, a 100% JS App is 100% awesome if, IMHO, it has:
* A publicly documented and consumable REST API
* Semantically enriched data through that API
* Some kind of degraded state NOT just for search-engines but for older devices and restrictive access (e.g. behind national/corporate firewalls)
I am not interested in being one side or another regarding this PE feud, and I am sure you're not either. I am trying to question what is best for the web and humanity as a whole. I don't think we have a silver-bullet answer. I do think it's necessary to dichotomize walled web-apps and open websites, and the latter deserve additional thought regarding usability, accessibility and semantics.
And in fact that particular Bustle link you posted is a perfect example of where using HTML5 + microdata would be not only faster to render and crawlable, but would also allow the underlying data structure to be consumed by JavaScript. There's no reason why

    Bustle.pageData.article.title

couldn't have been extracted from

    <article itemscope itemtype="http://schema.org/Article">
      <h1 itemprop="title">Why We Should Root for Lamar Odom</h1>
      ...
    </article>
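For what it's worth, extracting that value back out of the microdata is a one-liner in the browser: `document.querySelector('[itemprop="title"]').textContent`. Here is a rough sketch, using a crude string scan in place of a real HTML/microdata parser so it also runs outside a browser:

```javascript
// Recovering the article title from the microdata markup above, rather than
// from a bespoke Bustle.pageData blob. A crude regex stands in for a real
// HTML/microdata parser so this sketch runs outside a browser.
const html =
  '<article itemscope itemtype="http://schema.org/Article">' +
  '<h1 itemprop="title">Why We Should Root for Lamar Odom</h1>' +
  '</article>';

const match = html.match(/itemprop="title"[^>]*>([^<]*)</);
const title = match ? match[1] : null;
console.log(title); // "Why We Should Root for Lamar Odom"
```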
It's a real bitch to implement when your use case is complicated and nested. From experience, I have no idea how to mark up a document "correctly" because some microdata vocabularies have effectively recursive definitions: a flingy can contain a thingy, and a thingy can contain a flingy.
Should I mark my concrete item up as a flingy whose elements are thingies, or as a thingy whose sub-elements are flingies?
I just did my best and called it a day. Then I spent a lot of time debugging it in a microdata analyzer tool.
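A sketch of the kind of markup being described: an article whose author property opens its own scope (with firstName and lastName), plus a nested itemscope that has no itemprop and so is unrelated to the article. The elements, itemtypes, and values here are all assumptions:

```html
<article itemscope itemtype="http://schema.org/Article">
  <h1 itemprop="title">Why We Should Root for Lamar Odom</h1>
  <span itemprop="author" itemscope itemtype="http://schema.org/Person">
    <span itemprop="firstName">Jane</span>
    <span itemprop="lastName">Doe</span>
  </span>
  <!-- no itemprop here, so this scope is unrelated to the article -->
  <div itemscope itemtype="http://schema.org/WebPage">...</div>
</article>
```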
In the example above, we define an article that has a related author property; that author property is its own scope, so it has firstName and lastName properties of its own. We also define an unrelated itemscope (unrelated because it has no itemprop) that happens to be nested in the same element, so it contributes nothing to the article's parsed properties.
> And as for Bustle not having a public JSON API, well, here you go:
>
>     curl -H "Accept: application/json" http://www.bustle.com/api/v1/sections/home.json
>
> A versioned JSON API that is guaranteed not to change. Can any public site on the internet guarantee that about its HTML?
That's an interesting definition of "public JSON API" you are using.
* Where is the public documentation for this endpoint?
* Given an Ember app, how do you arrive at that URL? (Previously you mentioned watching network requests from a browser to discover the endpoint; I think that's a particularly unsuitable way of discovering a public JSON API.)
* Where's the schema definition of that JSON blob?
That's as much a public JSON API as "curl http://en.wikipedia.org/wiki/Louis_Boullogne" is a public HTML API. And at least with the wikipedia one, there is some defined structure and implied meaning to the data coming back (i.e. standardised HTML elements).
> A versioned JSON API that is guaranteed not to change.
How is that guarantee enforced? What prevents a developer from changing the nature or structure of the response at that URL?