This is what I tell potential customers who are evaluating our product, which creates single-page web applications where the JS dynamically generates all content: if you want search engines to crawl the content, then you're not using the right product. It's a surprisingly easy decision when viewed in those terms. Most SaaS products that would likely be implemented as SPAs don't care about public search engine access; they're used by B2B customers, where you can dictate a certain minimum standard in terms of browser versions (IE).
Everything else should be a traditional static web site with "decorative" JS, or "progressive enhancement".
The "holy grail" architecture whereby the first request sends a fully formed HTML response which is then immediately hydrated with a JS frontend app that takes over routing and data gathering is now pretty technically feasible. It's the best of all worlds ... fast static content for the first load, and then fast frontend routing thereafter. If you ward your clients away from this path you're doing them a disservice. With this stack JS web apps no longer need to be app-like, they're well suited to content-heavy sites too. They even work with JS turned off :)
Not only feasible, it's extremely easy to do these days with things like React (+Redux). We're about to launch a fairly large project for an Australian broadcaster using this approach and it's been incredibly simple to do.
And there's nothing ~magic~ that React does to make this possible - it would be quite trivial to 'roll your own' framework for this. Best of all, you could start with doing it just server-side and then roll out the JS to 'the other side' later on.
On a server request, render the first snapshot of the React app into pure HTML and send it out; that HTML snapshot then uses a script tag to load the actual JS file of the React app.
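For anyone who hasn't seen it, a minimal sketch of that idea, assuming Node with Express and React; the App component, bundle path, and port are invented for the example, and JSX is avoided so it runs as plain JS:

```js
// Server: render a first HTML snapshot of the React app, then hand off to
// the client bundle via a script tag. Names and paths here are made up.
var express = require('express');
var React = require('react');
var ReactDOMServer = require('react-dom/server');
var App = require('./App'); // the same component the client bundle uses

var app = express();

app.get('*', function (req, res) {
  var markup = ReactDOMServer.renderToString(
    React.createElement(App, { url: req.url })
  );
  res.send(
    '<!doctype html><html><body>' +
    '<div id="root">' + markup + '</div>' +
    // the client app loads here, hydrates the same markup in #root, and
    // takes over routing and data fetching from that point on
    '<script src="/bundle.js"></script>' +
    '</body></html>'
  );
});

app.listen(3000);
```

On the client, the bundle just renders/hydrates the same App over #root, so the server-generated markup is reused rather than thrown away.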
Yes, but in the case of SAAS apps, what exactly are they crawling (if allowed)? In our case, it would be a bunch of edit controls, grids, buttons, containers, etc. that don't really make a whole lot of sense in the context of a search engine.
I think it's important to start recognizing that there are two distinct types of web "applications" emerging: the more traditional kind of "application" that is actually more like a web site, and the newer kind that is more like an actual application (desktop/mobile).
Edit: Just read the new guidelines, and they now consider serving different content to the crawler to be "against the rules". So that probably cements the idea that you shouldn't let Google crawl anything that's mostly content-free and is just an "app".
"Don't let Google crawl your site if it's not appropriate" is great advice, that's true, but it has nothing to do with whether you're serving up JS or HTML.
Yeah, I'm conflating the two, based upon my personal experiences.
But, are people really writing applications that generate all content completely on the client side via JS, but want the end result to be more like a traditional web site? That seems weird, even to me. :-)
I can't tell you how many times I've followed a link from Facebook or other social media, only to be brought to a page where a spinning gif spins for several seconds while JS loads the article.
In theory it works, but in practice it badly degrades page load time. The way it works is that the crawler has to call the JS app's init() method (ready() in jQuery, main() in Dart, etc.), which is very sub-optimal compared to server-side generation: executing that ready()/start() method may spawn off async requests for some JSON, and only once those succeed can you rebuild your DOM.
So there are too many possible failure points, and for these reasons it will time out many times, resulting in increased crawl errors on top of the page load degradation. I learned this the hard way and had to revert to server-side generation of crawlable pages. Now I'm looking at something like React/Redux to achieve what I originally wanted (which is basically server-side generation again, but while maintaining the quality of the served pages).
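For illustration only (the endpoint, #content element, and renderArticle helper are all hypothetical), this is roughly the bootstrap pattern being described; each step is a separate point of failure before any content exists:

```js
// Nothing is on the page until ready() fires, the JSON request succeeds,
// and the DOM is rebuilt from the response.
$(document).ready(function () {
  $.getJSON('/api/article/123')
    .done(function (data) {
      $('#content').html(renderArticle(data)); // hypothetical template call
    })
    .fail(function () {
      $('#content').text('Could not load the article.');
    });
});
```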
Can't upvote hard enough. Applications on the web are amazing, and developers must embrace JavaScript heavily.
That said, the apps do need to remain accessible to those with screen readers, vision issues, motor impairments, and cognitive issues. They need to be accessible to those with slow or spotty connections. But search engine optimization is about content sites, not necessarily apps.
Static pages are perfect for those cases, and I am thrilled to see more people coming around to that.
>Can't upvote hard enough. Applications on the web are amazing, and developers must embrace JavaScript heavily.
So how's it going in JS land, are you guys still reimplementing Windows 3.1's message passing loop and hailing it as a revolution? :^)
Or maybe you're still busy pulling in literally 10+MBs of code on top of a 100+MB browser to fix the shortcomings of what is a terrible language ?
Or maybe you're busy trying to get performance that matches what a 68060 could do thirty years ago. Oh right, browser vendors had to agree on WASM to get OK performance. So basically, not Javascript.
But hey, I guess it's crossplatform. Kind of.
But I promise, once transpilers are good enough, once I can use a good, strongly and statically typed language, once the APIs are stable and the tooling becomes tolerable, I'll join the SPA side.
But meeting the needs of customers is what I get paid to do. And customers don't want server-side rendered stuff. They hate losing their place when they add a row of data.
So what are ya gonna do?
See, I hate JS. I think it's terrible. I don't think it would even still be a thing if there were any other options out there.
But it's a requirement of a modern web developer to know how to do it well. And I'm a web developer. So I'm going to be good at it. At least as good as I can be given the tools I use and constraints I have.
> But meeting the needs of customers is what I get paid to do. And customers don't want server-side rendered stuff. They hate losing their place when they add a row of data.
>So what are ya gonna do?
Both.
Do the initial render (well, HTML generation) server-side, and update the state client-side when the user performs an action (or receives an event from the server).
Use real, accessible URLs for links, and use pushState when updating client-side.
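A minimal sketch of that pattern, assuming a #content container, a data-enhance attribute on links, and a server that can return just the content fragment for those URLs (all invented for the example):

```js
// Links are real server-rendered <a> elements, so they still work with JS off.
// With JS on, intercept the click, fetch the new content, swap it in, and
// keep the address bar honest with pushState.
document.addEventListener('click', function (event) {
  var link = event.target.closest('a[data-enhance]');
  if (!link) return;
  event.preventDefault();
  fetch(link.href)
    .then(function (res) { return res.text(); })
    .then(function (html) {
      document.querySelector('#content').innerHTML = html;
      history.pushState(null, '', link.href);
    })
    .catch(function () { window.location.href = link.href; }); // fall back to a full load
});

// Keep back/forward working; the simplest correct behaviour is a full reload.
window.addEventListener('popstate', function () {
  window.location.reload();
});
```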
I'd love to see books, screencasts, or tutorials that talk about doing this beyond the very trivial. I hear people talk about it, but whenever I ask, they can't show me code because reasons.
This makes me skeptical. And based on what I see in the wild... lots of loading progress bars, etc, I don't see widespread use of this technique.
"This makes me skeptical. And based on what I see in the wild... lots of loading progress bars, etc, I don't see widespread use of this technique."
If you're looking for a "big" site that does this, then: Yelp. All of the content on yelp.com is statically rendered, with progressive enhancement. Turn off javascript, site still works. It looks nearly the same, in fact. So go inspect source to your heart's content.
There's really no magic to it, and nothing of which to be skeptical: you first generate static HTML and CSS, and you treat javascript as an enhancement to that content, rather than as the content itself. This is the way things were done for years. It's only since 2013 or so that people have lost track of this "ancient" art.
No, no, I get that. That's how I do my stuff and have always done it. We used to call that "progressive enhancement".
But it's easy to say "ehhh just render the whole page statically first then re-render parts in JS".
I'd like to see people demonstrating this with modern frameworks and tools. Because without people leading, it'll be JS-rendered static content for years to come.
The GP wasn't saying that, as I read it. It was saying: "use progressive enhancement".
When you insert new elements into the DOM with JS, you are already re-rendering parts of the page. I see no real reason anyone needs to render static content with javascript. Devs do it primarily because they're annoyed at having to think about AJAX logic and want to write simpler code, at the user's expense.
> But meeting the needs of customers is what I get paid to do
That was like being slapped with a good dose of pragmatism – I love it!
Wrt the latter part of your post, try Dart. It's annoying in entirely new ways, but it does away with a lot of JS pains; it's also mature, powerful, and production-oriented. It even comes with a VM to run it in, so you can pretend it's a real –boy– language. Its obscurity also means that it can be hard to come across help/tools (emacs completion, anyone?), but despite all the shortcomings I enjoy using it, and am happy to back Google's bet on Dart.
Stable and field tested, with a majority of the marketshare, as opposed to using the flavour of the day because that's what's hip?
Where do I sign up? I mean, I do C# day to day, so boring is perfect :)
Also, much to my dismay, I do try things out, maybe not extensively, but enough, before criticizing them. I've used Angular, React, Ember, jQuery (yes, I'm working on a 100% jQuery SPA. It's as fun as you can imagine), then other things like Vue.js, Knockout, and quite a few more. Even vanilla JS. So, while I will not pretend to be an expert, or even to have advanced knowledge of JavaScript, I'd like to think I kind of know what I'm talking about.
That's good to know, and I promise I wasn't (just) being snarky! The problem is that unless you're working with something low-level like C, by the time it's stable and field tested, the language has also gotten kind of brittle, there's a mountain of bad code, and new features don't undo the damage done by the bad patterns of the past.
Javascript also has mountains of bad code and bad practices, but a lot of it's being accumulated in frameworks which are getting created, tested, found wanting, then dropped, with the good parts surviving on, either in new frameworks or as part of the standard.
I won't lie though, I'm as framework exhausted as everyone else right now and it can't go on like this for much longer. Using something like C# (which I see as "good Java") would be a nice change of pace. Definitely check out Ampersand and Webpack with React, they're kind of coalescing to provide a saner, "use only what you need" approach.
«But I promise, once transpilers are good enough, once I can use a good, strongly and statically typed language, once the APIs are stable and the tooling becomes tolerable, I'll join the SPA side.»
Typescript is a plenty good enough, statically typed language that transpiles to whatever flavor of JS/ES you need, including ES3 if for some reason you are stuck supporting IE < 8. Typescript has a good ecosystem of tooling (try Typescript in Visual Studio Code, for instance).
As for API stability, it largely depends on the SPA Framework you want to try. Yes, the problem is that there are so many to choose from and some of them do wonky things like reimplement the old GUI message passing loops and call it progress.
If you want my biased opinions on SPA frameworks: these days I use CycleJS when I get the choice, as it is simple, gets out of the way, and is built on top of RxJS for true reactive programming. Its simplicity means a very small API overall, and thus a considerable amount of stability for said API because of its smaller surface area. If you need something a bit more time-tested with the broadest classic browser support, I think Durandal (Knockout-based) is a stable, well-worn SPA framework. Having used Durandal in the past, I expect that Aurelia, its successor, will have similar stability as it matures.
TL;DR: The transpilers are good enough if you give them a shot. Typescript is a great statically typed language for the web. SPA APIs are a crap shoot, but there are good options out there, especially if you stray just a tad out of the way of the hype trains.
It's not very fair, I feel, to drag browser binary size into it. Those hundreds of megabytes give you a UI. If you have a Qt or GTK app, the dependencies you drag in will not be small either (and heaven forbid you depend on some Gnome or KDE lib).
I do agree about the browser UI performance though - why I can't scroll a page without the tearing effect in 2016 on an 8 core machine with 16GB of RAM is beyond me ...
That app was created with our web development product (check the site if you want to know more), and has the following characteristics:
Two files, one HTML loader and one monolithic JS app, so latency during loading is minimal. The HTML loader is ~189K and the JS app is ~462K, and includes the entire runtime and UI layer, and a lot of the control/component library. Both the HTML and JS are aggressively compressed/obfuscated by a compiler, and the coding is done in a statically-typed, OO/procedural language with RTTI and other nice things. The UI was designed using a WYSIWYG designer with two-way tools (code-behind).
So, there are products/tools out there that will do something along the lines of what you want. And the existing JS engines are very good in terms of performance, so all that developers like us need to do is some quick compilation to JS and we're all set.
However, I do agree with you on two points:
1) JS, by itself, just isn't structured enough for large-scale applications.
2) The push towards libraries and away from frameworks was misguided because JS, by itself, doesn't have the means to allow for this approach to be successful. Instead, what we have now is every single small library reproducing the same functionality over and over again. Case in point: I was looking at writing an external interface (tells our compiler how to type-check external JS code) to ChartJS this week (great little library), and started looking at the code. 80-90% of the "common" code in the library was code that was already present, in some form, in our UI/runtime layer, and that was around 70K right there. Multiply this by the number of small libraries, and you end up with a lot of duplication of functionality that is, essentially, dead weight. I don't know if it's 10MB of dead weight, but it's pretty significant.
5.2MB of datasets?method=rows&dataset=IPCountry&Country=%27United%20States%27 took 21.22 s to load
"39246 rows load in 40948 msecs" - 41secs total just to see something!
This is not an awesome presentation, and it misses some of the stuff the previous commenters mention about "windows 3.1", like "don't load an entire dataset in memory, but show a small slice of it at a time". At the time, memory was very limited, so developers were forced to be efficient, and it was good for users. Even on my old 486 DOS machine, I could load up a spreadsheet application and navigate through it (with way more than 40k rows) at nearly instant speed, because it was smart about what it was doing and what its limitations were (memory, disk access speed, etc.)
Our product is for developing applications, not consumer-facing web sites. It is expected that the first hit on the HTML/JS is going to take more time than normal (we offer progress options for this). After the initial hit, the browser will cache both, and the load is instantaneous until it is updated again.
Also, the total size/download time that you're seeing isn't for just a combo box and a grid, it's for the entire UI layer and a lot of the component library. IOW, as you add more and more to the application, you're only going to see very minor increments in the total size of the application because most of the code is already baked in. Most extensive client-side JS UI frameworks are at least 300K or so, minified.
In retrospect, it was probably a bad idea to post that particular example without an explanation of why that app was created. The whole point of that particular example is to show that you can handle large numbers of rows in JS apps without killing the browser if your UI framework does smart things like implement virtual grids. Many other frameworks don't handle things very well when the number of rows grow beyond 1000 or so, and memory consumption gets out of hand very quickly.
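For anyone unfamiliar with the technique, a bare-bones virtual (windowed) grid looks roughly like this; the fixed row height, string rows, and inline styles are simplifying assumptions, not anything from the product above:

```js
// Keep all rows as data, but only create DOM nodes for the visible slice.
// A tall spacer element preserves the scrollbar proportions.
function virtualGrid(container, rows, rowHeight) {
  container.style.position = 'relative';
  container.style.overflowY = 'auto';

  var spacer = document.createElement('div');
  spacer.style.height = (rows.length * rowHeight) + 'px';
  container.appendChild(spacer);

  var viewport = document.createElement('div');
  viewport.style.position = 'absolute';
  viewport.style.left = '0';
  viewport.style.right = '0';
  container.appendChild(viewport);

  function render() {
    var first = Math.floor(container.scrollTop / rowHeight);
    var count = Math.ceil(container.clientHeight / rowHeight) + 1;
    viewport.style.top = (first * rowHeight) + 'px';
    viewport.innerHTML = rows.slice(first, first + count)
      .map(function (row) {
        return '<div style="height:' + rowHeight + 'px">' + row + '</div>';
      })
      .join('');
  }

  container.addEventListener('scroll', render);
  render();
}

// e.g. virtualGrid(document.getElementById('grid'), rowStrings, 24);
```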
After the initial hit, the browser will cache both, and the load is instantaneous until it is updated again.
Ok. That's pretty much the definition of normal HTTP caching policy though.
Many other frameworks don't handle things very well when the number of rows grow beyond 1000 or so, and memory consumption gets out of hand very quickly.
Fair enough. Obviously you've got the UI down as it's butter smooth once everything is loaded.
But many of the popular JavaScript grid systems already have stuff like virtual grids, and have had them for a while (Scroller[0][1] for DataTables was released in 2012, for example). They may not be quite as smooth as yours, but the DataTables example[0] (timing from the moment each starts loading the data until there is some visual data on the screen) is less than 300ms, whereas yours is still over 10 seconds (not a 100% fair comparison, as yours is 5MB of data). I'd definitely rather have people working on the actual product than waiting 30 seconds to do something (and each time the dropdown is switched to another country and back, it's another 5MB download and 30-second wait).
we offer progress options for this
Then please, please show this. Showing people a demo of good code doing smart things makes a great first impression. For example, a demo of 5 million rows of data via server-side processing with that butter-smooth animation would be amazing. Then when people are hooked and start asking about doing stuff like loading 100% of their giant files into browser memory, you can impress them more with how there is no slowdown other than the pain they cause themselves by not loading data piecemeal.
But ultimately, any demo that takes 40 seconds to load is just going to be unsavory without a lot of context.
Just to clarify: I would never recommend that someone actually try to load 30K+ rows into a grid in a web application. So, most of the countries not called "United States" are probably more representative of the typical usage of such a grid. The majority of the time spent is not the server request, but rather the loading of the incoming JSON data into what we call a "dataset".
Having said that, we've had a lot of requests for more incremental row loading, so that will definitely be in the works at some point in the future. Initially, we tried to make the "datasets" as dumb as possible, but information about primary/unique keys is available from the back-end, and it is possible to progressively load the rows as necessary (without requiring manual pagination). But, again, loading that many rows isn't a typical use case, and we normally advise against it.
It might be great for the developers, but frankly, as a user, I find that demo very disagreeable. It breaks multiple useful UI conventions that work fine in native browser components, and it still took 5s to download a dropdown and an empty table (for comparison, the HN frontpage takes <1s here).
Also, which UI conventions are you referring to, specifically?
I'm from Portugal, so I click on the dropdown and press P. The standard behaviour is to jump to the first result, but this one automatically selects it, closing the dropdown from under me and starting an update.
There's no horizontal scrollbar even when the content doesn't fit, so if I reduce the window horizontally, it becomes unusable.
When I click on a row of the table, nothing changes; I must move the cursor to see it has in fact selected it.
Thanks for the feedback, it is very much appreciated.
The near-search is just how it was coded (it's responding to a selection by loading the rows). It is a bit off-putting, though, so I'll change that.
The horizontal scrollbar is the same thing. You can specify how you want the surface (body element) to behave with respect to scrolling, and I had turned off the scrolling.
As for the clicking, that's the "hot" state over "focus" state preference in the UI layer. We're also thinking of changing that: it used to be "focus" over "hot", but it was changed, and I'm not sure that it was a good decision because of what you describe.
I've updated the example to fix the issues that you mentioned.
Also, the text isn't selectable on purpose. There's an option for our grid control to put it into "row select" mode where it is only used for navigation/selection (similar to a listview control under Windows).
It's definitely much better now, though it still has a weird behavior when pressing a letter to jump in the countries list: if I press P repeatedly to get to Portugal, only 1 in 5 presses actually registers; the others are ignored. It seems there's some "cool-off period" after a keypress.
Still, on a larger point, it's not just important that one can configure the software to behave "well"; programmers are lazy, so the defaults are crucial, and as bad as browsers can be, the basics are pretty solid nowadays, so it's hard for me to be confident that these will be handled well by new frameworks.
By the way, how is the software on accessibility? Can a blind user understand that there's a dropdown and how to select an option?
Yeah, there's a near-search timeout involved in order to allow multiple keystrokes to be used, and it was set a little high as the default (it's configurable as a property). I've bumped the default from 500ms down to 200ms and that seems to be a better fit.
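A generic version of that kind of near-search (not the product's actual code; the item list and callback are placeholders) is just keystroke buffering behind a timer, where the timeout is the property being tuned:

```js
// Buffer keystrokes into a prefix; once the user pauses for `timeoutMs`,
// match the prefix against the list and reset the buffer.
function makeNearSearch(items, onMatch, timeoutMs) {
  var buffer = '';
  var timer = null;
  return function (event) {
    buffer += event.key.toLowerCase();
    clearTimeout(timer);
    timer = setTimeout(function () {
      var match = items.find(function (item) {
        return item.toLowerCase().indexOf(buffer) === 0;
      });
      if (match) onMatch(match);
      buffer = '';
    }, timeoutMs || 200);
  };
}

// e.g. list.addEventListener('keydown', makeNearSearch(countries, select, 200));
```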
Accessibility is so-so. We use text blocks for any labels, grid headers, grid cells, etc., but I still need to audit the UI completely with the ADA helper tools that are available. As for the combo box, the answer there is "I don't know" until I complete that audit, but it probably won't be as usable as our other combo box options that are actual edit controls (you can select text, etc.). The reason for the combo box that you see here is touch environments, specifically those that automatically pop up a virtual keyboard. It's used for situations where you don't want the keyboard constantly popping up, but you need a button-style drop-down list control.
The problem with the current HTML incarnation, and why stuff like the above is done, is that it's just not flexible enough to handle real-world applications (equivalent to desktop apps) without some serious compromises on controls, or by subverting the HTML semantics. We chose the latter. It's the only way to do things like virtual list controls (the combo box has a virtual list control associated with it). If the browser offered a way to define semantics for custom controls, it would help immensely.
A few more minor ones:
1. If I click on the background with the mouse so an input loses focus, I cannot use the Tab key to bring focus back to any of the input elements (in Firefox).
2. Cannot use Ctrl + mouse wheel to zoom the page, but Ctrl + "+" works.
3. The dropdown list opens with any of the mouse buttons, not just the left one (it should not open on right or middle click).
4. Cannot select the data in the list with the mouse (but can select the whole page's text with Ctrl + A).
Also, it's strange that you're talking about fast load times but I see the opposite: 660KB loaded in 4.5 seconds for the initial page load.
Some performance tips:
It should be obvious: use HTTP compression. A quick check shows the total size could be reduced to 27% of the current one! Maybe your custom server ("Elevate Web Builder Web Server") doesn't support it yet? This would help with the HTML and JS load times.
In the case of the countries JSON, it's not the transfer size that is slow but the response time of the server: 600ms for an 8KB JSON response is not so fast, especially if the data is not changing and is easily cacheable.
If it's a single-page app, then you could embed the JS in the HTML so all the code comes in one continuous request, and if the initial JSON data (for the countries) is inlined as well, you can save the additional second or more that currently passes between the page loading and that JSON request starting (a sketch follows at the end of this comment).
Also, loading the big dataset by selecting United States from the dropdown freezes the browser, and Firefox shows the unresponsive-script warning (maxgridtext.js). There may be some processing in the JS that doesn't need to run over the whole dataset; it could be done only on the visible part, or on the server.
I post this because I see you care about the UX and performance and I hope it helps you a little.
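To sketch what that inlining looks like (file names invented), the idea is just to build one response that already carries the script and the initial data:

```js
// Build a single HTML response with the app code and its initial data inline,
// so the first render needs no further round trips.
var fs = require('fs');
var appJs = fs.readFileSync('app.js', 'utf8');             // hypothetical bundle
var countries = fs.readFileSync('countries.json', 'utf8'); // hypothetical data

var page =
  '<!doctype html><html><body>' +
  '<div id="app"></div>' +
  '<script>window.INITIAL_DATA = ' + countries + ';</script>' +
  '<script>' + appJs + '</script>' +
  '</body></html>';
// serve `page` as the response to the initial request
```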
Thanks for the feedback. I fixed the focus issue, and I'll look into the keystroke/mouse handling.
As for load times, I never stated that the initial load time was fast - I said that the latency was low. As you say, combining the JS into the actual HTML file would improve it even further.
Yes, the server doesn't support gzip yet, but will soon. It's basically our "here's a web server to get you started" web server that we include with the product, but you can use any web server that you want. We actually didn't even plan on including one, but you know how plans go..... :-)
The JSON isn't cacheable, exactly, because it's the result of a database query. So, the response time you're seeing includes setup/teardown time for the database connection, etc. Again, this isn't production-level stuff here, just a one-off coded to show something in particular.
As for the loading of the "United States": the server request is actually very fast, but the loading of the JSON takes some time. We do custom JSON parsing in order to validate the JSON properly and allow for missing column values. We investigated the built-in JSON parsing, but we are still left with the same issues in terms of finding whether certain column values exist in the resultant JS objects, and would end up using almost twice as much memory because the resultant JS objects would still need to be copied into different target structures.
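For contrast, the naive parse-then-copy approach looks something like the sketch below (column names and the null default are invented); the intermediate objects it creates are the doubled memory cost described above, which is what the custom parser avoids:

```js
// Parse with the built-in parser, then validate and copy each row into a
// target structure, filling in any missing column values.
function loadRows(json, columns) {
  var rows = JSON.parse(json);
  if (!Array.isArray(rows)) throw new Error('expected an array of rows');
  return rows.map(function (row) {
    var target = {};
    columns.forEach(function (col) {
      target[col] = Object.prototype.hasOwnProperty.call(row, col) ? row[col] : null;
    });
    return target;
  });
}
```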
Why does it have to be one or the other, though? You can have static, pre-rendered landing pages for search engines to crawl and index, and your actual application can be an SPA. Right?
Absolutely. In our case, it's a non-fit because of the client-side content generation via JS (our apps are more akin to desktop/mobile apps with a monolithic JS app). But, if you already generate most of the HTML/CSS from the back-end and just sprinkle in some JS here and there, then you can certainly do a hybrid approach, like you suggest.
I generally agree with the article, but the examples of javascript web apps that were given seem weak:
> Most web apps I work with daily have highly sophisticated in-browser interactions that are built with JavaScript and can only be built with JavaScript: Flickr, YouTube, Facebook, Twitter, GMail etc
The web apps that I use the most day-to-day are Trello, Slack and Gitter. IMHO, those are better examples of js bringing actual value to the table that progressive enhancement simply cannot.
With that being said, the issue of overusing SPA technology when it doesn't fit the need is definitely real.
Part of the issue comes from people wanting to pad their resumes with experience in "hot" technology, or people who do have a genuine interest in improving their skills but are not very skilled at identifying the pros and cons of whatever "new hotness" or "best practice" they read about on their favorite news aggregators. By extension, this creates an industry of grunt work to maintain/refactor/rewrite ever-changing codebases/frameworks. Coupled with the general tendency of people to favor new shrink-wrapped libraries over doing good ol' painstaking research, it's difficult to reverse that trend.
For me they are tools for work and they are the web apps that I use the most every day (I work remotely). Trello is a task management system where you organize tasks as drag-n-drop "cards" and Slack/Gitter are chat apps (Slack is for company chat and Gitter is for my open source project community).
I do use Youtube/Facebook/Twitter etc. as well, but I use them primarily for content consumption and entertainment, so my usage patterns of those services wouldn't be affected much if they were written without using javascript at all. In contrast, drag-n-drop and real-time collaboration are the killer features of Trello (for me), and obviously there's no way web chat would ever work smoothly without JS.
I am a JS dev (after a lot of experience in statically typed languages) and am just amazed to see the comments in this thread. At one end it's experts (or near-experts), and at the other end it's ignorants who probably did some $(...).show() and call it javascript.
No wonder it's hard to hire good javascript developers. People just don't appreciate how wonderful this language is.
I probably fall into the opposite category. I moved away from statically typed languages back when .NET 3.5 was released and have settled primarily on JS for the past few years.
I'm willing to bet C# is 10x better than the last time I used it. Except I don't have a good excuse to use it again to find out.
The MS AJAX control toolkit and UpdatePanel probably didn't help, either.
I think I can probably say that Javascript engines are an order of magnitude faster than they were 16 (!) years ago.
If I may make a generalisation, C#/Java devs are more likely to be "enterprise" developers and said enterprises were more likely to be the ones on IE <11. I know I fought damn hard to upgrade from IE8 just a couple of years ago. My webdev skills suffered as a consequence, too.
Microsoft moved pretty fast once they realised that IE needed to compete again - and Visual Studio evolved alongside it (mostly).
VS2015 is now a pretty good development environment for Javascript/Typescript and even includes some Grunt build support (although I haven't had a play with it yet).
> I think I can probably say that Javascript engines are an order of magnitude faster than they were 16 (!) years ago.
Two orders of magnitude. I just checked, and on my laptop http://web.mit.edu/bzbarsky/www/mandelbrot-clean.html (the first JS thing I wrote where performance of the language itself seriously mattered) runs in about 30-60ms in modern browsers. On the same hardware, it runs in about 2200ms in Firefox 3, which is the last pre-JIT version of Firefox. Other benchmarks I've tried show similar improvements. And that's only going back to 2008; the interpreter was a good bit slower another 8 years before that.
>> ignorants who probably did some $(...).show() and call it javascript
So was that a dig against jQuery?
>> People just don't appreciate how wonderful this language is.
Ever try adding a string and an integer in JavaScript?
I think developers who think JavaScript is great love it because it's forgiving. Do what you want, the JavaScript siren says; it might work, it might not, but at least you won't come to an ugly all stop. I'll try to make it do something, anything, sings the siren.
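For anyone who hasn't hit it, a few examples of the coercion being complained about:

```js
1 + "2";        // "12"  (the number is coerced to a string)
"2" * 3;        // 6     (the string is coerced to a number)
1 + null;       // 1
1 + undefined;  // NaN
[] + {};        // "[object Object]"
```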
No way I would disrespect jQuery in any way. I learnt a lot of JS by digging into the jQuery code and from its author's book, "Secrets of the JavaScript Ninja".
But I hate it when people who have done some simple, basic programming and used jQuery think that they know JS.
It's hard to hire good javascript developers because knowing javascript doesn't imply that you know how to be a good programmer. Nowadays, you can string together 500 node modules and make an impressive demo on the surface. The moment that dev needs to dig deep and debug something or create something from scratch, they can't.
I agree that Javascript web applications have a place. The problem is that the pendulum has swung too far, and folks are using Javascript everywhere. Static content generation on the client side, rendering non-interactive content, on the backend, in the build process, as a generic scripting language... the list goes on.
We're watching programming and design concepts which played out back with Windows 95 being hailed as "the future of programming". We're being told "worse is better" as Javascript is being used to turn our computers against their users by serving as a vector for malware, ads, and bots.
It's a mess, and it's going to take pushes from the far side of the pendulum's swing to bring it back to a sane place. If the pendulum was actually stalled too far towards "server side only", I'd agree that "server side only" articles would be too much - but we're not there.
> We can only improve the user experience with the front-end technologies we have.
No, we are more than capable of degrading the user experience with the front end technologies we have. There's a lot of proof of that on the web today. Delayed full page ads, pop-overs for every third word, hijacked scrollbars, malware installs...
JavaScript has grown up; it's got one of the best package managers anywhere, rapidly improving syntax and semantics, a large passionate community, loads of great tooling and a well maintained cross-platform binary in the shape of Node. It's no less deserving than any other language to be run on a server, as part of a build, in generic scripts, whatever. Browser security issues arise because it's hard to reconcile total user security across the public web with highly interactive, scriptable web pages ... it's got nothing to do with the JavaScript language per se. It's fair enough to argue against the level of control scripts happen to have over the browser, but web page scripting could be implemented in <insert some other language here> and the same issues would be present. Your comment comes across as ignorant and prejudiced, I'm afraid.
I was talking about npm. And I find the JS community welcoming, friendly and passionate and for the more serious npm modules out there highly technically literate, but hey it's a subjective thing I suppose :/
So, the package manager that doesn’t do proper deduplication and, thanks to nested dependencies, runs over the file path length limit on multiple platforms?
FYI, the nesting and deduplication issues were fixed in npm 3. It uses a flat structure now.
Unfortunately, in their place the version resolution algorithm is now subject to order dependency: the same package.json file, run at the same time with the same packages available in the NPM repository, can give different results on different machines depending on what had already been installed.
Ha ha, the effing path length limit drove me crazy! There'd be 50 nested node_modules folders and Windows wouldn't let you delete anything; it was nuts! I had to start renaming directories to "a", "b", "c", and so on until I cut the path length down. I like JS for quick personal projects, but I'm glad I don't use it professionally anymore.
OK, here's an everyday example. Suppose you and I are working on a front-end project, all neatly set up with package.json and so on in place. We've been working on different new features, adding in a new package or two from NPM in the process. We both fetch each other's latest changes to in-house code from source control. What NPM command do we run to make sure we also have the same packages installed and are actually running the same code?
package.json version control and npm update? npm install --save will install the package you want, and add it to package.json
Why wouldn't your repo merge package.json between your changes? I'm doing the same thing when I merge changes to production. I add packages as I work on things at the desktop, and sync changes to production server.
Unfortunately, the regular package.json and related npm commands are built around semver, and typically also only specify direct dependencies. Either of those alone is enough to violate the simple condition I gave many times over on a real project, because it means you can't in practice guarantee that two developers are even running the same code. It's actually worse with the latest NPM changes that flattened the tree in node_modules, because now there are order dependencies affecting which versions of things you get as well.
The closest NPM has to an answer for this is shrinkwrap, but that adds a lot of complexity of its own, and still has limitations. In fact, there's a note at the bottom of the npm shrinkwrap documentation page suggesting that if you really want to control your dependencies robustly, you need to check everything into source control -- which is obviously the sensible thing to do, except that because npm does magic in node_modules that can involve all kinds of funky build steps, you're not checking in just the source for the modules you depend on, but might have all kinds of other artifacts that may or may not be portable as well.
In short, it is all but impossible in practice to use Node and NPM to satisfy the most basic of development requirements: getting all of your developers (or production systems, for that matter) to actually run exactly the same code. If that's not a catastrophic failure to meet the requirements for a package management system, I don't know what is.
Personally I don't find using npm shrinkwrap a chore at all. If npm 3+ notices a shrinkwrap file in the project it'll update it automatically, so most of the time you don't even have to think about it.
We've been using npm 3 on multiple large projects spanning frontend and backend, being worked on by many developers, pushing code through CI into production many times a day and npm installing fresh in every environment. There are probably many thousands of nested dependencies in some projects. It works, nothing bad happens, it's fine.
Npm comes with some trade offs, true, but also a long list of plus points as well. No system is perfect, but you're painting some calamitous disastrous picture here which simply isn't reflected in reality.
You've been using NPM3 on many large projects with many developers, many commits, thousands of nested dependencies, and not run into a single problem? If so, that is a remarkable achievement, and I hope you'll write up how you've done it somewhere.
In particular, I envy whatever combination of dependencies and development platforms you get to work with, because I haven't seen a single project with no problems due to updates since NPM3 arrived, never mind all of them. (Edit: This is not intended to imply NPM3 is the only reason for the problems, only to define a comparable time frame.)
Even if you're in an environment where shrinkwrap is working well, we seem to find transitive dependencies of installed packages that introduce bugs or break something without correctly updated semver with alarming frequency. As some of us were discussing here on HN the other day, this has happened so often with Gulp plugins over the past few months that one project I work on dumped the entire Gulp ecosystem (and instantly and dramatically improved developer productivity as a direct result).
I didn't say not a single problem: I said "it's fine, nothing bad happens, it works". Of course there are issues and bugs, some of them due to dependencies breaking, but nothing calamitous or actually bad, and nothing out of the ordinary compared to other stacks and tools, imo.
Npm module churn can be annoying, sure, but on the flip side it's good to be getting incremental patch updates sometimes too. And with exact versions in package.json and shrinkwrap, you can eliminate enough uncertainty that to all intents and purposes it's really not a problem.
> dumped the entire Gulp ecosystem
Amen to that. The Gulp plugin ecosystem has been messed up for a year or more. We do all our frontend builds with npm scripts now, running the bare CLI tools where possible and cutting dependencies as much as possible (e.g. Gulp and all its plugins).
I suppose we just have different amounts of tolerance for unexpected breakage.
It sounds like we do have a similar philosophy as far as tooling goes: simple scripts, minimal dependencies. But I add to that a strong preference for tools like build systems and package managers and source control systems to have completely deterministic behaviour and therefore provide reproducible results by default.
In practice, trusting semver has caused real world problems for some projects I've worked on with non-dev dependencies too. For example, a few weeks ago, Facebook pushed a bad update to React that completely broke dev builds on IE. It was fixed a few days later, but during that window NPM considered the broken version just as good as the version we'd been developing and testing with for the past couple of weeks. Tracking that one down wasted a few hours in our office as we tried to figure out why tests were failing on our new build/CI machine but whenever the developers tried it everything seemed to be working fine.
I guess I just don't understand the decision to make ambiguity the default with NPM, particularly with the Node/NPM culture of using very many small dependencies. It's not as if it's difficult to issue a message if a newer version of something relevant is available, or to provide a command or option to deliberately change to newer versions of packages -- pretty much every package manager in every language I've ever used does these things -- so introducing unnecessary variability just seems like asking for trouble.
Relax. You're talking about someone's opinion of a programming language.
That said, all of your comments come across as ignorant to me. I don't think you're ignorant, though. I think you know JavaScript really well and like it.
I personally found it unusable on the server until TypeScript became mature enough to use. Have you tried TypeScript or some other statically typed, better-defined language (Flow, Clojure, etc.)?
You may love JavaScript, but there's a good reason Facebook, Google, and Microsoft all find it to be unusable and have created compile-to-JavaScript languages so that their companies can be JavaScript-free someday.
I doubt the goal is to be JavaScript free. The compilers typically enhance the language or give you access to functionality that is destined to land in the future JavaScript. ES6+ shows a lot of promise and thanks to Babel we can use those features now.
Let me clarify what I mean by "JavaScript-free": they don't want any of their employees writing JavaScript by hand anymore, whether it be ES6, ES2015, or ES2020. TypeScript and Flow are supersets, but Clojure (and many other compile-to-JavaScript languages) are completely different.
No, they just want some static tooling on top of a dynamic language.
We use Typed Clojure where I work. To say it's because we "don't want to write Clojure by hand anymore" would be nonsense.
Reading your comments in this thread, I think you'll find out one day that people just have different appetites for different trade-offs and that's really all there is to it. No need to wheedle, condescend, or link Paul Graham essays.
I understand what it's like to be emotionally invested in a particular language or technology. It's usually a result of not having used something better[1] or just having invested a lot of time into that technology.
The good news is that a lot of programming skills are fungible, and you (like me) will eventually use things you like more than you expected -- perhaps even more than you like JavaScript. The time you invested won't be wasted.
You assume a lot about me! Don't worry ... I'm 35, I've been around the block, dipped in and out of a few languages and platforms :)
And if we're talking about emotional investment in technology ... I think that's where most of the JS-shaming comments we're seeing in this thread are coming from. People feel a bit threatened by it because it seems like it's eating a lot of things at the moment. They'd be better off following your advice: stop sniping at it and take an interest.
Javascript has grown up. It's past its infancy, and at a point where it starts to be really good at what it was designed to be good at. However, like any adolescent, since it's good at one thing, it thinks it is the best at everything.
I'm anthropomorphizing Javascript somewhat here, attributing the feelings of its community to the language itself. But from the point of view of someone whose first programming language was not Javascript, the trend is pretty apparent: We can run Javascript anywhere, so we will run it everywhere.
> However, like any adolescent, since it's good at one thing, it thinks it is the best at everything.
The only thing that's happening here is that a community has formed around an evolving language and is taking advantage of some great tooling and open source projects to do some interesting stuff. That community isn't formed exclusively of wet-behind-the-ears web developers. It comprises developers of all skill levels, with experience in a multitude of platforms and languages, just like any other. These people chose JS because it works for them and their problem set. To suggest otherwise is condescending, to say the least. Stop trying to paint the whole community with a single brush; you sound ridiculous. If JS isn't your bag, fine, move on, but less of the JS shaming please.
JavaScript has grown up; it's got one of the best package managers anywhere, rapidly improving syntax and semantics, a large passionate community, loads of great tooling and a well maintained cross-platform binary in the shape of Node.
I'll give you the passionate community point. I almost completely disagree with everything else.
JavaScript is a very immature platform.
JavaScript has probably the worst packaging ecosystem of any major programming language. The fine-grained packages, indirect dependencies, and frequently broken versioning cause endless problems in real world systems. Recent attempts to fix the package manager have unfortunately swapped one set of problems for another, while still leaving some of those fundamental weaknesses involving transitive dependencies and excessive trust of version numbering conventions unaddressed.
It does have rapidly evolving syntax and semantics, but at the expense of stability, portability and future-proofing. The only reason 99% of JS programmers use most of the new syntax and semantics is a combination of a transpilation tool written primarily by one (remarkable) person and a polyfill written primarily by one other (remarkable) person in their free time. Browser, tool and Node support for many of those new features is patchy, which is why that transpilation and polyfilling is necessary.
Tooling for JS is at least two decades behind the industry best-in-class. It's a dynamic language that now has such a huge mess of build tools associated with it that seasoned veterans of web development weep at the prospect. You can't reliably debug real world JS code in its natural form, even on the level of setting break points and examining local variables and call stacks, without jumping through all kinds of hoops, and in some browsers even then. Profiling also relies almost entirely on browser magic. There are lots of testing tools, but setting them up is often a chore and they have widely varying degrees of power and flexibility. We've only recently got a style checking tool that can actually cope with ES2015 reasonably well. Refactoring, auto-completion, documentation, navigation in your editor? What are those?
As for Node... I've seen experienced developers spend months trying to get basic stuff working in a cross-platform way with Node, and then seen other experienced developers do more in a week or even a day using C. Using Node is saying "I know, let's throw out a few decades of experience building reliable client-server systems and all the other tried and tested languages and tools and libraries we've developed over that time, and instead transfer all of those negative aspects of JS to the server side as well, even though unlike front-end developers we don't have to".
I am a huge fan of JavaScript myself, mostly because I find it very fun to work with and because you can do anything with it at the moment (websites, games, native mobile apps, smart TV apps, programming Raspberry Pis, and much more). About single-page apps: don't build an SPA if your website is not really an interactive application. Do you have a game, a photo editor, P2P video chat? If yes, go ahead, an SPA is really great for those things, but for God's sake do not create a single-page app when your website is actually just pages with different content and no interactivity. Why try to emulate how a browser works (loading, rendering, etc...) inside the browser itself?
> Most web apps I work with daily have highly sophisticated in-browser interactions that are built with JavaScript and can only be built with JavaScript: Flickr, YouTube, Facebook, Twitter, GMail etc.
There is truly nothing about any of those that couldn't be done with a simple page refresh. Especially YouTube. Generally I find most of what JavaScript adds just irritating.
I feel like Gmail reached a peak of Javascript use just slightly past what the basic-HTML version has, and everything since has just slowed it down and made it eat more RAM for little benefit. Sometime between then and now I'm sure they added some "helpful" frameworks that have contributed to the problem. Inbox is even worse.
Our children won't believe we ever browsed the web in a graphical multitasking OS with 64MB of RAM and a single processor core. "Don't be silly, that wouldn't even be 1/6 of what you'd need to load Google Inbox!"
Let's list Youtube's features that could be done with no Javascript
* Displaying the video
* Controlling the video (although with the browser's default controls)
* Being logged in
* Listing videos
* Voting
* Commenting
* Suggested videos
* Subscribing
Actually, I can't see a single thing that can't be done without Javascript. It has no complex notification system aside from annotations (and I'm not sure anyone enjoys annotations), nothing that isn't provided by browsers.
But no javascript would require page refreshes, which makes for a worse UX. Eh, we survived refreshes before. Maybe browsers could even agree on a standard to refresh only part of a page by querying the server themselves. Throw in five lines of JS to save the current time of the video to localStorage and restore it on reload.
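Those five lines, more or less (the storage key is arbitrary and a single <video> element is assumed):

```js
document.addEventListener('DOMContentLoaded', function () {
  var video = document.querySelector('video');
  if (!video) return;
  // restore the last position, if any, then keep saving it as playback runs
  video.currentTime = parseFloat(localStorage.getItem('videoTime')) || 0;
  video.addEventListener('timeupdate', function () {
    localStorage.setItem('videoTime', String(video.currentTime));
  });
});
```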
So you are saying one shouldn't use JS to implement voting and comments, but instead voting and comments should interrupt the video, and then one uses JavaScript to restore the playback position. You also imply that Youtube should have waited six years or whatever for video playback to even be possible without client-side programming.
> Maybe browsers could even agree on a standard to refresh only part of the pages by querying the server themselves.
Maybe browsers could also agree on implementing all needed applications in the browser. Then you just make <youtube> tag and you have implemented youtube.
HTTP 204. You send the request, the browser stays where it is. Super simple, and what it was made for. Amazon used to use it quite heavily back in the day.
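A sketch of that pattern with an Express-style handler (the route, form, and recordVote call are made up for illustration):

```js
var express = require('express');
var app = express();
app.use(express.urlencoded({ extended: false }));

// The page contains a plain form, no JS required:
//   <form method="POST" action="/vote"><button name="dir" value="up">Upvote</button></form>
app.post('/vote', function (req, res) {
  recordVote(req.body);   // hypothetical persistence call
  res.status(204).end();  // 204 No Content: the browser stays on the current page
});

app.listen(3000);
```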
Is there a spec that the browser must stay where it is? Otherwise I would just use a form targeted to a hidden iframe. Still it's not usable here since you won't see your vote or comment or anything without reloading the entire page.
The spec language "SHOULD" is not the same as "MUST" at all.
No, but in practice all major browsers (Firefox, Chrome, IE, Opera and Safari) do so. And it's not like the implementations of JS and its APIs never differ.
> The spec language "SHOULD" is not the same as "MUST" at all.
There are edge case clients that would need to contravene this. Most of those clients don't support JavaScript (or in many cases video) in a meaningful way, making the whole thing moot.
Maybe you could have some kind of application, or "app" that users could install locally, on their computer or device, and then this application could play videos from Google, without needing to use a web browser at all!
Not saying it's true in the case of Youtube, but I've noticed recently that when I go back to lame old creaky page-refreshing HTML + (a little) CSS + a few lines of JS sites with no AJAX, they feel unbelievably fast compared to most AJAX-heavy Angular/Backbone/React/Whatever sites.
Ajax was better UX than full page refreshes when it was basically just replacing frames and iframes. It's become bad UX now that template rendering systems and a few hundred KB of code are involved.
Actually, I just tried this with modern Gmail in one tab and the basic HTML version in another. It's way, way faster to switch folders/categories in the basic HTML version. It wasn't even close, despite modern Gmail "efficiently" loading just the data it needs and basic HTML refreshing the whole page.
Let's kill UX and get rid of all the smooth JS interactions and replace them with refreshed pages and native browser controls. Geez... are we going backwards here?
I can see where that comes from, the functionality of Youtube is nothing without its videos.
But on the other hand, Youtube wouldn't have as many videos – specifically the content creators wouldn't have as much of an incentive to upload them – without the social features being there.
So you downloaded 30 MB of client side code instead of 100kb. Don't see how that's a win. By "higher quality" you mean placebo (see foobar2000 faq) or some DSP filters?
> So you downloaded 30 MB of client side code instead of 100kb. Don't see how that's a win.
Depends how you're counting. Using that 100kb of code "as intended" essentially requires Firefox (95MB) or Chrome (153MB), or some proprietary alternative.
To use VLC instead, we can get by with something like dillo (1MB) and youtube-dl (1MB), which (added to your 30MB estimate for VLC) is much less client-side code.
Firefox and Chrome are application platforms, like your OS. It doesn't make sense to count them. They don't only run youtube, unlike VLC which is only a media player. Otherwise VLC counts as 20 GB (or whatever) for the OS as well.
From a performance perspective, it is a win. I've used the VLC trick to smoothly play videos on machines which are too old to play them at full frame rate in the browser.
An unanswered question in this debate is: what is a web app and what is a web site? Sometimes the boundaries are blurry and sometimes they are clear.
For example, should a blog rendered in the browser be treated as a web app? Or should it be rendered principally as HTML and CSS before it's sent to the browser, rather than rendered in the browser using Javascript? Here's an example: over 300K of Javascript to render a plain text page: http://elm-lang.org/blog/new-adventures-for-elm
My impression is that some developers are beginning to treat anything rendered in the browser as a web app - even plain web pages with no dynamic elements.
Why do developers do this? Is it because it makes their life easier using a single tool or language for both dynamic and static content? Or is it because they want to learn a new framework that's popular or interesting? Whatever the reason, one has to ask if giving users a good and fast site experience comes anywhere into the equation.
Each time yet another hipster posts his unique snowflake insights about why we absolutely need yet another piece of bloatware to build websites, on this very site (which is just static html pages over FS storage in less than a MB of code, including a dialect of Lisp and related DSLs), a kitten dies somewhere.
JavaScript web apps are fine, as long as they are "isomorphic", i.e. the front-end code is also run on the server and the resulting HTML is sent to the client, and as long as it's possible to at least navigate to and read all the content without having JavaScript enabled.
Am I the only one wondering what the fuck "progressive enhancement" is? Even the "enhanceconf" website doesn't explain.
>We're convinced progressive enhancement remains an important aspect in pushing the boundaries of the web while still providing a robust experience for every user.
What?
EDIT:
>Your HTML is not more accessible or more semantic when it’s rendered on the server.
IDIOT DETECTOR TO FULL POWER. It is by definition more accessible if it doesn't require Javascript to render.
> Most web apps I work with daily have highly sophisticated in-browser interactions that are built with JavaScript and can only be built with JavaScript: Flickr, YouTube, Facebook, Twitter, GMail etc.
Good that GMail is created with GWT. So it doesn't even use JavaScript.
Most people are so afraid of JavaScript that they create hella crazy abstractions (AngularJS [Google], ReactJS [Facebook]), and even then Microsoft and Google are trying to avoid JavaScript more and more with TypeScript and Dart, which they transpile.
JavaScript may be valuable on the web, but not in its current form. There are so many quirks that you spend your time either using a library or fixing browser incompatibilities.
A lot of people here have said that transpiling will be used a lot in the upcoming years, and I think so too. People will do it more and more; they want to use Python/Scala/Java/whatever on both sides.
Also it will reduce complexity by a lot.
Currently your typical webapp has a server-side technology (even if you are using Node), and then a web frontend which uses at least npm; however, mostly you end up with npm, bower, gulp/grunt, webpack, and whatever else.
> Client-server architectures instead of monoliths
What about small teams? They can't split their monolith, since it will become unmanageable. So if they have two apps, frontend and backend, it's still a mess to manage these projects with a small team.
> We need to stop excluding JavaScript apps from “the web as it was intended”. JavaScript apps are “of the web”, not just second-class citizens “on the web”.
In my eyes JavaScript should be used where it is needed, but not in stuff that should never have to touch it.
JavaScript is still a mess.
Gmail is not built with GWT, it's built with Closure Compiler.
Inbox is built with GWT and Closure compiler together, as a hybrid app, with the UI done in Closure/JS, and the business logic done in Java so it can be shared with Android and iOS.
Over the past 2 years, the GWT team has been honing a new JsInterop spec that removes the majority of the impedance mismatch between Java and JS for hybrid apps, which will be part of the 2.8 release.
One of my favorite features is automatic Java8 lambda <=> JS function conversion. Any single method interface in Java, when marked with @JsFunction, can be passed to JS where it can be called as a regular JS function, likewise, any JS function can be passed back to any Java function accepting a single-method-interface, and it will pretend to implement the interface.
You can now (in GWT 2.8) write code like $("ul > li").click(e -> Window.alert("You clicked me")) with only a few lines of code to interface with jQuery, for example.
I have to disagree. Adding another layer of abstraction cannot reduce complexity, although it might make certain tasks easier. Consider assembly vs C as an example: C can make lots of programming tasks way easier, but the complexity of the system is not reduced.
With C, this becomes apparent when the program segfaults. A garbage-collected language or Rust can prevent segfaults using a subsystem to manage and/or enforce memory ownership, again making lots of tasks easier, while adding even more complexity.
In the case of transpile-to-JS languages, these can fix many of the shortcomings of JS and make lots of tasks easier, but they are a more complex system which can cause additional work if the generated code fails at run time, and the browser debugger brings up something completely different from your source code.
Your point that we will see more transpiled languages in the future makes sense.
I can see where it will reduce code complexity while upping stack complexity.
You're right that it's not always less complex. Trying to trace an error through some of these systems is ridiculous. I miss the days of it just pointing to a line number.
"Most people are so afraid of JavaScript that they create hella crazy abstractions (Angularjs [Google], ReactJS [Facebook])"
Much as I dislike JavaScript, every other language uses frameworks or "abstractions" to be more efficient. Why the hell would I reimplement all that crap myself in Python when Flask or Django have done 90% of my app for me?
> Most web apps I work with daily have highly sophisticated in-browser interactions that are built with JavaScript and can only be built with JavaScript: Flickr, YouTube, Facebook, Twitter, GMail etc.
Highly sophisticated browser interactions? You mean adding a comment? Which you actually just did on Hacker news using zero javascript.
None of those sites need or require a lot of Javascript. In fact, when I use Gmail I have the standard HTML version set as the default. It's much faster, and time-to-inbox is shorter.