Something that's been concerning me about the current approach to web development in particular is the accumulation of unacknowledged -- and increasingly unmeasurable -- technical debt.
I'm not sure that's precisely the right phrase, but here's what I mean: we all (should) know about the technical debt in our own projects, and we know that every project accumulates some. When we build our projects on top of other projects, our work now has the potential to be affected by the technical debt -- bugs, poor optimizations, whatever -- in those underlying projects.
Obviously, this has always been true to varying degrees. But we've reached a point where modern web applications are pulling in dozens of dependencies in both production and development toolchains, and increasingly those dependencies are themselves built on top of other dependencies.
I don't want to suggest we should be writing everything from scratch and constantly re-inventing wheels, but when even modest applications end up with hundreds of subdirectories in their local "node_modules" directory, it's hard not to wonder whether we're making things a little...fragile, even taking into account that many of those modules are related and form part of the same pseudo-modular project. Is this a completely ridiculous concern? (The answer may well be "yes," but I'd like to see a good argument why.)
More simply put, stability is not valued in the front-end world.
It's interesting to compare this with the enterprise world, where IT runs the show: Microsoft's dominance there was established largely on the strength of stability. IT is run by managers who can get fired if instability costs the company money.
In the front-end world, managers can't keep up with the changing landscape and therefore can't make decisions based on software stability. Instead, in my experience, they make decisions based on how easy it is to hire developers (as cheaply as possible).
So, as a proxy, popularity drives the front-end, because the developers who have the time to keep up with the latest trends (i.e. mostly young 20-somethings) are the ones who get hired.
I think that's an interesting observation about Microsoft re: the period where they had an enterprise OS (NT) and a consumer OS (3.1x/9x). I think the tension between those two philosophies came through clearly.
It's a bit ironic to praise Microsoft for stability in this context, when they probably played a major role in causing instability in the web front-end world for more than a decade.
Most people's complaint was that Microsoft was trying to push their own proprietary standards (ActiveX, anyone?) upon the web at large. Also, their adherence to web standards at the time IE6 came out was so bad that it caused almost insurmountable instability.
> But we've reached a point where modern web applications are pulling in dozens of dependencies in both production and development toolchains, and increasingly those dependencies are themselves built on top of other dependencies.
Um, take ANY non-trivial application, written in any language, and then follow the chain of its dependencies all the way down to the kernel, and shriek in horror. JS is not unique here, the only unique part is perhaps how obvious npm makes it just how much stuff your code depends on.
Sort of, kind of. I consult for large corporations where downtime is a no-no. Think secondary market-makers, institutional investors, healthcare insurers. Not quite rocket control systems or firmware for dialysis machines, but five-nines is a given.
Bugs happen everywhere. That's an axiom. You can decrease the risk by formally verifying the system (see: SPARK for Ada), but they'll still occur. The difference is what happens in response to those bugs. ASP.NET on Oracle 11g might cost a few hundred thousand a year in support contracts alone, but when the bugs occur there is an insurance policy (literally -- at that level, there's an SLA and everything), professional liability insurance, and legal recourse. Those ecosystems have had 1) time to mature, so the previous bugs that customers identified and reported have been resolved, and 2) the option to pay (a lot of money, I'll admit) to have any new bugs resolved almost immediately. I was working with a large healthcare provider and somehow a bug made it through 3 rounds of QA into production. Within 6 hours, 3 engineers who worked on that specific component had flown across the country, landed, got in my car and were driven to our office, while another team was simultaneously Remote Desktop'ing in to see if they could resolve it in the meantime. Microsoft still offers LTS for a 13-year-old operating system.
I'm not ragging on node.js - it's typical of any new technology with a lot of fervor. In fact, it very much reminds me of the CL community in the late 1990s, where everyone was writing a ton of solutions and submitting packages which worked very well for their own purposes but broke the second you tried to do something they weren't meant to. It would break and you'd be on a mailing list back-and-forth'ing for two or three weeks.
A large amount of code is in use that hasn't had to stand the test of time (which flushes out a) the poorly designed solutions and b) the bugs in "good" solutions). To make matters worse, no one knows what is going to be around in 5 years (i.e. what is "good"?), so it's all a gamble where no one quite knows what to put their money on (should I pick Knockout? Angular? Oh wait, now it's all about isomorphic code... should I use Redux or Flux?).
FWIW we use Angular where I work, without any JavaScript build tooling (only for SASS), and it's actually fine. So taking proven solutions that have been around for a couple of years in the front end world is not a bad idea, if you want to just Get Things Done.
At my last place we used React with all the latest hotness, and had ongoing issues with complexity and devs having to keep up with all the new tech.
I agree with pretty much everything you said - and thanks for sharing your perspective. Node.js is still very young and the enterprise ecosystem has a long way to go, but it's getting there (e.g. IBM's acquisition of StrongLoop, the EnterpriseJS initiative - https://enterprisejs.io). I've done a bit of work with Erlang, which is all about the 9s (though it doesn't have an enterprise ecosystem on par with Java or .NET around it), but for a lot of software Erlang is overkill, as would be formal verification, for example. A lot of software is just fine being built with Node or other young tech, trading stability for productivity and time-to-market.
As others have pointed out, the number of lines of code that the typical 2016 webapp depends on isn't all that different from other ecosystems. The dependency footprint just looks a lot bigger because it's broken up into lots and lots of little modules.
I'd argue this actually makes things a little less fragile, because - at least theoretically - it's easier to manage versions and do rollbacks of the various pieces if something goes wrong. You can surgically switch the version of a dependency without necessarily changing a lot of other functionality.
Of course, things aren't always that simple. And lots of little modules might effectively mean less integration testing and higher chances of bugs. So, maybe it's a toss-up.
I think if anything you're being too kind here. The way dependencies are typically handled in the Node/NPM ecosystem is horrendous, and this has serious negative implications for the stability and longevity of almost any project built within that ecosystem today.
Unfortunately, modern web development is led, at least in publicity terms, by people who think phrases like "move fast and break things" and "living standard" are cool. The trouble is, what actually happens if you follow those people is that you break lots of things very quickly and you have no stable standards to help everyone do better.
The Node / NPM system is fantastic. Even my design colleagues use it on a daily basis, and for them, 'npm install' just works. It's interesting that Microsoft have adopted a very NPM-like style with their new .NET Core project.json files.
Your point is one well worth making..."technical debt" is not an inept choice of terms, IMHO...
I've been in the trade for decades now...I'm more inclined to apply the term "technical baggage" to the feeling I often have these days when sitting down to solve a problem for a client...
Layers upon layers of dependencies to sort through to do a proper job of putting systems back on sound footing...multiple patches to be examined, some to be discarded...with the clock ticking...non-tech managers love to hear you say, "OK, you're back up"...increasingly, that's what we settle for...
All to keep the wheels of commerce turning with as little down time as possible...
I have to admit that my discomfort is at least somewhat influenced by comparing the current environment to the environment of the early days...essentially stand-alone programs, elegantly written, performing the same basic functions that now require five, ten...
How's web development any different from other "branches" of software development when it comes to technical debt or deep dependency trees?!
It's been the norm in the field of software development since the early days to rely on other people's work for productivity reasons and many other objectives and consequently the trade-off of convenience with technical debt whether in implicit or explicit terms. I don't think that web development is more guilty of this "sin" than any other branch of the field.
You might be right. From exploring other open source applications, my impression is that there tend to be many times more external libraries, tools and plugins pulled in by modern web applications when compared to, say, a Cocoa/Objective-C application of moderate size, though.
The difference is that we've seen that the other kind of web development (without grapevines of libraries and complex toolchains) is both possible and capable of producing usable results. We're regressing.
I think you make some good points, as do a number of your repliers. We're pulled in different directions by a number of business and technical priorities. What I see as a common denominator is a development life-cycle that expects increasingly nimble articulation of break-off projects and shorter iteration times into production, at least in SaaS. Concerns about code stability and optimization are not entirely lost by any means, but they don't seem to have the same priority when the business "has to" keep moving, and the tech "has to" keep evolving at this pace.
The problem with hypermodularization is that it takes time to make a decision on what module to use for every task. If you have to do this with 100s of modules, you're wasting a lot of valuable time.
Does anybody know a quick way of making the right decision?
NPM is like a crowded graveyard nowadays (or like .com domain names - all of the good module names were taken a long time ago, and now all the best-practice modules have completely irrelevant names). There are thousands of buggy modules in which development has stalled, and it's easy to waste a lot of time trying to separate the wheat from the chaff. I personally sometimes have to open up 5+ Github repositories to check their last commit, number of contributors, interfaces, code quality, unanswered issues, etc. Only after doing so am I able to make a decision.
In terms of knowing what's cutting-edge practice it seems you have to watch Twitter a lot and be careful not to follow every single bad idea.
I can't imagine what it'll be like to search for a module to handle something as common as config in a couple of years. Even when you constrain yourself to something like '12 factor config' there are many different implementations.
Don't even get me started on the insane assembly required to get webpack, Babel, CSS Modules, and PostCSS to all work together.
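To give a flavour of what that assembly looks like, here's a minimal sketch in webpack 1.x style (the paths, loader options and preset are illustrative and assume babel-loader, css-loader, style-loader, postcss-loader, autoprefixer and babel-preset-es2015 are all installed -- check your own versions' docs):

```js
// webpack.config.js -- illustrative sketch, not a drop-in config
var autoprefixer = require('autoprefixer');

module.exports = {
  entry: './src/app.js',
  output: { path: './dist', filename: 'bundle.js' },
  module: {
    loaders: [
      // run ES2015+ source through Babel
      { test: /\.js$/, exclude: /node_modules/, loader: 'babel', query: { presets: ['es2015'] } },
      // enable CSS Modules, then run PostCSS over the result, then inject via style-loader
      { test: /\.css$/, loader: 'style!css?modules!postcss' }
    ]
  },
  // postcss-loader (0.x era) picks up its plugin list from this top-level function
  postcss: function () { return [autoprefixer]; }
};
```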
Just like the difference between pre-internet and post-internet days, the problem is no longer finding the information, it's filtering the large amounts of available information. Same with libraries and modules - there's probably a module for a LOT of things. You eventually develop a good sense for what heuristics make a good module, and you develop in such a way that the cost of an accidental bad module is low.
You can certainly find full-fledged frameworks out there that do everything for you and make your decisions for you, but the cost of getting that wrong is far higher than a bad module.
You just have to look at the module named mkdirp to know there is a problem with hypermodularization. Here are all of the modules related to making a directory -- https://www.npmjs.com/search?q=mkdir
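For perspective, the whole job those packages do is roughly this handful of lines (a hypothetical, simplified sketch using Node's built-in fs module, not the actual mkdirp source):

```js
var fs = require('fs');
var path = require('path');

// create a directory and any missing parents (sync version, ignores edge cases like permissions)
function mkdirp(dir) {
  try {
    fs.mkdirSync(dir);
  } catch (err) {
    if (err.code === 'ENOENT') {
      // parent is missing: create it first, then retry this level
      mkdirp(path.dirname(dir));
      fs.mkdirSync(dir);
    } else if (err.code !== 'EEXIST') {
      // anything other than "already exists" is a real error
      throw err;
    }
  }
}

mkdirp('tmp/a/b/c');
```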
> I personally sometimes have to open up 5+ Github repositories to check their last commit, number of contributors, interfaces, code quality, unanswered issues, etc.
Only one of these, it seems to me, is likely to be consistently related to the quality of the module.
Honestly, I think it's better as a question/koan than as explained. Asking yourself how good each of these signals really is and how they could fail is probably much better than having a random internet commentator try to talk you into it.
But:
- last commit: I'm not surprised to see this here. Almost the entire industry is deeply invested in the idea that software can never be done, only abandoned, so deeply it's almost invisible. And projects with enough surface area likely are like the proverbial shark: either moving or dead. But for modules with limited, well-defined functionality, reaching a steady state where the project is actually done and updates are rare should actually be a sign of quality.
- number of contributors: again, I suspect that where this coheres with quality, it's probably correlated with size/surface area. Plenty of limited, well-defined projects probably are good with one or a handful of contributors.
- interfaces: are we talking about presentation of the project? Or the UX of an app? Either one could be a sign of overall craftsmanship, or it could be a sign that the author is concerned with appearances/marketing.
- code quality: tautologically true (code quality is related to project quality) but not helpful. One might easily mean "do the authors of this module follow my favorite code style guide," in which case I think this is extra likely to lead you astray.
- unanswered issues: unanswered issues seem like a great signal. If they're present (and real), the author(s) either can't fix them, or the project is abandoned while actually needing improvement. Inversely, if there are answered issues, the project is being used and attended to.
Basically, you look at the project's social presence.
That sounds kind of airy-fairy, but it's honestly how it works. You look at what the README says, how well it addresses questions, you look at the release history, see how maintainers interact with contributors, read a blog post or two to see how the maintainers are thinking.
And then you make a judgement call.
You don't have to get it right every time. That's the beauty of the modular approach. If you get it wrong, then you're smarter when you go to replace that piece of the infrastructure.
From a library developer's perspective, the major issue I find with NPM is that it stores its own copy of the source.
Old but useful libraries will often stagnate and get forked by users who still want/need to carry on development. Linking directly to a repo would make more sense but that's not how most people use NPM.
> In terms of knowing what's cutting-edge practice it seems you have to watch Twitter a lot and be careful not to follow every single bad idea.
A lot of this stems from the fact that today's solutions will likely become tomorrow's technical debt. It's not a popular opinion on HN, but adjusting development to follow current/future web standards is insurance against future technical debt.
Webpack, for instance, solves today's problems:
Modules:
Currently, there are 3 non-standards (AMD, UMD, CommonJS). To make 3rd-party libs interoperable, all 3 have to be supported, so Webpack handles the messy details. Which, BTW, is a huge improvement over not being able to use libs that don't support whatever non-standard you choose. As for future standards, Webpack is moving in a very positive direction by adding ES6 module support in v2.
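For anyone who hasn't had to deal with the interop mess directly, here's the same trivial module in the competing formats (an illustrative sketch; the function name is made up, and UMD is essentially a boilerplate wrapper that detects which of the first two environments it's running in):

```js
// CommonJS (Node/npm)
module.exports = function greet(name) { return 'Hello, ' + name; };

// AMD (RequireJS)
define([], function () {
  return function greet(name) { return 'Hello, ' + name; };
});

// ES2015 module syntax (the standard; what Webpack 2 is adding native support for)
export default function greet(name) { return 'Hello, ' + name; }
```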
Transpiling:
Transpiling as it's used today will likely become less relevant over time. In terms of JavaScript, ES6 provides useful additions that make programming in vanilla JS much nicer. ES7 has the potential to shake things up even more in a really good way. For instance, decorators will make it much simpler and more straightforward to create higher-order functions, which in turn will make it much easier to do functional-style programming in JS.
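As a rough example of what that looks like (a sketch only -- the decorator proposal is still in flux, and this follows the (target, key, descriptor) signature used by Babel's current transform; names are made up):

```js
// a decorator is just a function that receives a method's descriptor and can wrap it --
// i.e. a higher-order function applied declaratively at the definition site
function logged(target, key, descriptor) {
  var original = descriptor.value;
  descriptor.value = function () {
    console.log('calling ' + key, arguments);
    return original.apply(this, arguments);
  };
  return descriptor;
}

class Api {
  @logged
  fetchUser(id) { /* ... */ }
}
```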
The next major shift will come with CSS extensions. Less, SASS, Stylus are the current common non-standard solutions to the difficulty of managing large CSS codebases. I'd expect that the web standards people will eventually cherry-pick the good parts from them the way they adopted the good parts of CoffeeScript in ES6. Unfortunately, those who heavily rely on Less, SASS and Stylus will either have to adapt or continue to use/support the tools of a dying standard when everybody else moves on.
Bundling:
Bundling as we know it today is an anti-pattern, but also a necessary evil due to the limitations of HTTP/1.x. Warming intermediate caches helps, but a warm local cache trumps all. Unfortunately, bundling is in such widespread use that the chances of a user having a warm cache are essentially nil.
HTTP/2 will (hopefully) influence developers to abandon bundling strategies, thereby improving cache reliability for all.
The next major shift we need to improve cache reliability is a widely-adopted versioning strategy that library devs use to mark packages for long-term support. It's insane that everybody relies on bleeding-edge versions of dependencies, but since everybody bundles everything, there's no measurable benefit to sticking with an older, more stable version of a dependency.
I would touch on the issues with the widespread adoption of functional-specific supersets of JS but -- considering the tastes of many HN users -- I really don't feel like being downvoted into oblivion.
> Does anybody know a quick way of making the right decision?
Try to see things from a long-term perspective. Stay cognizant of the nature of the hype cycle.
Some technologies really do have the potential to provide huge improvements in performance and usability. Some will eventually provide the improvements they promise but the first version won't be good enough.
Most tools -- no matter how useful they appear to be today -- will likely die or be replaced by something better in the future.
I think the JavaScript ecosystem has a fundamental difference from all other languages/communities that came before it: it is universal. Hence I think a lot of the debates raging about the right tools come down to different people using it for different purposes.
I doubt that everyone can consolidate on just one set of tools, because in javascript-land "everyone" means something different than other places. How can a tool that is good for a front-end designer who needs basic DOM manipulation also be the right tool for someone building an entire application in js and only using HTML as a delivery mechanism for their app bundle?
I wish people would recognize in these discussions that their use-case might be different from others, and instead of talking about "the best tools", instead talk about "the best tools for this class of applications".
So hopefully the toolset could be consolidated down to one clear choice for each class of usage. Then the biggest decision to make is deciding which type of application it is you're building.
This is definitely part of it. JavaScript tooling encompasses back end, front end (progressive enhancement inside server generated web pages), front end (large single page apps), and a bunch of language flavours (ES5, ES6, ES7, TypeScript, JSX, to name a few). So it's understandable the tooling ecosystem is large and diverse, and that plumbing it all together can result in premature hair loss.
We're using ES5 with AngularJS at work, and it's like a breath of fresh air ;)
The web community is flooded in negativity because it was swamped by kids with overly idealistic, unrealistic, and heroic ideas about what the web was going to be post-Facebook.
Those people then realized that the web is, like all things, both real and imperfect. So now they're upset. This is all part of growing up.
People who have been involved with the web for a while are not in any way more jaded than before. They already witnessed PHP, PERL, Java Applets, Flash, the browser wars, and so on.
I don't think I agree. I've been developing stuff for the web since the mid-90s, when TABLEs were just about starting to be a thing and Perl was definitely the preferred choice on the server. There has been plenty of excitement along the way, undeniably, but it feels like HTML5 took forever to get to where Flash was, JavaScript on the server still underwhelms me, and it seems almost daily to get more complex and full of enterprise beans, browsers are far from equal, the CSS3 spec isn't finished, developing for the mobile web is often a pretty wretched experience... and my clients still ask me to make their logos bigger and complain that important stuff is below the fold. Am I jaded? You bet.
But hey, at least we can centre things vertically in CSS with Flexbox now (apart from the dubious browser support, of course).
It seems to me that most of the people complaining have been from the web old-guard (e.g., PPK[1][2]), but maybe I've missed the "kids" who are complaining.
If this has a grain of truth in it, the 'older crowd' need to lead by example, stop shaming 'kids' for their age, and prove why their old products are better.
Creating something is difficult and criticizing is easy, such that showing oneself to be capable by creating is MUCH harder than showing oneself to be capable by criticism.
> They already witnessed PHP, PERL, Java Applets, Flash, the browser wars, and so on.
It's frustrating that the situation hasn't improved. I thought by now the web would have a clean, easy system to work with. At the rate things are going, I don't have much hope before 2030.
Come on, it's way better than before. Before, you could barely get the UI you wanted to work at all; now it's mostly techy debates about tooling and patterns.
Whether it's better than before is extremely debatable.
Cross-browser compatibility was a major issue, and it has improved some. But the problem in the 90s wasn't whether you could get the UI you wanted to work, and even if that was the problem, it still isn't solved. Actually, the consensus I remember from back then was that Javascript sucked. I guess the solution to that problem was to have more Javascript?
We have actually gone backwards in terms of ideas like "semantic markup" and most websites are no longer fit for viewing in Lynx. Which also means accessibility has gotten worse. Bloat is worse than ever and privacy issues are worse than ever. Power over the internet is concentrated in the hands of huge corporations who run crummy walled gardens. At least in the 90s most internet users agreed that AOL's walled garden was a bad idea, but now everyone comes out of the woodwork to argue in favor of Facebook or the App Store. I'd say things are worse than before.
I'm coming from an "end-result"-driven opinion about making something that works. It was a huge hurdle just to do pretty much anything beyond display text documents, so that was my problem when building web apps.
As to your last point - yes, true, but I think it's much worse with non-web technology and native apps!
> It was a huge hurdle just to do pretty much anything beyond display text documents
Like what?
The trend I see in practice is that most web applications are still 98% text documents with images, but a combination of modern design patterns and tooling makes generating those documents insanely complicated.
Currently I'm watching a project to "modernize" a bunch of small websites. The amount of effort it takes to convert them to some "familiar" (Bootstrap-like) UI is cringe-worthy.
Amen. We can build powerful, stable applications for the web that work across all browsers and operating systems. It has never been easier to pass data around and tap into 3rd party services to compose platforms that weren't possible a decade ago.
I'm hardly 'old' in IRL terms, but I remember back before we had really good version control, communication, data synchronization, deployment, testing, etc... tools.
The state of things nowadays is just nice. Just watch out for the hype landmines and life is good.
This is different though. JavaScript is something that all of us are, in one way or another, exposed to. Giant companies building an ecommerce platform, startups making fun and innovative SPAs, designers adding a bit of animation and adaptivity to a theme, bare-metal devs bringing JavaScript to robotics and operating systems, server-side CRUD applications and websocket handling, C++ devs interested in bringing the best to such a widely used language, mobile phone frameworks and compilers, Microsoft, Google, Apple, Unity, Mozilla.
JavaScript is the real deal. What makes it crazier is that 10-year-olds can do a few tutorials on HTML5 Rocks and feel confident in their abilities (in a good way). JavaScript is not just a crappy language; there's a socio-political aspect here that has probably never been seen before.
I sometimes feel that everyone thinks that this stuff is totally unique. However, I remember copying Basic code out of magazines and BBSs as a kid. Copying and modifying that code was as empowering and amazing then as github/npm/etc is now. There is certainly a higher quantity due to the ubiquity of the Internet, but the basic quality is the same.
I suppose this is my generation's BASIC :/ Visual Basic was my first language, actually. It was pretty fun; I built some cool stuff with it. I guess the web is more free, since people can share and I don't need Windows computers. And I don't need Visual Studio to make an application. When I first started with website dev I used to write in Notepad because I didn't have the money for Dreamweaver, and I felt like I was joining the elite. Then I learned about Notepad++ and it felt like a very natural progression.
> People take libraries like lodash – or jQuery, as we analyzed earlier – and insert the whole thing into their codebases. If a simple bundler plugin could deal with getting rid of everything in lodash they aren’t using, footprint is one less thing we’d have to worry about.
If you use Google CDN, why does it matter how big jQuery is? If N people use their own "smaller" M-byte copy of jQuery, browsers will have to download M*N bytes, as opposed to 0 bytes if you use the cached full version. A profound waste of bandwidth.
My advice: Use Google CDN, for less common stuff use cdnjs. Don't adulterate libraries!
There are quite a few disadvantages to using a CDN like Google.
- Delay for DNS resolution and the new TCP connection could be non-trivial (some tests show 300ms+).
- A JS CDN is also likely tracking your traffic and you may not want them to.
- No offline dev environment (on the plane).
- Server run (phantom etc.) tests might run super slow if they have to pull in a remote js library.
- Probably issues when it comes to apps that are meant to work offline, although I think this is solved with service workers proxying the CDN request.
And if the recommendation is to continue loading large libraries from a CDN and nevermind only what you need, well there's still a memory cost. If I have 5+ SPAs running in tabs that are each fairly complex (you know, like Gmail), it does start to add up. Because of the way that tabs are sandboxed I believe this may mean 5 copies of jQuery or React or Angular or w/e.
It's possible to not use the CDN when you are developing in your local environment, and use the CDN in your production environment. That way you can keep developing even when you are on an airplane.
Plus the cost of parsing the JSON and merging the huge object structures these libraries provide.
One item missing from your list is security. Google does not provide any guarantees on the safety of the CDN-provided versions of jQuery. Imagine the mayhem if a widely used jQuery library were replaced by a backdoored version.
There was an article in recent memory that looked at how various websites were using CDN's for delivering common libraries and if I recall correctly they concluded that the chance of getting a cache hit from a CDN is relatively low because of the different versions each website is using. The Google CDN for example offers dozens and dozens of different versions of jQuery: your visitor might have visited a site using jQuery before, but have they visited a site using the same version? Unfortunately I cannot find the article, maybe someone else will be able to provide a link.
There are other concerns too: if your website depends on a JavaScript library and the CDN is offline, it can break your website (which has happened with the Google CDN before, and with many of the other popular JavaScript CDNs). What if the CDN is compromised and is delivering malicious content? Not impossible.
I'm only aware of this slightly older (11/2011) article [1] on the topic. I've poked at the numbers recently in Big Query and see a wide spread for jQuery versions on popular sites. Maybe one of these days I'll get around to documenting it properly. I'd rather measure from a sampling of client side pageloads but this might be a bigger project to tackle.
Your cache isn't as warm as you think it is. The cache on mobile browsers is ~50MB, and 250MB on IE11. It's smaller on older IEs. Most regular internet users will download more than that every single day.
You should always use a CDN, but that doesn't excuse large libraries.
Shameless plug, but my startup, https://rasterize.io can tell you your cache hit rate for every visitor to your site.
> If you use Google CDN, why does it matter how big jQuery is?
1. Most people won't have the version you wanted to use cached from the location you were trying to use it from. Studies show hit rates on requests for libraries on "shared" CDNs are very low in real world conditions. (Also, caches are small. Oh, you wanted 2.3.4r17 of BigLib? I've only got 2.3.4r18 cached; I'll go ahead and delete that so I have room to download your new copy.)
2. We don't all have HTTP/2 yet, which means that while the number of bytes is important, so is the number of requests. If your app can get by with JUST jQuery, that doesn't matter. But if you need jQuery, plus two plugins, and lodash, and a promise polyfill, and... then all of a sudden your performance starts to be dominated by the number of requests being made, and (again) in real-world situations you may see your effective load times improve if you concatenate your libraries.
The dream is "just list the libraries you need and let the browser sort it out super fast". The reality is a long way from it, and I don't think telling people "just use a CDN!" is currently good advice.
To put some numbers to this, let's assume the user has a 1mbit connection with 300ms latency.
If a library is 32kb (on the wire) it will take 256ms to transmit to the client. If it's not inlined, it would take 556ms to transmit, or almost twice as long for the cold-cache case.
If the client has a 10mbit connection (an average cheap 4G mobile connection) with the same latency, it's even worse, as the 32kb library now takes ~27ms to download if inlined and 327ms if not. This is now an order of magnitude slower.
With a latency of 100ms (typical home internet), its still 127ms to download the cold cache vs the 27ms inline takes.
Since connection bandwidth is slowly rising, it makes CDN library distribution an even worse idea as time passes. The speed of light (in copper/fiber) is the dominating factor in 100ms latency, so it will not be much improved without massive expense. It may even get to the point where inlining the library is faster than waiting for a cheap mobile device to fetch the cached copy out of flash.
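In code form, the back-of-envelope model behind those numbers looks roughly like this (a sketch that ignores TCP slow start, TLS handshakes, headers, and so on):

```js
// crude model: a cold-cache CDN request pays one extra round trip on top of raw transfer time
function loadTimeMs(sizeKB, bandwidthMbit, latencyMs, inlined) {
  var transferMs = (sizeKB * 8) / bandwidthMbit; // size in kbit, divided by kbit-per-ms
  return inlined ? transferMs : transferMs + latencyMs;
}

loadTimeMs(32, 1, 300, true);    // ~256ms: 32kB inlined on a 1mbit link
loadTimeMs(32, 1, 300, false);   // ~556ms: same library as a separate cold-cache request
loadTimeMs(32, 10, 300, false);  // ~326ms: on 10mbit the round trip dominates (~26ms transfer + 300ms latency)
```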
You might want to seek out actual numbers before you say that. More often than not, that particular cache entry is evicted because of the billion combinations of frameworks, versions and CDNs. You are very far from downloading 0 bytes even for the popular frameworks.
There's also the issue of availability. If you decide to use one, make sure to have local fallback. The one time I did some remote instrumentation on this, some users actually had trouble reaching the CDN. I have no idea why, it might have been DNS failures or routing problems or whatever, but that made the whole site freeze for those affected since there was no fallback.
I've worked on the Google JS library CDN for a few years now. Our data indicates that cache hit rates are very low. You'd be better off serving jQuery from your libs.min.js.
In general, browser caches fill up so fast and there are so many versions of jQuery in use in the wild, that the cache never ends up being as useful as we'd hope. Sorry. :/
Version fragmentation pushes the CDN hit rate down quite a bit. Unless you're using the exact major/minor version that some popular sites are using you're going to see high miss rates, especially on mobile where it likely matters the most.
My greatest frustration with JavaScript tooling is that it assumes all sites will be deployed on networks with connections to the internet. Trying to make standalone deployment packages for closed networks has been a nightmare for me.
The browser still has to parse & execute the JavaScript, which is half of the startup time, and if you are fully an SPA without server-side rendered markup, this roughly doubles your time-to-glass on web clients.
How can you use a CDN if you're also using a bundler/minifier?
Put another way, how can you bundle your app such that CDN-available libs are "dynamically linked" but your app and less common libs are "statically linked"?
The doc you linked to describes how to create a bundle of JS that will export a "require()" function exposing the bundled libs to other JavaScript.
That doesn't seem related to the question of how to create a bundle that assumes that some libs have already been loaded with a previous script tag that pulled, say, jQuery from a CDN.
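One way that does work, at least with webpack (a sketch -- entry/output paths are made up): declare the CDN-hosted library as an "external", so the bundle assumes it's already on the page from an earlier <script> tag.

```js
// webpack.config.js (sketch): app code and small libs are bundled ("statically linked"),
// while require('jquery') resolves to the global that the CDN <script> tag already created
module.exports = {
  entry: './src/app.js',
  output: { filename: 'bundle.js' },
  externals: {
    jquery: 'jQuery'   // map the module name to window.jQuery instead of bundling jQuery itself
  }
};
```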
You can always upload the .js files to your own CDN.
The other magic of CDNs (aside from the potential cache reuse that everyone is talking about in this thread) is that the user downloads files from a server geographically closer to them than your server. This is typically a huge speed-up, and applies even if your JS files are 100% unique.
Sure, if someone's not using a popular 3rd party CDN because they don't know the option exists or they don't understand the potential benefits, then yes, consideration should be given. They're not always the answer, though, and there are plenty of legitimate business and security reasons why one might prefer to host their own dependencies.
Sometimes technical reasons make the potential benefit minimal, anyway. Once someone's already making one request for your application source, adding a bit more to that response (a bundled minimum-necessary-subset of some not-massive dependency) can make only a very minor performance difference. It won't be as fast as if a file containing the entire dependency was in the browser's cache (or possibly even in the JS engine's compilation cache), but it'll be significantly faster than if it wasn't.
I guess, but if you go to the trouble of localising your website into Chinese, then loading a self-hosted version of the library for Chinese users doesn't seem like much extra work.
One should consider the possibility that the entire web development industry is a giant busy-work factory making half-baked kluges for a fundamental problem that will never be papered over: HTTP is a stateless protocol for sending hypertext markup -- meta-data and context for resources in a folder somewhere. It's a file-system thing.
A web browser is a declarative-markup-tree rendering-engine with scriptable nodes, and the language chosen for scripting is an accident of history.
Using http and web browsers in ways they never were intended to be used is possible, albeit painful.
Now that we have virtual DOM's and isomorphic platforms like clojure/clojurescript and we compile to JS, now that we have JS on the server, now that we have our head so thoroughly up our ass that we forgot the point, now we can consider the circle of ridiculous nonsense complete...
The world wide web took off, and we have to live with its technical debt, or...
The solution is simple, bold, and risky:
1) pick a port. (Nowadays that's even a joke. we tunnel everything over port 80.)
2) pick a protocol with some future (hey let's just tack websockets on as an upgrade, gradually get browser support, etc)
3) keep moving...
I am not ultra impressed with the web of 2016...
The web of 1996 was way cooler.
I want a VRML3 rendering engine for a browser.
Oh, don't even get me started with build tools.
Nowadays we have build tools to build build tools.
I'm sorry, but when FRONT-END DEV considers build tools standard we are LOST lost lost...
How many JS libraries and CSS scripts do we really need to embed in a page? How many of those functions or classes are even being used in that page?
Why do I have to scroll to view source on a page with mostly text and a few colored boxes?
In a nutshell, I think, with some risk, that some innovators could literally pick a port, pick a protocol, and build a different kind of browser... once there is some content for that platform it's a matter of time before early adopters download one of these new class of browsers and so forth... it's how it happened the first time, and it's how pretty much everything happened... even games (second life) etc.
and meanwhile, the web can go back to being a directory preview/ resource hyperlink web...
To my knowledge, this has never, ever, worked well enough in a dynamic environment. Smalltalkers have spent over 3 decades trying to get this to work. What became the Smalltalk industry standard? Some form of code loading, often based on source code management.
Anyone who is doing tooling/library work in a dynamic environment needs to delve into the history of Smalltalk and ask if it was already tried and what the problems were. Chances are, it was already tried, and that there's useful experiential data there.
All of Google, and most ClojureScript users would like to disagree with you.
Google Closure has been successfully tree-shaking since it was released, in 2009. ClojureScript uses Closure for an optimization path, so the majority of CLJS apps in production (CircleCI, Prismatic, to name a few) use tree-shaking on every deploy.
Google Closure has been successfully tree-shaking since it was released, in 2009
One can still write code that breaks the tree-shaking. How does Google Closure solve that? Through its community and best practices. Arguably, this is where Smalltalk failed, and where other dynamic languages can take a useful lesson. (Being fostered by an environment like Google probably gave Closure a leg up this way.)
ClojureScript uses Closure for an optimization path
ClojureScript has an advantage, in that it's probably designed to not-easily produce code that breaks tree shaking.
It's not as simple as you're suggesting. With Closure you sometimes have to rewrite otherwise correct and idiomatic JavaScript significantly to avoid it breaking during compilation.
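A sketch of the sort of thing that trips it up (names and the hash-based dispatch are made up for illustration): Closure's ADVANCED_OPTIMIZATIONS renames and removes properties based on static analysis, so ordinary dynamic property access can break the compiled output.

```js
// fine in plain JS, fragile under ADVANCED_OPTIMIZATIONS: the compiler can't see which
// handler is reachable, so renaming or dead-code-eliminating save/load breaks this at runtime
var handlers = {
  save: function () { console.log('saving'); },
  load: function () { console.log('loading'); }
};

var action = window.location.hash.slice(1); // e.g. "#save" -> "save"
handlers[action]();                         // string-keyed access defeats the static analysis
```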
If you design your app to use Closure from the start, it's not a problem at all, and as a sibling comment said, CLJS apps get it mostly for free.
The entire JS toolchain lost five to ten years by not adopting Closure. I suspect that in a year or two, someone will release a webpack plugin that does the exact same thing Closure did, with the exact same constraints.
I agree, it only works when you're not too dynamic. Smalltalk wasn't designed that way.
You need a compiler that can tell when thing A requires thing B at a fine-grained level. That often requires type declarations, or at least fairly unique function names.
You also need library owners to design with tree-shaking in mind, so that calling library function A doesn't automatically pull in library function B. You need sane dependencies.
You also need dependency injection (if you have it) implemented in a way that doesn't defeat tree-shaking.
GWT, Closure, and Dart all do this by not being too dynamic (most of the time). In particular, reflection and tree-shaking don't work well together.
Speaking of controversial: I think things are going to keep being bad until developers realize that the problem isn't technical, but rather economic. Back before the current "give it all away for free" trend became a thing, commercialization of software was a given and allowed "winners" to emerge from the chaos. The merits of the "winners" isn't important here, it's the stability that comes with it. As long as everything is given away for free and there are no barriers to entry, you'll keep ending up with chaos and unmanageable churn whereby your job as a software developer has now morphed into a software tester/evaluator for every single piece of functionality that you need and don't want to write yourself.
When you have to actually put your money on the line and have an actual business presence on the web, it's a completely different mindset from "I'm going to write this small library and pop it up on GitHub for free, so who cares if there's documentation or if the software even works as described". The fact that there are developers that are professional and thorough and still give away their software is a minor miracle. But, it's not wise to count on the charity of such developers for the long-term because it isn't realistic. As more and more one-offs are created, it becomes much harder to distinguish one's software from the rest of the pack, so developers will simply not even attempt to do so. The Apple store is a perfect example of this problem.
Python and gcc are free software too. Lots of stability in the world of Python and C. Hopefully you don't think free or open source software is somehow limited to the world of Javascript.
Here's why there is so much churn in the JS world: everyone is attempting to polish a turd.
I don't think those two are good examples. They both had significant commercial ecosystem support, either in the form of a large company that pushed it (Google with Python) or in the form of other commercial compilers that promoted the ecosystem (lots of commercial C compilers over the years - MS, Borland, Watcom, Metrowerks,....).
So look at the projects that stand for a higher degree of quality and hire the developers that author them so they can invest the time to battle test the code.
Since we're talking from an economic perspective: which is more cost-effective (i.e. time and money)? Tasking an internal developer with no domain knowledge to build a module from scratch to cover similar functionality to an OSS project? Or hiring an OSS dev who already has a deep understanding of the domain, as well as working code, to battle-test the existing OSS implementation?
If you're talking in terms of risk, the latter is an easy sell. The problem is, most companies are too concerned with building a stable of 'rockstars' that can support their pathological adherence to NIH (Not Invented Here) syndrome.
I'd argue that the most successful companies already use this strategy. Select exceptional devs from the OSS community to build high-quality tools as open source projects. The community battle-tests the tools and accelerates the discovery of new and useful capability.
Meanwhile, the company leverages the tools internally to compose their own proprietary platform in the form of domain-specific applications and services.
The major disconnect is that most software-based businesses want to have their cake and eat it too.
It's not difficult to determine when an OSS project reaches enough success to transition from one-off to mass appeal. OSS is an ecosystem of brutal software darwinism. Projects that survive experience growth in usage and improvement over time. Projects that don't, stagnate and remain or eventually become irrelevant.
The problem is that darwinism keeps biting us in the ass. It simply is not a recipe for orderly transitions between older and newer technologies, which ends up costing companies more, not less. One of the primary issues is always: who wants to work for free on software that is 10-15 years old? Nobody. So, unless the software finds a sponsor of some sort that is willing to pay developers to fix bugs, make incremental improvements, and other very non-glamorous chores, it's going to languish as soon as it falls out of fashion and/or the primary developers move on to something else. And, what small company that is not in the developer tools business is going to want to finance the ongoing cost of keeping such software working? Not very many.
I'm not saying that OSS shouldn't exist. What I'm saying is that OSS has serious effects upon how software is developed, and I don't think that these effects were taken as seriously as they should have been. Everyone saw "free", and it was game over. There's a lot of good with OSS, but there was also a lot of good with commercial software that we seem to want to ignore. One of those good things was more stability.
People use GitHub because it's free hosting, first and foremost. 99% of projects on GitHub are never fetched or forked by third parties, let alone maintained, so I'm not sure what you're complaining about. Are you complaining about people publishing crap on free hosting? Nobody forces you to use all that code.
I'm not complaining about anything. I'm trying to put forth a reason for why so many other developers are complaining about the constant churn in the JS tools ecosystem, and the immense amount of setup required to do even simple web apps. Judging by the downvotes, I'm guessing my opinions aren't very popular or I strayed too far off-topic. :-)
> and the immense amount of setup required to do even simple web apps.
This complexity is sold by people who are selling complex tools. I don't use Babel or Webpack because it is not necessary. I usually try to remain framework-free or use a framework that has a minimal complexity cost. You can write a complex application with jQuery, Angular 1.0, Vue.js or even React in ES5, and it will not require any complex asset pipeline + build tool + 50 plugins in order to get started.
As said earlier, in my opinion the only tools worth using are the TypeScript compiler and Bower for package management. I discarded everything else because I just don't want to spend my time in configuration files; it doesn't solve my issues or my client's issues. It's exactly like the enterprise world in that aspect, where vendors try to sell businesses useless and complex solutions that eventually get discarded when the insane cost becomes too apparent.
Exactly which tools are being sold? All of the examples that you cite are free, which kind of provides evidence for my idea that the complexity is being caused by the proliferation of tools/libraries due to them being free.
Hypermodularisation is a good thing in user-facing code. Here we should strive to shave off every last byte.
But Babel is a tool for developers. We don't need configuration explosion and endless plugins. We need all batteries included. The very purpose of Babel is "hey, I want to write hip code like the rest of cool kids of the block; now, let it run everywhere". Who needs to configure that?
I agree that Babel doesn't get every single thing right, in much the same way that everything manages to get something wrong, but I don't think you're giving credit to the benefits of its modularity.
Another "purpose" of babel is "hey, I want to implement an upcoming ECMAScript feature in an extensible and relatively self-contained way so I don't have to go digging deep inside someone's codebase."
Or, it's also "hey, I only need these 3 features and I want my build times to stay as low as possible."
For the use case you describe, there are presets provided to make that quite simple.
Except that Babel has always run in the browser, too, and "hip code" is a constantly shifting target: ES2016 was just finalized (exponent operator, Array.prototype.includes), and much of ES2015 is already implemented in current browsers, but certainly not all of it, and the set implemented in each browser is different.
Hypermodularization should mean that Babel stays relevant with new standards (this month's ES2016 announcement, plus we already have a good idea of what should be in ES2017 with the new "continuous deployment" approach to the ES standards) and doesn't get bogged down in legacy code as browsers adopt the standards.
So true. Ember is the antidote right now for the frontend land.
- MVC (Backbone's big thing)
- Two way binding (Angular's big thing)
- Components (React's big thing)
- SSR like the others now (FastBoot)
- High performance with the Glimmer Engine (should beat react in theory)
- Books.
- dedicated package repository (emberaddons)
- actually community driven
- stable + mature (+ seamless upgrade paths)
- and many more advantages
plus, as you said, the ember-cli tool, that gets you up to speed fast. The tooling makes the final success, just have a look at go... the language appears fairly weak/trivial, but the tooling is excellent.
I could freak out every time I have to set up a React project, weighing everyone's opinions about every little package for every use case. IMO React shifts the complexity from the UI to the tooling. Ember is just perfect and the only real option I could recommend right now, and I'm just sick of thousands of "me too!" approaches for everything; this extreme diversification grinds actual progress to a halt.
After working on a few large JS applications (using Backbone or Angular or React), I decided to try out Ember for a relatively smaller application.
Ember feels very Rails-y (likely on purpose) and the documentation wasn't great. It took me a few hours to get a surface level understanding of how things worked with each other, especially Ember Data. I ultimately ended up dropping it because of my frustration. It was only a couple months ago (early December 2015).
Obviously I'm just one data point but I know I've talked to other devs who feel the same way. I'm sure Ember will only get better but I don't think it's the antidote at all.
Maybe not worth much, but the one Ember app I use routinely (that I'm aware of) is YNAB. The startup is slow, the payload obscenely large (over 3MB) for basically 2 different views, a handful of modals. It has odd UI issues occasionally that appear related to data-binding of some sort (tabbing through a field will sometimes cause an associated modal to freak out). Plus the URLs are useless.
I don't know how much of that is Ember and how much is implementation, but I assume their developers are smart enough.
What value did Ember bring to the party? It seems to have made what appears to be a simple thing (at a high level; not that there isn't plenty of effort involved in little bells and whistles here and there), complex and slow.
For me it's like that old saying about art: I may not know exactly what I'm seeing, but I know what I like, and that isn't it. When your framework has basically the same goals as ASP.NET v1 but the code looks worse, that's a major step in the wrong direction. It's been over a decade. This doesn't feel like progress.
Last time I had to use ember (about 2 months ago), the ember-cli tool had major performance issues. I remember rebuilds after a single line of code was changed taking almost 10 seconds. It also generated almost 10 gigabytes of temporary files over time. Plus I remember an issue with import paths not matching real filesystem paths, which was messing with my IDE (and my brain until I figured out what was going on).
It's a good tool when it works properly, but they really need to work on those rough edges a bit.
> I remember rebuilds after a single line of code was changed taking almost 10 seconds.
Do you happen to be on Windows? My Ubuntu laptop compiles changes to our pretty large app in under a second, but on Windows it does take up to 10 seconds.
Yes, it was a Windows machine. I did all the "make it faster" steps, like running the recommended script that adjusts Windows Defender and search indexing configuration, but in the end it was still very slow, even on a fairly fast SSD.
I feel the same. I love Ember, it's the first proper frontend framework and boy is it frustrating at the start. Once you get past that though it's plain sailing and takes my productivity to a whole new league.
It's just not mentioned... anywhere. It's not hip like React or has a big sponsor like Angular I guess. But I don't get it, it's really really good.
Yep. Ember is the best solution out there at the moment. There is a quiet trend of more and more serious dev teams switching to Ember. Ember never had a hype period. It works and it is really mature. With Ember you can focus on your product and you don't worry any more about tools. I think most Ember devs just smile when they see these debates, because for us everything works and we ship day after day with a very high level of developer happiness. :)
I wouldn't use Ember precisely because it strongly suggests using a command-line tool specific to the framework. I personally ended up with TypeScript as a language + Bower for package management. When it comes to frameworks, either the framework adapts to my toolset or I don't use it. I refuse to compromise and embrace yet another asset pipeline or build tool, no matter how "good" the framework is supposed to be. TypeScript supports both decorators and JSX, so I can tolerate working with either Angular 2 or React. And since build steps are mandatory in front-end development, I can at least take advantage of static typing.
I tried to use ember-cli without it enforcing certain opinions (e.g. directory structure) on a project and reached out to the community via Slack, but it hasn't really gone anywhere so far.
Maybe Goya was a bad choice? Someone on the Slack used the term "consume" for what I was trying to do with ember-cli, and..
A fantastic post. If only to find someone else talking about the downsides of hypermodularization.
> Unfortunately, the spirit and praise of the web in Remy’s post isn’t shared by many of these articles. To the contrary, the web development community is flooding in skepticism, negativity, and pessimism.
I started building on the web a year or two after Remy. I believe one reason for the lack of praise is the reason people came to the industry. He & I came because we loved the web, we loved to see what could be done.
Today, the web's an entirely different commercial being. People do this as a job (as do I, which I'm very thankful for), they have less time to see what can be done (constructive contributions take effort) and time is money, so let's develop fast and move on, slagging off everything as we go.
I've been jaded, I'm guilty of being negative about everything.
I heard similar opinions in 1998. I think it probably has less to do with the state of the industry & more to do with the state of the career of the observer.
Dependency hell is caused by the inability to sanely identify and (if necessary) support multiple versions of a dependency.
The primary cause of dependency hell comes from installing and using global dependencies because there's no way to predetermine all of the dependencies that every module and/or application on a system will use.
The JavaScript ecosystem actively discourages using global dependencies except for CLI tooling.
This is talking specifically about an optimization problem: how to minimize source size by only importing code that is used directly in the application.
AFAIK, no other platform attempts to solve this problem, because it's a non-issue except where you're sending code over a network to be executed remotely.
For example, imagine if you had to send the entire Java Runtime Environment over the wire every time you loaded a website.
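To make that concrete with the lodash example from upthread (a sketch -- lodash really does publish per-method modules, but `users` here is a made-up array of objects with a `name` field):

```js
// option 1: pulls the entire lodash build into the bundle
var _ = require('lodash');
var names = _.map(users, 'name');

// option 2: pulls in only the one function (plus its internal dependencies)
var map = require('lodash/map');
var namesToo = map(users, 'name');
```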
Yep, NPM is transitioning to flat in V3. Partly because of the nested folder limits in Windows, partly to reduce the ridiculous amount of dependency nesting that occurs with the current model.
JSPM already uses a flat dependency structure. If you look at how it maps dependencies (and specific versions of dependencies) in config.js it makes a lot of sense. The only feature it's missing is the ability to easily search and list multiple versions of the same dependency.
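For illustration, the map section of a JSPM config.js looks roughly like this -- one flat entry per dependency, each pinned to a specific version (package names and pins are just examples):

  System.config({
    map: {
      "lodash": "npm:lodash@3.10.1",
      "moment": "npm:moment@2.10.6"
    }
  });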
Are there any documented cases of a business failing because their JS payload was too large? I get that smaller code is easier to understand and work with, but I've never been able to internalize the desire for a small payload -- it just doesn't seem like it ever matters outside of philosophical reasons for saving users' bandwidth (especially if it comes with a steep tooling cost).
It strikes me as a technical pursuit in search of a problem -- but certainly willing to be convinced otherwise.
My own customers have seen a 4x difference in conversion rate when grouped by page load time (i.e. the same pattern as the Walmart slide on page 37 above, where the peak is 4x higher than the lowest-performing group).
Your business probably won't fail because it's slow, but it will certainly make less money because it's slow.
But in this context, the issue becomes how much JS bundle weight it takes to cause a 0.5 second delay. And there are many other things that can slow down page load time (and especially perceived page load time) that either don't involve JS at all, or are not directly caused by the amount of JS that gets loaded.
Depends on your connection. I gave a talk about CLJS page speed and server-side rendering at the recent Clojure/conj (https://www.youtube.com/watch?v=fICC26GGBpg). Trying to load my site on the conference wifi, 200kB took 3 seconds, i.e. roughly 60kB/s, so at that speed about 30kB of JS is already enough to add the half second mentioned above.
"OK, that's not real world." Today the cable guy is here fixing my home internet, so I'm tethering my iPhone 5s. The 40kB of .js for my site loaded in 143ms, i.e. roughly 300kB/s, so about 150kB would be enough to add the same half second.
So the threshold is certainly under half a meg of JS, which is very common to see in SPAs.
Certainly, there are tons of other things that can slow down a site. But the JS is one of the "easiest" to solve, because devs are responsible for it and it has well-known solutions that people just don't apply consistently: reduce dependencies, serve a single file, use webpack/Closure, use a CDN.
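For the webpack route, the "one file, minified" setup is only a few lines. A minimal sketch (entry and output paths are placeholders):

  // webpack.config.js
  var webpack = require('webpack');

  module.exports = {
    entry: './src/index.js',                                        // single entry point
    output: { path: __dirname + '/dist', filename: 'bundle.js' },   // one file to serve
    plugins: [
      new webpack.optimize.UglifyJsPlugin()                         // minify the bundle
    ]
  };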
I agree there's certainly some correlation, but we can work around "slow connections" and "slow computers" by, e.g., not serving 9MB auto-playing hero videos [edit:] when that's not appropriate for your user demographics.
I'm not in the position to intentionally slow down my customers, so I haven't run a controlled test. Google's test was controlled though.
I don't know that it would result in a business failing, but there's certainly lost opportunity. Marks and Spencer is a fairly recent example: perhaps not JS payload specifically, but lower conversions tied to site performance.
Considering the audience might help as well. For example, if rural U.S. dwellers are a demographic you want, many of them are on low bandwidth connections.
I think there are circumstances where it is very valuable - it depends on your target audience.
For example, on mobile, connection speeds in major cities in developed countries may be rocketing, but substantial areas still have to make a lot happen with very little bandwidth. So it can be a small competitive advantage.
See my reply to your sibling comment. You can measure very real changes in SEO, user engagement, bounce rate, conversion rate and sales on even small changes (100ms) in page load times. I have personally seen conversion rates vary by 4x based on page load speed.
I've been using ES6 modules in Ember.js development for two years now. Ember CLI has simplified the dev process enormously; app development is so simple and fun with Ember. I don't understand why people complain about this; the solution exists, they should just use it. That's all.
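For anyone who hasn't looked at it, an Ember CLI file is just an ES6 module. A minimal sketch (the component name is made up):

  // app/components/user-card.js
  import Ember from 'ember';

  // A plain ES6 default export; Ember CLI's resolver wires it up by file path.
  export default Ember.Component.extend({
    classNames: ['user-card']
  });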
One of the major differences between a junior and a senior, IMO, is the ability to tell you what each one of those dependencies solves. If they don't know the sub-dependencies by heart, they haven't read the code, or at least haven't looked at it beyond understanding the API.
Then you have the maintenance developer, and everyone loves 'em. He just figures things out and helps you improve your code while you scream at him for not knowing the 'bigger picture'. Fun times.
In my experience some aspects of the JavaScript experience are great, and others are terrible.
npm is great. It gives you an easy way of specifying your dependencies in your source tree, and makes it extremely easy for people who check out your repo to obtain them. People can run "npm install" and now they have a copy of all your dependencies in "./node_modules". It composes nicely too: "npm install" also pulls the dependencies of your dependencies.
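The whole contract fits in a package.json like this (names and version ranges are just examples):

  {
    "name": "my-app",
    "version": "1.0.0",
    "dependencies": {
      "react": "^0.14.0",
      "lodash": "^3.10.0"
    },
    "devDependencies": {
      "babel-cli": "^6.0.0"
    }
  }

Anyone who checks out the repo runs "npm install" and gets all of these, plus their own dependencies, in ./node_modules.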
Babel is great. Sure, I've heard some complaints lately about their latest changes, but so far this hasn't affected me as a user. Babel for me means that I get to write using the most modern ES6/ES7 features and then compile to ES5 for compatibility. For me it works great and mostly hassle free.
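The setup really is minimal. A sketch of a typical .babelrc, assuming babel-preset-es2015 is installed as a devDependency:

  {
    "presets": ["es2015"]
  }

From there, something like "babel src --out-dir lib" compiles the ES6 source down to ES5.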
The frameworks themselves are great. Not perfect sure, but there are lots of great ideas floating around in React, Angular, d3, moment.js, etc. and the packages built on top of them. Whatever you want to do there is a library out there that someone has put a lot of love into. There is a lot of choice -- yes, maybe sometimes a little bit too much, but I'd rather have that than too little.
Flow is great (and I hear TypeScript is too, and getting better). I can't tell you how nice it is to be able to declare static types when you want to and hear about type errors at compile time. Maybe not everybody's cup of tea, but I love it.
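A tiny example of what that looks like with Flow (the function is made up):

  /* @flow */
  // Annotate where you want to; Flow infers the rest.
  function add(a: number, b: number): number {
    return a + b;
  }

  add(1, 'two');  // flagged by "flow check" before the code ever runs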
The build systems, minifiers, test runners, etc. are terrible. By far the worst part of JS development for me is figuring out how to glue it all together. When I try to figure it out it's like entering an infinitely complex maze where none of the passages actually lead anywhere.
--
For example, let's say you want to run some Jasmine tests under PhantomJS. Jasmine is a popular unit testing framework and PhantomJS is a popular headless browser, scriptable using JavaScript. Both very cool technologies, but how can you use them together? This is a real example: it's something I really wanted to do, but in the end I literally could not figure out how and gave up.
PhantomJS claims that it supports Jasmine (http://phantomjs.org/headless-testing.html), though it gives several options for test runners: Chutzpah, grunt-contrib-jasmine, guard-jasmine, phantom-jasmine. Time to enter the maze!
Chutzpah looks promising (http://mmanela.github.io/chutzpah/) -- it says it lets you run tests from a command line. It says it "supports the QUnit, Jasmine and Mocha testing frameworks" and you can get it by using "nuget or chocolatey". Dig a little deeper and it becomes clear that this is a very Windows-centric tool -- NuGet says it requires Visual Studio and Chocolatey is Windows-only. Our maze has run into a dead end.
Moving on to grunt-contrib-jasmine. I don't really want to use this because I'm currently using Gulp (Grunt's competitor), but let's check it out. We end up at this page (https://github.com/gruntjs/grunt-contrib-jasmine). This page is sort of a quintessential "JavaScript maze": it contains a lot of under-explained jargon and links to other plugins, and it gives me no idea how to do basic things like "include all my node_modules" (maybe I should list each ./node_modules/foo dir explicitly under "vendor"?)
Moving on to guard-jasmine, I end up at https://github.com/guard/guard-jasmine, and it's clear now that I've entered a Ruby neighborhood of the maze: everything is talking about the "Rails asset pipeline", adding Guard into "your Gemfile" (I don't have a Gemfile!!). I really don't want to introduce a Ruby dependency into my build just for the privilege of gluing two JavaScript technologies together (Jasmine and PhantomJS).
The final option in the list was phantom-jasmine, bringing us here: https://github.com/jcarver989/phantom-jasmine. It's been a while so I don't remember everything I went through trying to make this work. But I was ultimately unsuccessful.
I don't know if I'm missing something, but why not use Karma?[1] It includes Jasmine support out of the box, lets you run tests in PhantomJS, and offers a CLI to make project initialization pretty simple.
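Something like this karma.conf.js is usually all it takes -- a minimal sketch, assuming karma, karma-jasmine and karma-phantomjs-launcher are installed locally (the file globs are placeholders):

  // karma.conf.js
  module.exports = function(config) {
    config.set({
      frameworks: ['jasmine'],                        // via karma-jasmine
      files: ['src/**/*.js', 'spec/**/*.spec.js'],
      browsers: ['PhantomJS'],                        // via karma-phantomjs-launcher
      singleRun: true                                 // run once and exit, CI-style
    });
  };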
This is what we use also: Karma + Jasmine + PhantomJS. Admittedly, when we first started, I really did not understand what each of those individual pieces was.
If you're trying to test client-side JavaScript with PhantomJS + Jasmine through Gulp, there are a few options.
I googled "gulp jasmine phantom" and found 3 options at the top, one of them being under the jasmine org on github. The other 2 seem to have similar APIs to the one under the jasmine org.
Interesting how the industry has come full circle. Static linking and dead-code elimination are problems we solved in the 1970s for compiled languages. Has the time come to adopt a proper linker for the web?
Google's Closure Compiler has supported dead code elimination [1] since 2009, but the feature imposes some unpalatable restrictions. It never supported popular libraries like jQuery. The process is also rather slow, as Closure Compiler must analyze all code in the bundle at once.
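To make that concrete, a rough sketch of what dead-code elimination does under ADVANCED_OPTIMIZATIONS (file names are made up, and the exact output will vary):

  // input.js
  function used()   { return 1; }
  function unused() { return 2; }   // never referenced anywhere
  console.log(used());

  // Compiled with, e.g.:
  //   java -jar compiler.jar --compilation_level ADVANCED_OPTIMIZATIONS \
  //        --js input.js --js_output_file out.js
  // unused() is stripped entirely (and used() typically inlined), leaving roughly:
  //   console.log(1);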
An interesting thing is how that applies to transpiled languages like ClojureScript. There are a lot of efficiencies gained by using ClojureScript with Google Closure. Even React is faster with Reagent.
I thought this was a decent survey of where we're at, but I'm not sure what to take away. I don't care for "opinionated" as a description of tools or code, nor its opposite. It doesn't really mean anything, in my... opinion.
JS modules are still bleeding edge, but they're necessary to put the facade pattern to work.
Each library should ideally be separated into multiple modules internally.
The main source file imports all submodules by default, making it easy to get started.
import 'rxjs';
As a project matures and it comes time to optimize dependencies for performance, the main import can be swapped with feature-specific imports to trim the fat.
import { Observable } from 'rxjs/core';
import { FlatMap, Debounce } from 'rxjs/operators';
Optionally, an additional facade layer can be included to import specific classes of submodules.
import 'rxjs/operators';
Or, alternatively:
import { OPERATORS } from 'rxjs';
A facade is nothing but a js file that contains a bunch of imports that are logically mapped to one or more exports.
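In code, the facade for the hypothetical 'rxjs/operators' entry point above is just a file of re-exports (the paths and operator names are illustrative, not the actual rxjs layout):

  // operators.js -- the facade: imports logically mapped to exports
  export { FlatMap }  from './operators/flat-map';
  export { Debounce } from './operators/debounce';
  export { Filter }   from './operators/filter';

  // Consumers then import through the facade:
  //   import { FlatMap, Debounce } from 'rxjs/operators';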
The 'real' issue is that this wasn't really possible before the ES6 module standard, because of the divergence of implementation-specific features across the existing module pseudo-standards (i.e. AMD, UMD, CommonJS) and the over-reliance on bundling forced by the per-request overhead of HTTP/1.x.
JS has a lot of baggage and it's going to be a while before the community as a whole learns how to design non-sucky APIs.
If you want to see this in action, take a look at http://github.com/evanplaice/ng2-resume. In it, I use facades extensively at multiple layers to compose smaller components/directives into larger ones. It maximizes reuse while allowing a great degree of flexibility.
I still have freedom to break off chunks of the source for reuse elsewhere. For example, I'm planning to move the models to a separate repo for use on the server-side. It's trivial to bring those chunks back in via the ES6 module loader and a package management utility like JSPM/Webpack.
Not sure why this touched a nerve enough to be downvoted into oblivion. If it was the cheeky tl;dr, I guess I can make a point of avoiding any and all attempts at sarcasm in my future posts.
As for the rest, I genuinely think it's a useful approach to consider:
1. It relies on future web standards, not tools
Tools come and go; web standards don't. If the goal is to avoid technical debt, don't write code in a way that will eventually become obsolete.
2. 'Abstractions are the solution to every problem, except the problem of too many abstractions'
Except when there's a viable strategy to provide access at multiple layers of abstraction. If we're finally getting a truly universal module standard, why not leverage it to improve our library APIs?
It's not like we have to waste the time/energy/effort supporting multiple module non-standards anymore. The approach I outlined above provides much better usability with very little effort on the part of the dev implementing an API.
3. It doesn't rely on approaches that will eventually be obsolete
Namely bundling. Tree shaking is great if the end goal is to analyze all dependencies and create an optimized bundle.
Except that bundling will become an anti-pattern once HTTP/2 reaches widespread adoption.