[dupe] jQuery 3.0 Released (jquery.com)
308 points by stop1234 on June 10, 2016 | 153 comments




I would suggest the release notes are a better link target: http://blog.jquery.com/2016/06/09/jquery-3-0-final-released/

Developers can figure out how to download it if they are interested.


That was submitted yesterday and flagged as a dupe. https://news.ycombinator.com/item?id=11871658


It was a dupe, as is the current post. I explained this at https://news.ycombinator.com/item?id=11872457.

Perhaps we should stop having major discussions when things come out as release candidates, so that we can have the major discussion on actual release? I don't know. Trying to make it work that way might just cause more problems.


I for one missed the rc1 post appearing here - as I think a number of other people did too, judging from the comments and the interest gauged from the upvotes. Would a potential consideration be to attach the release-candidate discussion threads to the main release discussion, or would that cause more confusion than it's worth?


I think that makes sense. All rc announcements have the same issue.


> "We set out to create a slimmer, faster version of jQuery (with backwards compatibility in mind)."

Wait, isn't the whole point of a major version bump that it breaks backwards compatibility?

Later on it says there are a few breaking changes, but apparently not many. That is in general a good thing, but focusing on keeping compatibility through a major version bump seems silly.


It's not that silly. Yes, a major version bump allows you to break backwards compatibility in some ways. But if the break is too severe you run the risk of preventing a large percentage of the user base from upgrading. So it's still a good idea to carefully consider exactly where to introduce breaking changes and to try and keep the burden of upgrading proportional to the benefits offered by the upgrade.


Backwards compatibility is still quite important in major version releases. Just look at Angular 1 to 2, Rails 2.3 to 3, and Python 2 to 3 to see examples of this.


I'd say (and probably this is what you meant) that those are examples of what happens when backwards compatibility is broken - the community gets fragmented, finding answers on StackOverflow gets more difficult, lots of helpful blogs/articles/videos are now useless, etc.


> focusing on keeping compatibility through a major version bump seems silly.

I'd offer the Python 3 fiasco as a counterexample.


> Yes, there are a few “breaking changes” that justified the major version bump, but we’re hopeful the breakage doesn’t actually affect that many people.

BC is "desirable", not "required". If breaking compatibility makes sense in order to deliver a major improvement, then do it.


At some point, if you break too much compatibility, it just becomes a completely different product instead of a new version of an existing product.


> Wait, isn't the whole point of a major version bump that it breaks backwards compatibility?

It's not a requirement.

P.S. HN users sure are big on silent downvotes these days.


Whenever a new version of jQuery (or Zepto) comes along, I wonder what would have happened if web development borrowed a page from other ecosystems and browser runtimes had subsumed the jQuery API, shipping it natively.

It's a controversial notion, I'll grant, but what if the DOM APIs had been replaced by "native" jQuery support? Would we have been better off? Worse?

Considering the intricacies of standards bodies and industry lobbies, pondering the pros and cons makes for a fascinating exercise.


That would be pretty cool, but one of the biggest reasons jQuery exists is that browsers don't standardize on anything, which makes the idea of native jQuery a bit of a paradox.

Another big reason jQuery exists is because standards change. You can use jQuery.ajax without worrying about a change in the XMLHttpRequest API, because jQuery will support the old and new version until some point where the old version is considered "too old".

That second reason is the necessary evil that would exist, even if jQuery became the built-in standard.


> browsers don't standardize on anything, which makes the idea of native jQuery a bit of a paradox.

Eh?

Browsers will sometimes implement their own extensions to various standards (which are sometimes incorporated into those standards), but there is certainly a standard DOM API. If you're referring to older versions of IE, they're the ugly duckling, and the situation has since improved dramatically.

jQuery was a great idea back in the day to avoid compatibility headaches, but with modern DOM APIs and CSS3, the only time I personally use jQuery is if my coworkers decide to use it.

https://developer.mozilla.org/en-US/docs/Web/API/Document_Ob...

http://youmightnotneedjquery.com/


More so at the time jQuery was created than now.


Kids have it so easy these days, amirite? :D


You must have a short memory; a lot of the really useful bits of the standard DOM API we have now were originally only found in jQuery.


Well it's 2016 and we still don't have nodeList.forEach() so it can't be much worse.


Actually, we do, since Chromium 51. See https://dev.opera.com/blog/opera-38/


Yeah you're right. Just updated stable Chrome and:

  NodeList.prototype.forEach
  forEach() { [native code] }
Thank. God. Finally.

Edit:

  HTMLCollection.prototype.forEach
(which is what you'll get from other vanilla JS methods) is still missing.



Well, we do have for..of, which will iterate over a NodeList natively (and now that I'm using let/const I've found myself using for..of a lot more)
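
For the curious, a minimal sketch (the selector and class name are made up):

    for (const el of document.querySelectorAll('.item')) {
        el.classList.add('highlight'); // iterates the NodeList directly, no conversion needed
    }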


jQuery 3.0 lets you do for..of for a jQuery collection even.


Array.from(document.getElementBy...).forEach ?


There are ways to do it for sure, but imagine if there was just One Obvious Way To Do It.


Don't get me going on Python in the browser... :)


I can't find a reference but I remember hearing a tale that Guido was contacted by Netscape just before Brendan Eich but wasn't interested. What an interesting parallel reality that conjures up.

Hmmmm. On second thoughts, maybe we'd be living in a world where Python was as maligned as Javascript.


1) There's certainly not "one way" to iterate in Python[1], and they're not even consistently named (xrange is the iterator version of range, but izip is the iterator version of zip).

2) You can use Python in the browser[2][3][4 sort of].

1. https://youtu.be/OSGv2VnC0go?t=3m20s

2. http://transcrypt.org

3. http://pyjs.org

4. http://www.rapydscript.com


From "import this"

> There should be one-- and preferably only one --obvious way to do it.

Key words are preferably and obviously. Of course there are going to be multiple ways to do everything, but it'd be nice if the best practice is pretty obvious.

> Although that way may not be obvious at first unless you're Dutch.


document.querySelectorAll(..).forEach?!?!?!?!


In Firefox, Safari, IE, and Chrome < 51:

    TypeError: document.querySelectorAll(...).forEach is not a function
That's exactly what this sub-thread is about! NodeList doesn't have .forEach.


Yep, that just rolls off the tongue...


Alias `document.querySelectorAll` to `$$`?


var $ = selector => [].slice.call(document.querySelectorAll(selector));

Fo' sizzle


If you're using ES6 you may as well use spread and defaults to make it more useful:

const $ = (selector, context=document) => [...context.querySelectorAll(selector)]


Or just polyfill it with NodeList.prototype.forEach = Array.prototype.forEach and get nice readable code.
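
A guarded version of that polyfill, covering HTMLCollection too (just a sketch; Array.prototype.forEach happily works on anything array-like):

    if (!NodeList.prototype.forEach) {
        NodeList.prototype.forEach = Array.prototype.forEach;
    }
    if (!HTMLCollection.prototype.forEach) {
        HTMLCollection.prototype.forEach = Array.prototype.forEach;
    }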

Point is: we shouldn't have to.

Vanilla JS also allowed you to look inside HTML5 FormData a few months earlier. And no, it isn't FormData.toString().


querySelector and querySelectorAll exist in browsers only because of jQuery


Please do not forget Prototype, which inspired jQuery.


Not a bad idea. Personally I'm fine with using pure javascript. I haven't used jquery in a long long time. But I'm sure many people would benefit from having jquery features natively.


What version of jQuery exactly? They've made backwards incompatible changes; browsers can't do that.


In theory, supporting backwards-incompatible changes should be entirely possible with versioning (via, e.g., the <script>'s type attribute, a <meta> tag, a "use jquery14"; magic string, or whatever) - everything should work as long as one can specify "I want libfoo API v2 here".
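
Purely as a hypothetical sketch (no such attribute exists in any browser):

    <!-- hypothetical "api" attribute pinning this block to a frozen jQuery 1.4 API -->
    <script api="jquery-1.4">
        $('.menu').fadeIn();
    </script>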


Right, that is the advantage of having an API layer that the developer (not the web platform) controls. The more we dump into the web platform the bigger it gets because it's nearly impossible to remove anything. The idea of the Extensible Web Manifesto is to provide primitives in the web platform and let people build more powerful abstractions over them in JavaScript.

You could view jQuery as that type of abstraction. If you decide to use some other abstraction (e.g. React) you can switch your web page/app and you aren't paying the cost of having jQuery embedded into the web platform.


It would probably mean we'd have multiple libraries or frameworks developed (to various levels of completeness) that sit above the different browser runtime implementations of the jQuery API, in order to handle the challenges of browsers not supporting, only partially supporting, or incorrectly supporting the jQuery API specification.


What I wonder is why not have a heavily cached, canonical version of each jQuery library, and every other library as long as it is used on a lot of sites? These days the browsers cache the bytecode, which is almost as fast as shipping native code with the browser.

I think Google used to host those things. It automatically becomes more cached the more it is used.

https://developers.google.com/speed/libraries/

The downside is that, with a referer header, you then let some centralized CDN know your site is being visited by a new visitor.

What we really need is my httpc:// proposal. The c stands for constant. Download once from a seed and cache the file. Guarantee it's always the same. Or web browsers should support magnet links.


Actually, those CDNs for libraries like jQuery don't have as high a cache hit rate as you'd think.

The problem is that there is a pretty wide number of versions of each library in use. Combine that with the fact that mobile caches are still laughably small (like 5-50 MB on some phones), and it's not really helping all that many people.

As for the caching stuff, there was a push for that with HTML imports (basically treating all imports with the same name as the same, so if you and another library needed jQuery, they could both use the one in your cache without needing to re-import it), but I haven't heard anything about it for a while, and I'm not sure if that's even a goal any more.


They have, in many ways: you can use fetch instead of $.ajax and document.querySelectorAll instead of $.


Apparently there is no migration guide to migrate from React to jQuery :(


I laughed :)


you can still use redux with jquery!


React and jQuery are really two different beasts.


Is anybody in the HN crowd still using jQuery for new projects?

If yes, why not use "vanilla" js?


>If yes, why not use "vanilla" js?

Isn't that like asking why jQuery exists?

Personally I still believe that JavaScript and HTML should be separate, like HTML and CSS. I know that might make me sound old, but I really dislike having a template language embedded in my JavaScript library, and as a result I end up disliking most of the newer frameworks.

I know jQuery, the documentation is good, it's easy to get help, and it makes sense to me in a way React, Angular and others frameworks do not.

Yeah, I could use plain JavaScript, I could also just use Python and not have a Django or Flask dependency. It's just easier and more productive to take the dependency.


I can relate to this idea, but I gave React a try and it changed my position.

Here's why, IMO, React is not actually mixing the two: HTML is just the serialized form of the DOM; your browser reads it and transforms it into a DOM tree before rendering it.

With React, you interact with the DOM indirectly via a virtual DOM API, so you manipulate JavaScript objects; you only do JavaScript. JSX is just syntactic sugar over that API, because deep nesting is easier to read with XML-ish syntax. JSX is not a templating language (one that outputs a string).
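
For example (name is a stand-in variable), a JSX expression like <li className="user">{name}</li> compiles to roughly:

    const item = React.createElement('li', { className: 'user' }, name);

which returns a plain JavaScript object describing the element, not a string of HTML.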


Yup. No shame here in using it.

It helps me get the job done quickly for projects when I don't need/want a framework.

It has a well-organized and lively ecosystem of devs, documentation, support options, and -- yes -- plugins.

It has a consistent and easy-to-understand API, and the breaking changes are well understood between versions.


So far, every new project in the last few years ended up using jQuery, because there is always this one UI widget that does what we need, the other libraries just aren't there yet, and it's not worth the hassle of reimplementing or extending a different library just to avoid jQuery.


True. Although whenever I see a jQuery plugin like that I try to replace it with some combination of "components" [1]. There is a lot of progress in that area.

There is stuff like a file picker [2], a dropdown [3], tip [4] for tooltips, and many more.

1 : https://github.com/component

2 : https://github.com/component/file-picker

3 : https://github.com/component/dropdown

4 : http://component.github.io/tip/


What is this "components"? I can't find anywhere that explains what it is. Is it just a random collection of javascript utilities?


It's an isolated collection of HTML, js, and scss to achieve a thing. Like a credit card payment form.

They're mainly for react, though other platforms have them (eg, I use 'ractive', from the guardian, which has its own component model).

Go check out npm, there are a whole bunch of React components you can just pick up and use.


It was like browserify before browserify. It was a TJ project that was abandoned and replaced by Duo.


CBP (component based programming) has been around (at least the idea has) since the 60s. It is the holy grail of some developers; being able to compose apps from a collection of components that work everywhere. I don't believe anyone has ever truly reached that goal. But surely it'll work this time because of JavaScript.


The dropdown and tooltips look useful, but FYI it's really simple to make a file picker. Just put a hidden input type="file" on the page, then:

    document.getElementById("that-element").addEventListener("change", function(e) {
        // e.target.files
    });
    document.getElementById("that-element").click();


It's not so easy

    document.getElementById("that-element").click();
doesn't exist. You need to use something like

        /** Creates the click on the input */
        const clickEvent = document.createEvent( "MouseEvents" );
        clickEvent.initEvent( "click", true, false );
        document.getElementById("that-element").dispatchEvent( clickEvent );
References :

1 : https://developer.mozilla.org/en-US/docs/Web/API/EventTarget...

2 : http://stackoverflow.com/questions/6367339/trigger-a-button-...

EDIT (sorry, can't reply to you, because HN):

Relevant jQuery discussion

3 : https://github.com/jquery/jquery/issues/2476


Here's code I wrote a while ago that did exactly what I specified and worked in production on all major browsers:

https://github.com/MediaCrush/MediaCrush/blob/master/scripts...

Here's the code in the jQuery library you linked to that does exactly this as well:

https://github.com/component/file-picker/blob/master/index.j...


It has some problems with input type=file elements.

The devil is in the details, although obviously your code works for any other clickable elements.

1 : https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement...

2 : http://stackoverflow.com/questions/210643/in-javascript-can-...


Just last week I started a new project, injecting some firebase-backed functionality into a squarespace site, and I used jQuery.

I use it for the same reason I use lodash when I could do everything it does in vanilla js: because it's a slightly more dev-friendly API.

Much like the PHP community, jQuery's community has earned it a reputation as a hacky tool for people who don't understand where jQuery ends and Javascript begins. However, it didn't become popular by accident, and in competent hands, it still has its benefits.


I started using "vanilla" js, then created a method as a shorthand for the huge `Array.prototype.slice.call(document.querySelectorAll(selector))`

Then as I continued adding useful methods such as ajax() (no, in vanilla js it's not "solved") and turning events off (also non-trivial) I ended up with a jquery alternative:

> http://umbrellajs.com/

It's not exactly the same, but most methods are the same or highly compatible. For example, the append() method is extended so this does what you'd expect it to do, generate a list with first, second and third items:

u('<ul>').append(text => `<li>${text}</li>`, ['first', 'second', 'third']);


> useful methods such as ajax() (no, in vanilla js it's not "solved")

Right - I use jQuery because I have greater confidence that `.ajax()` will behave correctly across a wide range of browser versions, though I don't have any good evidence comparing it against vanilla `XMLHttpRequest` these days. I'd be interested to know which bits are "not solved" - which aspects in particular are not yet well supported?


It's not only about cross-browser support, it's about the simplicity of

ajax('/api/users', function(){}, 'json');

(of course, if you want to know about compatibility, http://caniuse.com/#search=xmlhttprequest )
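
For comparison, here's a rough sketch of the minimal vanilla wrapper you'd write to get that call shape (GET-only, no error handling - not umbrella's actual implementation):

    function ajax(url, done, type) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url);
        xhr.onload = function () {
            // the library parses JSON (and handles much more) for you
            done(type === 'json' ? JSON.parse(xhr.responseText) : xhr.responseText);
        };
        xhr.send();
    }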


What about fetch?


Fetch has its own issues - not being able to abort a fetch or receive progress notifications is frustrating if you require them.

But the deficiencies of fetch() are another topic entirely, and I tend to use it (polyfilled tbh) for the majority of my async requests because I like the rest of its implementation.


Yup.

> If yes, why not use "vanilla" js?

I can't answer that, but I can answer "Why not use $Framework?"

My current project (a large admin system) is crying out for the more complex parts of the admin UI to be built in React, Angular etc. The problem is it's an all-or-nothing situation.

When picking up a new technology I want to add a little bit to the current project, a bit more to the next, and so on. Progressive Enhancement for the developer. jQuery lets me do that; the frameworks don't.

I have used KnockoutJS in 2 locations in this project; both for a single component on a page which needed to be very dynamic. I have been impressed that it doesn't try and take over, and lets me think of enhancing the experience (and my skills) one component at a time.


Vue.js is a really great way to add progressive enhancement. You can add it to a single page, or even a single element on a page, and not affect the rest of the page.
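
Something like this, for instance (a sketch; the #counter markup and data shape are made up):

    // assumes markup like: <div id="counter">{{ count }} <button v-on:click="count++">+1</button></div>
    new Vue({ el: '#counter', data: { count: 0 } }); // enhances just that one element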


Does Vue.js cover all the features of React and Handlebars, and can it just be used for small parts of the application?


Thanks, I keep meaning to play with Vue.js. You've spurred me on.


Why is it all or nothing?

We've been replacing certain parts of our vanilla JS application with small React apps and it's worked brilliantly. There is a really good talk by Ryan Florence where he replaces Backbone components (I think) with React components.


Yes, it's not too hard to replace one at a time. We run an extremely feature rich browser client application and have slowly been dropping in independent react components to replace legacy code as necessary.


> Why is it all or nothing?

Erm, because that's the impression I've got from the tutorials and demos. I guess "add $Framework to a bit of your app" isn't sexy: the ones I've seen add their own routing and focus on SPAs, rather than "here's some incredibly complicated information to display, and we need to be fairly interactive over it (and how other data on the page affects it)".

Would it work if you didn't have a vanilla JS application but a traditional web app?


You're right that most tutorials dive in with both feet, but this is literally the very first text on https://facebook.github.io/react/

>Lots of people use React as the V in MVC. Since React makes no assumptions about the rest of your technology stack, it's easy to try it out on a small feature in an existing project.


Facebook.com is, afaik, replacing more and more small parts with React components.


I've seen a traditional server-side rendered Rails web app try to do this. The traditional pages are nice, usable and fast, and then you hit a page with a React component and you need to wait a second or more for React to boot up, or whatever it does. How do you handle that?


I don't know what you're referring to, if it's an actual case you've seen, or if people talking about React huge size gave you the idea that it behaved like that.

In any case this is an implementation problem. React's base is around 30 KB after min + gzip, and executing the code should take hundredths of a second on page load unless you're on a really bad phone.

Maybe you've seen this behavior because of components designed to perform one or several high-latency requests before they display data? If that's the case, then it's easily fixable by providing the initial data along with the page, or by improving latency in any number of ways.


It is an actual case I've seen, more than once. I didn't debug it, I just removed it. But the initial load and render of react components is not fast. Can you point me to a live site where it is (and serving an initial server side rendered page is cheating).


Every framework out there can be used on a small part of an app at a time.


That's grand, the tutorials and examples gave me the opposite impression (SPAs and own-everything)


They always seem to have an example of being used on a small part of the page, but nobody actually tends to do this and there isn't much in the way of blog posts about it.


Yes of course, for new projects now and for the foreseeable future. Its terse, expressive, fluent syntax alone is reason enough for me to use it. Nothing else comes close for ad-hoc DOM work, and your user almost certainly has it cached already anyway.


Unfortunately, when you score a page for speed, things like jQuery and Bootstrap are counted against you even when they are likely to be cached. Why aren't any scoring systems (that I know of) able to take asset popularity into account for things like this?


Just as interesting - I'd like to see stats on which CDNs offer the best chance of a warm cache for my visitors - and which versions of various libraries have the best chance of being cached.


If only browsers would support caching based on the sha384 of the file, then the script/css integrity property would be even more useful, and it wouldn't matter where you were serving it from.
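
The integrity attribute itself already works today for verification; it just doesn't key the cache. Something like (hash value elided here - generate the real sha384 for the exact file you serve):

    <script src="https://code.jquery.com/jquery-3.0.0.min.js"
            integrity="sha384-..."
            crossorigin="anonymous"></script>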


Yup, every project I've ever done has used jQuery:

- Nothing I've done is "big enough" to warrant a front-end framework

- It's guaranteed cross-browser

- There's a large ecosystem of plugins

- The documentation is excellent

- The API is clean and abstracts away some of the fiddliness of "vanilla" js


That sounds much like "why would you push in that nail with a hammer if there's a perfectly usable shoe just right there?"

Why does anybody use a library? Because it has some good pre-written code you can reuse. Vanilla js is verbose, full of extra control sequences and easy to get wrong.


Because it's often less verbose than plain JS, and its event handling is great.

Something like $("#wrapperel").on("click", ".painbutton", cbfunction) is way more code in plain JS.
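
For reference, a rough vanilla equivalent of that delegated handler (same made-up ids/classes as above; older browsers need a prefixed version of matches):

    document.getElementById('wrapperel').addEventListener('click', function (e) {
        // walk up from the click target looking for a matching descendant
        var el = e.target;
        while (el && el !== this) {
            if (el.matches && el.matches('.painbutton')) {
                return cbfunction.call(el, e);
            }
            el = el.parentNode;
        }
    });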


Had this very conversation today - I still like jQuery, as it was extremely useful for previous projects (circa 2011).

I haven't really used it in the last 4 years as most of my projects have had frameworks in place.

I definitely think it still has its place - it is more concise and reads a lot nicer in many cases than vanilla JS for DOM manipulation and AJAX requests.


XML. For HTML jQuery is rarely worth the dependency any more for me, but trying to consistently parse and manipulate XML with vanilla JS across browsers feels like working with HTML did five years ago.

I know XML is not cool any more, but sometimes you don't have a choice.


Most websites based on full-fledged JS frameworks break entirely without JS. I've started to miss the "progressive enhancement" of the jQuery days. I had that thought a few months back.


Sometimes for browser compatibility (so an old version of jQuery...)


Most definitely, yes. I have plenty of projects where there's enough JS interacting with the DOM to use jQuery, but too little to justify adding the entire React package. Furthermore, a number of my projects 1) might have to be taken over someday by developers who don't know React, 2) might eventually have jQuery added anyways for a quick plugin (slideshow, etc.), or 3) have jQuery included already because the client insists on the use of a particular CMS.


The jQuery APIs are nicer.


If it works and you know it, why not? They're still working on it, it's not going away any time soon, and just about everyone who knows JS knows jQuery. This is particularly true if JS is not the core of your application.

Should you write a SaaS SPA in nothing but HTML and jQuery? Probably not.

Should you write the entire application in React just because you want to have a nice fade out effect on alerts and notifications? Definitely not.


Last big UX project I did (a GA admin console) used "Vanilla JS". It was my first such project (not sure if it would be appropriate to call stuff done in the early 2000s Vanilla JS). But in another recent refactoring I had to replace an ancient (IE5) datepicker and was dismayed to find there was not broad support for input type=date, so I used one of the quality jQuery UI datepickers.


1. Yes.

2. Because jQuery provides a documented, backwards compatible, and stable API that works across multiple platforms.


I do, it's still nice for "creative websites" with some animation.


Not for new projects, but we have to maintain quite a few old ones...


Honestly the verbosity of vanilla js apis just aggravates me.


Required component of Bootstrap and Foundation.


Cause I like to get shit done.


@awestroke, got your answer?


I use it. Selectors are great reason. Also the worst traits of Enterprise Java circa 2009-2010 have infested Javascript - so I tend to avoid modern JS frameworks like the plague.


I've complained a few times about what seems to me to be a less-friendly way of handling Promise rejections in ES6.

Consider the relatively common use case: there is a service object which proxies requests, formats the call to the backend (or fetches from cache), and formats the returned response for the caller of the object. So the pattern looks something like:

    ServiceObject.fetchData(query).then(
        (data) => { /* display data */ },
        (err) => { /* catch error - display no results */ }
    );

At some point you want to chain the promise to update another part of the UI:

    promise.then(
        (data) => { /* display something else */ },
        (err) => { /* catch error - display failed to load */ }
    );

The problem is you can't squash the error in the 'reject' of the previous promise now, because otherwise the error isn't propagated to the last promise in the chain; instead you will hit the 'success' function. This 'reject' behavior is alright if there is something your 'success' function can do when the original request failed, but in a great majority of cases if the request failed there is nothing you can do - you put a 'reject' in the first chain of the promise resolution (potentially in the ServiceObject itself) with some generic flash message like 'request failed, please try again' and call it good. As it stands, you end up with a call chain where what a function higher up in the chain should return is based on what a resolving function further down the chain is doing - and not having to do this was, for me, almost entirely the upside of the promise style over the callback style of concurrency.
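
A stripped-down illustration of that complaint (fetchData, render, showFlash and renderSidebar are all stand-ins):

    const p = fetchData().then(
        (data) => render(data),
        (err) => showFlash('request failed') // rejection handled here, nothing re-thrown
    );

    // On failure this runs renderSidebar (the success handler) with undefined,
    // not the rejection handler, because the handler above returned normally.
    p.then(
        (data) => renderSidebar(data),
        (err) => showFlash('failed to load')
    );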

I bring this up now because, curiously, the jQuery model of Deferred() precisely did not do this before (see section #2 of Major Changes):

> Example: returns from rejection callbacks

If an error wasn't re-thrown in a catch, the promise handling would stay in the 'reject' chain as long as an error had been thrown. I am quite curious as to why the current model won; I understand some of the potential benefits, but in practice I find that this behavior is worse in 90% of the use cases I have encountered. If someone has a link to a mailing thread / issue where this was discussed I would be quite interested.


The way I handle situations like this is to only put the catches where I need them to serve some purpose. Generally, I don't catch errors at the start of the chain because I can't know what to do with them at that point. If I do catch them, it's only for logging or similar purposes and I still let the error propagate further.

One pattern I use is to rethrow the error:

    return service.fetchData()
        .then((data) => {})
        .catch((err) => { console.log(err); throw err; });

Another pattern I use is to split the promise chain so that I let my main results flow be the result that gets passed on, but I can do other things in a parallel manner internally:

    let results = service.fetchData().then((data) => {});
    results.catch((err) => { console.log(err); });
    return results;


> return service.fetchData().then((data) => {}).catch((err) => {console.log(err); throw err;});

Yes - but this kind of code requires that you know that something else chains the promise and handles 'err' - which is my entire complaint: a higher-level function shouldn't need to know whether it has children or whether they do error handling. Otherwise it's back to the same kind of callback style where you have to go into another file and modify a top-level function to accommodate adding a child.

Edit:

I would rather you and the other commenters here re-read my parent post, as well as the relevant resources[0][1][2] - I seriously think I'm repeating the same thing for the 4th time here. No, I don't need an explanation of how promises work; I was only pointing out that the previous (2.0) deferred.promise model actually fits better in most of the use cases I've experienced than the ES6 one, and I found that quite curious; but it seems impossible to have that discussion without being on the same page first.

[0] https://blog.jquery.com/2016/06/09/jquery-3-0-final-released...

[1] https://api.jquery.com/deferred.promise/

[2] https://developer.mozilla.org/en/docs/Web/JavaScript/Referen...


It doesn't need to care whether someone else is handling it. It should just do what it needs to do according to its contract. The caller that receives the promise as a result has its own contract that may or may not involve handling the error as well (and so on up the chain).

You generally have to have an end-of-the-chain catch as a safety precaution. If you don't have one and the promise fails you may get no feedback that an error happened at all. All methods that return promises should be able to expect the caller to handle them appropriately whether they pass or fail - it's not their responsibility to try and figure out how to handle an error in the context of the wider application.


«You generally have to have an end-of-the-chain catch as a safety precaution. If you don't have one and the promise fails you may get no feedback that an error happened at all.»

This will be increasingly less of a need as browser native promises catch up (and more reason to make sure that the promise library you are using is built on top of native promises rather than rolled from scratch). Several browser dev consoles will already show unhandled promise rejections today, and there's a growing convergence to also providing browser/system-wide event handlers for unhandled rejections as well.

http://www.2ality.com/2016/04/unhandled-rejections.html


You're returning a promise. At that point you've already assumed that something else is going to expect a promise, and a promise can either succeed or fail.


If you're not handling the error in the first promise, then throw it again.


> If you're not handling the error in the first promise, then throw it again.

You are handling the error (e.g. providing the user with a flash message telling them there was an error) - but if you don't re-throw it you will end up in the 'success' callback chain - without a result, obviously. Therefore whether you re-throw the error or squash it depends on whether another 'reject' function will be there to squash it further down the chain, etc.


You should only ever be squashing an error into a success response if that's what someone would be expecting but that's rarely the case. Normally you should be allowing the error to bubble up and be decided by the caller.

I can think of a couple cases where catching an error down low with the purpose of squashing it makes sense. One would be for a service that you know may not have data and you don't care if it doesn't. For instance, trying to get geolocation data for a user in order to improve their experience but if it fails you don't need to show an error:

    return geolocationService.fetch().catch((err) => {
        /* log error */
        return {}; /* empty result */
    });

Another case is when you might have multiple ways of handling the request where one is prioritized over the other. In this case, you can catch errors from the first attempt and try a second method. If they both error out, then the result promise will be an error as well.

    return primary.fetch()
        .catch((err) => secondary.fetch())
        .then((data) => { /* manipulate data */ });


> Therefore whether you re-throw the error or squash it depends on whether another 'reject' function will be there to squash it further down the chain etc.

Does it matter if another 'reject' function is there to squash it? I was under the impression that unhandled rejections would just vanish into the ether... no need to worry about another promise handling them. Just re-throw the error and forget about it.

Edit: To be clear, the browser may display an error in the console as a diagnostic tool, but my impression is that unhandled rejections will not result in an actual exception that halts execution.

Edit 2: Here's a fun example for the Chrome console:

    var p = Promise.reject();
    setTimeout(() => p.catch(e => {}), 1000);
It displays an error... and then a second later the error transforms into a non-error!


> Edit: To be clear, the browser may display an error in the console as a diagnostic tool, but my impression is that unhandled rejections will not result in an actual exception that halts execution.

Well, if it's not handled there is simply no further promise chain to call. If you maintain a large enough app, you probably have some kind of tool for reporting client-side JavaScript errors. When you have unhandled rejections, it's not always clear whether you handled the error and just happened to re-throw it even though the promise has no further chain, or whether someone messed up and actually isn't handling a rejection. Thus, to avoid this, adding a new promise to a chain involves finding the first .catch(), adding a throw, and an extra .catch() further down the chain.



To me, all of the jQuery examples are much nicer to read and write. Sure, you can use raw JS, but why copy annoying boilerplate? (I'm not a JS dev, so I don't have a dog in this fight.)


The authors of that page seem to be making their suggestions principally for library authors, to avoid an otherwise unnecessary dependency on jquery.


That page does an excellent job of convincing me to carry on using jQuery.

  if (el.classList)
    el.classList.contains(className);
  else
    new RegExp('(^| )' + className + '( |$)', 'gi').test(el.className);
vs

  $(el).hasClass(className);
<shivers>


I don't think anyone would suggest you write all of the first codeblock every time you want to check a class. You'd put it in a function and call that, much like jquery does.

The point is that this site explains how you do it because a) that's probably easier than trawling through the jquery source b) including this polyfill when you know you need it might save you from requiring all the rest of jquery.
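
E.g., the first codeblock wrapped up as that function (just a sketch of the same check from above):

    function hasClass(el, className) {
        if (el.classList) {
            return el.classList.contains(className);
        }
        // fallback for browsers without classList
        return new RegExp('(^| )' + className + '( |$)', 'gi').test(el.className);
    }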


In that case I'd be much more likely to use a minimal jQuery replacement such as umbrella.js, mentioned elsewhere here. That's 3k (vs about 28k for jQuery), so it counters most of the 'bloat' objections to jQuery. I'm sure it will still irk some purists on other grounds, however.


Yep, the use of RegExp is enough.


Is that comparison supposed to show that jQuery is bad or what exactly? Because it's clear that jQuery is much more concise in every single case.


No. That site is giving helpful alternatives so that library authors can avoid having jquery as a dependency.


They are just recreating jQuery, no?


Greenspun's tenth rule of js apps?


You aren't recreating jQuery by using basic JavaScript syntax.


isn't JavaScript what jQuery is written with?


Yes, but why drag in 30k of additional library unnecessarily?

Yes, use jQuery if it's really providing benefit, but if you're only using it for a few things, you may be better off just doing it in plain javascript, even if it's a bit longer.

Every library you add has an overhead cost on the end user as they have to fetch it and process it. It's easy to lose track of that when you're developing and testing against local webservers or on computers vs mobile, but there is strong value in keeping things small and with as few unnecessary dependencies as possible :)


Not sure if you read the page, but many of the examples are not "basic JavaScript syntax", the fadeIn approach is a function that attempts to replicate jQuery's, probably poorly.


No, jQuery did not invent the concept of fading in. It doesn't even have a unique implementation of it, it just does a few CSS changes that you can easily do without jQuery to get the exact same result.

This is as ridiculous as having a "addition" module in jQuery and saying that people who do addition in JavaScript are just reinventing jQuery.

Wrapping jQuery around something simple (like fading in) does not make it into jQuery. If anything this just demonstrates how incredibly unknowledgeable people are about where jQuery ends and JavaScript begins.


The reason the site exists is explained on the site itself.


A good example of why I will continue using jQuery for the time being.


Yes, but it's always good for us to remember that everything has a cost, and so does jQuery (memory footprint/new dependency/complexity). It's a conscious decision we should make on every project, and yes in most cases it would take 0.5s to make a decision in jQuery's favor. We should do so anyways, since it's good practice. Taking into account the presumed state of Moore's Law and all.


I'm surprised that there's no youmightnotneedprintf.com


I'm surprised you are comparing jQuery (a non-trivial additional dependency with reasonable dependency-less alternatives) to printf (a small piece of a dependency you most likely pull in already).


One man's trivial is another man's bother. I consider 30k (less than some images or the rather popular fonts) to be quite trivial for most purposes, and if it gives me a nicer syntax for common operations, it's probably worth it.

On the other hand, I've seen actual backlash against printf implementations in libs and the bloat they're causing, never mind potential bugs [1].

So there you go.

[1]: https://www.fefe.de/dietlibc/FAQ.txt


I'm not advocating for or against jQuery or printf. I'm just pointing out that equating the two (in size, utility, or potential gains by avoiding them) doesn't make a lot of sense.

jQuery's ratio of size-versus-additional-utility is much larger than printf's, even more so if you consider typical use cases of each.

But it isn't even a fair comparison to begin with. How about comparing jQuery to libc. I'd still argue the ratio is in libc's favor, but no one would have written "I'm surprised that there's no youmightnotneedlibc.com".


libc seems much more substantial; maybe we can split the difference and call it "youmightnotneedstdio.com", although that sounds much less pithy.

I still wouldn't be so sure about that ratio. The people decrying the printf family of functions are often targeting small statically linked binaries, where a few KB shaved off is a bigger chunk, relatively, than 30 KB is for your usual multi-megabyte front page of today's web.


People who target tiny static binaries are by far in the minority of libc users.


Isn't it more like comparing jQuery with libc?


It would have been if he said "I'm surprised that there's no youmightnotneedlibc.com".

As soon as you do that though (put the two on equal-ish grounds), you realize it doesn't make much sense and it might as well not have been written in the first place.


libc would be more like the standard DOM specification.

printf is a part of the standard C library.


I don't think it's a question of necessity, it's just more convenient and usually a lot less code than doing the same thing without it.


from changelog: "Golf away 21 byte"

I like how (code)golf has become a term.


There's a few things that I want from the next x.0 release. Until these get done, jQuery will look like a library that doesn't know what it wants to be but used to serve a purpose.

- removal of animation from core

- removal of styling from core

- Create a jQuery 'fx' library separate from jQuery

- have a standardized serialized / deserialize for forms

- Ability to handle multipart forms in ajax post requests



