Hacker News
Cancelable Promises: Why was this proposal withdrawn? (github.com/tc39)
296 points by xatxat on Dec 19, 2016 | 213 comments



Promises have always been extremely contentious in JS for a couple reasons, and Domenic has had the patience of a saint getting the original proposal through, so it's totally understandable that he might not want to deal with this anymore, especially if people in his org are against it.

There were a couple reasons the original promises were so contentious to standardize.

- Monads: there was a very vocal faction on TC-39 that really wanted Promises to be monadic and very much emphasized mathematical purity over usefulness, and any attempt to discuss things tended to get sidetracked into a discussion about monads; see the brouhaha over Promise.cast [1].

- Error handling: Promises have a bit of a footgun where by default they can swallow errors unless you explicitly add a listener for them. As you can theoretically add an error listener later on, all other options had downsides (not always obvious to advocates), leading to a lot of arguing, most of it after things had shipped and realistically couldn't be changed. Fixed by platforms and libraries agreeing on a standardization of uncaught promise events [2].

- Just an absurd amount of bike shedding; probably the 2nd most bike-shedded feature of the spec (behind modules).

So cancelable promises reignite all the old debates plus there's no obvious right way to do it, like should it be a 3rd state which could cause compatibility issues and would add complexity, or should cancellations just be a sort of forced rejection, which would be a lot more backwards compatible but with fewer features.

Additionally there is a bizarre meme that somehow observables (think event emitters restricted to a single event, or synchronous streams that don't handle back pressure) are a replacement for promises or are a better fit for async tasks and should be used instead.

edit: patients => patience

1. https://esdiscuss.org/topic/promise-cast-and-promise-resolve

2. https://gist.github.com/benjamingr/0237932cee84712951a2


The problem wasn't usefulness or purity; it was that the community had already standardised on a certain behaviour of promises that was not monadic. The only way to make promises monadic was therefore to not standardise "promises" at all.

At that time, monadic tasks were basically unheard of in the community. By the very nature of the monad laws, they cannot have side-effects, so they cannot be eager. This means that a user "running" an operation would observe nothing until they subscribe to it. In an imperative language like JS, this is confusing for users, and arguably not worth the breakage of the existing promise ecosystem.

Another nice property of Promises is that they can be implemented in a way that doesn't cause stack overflow for endless recursion (and additionally, in some cases we can get "tail recursion"). This might be doable with lazy tasks too - not entirely sure. See https://github.com/paf31/purescript-safely

On the other hand, monadic tasks make many other things easier (there's a rough sketch after this list), like

* cancellation (just unsubscribe),

* synchronous error propagation, so no issues with error swallowing and perhaps no post-mortem issues in node. Nothing runs before the subscription, so at the point the side effects are running, we already know whether there is an error handler. Effectively the `done` method would've been the equivalent of `subscribe`, and mandatory

* defining parallel, sequential or limited-concurrency execution order after the tasks have been created
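
To make the laziness point concrete, here's a minimal sketch of such a lazy task in plain JS (names are made up for illustration, not taken from any particular library). Nothing runs until `subscribe` is called, and cancelling is just dropping the subscription:

  // A Task only *describes* an async computation; creating one has no side effects.
  function Task(computation) {
    return {
      subscribe(onValue, onError) {
        let cancelled = false;
        computation(
          v => { if (!cancelled) onValue(v); },
          e => { if (!cancelled) onError(e); }
        );
        // Cancelling simply means we stop caring about the result.
        return () => { cancelled = true; };
      },
      chain(f) {
        // Still nothing runs here; we only describe the next step.
        return Task((onValue, onError) =>
          this.subscribe(v => f(v).subscribe(onValue, onError), onError));
      }
    };
  }

  const delayed = Task(ok => setTimeout(() => ok(42), 1000)); // no side effect yet
  const stop = delayed.subscribe(v => console.log(v), console.error); // now it runs

At the moment `subscribe` runs we already know whether an error handler was supplied, which is the synchronous-error-propagation point above.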


> By the very nature of the monad laws, they cannot have side-effects, so they cannot be eager.

I'm pretty sure monads can have side effects and be eager. It's just that not everything will be captured by the type signature, which AFAIK does not prevent it from being a monad; you can have monads without any type signature. Also, AFAIK the eagerness is a requirement due to the potential for errors changing program flow in a way that is not captured by the type signature, but again that doesn't prevent it from being a monad.

AFAIK monads only have to conform to the monad laws, bind and return.

I'd like to know if I'm wrong because I'm concerned with the conflation of monads and language features of haskell. Specifically we can benefit from monadic composition without making javascript into haskell. I think Erik Meijer has been rather damaging in this regard. I know I'm in the minority so I'd like to be proven wrong so I can come join the party :)


IO is a monad in Haskell, right?

I mean, we can talk about how it's pure in the sense that it just lifts other things into being tied to IO programs, and it's our evaluation of them that causes mutation, but that's pedantic even to me and seems to miss the point.

That sense of purity isn't what most people mean or are concerned by.


IO a is short for roughly World -> (a, World) and is thus computationally pure. You cannot figure out what happens to the World in between but you don't need to, thus the side effects "don't exist" from an analysis standpoint.

However, IO follows all of the monadic laws; I believe the comments about side effects are focusing on the side effects that show through when you try to apply the monadic laws to promises.


The "world" equivalence is not a useful way to think about it. A more useful description would be a tuple that contains two items, the first one being the description of the action to execute, and the second one being the function that will be called with the value retrieved with the action executed.

So in JS notation:

  { action: {type: 'readFile', fileName: f}, next: content => mkPrintAction(content) }


> The "world" equivalence is not a useful way to think about it.

Regardless of your opinion on the matter, that's how it's actually implemented. IO is roughly equivalent to "State RealWorld#" with some extra strictness/unboxing stuff.

What you're talking about is basically the free monad over the (Request, Response -> a) functor. That would probably work, but be less performant and harder to extend.

IO is nice because it's really easy to safely integrate with FFI calls. I don't think you could easily do the same with your proposal.


It's just that when you mention this, the assumption is that "World" is the state of the world. Which is not true, since the world is just a token to impose ordering of execution in a lazy language, but it's a fake state token since it's not present at run time... and that complicates the explanation further.

The interpreter is simpler and easy to implement in any language too.


That's true, the compiler does optimize away the explicit dependency on the world. But it still makes sense; when you call "print", it returns a "new world" that is the state of the world after printing. You can't inspect the state of the world, so it doesn't really matter whether it's just a token or an actual description of the universe. The behavior is indistinguishable either way.


That assumes there is only one program running in the world. It's simply not clear what the "World" type means. It's a bad name. I'd rather go ahead and explain it as a token.


I don't see why it assumes there is only one program. If the world value is impossible to inspect (which it is), you can assign whatever semantics you like to it.

If you look at the generated core code, it's important to notice that it's not the same "token" being passed around everywhere. Each IO action returns a new token, which could be a different value from the old token. (Again, this is entirely theoretical, as the token/world state doesn't exist in the binary.)


Ok, I'll put it this way:

Saying

> when you call "print", it returns a "new world" that is the state of the world after printing

makes no sense, since the phrase "state of the world" will be interpreted completely differently by the receiver of the explanation. State of the world = a large state object with lots of fields describing every single thing in some "World" (what world: program world? computer world? network world? all the world?). That's what people hear.

The name of the type is horrible, and the world-passing intuition is also horrible. "Token" would've been much better, but it's still not clear how to implement it or why it's even necessary (in a non-lazy language).

On the other hand, a tuple representing the current action and the function that will return the next action (or nil) is fairly clear, easy to implement an interpreter for, the pure/nonpure distinction becomes obvious (the runtime interpreter is the impure part), and laziness doesn't even need to get into the mix.

oh, and also regarding FFI: {type: 'FFICall', fn: string, arguments: [...]}


That's a fair argument. People might get caught up on thinking that the world object actually has to contain a description of the world.

> {type: 'FFICall', fn: string, arguments: [...]}

I agree that this is simpler to understand, but it's no good in practice. It's obviously not type safe, and you still have to bake support into the interpreter/runtime for whatever your string value is. It's also effectively impossible to inline/optimize away. Which, again, is probably why it's done using "State RealWorld#" in GHC.

I would like to reiterate that however RealWorld# happens to be defined, the way its use is enforced in GHC is the exact same as if it did actually contain a description of the entire universe. You just aren't allowed to look at it. But, you are correct, this is more confusing than a request/response free monad.


Honestly I've always just thought of Monads, as in Haskell, as a tricky way of sequencing in the presence of lazy evaluation. Looked at this way, the "trick" is to make a data dependency between each step in the form of an argument. Since Haskell names must be bound before they're read, this means we can guarantee the order in which the steps are evaluated relative to one another. Then the whole thing is wrapped up in a data structure (this latter part seems to be what people normally focus on).


This is a late reply, but your abstraction is leaky: World might mutate during an IO operation in a way which impacts the outcome. So the (a, World) result isn't fully deterministic from World. Even if you make each IO operation atomic, a chain of IO is still impure because the final (a, World) result depends on mutations introduced during execution.

The lift to constructing an IO program from a Haskell type is pure, but the execution of that program to generate a value or effect on the system is not.

So Haskell is "pure" when it comes to IO, but not in the way anyone except overly pedantic people mean the term.


World cannot be inspected in any meaningful way in this paradigm, so it is fully deterministic because the differences aren't important.

> So the (a, World) result isnt fully deterministic from World. Even if you make each IO operation atomic, a chain of IO is still impure because the final (a, World) result depends on mutations introduced during execution.

It is fully deterministic from World but you cannot possibly describe a World so you can't supply it nor replicate it.

> So Haskell is "pure" when it comes to IO, but not in the way anyone except overly pedantic people mean the term.

You can't create a World, so it is pedantic in that fashion, but you are otherwise able to think of the functions as pure, making integration easier.

A simple example is that IO happens in the correct order in Haskell because of the evaluation order, not due to some external construct ensuring that laziness doesn't bite you.


Which only goes to show that you can't apply real world reasoning to explain an academic concept. In this purely theoretical concept, World (which cannot be inspected) does contain the entirety of the world, including all other actions running in parallel, the current time and the state of every atom in the universe. So given the exact same state of the universe and assuming purely deterministic laws governing it (or the state of the world containing the future of the world too), yes, the action will produce the exact same output, because all other actions running in parallel or the things that will lead to the future actions are also part of that input World state.

That's my best attempt, and it still sucks! I bet someone will start wondering about the uncertainty principle there... So let's just use a free monad or thunks in a strict language

  // pureAction() returns a thunk: a description of a side effect,
  // not the side effect itself. Nothing impure runs here.
  function pureAction() {
    return function() { return impureAction(); }
  }

  // chain takes an action (a thunk) and a function producing the next
  // action from its result; it returns a new thunk, so still nothing runs.
  function chain(pureAction, fnReturningOtherAction) {
    return function() {
      var impureResult = pureAction();
      var newAction = fnReturningOtherAction(impureResult);
      return newAction();
    }
  }

  // The only impure part: actually running the top-level action.
  function interpreter(mainAction) { return mainAction(); }

Everything is pure, since all the functions don't do anything but return thunks for the interpreter to run. Except the interpreter, which actually runs those. This isn't even a theoretical explanation - it's exactly how PureScript's Eff works.


Would it be fair to say that the IO Monad gets passed along implicitly with the other Monads in an eager evaluating system? Would this make it a stack? Just one that's not represented or enforced by the type system.


I dunno, so much of the IO Monad is specific to Haskell and lazy evaluation. Generally I think of it as Haskell's quirky way of handling the real world in a pure lazy mathematical construct and not something fundamental.


The monad laws are defined in terms of equivalence of values. So if your promise is not a value then you can't even begin to talk about whether it conforms to the monad laws or not. If `Promise(someWebRequest())` starts performing the web request immediately then we can't talk about it as a value (or rather, in order to regard it as a value we have to elide some aspects of it that are actually quite crucial, such as the web request being performed).


Perhaps you're a minority, but you're not alone - I too think that the usual narratives around monads (and many other theoretical CS concepts) are quite harmful. Do not get me wrong - they can all be useful, but not when they are conflated with many other things that some people like to have together with monads.


Well let's see if the first law holds. It says

Promise.resolve(a).then(f) == f(a)

Immediately this is not true, since `f` can throw or return a non-promise.

But let's say we restrict the type of f so that it can only return promises. Once again, the statement is not correct - the executed side effect is not the same. The left side will execute with a one-tick delay of the microtask queue, while on the right side the side effect executes immediately. So the side effects are not equivalent. If we write `f(a); f(b);` and `Promise.resolve(a).then(f); f(b);`, in the first case the side effects will start in order; in the second case, they will start in the reverse order. If we cannot replace one side of the equals sign in the law with the other and have things run exactly the same, that means equational reasoning doesn't work [1], and the law doesn't apply.
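
A quick sketch of that ordering difference (here `f` just logs and returns a resolved promise):

  const f = x => { console.log('effect for', x); return Promise.resolve(x); };

  // Right-hand side of the law: effects start in source order.
  f('a'); f('b');                        // logs: a, b

  // Left-hand side: .then defers f('a') to the microtask queue,
  // so the synchronous f('b') call runs first.
  Promise.resolve('a').then(f); f('b');  // logs: b, a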

In theory you could probably make monads with side effects, but promises don't satisfy the laws. It's much easier if the monad only represents side effects but doesn't actually run them. You can think of them as redux action objects - they don't actually do anything once they are created. They only represent the side effect that will be run. For example, their internal representation could be {action: 'readFile', argument: 'path/to/file.json'}. ($)

You can construct actions from other actions by chaining them with a function that will take the result from the action and produce a new action object. Those would be derived actions. You can think of them as cons cells - they have the original action, and the function that produces a new action. Example representation: {originalAction: {action: 'readFile', argument: 'path/to/file.json'}, nextActionFunction: result => printLine(result)}

Finally, you pass the action to the "interpreter". The interpreter reads each action, runs the originalAction part of the cons cell, then produces a new action using the function in the nextActionFunction cell. It repeats that until it gets to an empty nextActionFunction cell.
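
A rough sketch of that interpreter (the action names and the synchronous node-style effects are just for illustration):

  // Constructors only build descriptions; nothing runs here.
  const readFile  = path => ({ action: 'readFile', argument: path });
  const printLine = line => ({ action: 'printLine', argument: line });
  const andThen   = (act, next) => ({ originalAction: act, nextActionFunction: next });

  // The interpreter is the only impure part.
  const effects = {
    readFile:  p => require('fs').readFileSync(p, 'utf8'),
    printLine: l => console.log(l)
  };

  function interpret(node) {
    if (node.action) return effects[node.action](node.argument);  // base action
    const result = interpret(node.originalAction);                // run the head...
    return interpret(node.nextActionFunction(result));            // ...then the tail
  }

  interpret(andThen(readFile('path/to/file.json'), content => printLine(content)));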

($) Another practical representation for compile-to-js languages would be to use a thunk for the base (original) action. A thunk is a function without arguments that, when run, will produce a side effect and return a value. The interpreter would then be simply a thunk runner. An async thunk can also be used, and that's a thunk that takes a callback that will be called with the eventual value. Still, we're only producing values without creating side effects: the laws are much easier to uphold.

[1]: http://www.haskellforall.com/2013/12/equational-reasoning.ht...


The first one is a pretty nitpicking one, IMO; all that's happened is that map (fmap) and flatMap (bind) have been collapsed into a single `then` function that dynamically dispatches on the type of the returned value to decide which behavior to execute. More of an ergonomic convenience for programmers who don't quite grasp the distinction.

Edit: And with regard to the side effect ordering question - monads are only well-defined for pure functions anyway! By that standard, any Scala code using non-pure functions is also not using a monad. This is inevitable in a language that does not enforce purity, which is basically every widely-adopted one that's not Haskell. A definition of "support" that excludes implementations that allow (but do not require) non-pure functions is not a very useful definition.


That's not the issue here. Even if the functions themselves are pure (beyond the act of creating a promise), the strictness of promises together with their microtask-queue rule imposes an execution ordering that makes them a non-monad. So it's impossible to write your functions in a way that the laws hold.

And if you can't use the laws for equational reasoning, then what good is it whether it's a "monad" or not anyway?


How would it be less of a Monad than the IO Monad? That's the part I'm not getting. It would make sense to me if neither were considered monads.


If the monad laws don't hold, it's not a monad. I demonstrated the first law not holding, therefore promises are not a monad.


Why do you say "bizarre meme"? Observables are not like event emitters. RxJS Subjects would fit that description. See http://stackoverflow.com/questions/25338930/reactive-program... and https://www.youtube.com/watch?v=uQ1zhJHclvs

Observables are just an API for any push system based on callbacks, to make it composable and lazy. Array.forEach uses callbacks, Promise.then uses callbacks, Node.js streams are consumed using callbacks. And observables can model those cases without losing their properties (e.g. Array.forEach is called synchronously).


In fairness, while Observables look similar to Promises and while it seems that a Promise could inherit from an Observable (like the Single type is doing in RxJava), this is one of those real-world cases of inheritance gone terribly wrong.

The fact of the matter is Observables get used for different use-cases than Promises. And for the use-cases for which Promises end up being used, Observables are broken.

For example Observable's flatMap operation is most likely not memory safe and so it cannot be used in a "tail recursion". This is because we have to wait on the "onComplete" event in order for resources to be de-allocated and that cannot happen before the last Observable emitted by the last "onNext" signals its own "onComplete". But tail-recursion is a primary use-case for a Promise type and so its "flatMap" needs to be memory safe.

I've thought long and hard about it btw, I even have a library that implements both a lazy Task abstraction and the Observable pattern: https://monix.io

I love observables obviously, but Observable isn't a substitute for a Promise and I hope we don't go down that road.


flatMap isn't memory safe but switchMap is, and it's often used for the type of request/response chaining that we do with Promise.then.
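
For example, a rough sketch in RxJS 5 style (`searchInput`, `search` and `render` are hypothetical):

  Rx.Observable.fromEvent(searchInput, 'input')
    .debounceTime(300)
    // switchMap drops the previous inner observable, so responses for
    // stale requests are ignored (the underlying request isn't aborted).
    .switchMap(e => Rx.Observable.fromPromise(search(e.target.value)))
    .subscribe(results => render(results));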

With practical experience using Observables in production for any use case people normally use Promises for, I disagree with you about them not being a substitute, and especially about them being "broken".


From the FRP literature, an observable can be viewed as a stream of promises, something like:

    { "value" : ..., "next" : new Promise(...) }
So it's basically a linked list of the current value, and the next value which is a promise for the same type of structure.


An observable is definitely not a stream of promises, because an observable is push-based, while a stream of promises would be pull-based.

Big difference in performance, implementation and what operations are possible or not. Even if you add back-pressure in that protocol, it's still a big difference.

And yes, my point is that an observable is close to being a sort of superset of a promise, but not close enough.


> An observable is definitely not a stream of promises, because an observable is push-based, while a stream of promises would be pull-based.

Why would a stream of promises be pull-based? That seems like a pretty odd assumption to me.


> Why do you say "bizarre meme"?

The bizarre meme is that they are 'replacements' for promises; they are really intended for (as you rightly put it) push streams, which is a different, though occasionally overlapping, use case than promises.


I said push system, not push stream. Promises are a push system.


If there was a specialized Observable that is guaranteed to only generate one value per subscription, and the API indicated this clearly by naming such an observable "Task" or "Future", I'd be all for it.

Otherwise, it's like using Arrays or Lists instead of Maybe. Hope that clears up why it's "bizarre", but if not, it's because the contract is not restrictive enough.

Example: `let x:Observable<Row[]>`. Does this generate a single event with all the rows? Or does it do row batching? No idea. Can I consume both the same? Not really, depends on the use. e.g. if I want to sort the items, I'd have to implement merge sort for the second case.


> If there was a specialized Observable that is guaranteed to only generate one value per subscription, and the API indicated this clearly by naming such an observable "Task" or "Future", I'd be all for it.

Given any Observable `obs`, you can consume it with `obs.take(1).subscribe()` instead of `obs.subscribe()` and then you'll be sure it only generates one value per subscription.

But in practice the generation of multiple values in itself is rarely an issue to worry about. Often you just want to drop consecutive duplicates, there's an operator for that: `distinctUntilChanged`. You can also end an observable based on a timeout.

Multiple values emitted is a pseudo-problem, and often your programming becomes more powerful ("does more in less lines of code") by using that capability.


Okay. Please replace all Maybe instances in your code with Arrays and let me know how that goes.


I think the argument here was indeed to treat promises as a subset of observables, not replace them with observables. Just as option type is a subset of collections that can have at most 1 element (but all operations that apply to any other collection, like say "map", still make sense for it).


> Promises have always been extremely contentious in JS for a couple reasons and Domenic has had the patients of a saint getting the original proposal through so it's totally understandable that he might not want to deal with this anymore especially if people in his org are against it.

I know this is probably a typo (patients => patience) but that spelling seems to work as well.

> So cancelable promises reignite all the old debates plus there's no obvious right way to do it, like should it be a 3rd state which could cause compatibility issues and would add complexity, or should cancellations just be a sort of forced rejection, which would be a lot more backwards compatible but with fewer features.

IMHO neither, as cancelability isn't a common requirement. People that need it tend to have more complex requirements and end up tying the cancellation to other chains of logic anyway. Also, unlike say catching an error, you generally want all cancellation handlers to fire rather than just the first one (like with errors).


It's fairly common to cancel an HTTP request. The new fetch API doesn't let you cancel a request as a consequence of this shortcoming of promises, while XMLHttpRequest does.


I think cancelling HTTP requests is quite useful in some scenarios. E.g. say you're fetching heavy json/binary data on a user action, and before the data arrives, the user changes their action. Not only do you download all of the extra data, but the fetch for the next action is slowed down like hell.

Not sure, but does this mean that I can't cancel a heavy file upload if a user changes his mind?


Herein lies the rub.

So I cancel my GET request, all is fine with the world. I cancel my POST, and now I have no idea at which layer it was cancelled, if at all. Certainly the cancel doesn't propagate to the server.

I've used cancellation in C# and agree that it's super useful, but it's not a panacea and has to be handled thoughtfully. JS can manage without it IMHO


The counter-argument to that is what happens when you make a GET request for a 1gb file? Or you POST a 1gb upload?

Canceling them is now a big deal if it's needed.

Personally, I think they could be handled by the WebStreams API, so when you want to "cancel" a download or upload, you just throw in the function which is consuming or creating the actual data which will reject the promise entirely.

However only ReadableStreams are available in some browsers right now, and WritableStreams are still a bit away AFAIK, so it's not possible to do that right now with promises. But I don't think that warrants adding even more features to the language, which will most likely never be removed.


I agree. If you're handling objects of that size in single requests, then you're already in footgun territory. Step carefully.


Cancellation is a high-level expression of intent that you no longer care about the result of the operation.

Due to the very nature of asynchronous operations, it is, in general, impossible to even determine which layer will ultimately handle that request (e.g. in your example, whether the POST won't happen, or it'll be aborted, or it's already completed), much less how it'll be handled.

The assumption is that all layers will do their best effort (defined vaguely in a "common sense" way) to service that request.

That is a very useful abstraction to have, and I don't see why JS is special in that regard.


I mean, this isn't new. When you make a POST request in JS a bad network may mean that it gets delivered but errors out on your end. You need to code your application to be robust to this anyway.


This. Absolutely.

Those applications that already aren't robust to this (I sadly assume that is most) aren't going to be helped by cancellation


Fixed the spelling, and you're right, it's not a super common requirement, but it comes up in the context of designing APIs to use promises, and if you are trying to add promises to an API that is cancelable, you'd want some way to express that.


Do you have any idea what the current direction is (if any) that they are moving in? I'm mostly familiar with the use of tokens; Golang using Context and C# using CancellationToken.

I currently utilize cancellation tokens myself in Node.js (TypeScript) when creating libraries or wrapping the stdlib event/stream based stuff.
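
For reference, the basic token idea can be sketched in a few lines of JS (purely illustrative, not the shape of any proposal; `doSlowThing` and `cancelButton` are hypothetical):

  function createTokenSource() {
    let cancelled = false;
    const listeners = [];
    const token = {
      get isCancelled() { return cancelled; },
      onCancel(fn) { cancelled ? fn() : listeners.push(fn); }
    };
    const cancel = () => {
      if (cancelled) return;
      cancelled = true;
      listeners.forEach(fn => fn());
    };
    return { token, cancel };
  }

  // The async operation receives the token and checks/subscribes to it;
  // whoever holds `cancel` decides when to give up.
  const { token, cancel } = createTokenSource();
  doSlowThing(token);
  cancelButton.onclick = cancel;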


With this canceled, there is no official direction, and any further movement will likely be based on what is decided in the community.


> So cancelable promises reignite all the old debates plus there's no obvious right way to do it, like should it be a 3rd state which could cause compatibility issues…

It seems obvious to me that the 'right' way to do it is for a cancelled promise to be in the error state, with the error object just being a new type of error, CancelledPromise or something.

But I guess once you start getting into it, this becomes less obvious?


No, the right way is to fix the fetch() api to not return a promise and reflect on why the system was so broken that the fetch() api was released with such an obvious flaw.

Because Promises don't need to be cancellable. This has nothing to do with promises. Fetch requests need to be cancellable. But they, in all their wisdom, decided that the fetch API returns a promise. So now promises need to become whatever the hell the fetch API actually needs to return.

You would almost think design by committee is fundamentally flawed, and design by community even more flawed.

Because the edge cases are never obvious. For every new feature the question shouldn't be "do I make 99 programmers 1% happier?", but do I make 1 programmer cry up to the point where they would stab my eyes out if they could?

We've all been that 1 programmer, and yes, I would if I could. It's a clusterfuck of bike shedding. Most new ES2016 features have details that haven't been thought through and will end up being hated on the same level as '==' and 'NaN'.

The dev-leads of the different JavaScript engines should not just be able to veto. They should be able to veto people out of the room, out of the debate. And we should celebrate it every time they take advantage of that power. Or just give up on JavaScript altogether and just let the rot fester.


I think fetch definitely needs to (at least as an option) offer some kind of async API built-in without adding another dependency. Async is likely the most common use case in the browser. I have not been displeased with the fetch->promise API, it works well for me, and is definitely a lot more convenient than XMLHttpRequest. Which part do you disagree with from your experience?


I love the fetch-api as well, but it doesn't allow for one to cancel the request. Which is why people want to change Promises in the first place. Because the committee in all its wisdom has specified that the fetch-api returns a promise. So the only way to make the requests of the fetch-api cancellable is to change the return type of the fetch-api. But that has been standardized to be a Promise. So now they want to change Promises to be whatever the hell they need from the fetch-api.

Do i need to cancel requests? Not really. But not being able to cancel requests is now driving a shitstorm of short-sighted proposals to mutate the Promise contract into some sort of 'whatever the hell the fetch-api really needs reply object'.


There's probably a (hypothetical API) way to get a _fetch_ to be cancellable, even though it returns a promise, without altering the API of promises in general. Some way to get a reference to the request going on and cancel it, which would result in the promise failing (assuming it hadn't completed yet) with a FetchCancelledError or something. One could certainly imagine building such an API if one were polyfilling fetch oneself built on XMLHttpRequest (which can be cancelled).

I won't bother trying to think it through though, cause it's clearly pointless except as an intellectual exercise!

I actually like that fetch returns a Promise, I find it convenient, I don't think it was a bad choice at all. To make Promise part of ES6 as a general-purpose async primitive, and to make Fetch return one. I also see how it would be nice to allow fetches to be cancelled -- although I'm not sure I've ever needed/wanted to do that, I could see how I might in the future.

It is boggling that a bunch of smart interested people haven't been able to figure out _some_ solution to this.


> The dev-leads of the different JavaScript engines should not just be able to veto. They should be able to veto people out of the room, out of the debate. And we should celebrate it every time they take advantage of that power. Or just give up on JavaScript altogether and just let the rot fester.

this would ironically tend to be applied to people like you


No irony :) I don't think i should be in that room. I know enough to know how little i know. I also know enough to recognize a naive short sighted proposal such as 'cancellable promises'.

That doesn't mean it should be me dismissing that feature. But i am very much smiling that the people that should be in that room, did dismiss it.


Bingo, it turns out the same thing wasn't obvious to everyone


Weird. It would be interesting to read more of the debate/discussion; is it actually available anywhere?


> Monads: there was a very vocal faction on TC-39 that really wanted Promises to be monadic and very much emphasized mathematical purity over usefulness

I don't think this was the case. The main argument of the FP side was having more composability built into the core of the language.


While it's great that they were so nice, they are unable to effectively moderate GitHub and ESDiscuss. One ticket for the private member proposal had 600+ comments, some of them crazy stream-of-consciousness posts that should have been shut down.

Dealing with this volume of input requires aggressive moderation, community participation guidelines, and canonical explanations of the decision making process.


> Promises have a bit of a footgun where by default they can swallow errors unless you explicitly add a listener for them.

"a bit of a footgun" is such a huge understatement. Now that I know about this, I'm skeptical too.


and you've missed the last part of that paragraph where I mention unhandledRejection events becoming a de facto standard which straddles a nice line between avoiding false positives (error listener attached later in the program) and errors going into a black hole


Not sure if that's gone in yet so I've continued using Bluebird for my stuff. You can (and should!) create a global handler for unhandled rejections. It's almost always a bad sign if you get one.
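
In Node, for example, a minimal global handler looks like this (the browser equivalent is the `unhandledrejection` event on `window`):

  process.on('unhandledRejection', (reason, promise) => {
    // An unhandled rejection is almost always a bug, so log it loudly.
    console.error('Unhandled rejection at:', promise, 'reason:', reason);
  });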


Then don't call them "a bit of a footgun". It's a huge issue, in the process of being maybe fixed, and underplaying it to argue your side does not help matters.


It's already fixed. That's why it was just "a bit" of a footgun, i.e. not unfixable.


Promises work fine in some cases, but I absolutely hate how exceptions and errors are grouped into the same bucket. Oh, you're handling 4XX responses as errors? Well, now you get exceptions there as well, good luck!

e.g. If I'm sending some data over the network, I want runtime errors to cause activity to be aborted and logged. But if I get a 429, I'll want to wait and retry.


Which is why you write a catch combinator to filter out only the errors you want

  // Re-throws anything that doesn't match the predicate, so only the
  // errors you care about are handled here.
  function ifErr(predicate, handler) {
    return e => { if (predicate(e)) return handler(e); throw e; }
  }

  function codeIs(n) { return e => e.code == n }

  promise.catch(ifErr(codeIs(429), e => { /* wait and retry here */ }))


What treats a 429 response as an error?

XMLHttpRequest doesn't use promises, and the Fetch API treats all HTTP responses -- including 429 -- as success.

I think you have a problem with a library.


Yeah, I never said otherwise. I had jQuery.ajax [0] in mind when writing that. If I recall correctly, it rejects the promise when you get a 4XX. I haven't used it in a few years now, but I still see people handling responses incorrectly with other wrapper libs.

[0] https://api.jquery.com/jquery.ajax/


Sorry, meant to reply to parent....

But yeah...one of the several reasons I much prefer Fetch API/polyfill to $.ajax.


no, libraries should not throw exceptions if they get an http response. that is a perfect example where the programmer has to roll his own, because this is entirely situational.

the pseudo-code example of the grandparent seems to me to implement a) detecting the http code, b) throwing an exception and finally c) handling this exception.


See superagent where anything that's not a 2xx is treated as an error (https://github.com/visionmedia/superagent/issues/283)


Which pseudo code are you referring to?

Exceptions being for "exceptional" circumstances is a popular misconception. Exceptions are for "an upper layer to handle". You can use them for any alternative values you wish to propagate to the upper layer.


> Exceptions are for "an upper layer to handle".

So are return values :) The difference is that return value is for the next immediate layer; an exception is for some layer, but I don't know what.

IMO your HTTP layer should not be throwing exceptions based on status code. The layer above that should.


Oh ok. It's not pseudocode, the combinators work :)

I would put it the other way around. The http layer should not decide that the layer immediately above it must be the one to handle all its errors. It should also be the one that defines the default (success) case and the error case, since those are defined by the HTTP protocol.


I didn't realize this was a point of contention. I had jQuery.ajax in mind when I wrote this, if I recall correctly they reject the promise when it doesn't receive a 2XX.

For comparison, the new fetch API resolves to a response which can have any status code.


> Monads: there was a very vocal faction on TC-39 that really wanted Promises to be monadic and very much emphasized mathematical purity over usefulness, and any attempt to discuss things tended to get sidetracked into a discussion about monads; see the brouhaha over Promise.cast

Monads are about usefulness, because a type being monadic means there is a bind/flatMap operation you can rely on to obey a set of laws. So a Promise being monadic means that it is lawful and more composable. And it is especially relevant in the context of a Future/Promise. Aren't you tired of Javascript Future/Promise implementations that are broken?

Like, please learn about the concept.


This is one of the most ironic comments I've ever read on HN, right down to your final knife twist.


I don't get the value of composability and refuse to learn, and "monad" and "laws" sound like math which I don't want to understand. So instead of attempting to comprehend what this is about I'm just going to make fun of you in a comment.


I don't see a lot of point in making better programming concepts seem accessible, and "try to meet people where they are" sounds like populism which I don't want to understand. So instead of attempting to argue that there's value in the idea in a way that doesn't put people off with an implicit accusation of lazy ignorance, I'm just going to go with the implicit accusation of lazy ignorance.


Like, yeah! It's just a monoid in the category of endofunctors. What's the problem?


annnd there we go


It sounds like some Google employees on TC-39 had significant resistance to the idea, and it is worrisome that employees of a private company can block proposals in favor of "their" way, in what is supposed to be an open and transparent Technical Committee, for a language that is supposed to be for all of us.

Google trying to strong-arm control of JS for themselves.


From the Github comments:

"One of the important priorities in TC39 is to achieve consensus - which means that any employee of any member company can always block any proposal. It's far far better for anyone to be able to block, than to arrive at a spec that "not everyone" is willing to implement - this is, by a large large margin, the lesser evil, I assure you."


This is why you don't do development by standards committees. It never ends well.


You miss the point that TC39 uses a champions model: 1 or 2 people who do design, with the committee reviewing.

That broke down here, to the extent that the champion, Domenic, withdrew under some kind of internal (to Google if not his own state of mind) duress. TC39 never had a chance to consider the argument or whatever it was that caused this.

More to say after January's meeting, I'm sure. I'm not going to throw stones at Google but it would not surprise me if there were backroom objections. If so, they should be brought to the committee cleanly. There is still time.


This is a great summary of the situation Brendan, thanks for the clarification about TC-39 committee not actually getting to review the proposal.

The main issue regarding the proposal and its sudden withdrawal seems to be an all-around lack of transparency.


Do the champions provide an implementation of the proposal for the committee to review?


See https://tc39.github.io/process-document/. To get real engine implementations may take cooperation among champion and one or two browser vendors, as the "real engines" (meaning ones that can be tested against the Web at scale) are all owned by browser vendors.

Examples:

* SIMD started from Dart's proposal, which John McCutchan then championed at first for JS, and later handed off to others. Intel, Mozilla, and Google all cooperated.

* ES6 Proxies, where Mark Miller and Tom Van Cutsem were the champions and did a terrific job covering the design space and finding the sweet spots. Andreas Gal implemented in SpiderMonkey and then the spec changed, based on the implementation (but in a mostly co-expressive way, for wins beyond expressiveness; the new proxy implementation layered on the internals of the old). Proxies were quite challenging as they exposed observable internal order of operation details and required reformulating spec-mandated invariants that matter for not only interoperation but also security.


They could, but I'd expect that they generally do not.


What's the alternative, sell your software in a box at Best Buy?


I guess for a language like Javascript there is no real alternative. If you want to push the language further without creating multiple dialects, you simply must have consensus among the major parties writing implementations. These parties most certainly would not give the language into the hands of a (benevolent) dictator and agree to implement whatever he comes up with.


If you're delivering software in 2016 in any language you're probably dealing with standards at some level. Are you using HTTP? Using a standard with a committee. Using TCP? There's an RFC for that too.


Standards committees work best when standardizing something that already exists. That's why the IETF has a policy that any standards track technology has to have two interoperable implementations (although that policy isn't always followed all that closely in recent years).

Is there any implementation of this, or any, proposal? Or do the proponents bring a paper spec to the committee to discuss theoretically, evaluate, compromise, and accept; only then implemented with no particular guarantee that it can be done usefully?


No, implement a proposal, demonstrate that it is worthwhile, and then standardize it.


Anything by committee, really.


I also read that as Google will refuse to implement features if it doesn't get its way in how they are designed.

Perhaps the people providing the Environment shouldn't be the people designing the language?

Google is very quickly turning into 1990's Microsoft.


> I also read that as Google will refuse to implement features if it doesn't get its way in how they are designed.

There's no reason for Google to do this, in general, since there are enough Googlers in enough standards groups (all of which operate by consensus) that they can simply object to changes they don't like (like other members of the committee).


> Google trying to strong-arm control of JS for themselves.

The person championing the proposal was from Google too; this narrative doesn't make sense. Let's not conjure up conspiracy theories if "programmers disagreeing about something" adequately explains the data.


To be completely honest, they are the maintainers of one of the largest Javascript implementations.

It really only makes sense for them to have significant input on the features that make it into the language.

Mozilla, Microsoft, and Apple could have done the same thing, as they all write the software, and that only seems fair. If there is a significant reason why something won't work in one or more of the major engines, it might need to be rethought.

I'm not sure why everyone is jumping on this like it's some kind of nefarious thing. What would google have to gain from killing this proposal?


When the people designing the language are the same people providing the implementation, it leads to vendor lock in.

Remember Microsoft in the 1990's, remember the massive Antitrust case about IE?


The antitrust case about IE was related to Microsoft using their position in the operating system market to give them an unfair advantage in the browser market. It had nothing to do with the embrace-extend-extinguish which they attempted on JavaScript.


They couldn't feasibly attempt EEE on JS without Internet Explorer's dominant market share.


Plus the advantage they wanted in the browser market (to strangle Netscape dead, in their own words) was purposefully to artificially maintain their monopoly in the OS market.


I like to complain about big companies too, but Google isn't trying to strong arm control of JS. They hire a lot of smart programmers with strong opinions who express them and it seems like a lot of those were vocal enough to make this happen. Just how it is.


And did so behind the scenes within Google rather than in public?


Right, this exactly.

Also, why is every response with a hint of negativity towards google getting down voted?

If Google was doing the right thing, they wouldn't have to be shielded from online criticism.


Because clearly right now isn't when the transparency is going to come. It might come soon. If we don't hear anything then yes that's a problem, but clearly even the person who wanted this proposal doesn't want to talk.

This does not appear to be avoiding transparency, but just some people had a VERY heated discussion about this and are burnt out.

Mind you, I'm a big user of Go for personal projects and so I've seen Google make its unilateral decisions before. I just don't think this is one of those situations.


I half-agree. If this issue was discussed in public rather than in private, perhaps the details could have been worked out without the whole proposal being withdrawn.

But because of the Github thread and HN thread, this proposal is finally getting a fair shake by the public dev community. So, that's good at least :)


> this proposal is finally getting a fair shake by the public dev community

This says a lot about the pain of getting anything standardized, honestly. The proposal has been in the works for two years, essentially in its current state for over 6 months, and it wasn't given what you consider a "fair shake" until there was some (imagined or not) soap opera drama.


Didn't some high-ranking Googler basically adopt "if you have nothing to fear, you have nothing to hide" a while ago? Wonder what they're afraid of.


It's only a coincidence that the most vocal opponents to this proposal each receive a paycheck from Google?

I'm starting to wonder why these TC's have so many Google employees, and you can claim it's because "they hire a lot of smart programmers", but the ratio of Googlers to not is way too high to represent the web development community as a whole - the majority of whom do NOT work for Google.


Google is honestly not alone in this; tail call elimination is being blocked by Safari, and WebSQL was blocked by Mozilla.


In both of those cases you mention I'm in agreement with them, but there was a bit of "shock and anger" when I first heard about them.

And I'm treating this the same way. I'm gonna reserve judgement until after Google have spoken about their reasoning.


This is emphatically NOT true about tail calls and Safari -- the JSC has already implemented proper tail calls, and they were the first to do so.


Safari was against an explicit tail call syntax [0], and as part of that was against "TCO" as an optional optimization; however, they were the first to implement PTC.

[0] https://github.com/tc39/ecma262/issues/535


To be fair, Google could easily kill the feature by refusing to implement it in Chrome, TC39 or not.


Yeah it's getting pretty worrying, they're just trying to turn the web into their platform in every way.


promise for cancelable promises has been cancelled, how poetic


And the author unsubscribed from the subject.

Context: https://github.com/tc39/proposal-observable/pull/97


TC39 member here.

cwmma gets a lot right here: Promises have always been contentious in JS (and TC39) and Domenic has indeed had the patience of a saint attempting to wrangle the various points of view into a coherent proposal.

TC39 as a group is generally very motivated to find positive-sum outcomes and find solutions that address everyone's constraints in a satisfactory way. That doesn't usually mean design-by-committee: champions work on a coherent design that they feel hangs together, and the committee provides feedback on constraints, not solutions.

As a member of TC39, I'm usually representing ergonomic concerns and the small-company JavaScript developer perspective. I've had a lot of luck, over the years, in giving champions my perspective and letting them come back with an improved proposal.

The staging process (which I started sketching out on my blog[1]) has made the back-and-forth easier, with each stage representing further consensus that the constraints of individual members have been incorporated.

Unfortunately, I fear that promise cancellation may be a rare design problem with some zero-sum questions.

It's worth noting that there has been no objection, on the committee, to adding cancellation to the spec in some form.

The key questions have been:

First. Is cancellation a normal rejection (a regular exception, like from `throw`) or a new kind of abrupt completion (which `finally` would see but not `catch`)? The current status quo on the committee, I believe, is that multiple people would have liked to see "third-state" (as Domenic called it) work, but the compatibility issues with it appear fatal.

Second. Should promises themselves be cancelled (`promise.cancel()`) or should there be some kind of capability token threaded through promises?

What that would look like:

    let [sendCancel, recvCancel] = CancelToken.pair();
    fetchPerson(person.id, recvCancel);

    async function fetchPerson(id, token) {
      // assume fetch is retrofitted with cancel token support
      let person = await fetch(`/users/${id}`, { token });
    }

    // when the cancel button is clicked, cancel the fetch
    cancelButton.onclick = sendCancel;

This approach had many supporters in the committee, largely because a number of committee members have rejected the idea of `promise.cancel()` (in part because of ideological reasons about giving promise consumers the power to affect other promise consumers, in part because of a problem[2] Domenic raised early about the timing of cancellation, and in part because C# uses cancel tokens[3]).

In practice, this would mean that intermediate async functions would need to thread through cancel tokens, which is something that bothered me a lot.

For example, it would have this effect on Ember, if we wanted to adopt cancel tokens:

    // routes/person.js

    export default class extends Route {
      async model(id, token) {
        return fetch(`/person/${id}`, { token });
      }
    }

In other words, any async hook (or callback) would need to manually thread tokens through. In Ember, we'd like to be able to cancel async tasks that were initiated for a previous screen or for a part of the screen that the user has navigated away from.

In this case, if the user forgot to take the cancel token (which would likely happen all the time in practice), we would simply have no way to cancel the ongoing async.

We noticed this problem when designing ember-concurrency[4] (by the venerable Alex Matchneer), and chose to use generators instead, which are more flexible than async functions, and can be cancelled from the outside.

At last week's Ember Face to Face, we discussed this problem, and decided that the ergonomic problems with using cancel tokens in hooks were sufficiently bad that we are unlikely to use async functions for Ember hooks if cancellation requires manually propagating cancel tokens. Instead, we'd do this:

    // routes/person.js

    export default class extends Route {
      *model(id) {
        return fetch(`/person/${id}`);
      }
    }

The `*` is a little more cryptic, but it's actually shorter than `async`, and doesn't require people to thread cancel tokens through APIs.

Also notable: because JavaScript doesn't have overloading (unlike C#), it is difficult to establish a convention for where to put the cancel token ("last parameter", vs. "the name `token` in the last parameter as an options bag" vs. "first parameter"). Because cancellation needs to be retrofitted onto a number of existing promise-producing APIs, no one solution works. This makes creating general purpose libraries that work with "promise-producing functions that can be cancelled" almost impossible.

The last bit (since I started talking about Ember) is my personal opinion on cancel tokens. On the flip side, a number of people on the committee have a very strongly held belief that cancel tokens are the only way to avoid leaking powerful capabilities to promise consumers.

A third option, making a new Task subclass of Promise that would have added cancellation capabilities, was rejected early on the grounds that it would bifurcate the ecosystem and just mean that everyone had to use Task instead of Promise. I personally think we rejected that option too early. It may be the case that Task is the right general-purpose answer, but people with concerns about leaking capabilities to multiple consumers should cast their Tasks to Promises before passing them around.

As I said, I think this may be a (very, very) rare case where a positive-sum outcome is impossible, and where we need, as a committee, to discuss what options are available that would minimize the costs of making a particular decision. Unfortunately, we're not there yet.

Domenic has done a great job herding the perspective cats here, and I found his presentations on this topic always enlightening. I hope the committee can be honest enough about the competing goals in the problem of cancellation so that Domenic will feel comfortable participating again on this topic.

[1]: https://thefeedbackloop.xyz/tc39-a-process-sketch-stages-0-a...

[2]: https://github.com/tc39/proposal-cancelable-promises/issues/...

[3]: https://msdn.microsoft.com/en-us/library/dd997289(v=vs.110)....

[4]: http://ember-concurrency.com/#/docs


Thank you for this response. It's very well thought out and nuanced. I'm not sure what the OP had to deal with, specifically, but I share the same opinions as yourself.

Promises themselves are generic async handling work horses. Cancelling an HTTP call makes sense. Does it make sense to cancel an operation currently happening in, say, a loop? Likely not.


Cancellable Promises have been a reality in our software stack for years in Liferay and related open source projects. JavaScript is a flexible language and this is what makes it what it is; it fits into different realities and skill levels. Optionally, you can use Object.defineProperty() to define how your object behaves, whether it's configurable, enumerable, writable and so on. You can Object.freeze() to freeze an object. Having a Promise where the consumer can define whether or not it can be cancelled makes sense in general. There are lots of use-cases where the consumer can safely decide whether canceling a promise or a chain of promises is problematic or not, e.g. ajax calls, the async life-cycle of a UI component.

There are many good arguments about cancellation tokens vs promise.cancel() on ES Discuss https://esdiscuss.org/topic/cancellation-architectural-obser.... One argument that is not accurate is that "cancellation is heterogeneous it can be misleading to think about canceling a single activity". In most systems, it's implemented expecting that async operations can be cancelled, intentionally or not (network problems for example). There is no single answer for this problem, it can be wrong and dangerous, or it can be safe and predictable, that all depends on how the consumer utilizes it.

Domenic mentioned in the proposal https://github.com/tc39/proposal-cancelable-promises/issues/... that most of Googlers on TC39 are against cancellable promises, on the other hand Google Closure Library has a very robust implementation brought from labs in 2014 by Nanaze https://github.com/google/closure-library/commit/74b27adf7. It's heavily used on large real world async applications, such as Gmail, G+, Docs. It shows clearly that there is a real space for cancelling promises without being dangerous.


Thank you! This is helpful. Do you believe there is no positive sum because Promises are out there already, and this could have been avoided had they been released with cancellation in the first place?


I think it's very likely that we could avoid turning this into a zero-sum debate with one total-winner and one total-loser.

That said, I think there's no positive sum because some folks believe that (1) Promises should be the primary async story in JS, and (2) Promises must not allow communication between two parties who hold a reference to the Promise. This means that `async function`s must return a Promise, and the Promise must not have a `cancel()` method on it (because it would allow communication between two parties holding references to the Promise).

Others (I'll speak for myself here) believe that the return value of `async function` should be competitive (ergonomically) with generators used as tasks (I showed examples in the parent comment). Since generators-as-tasks can be cancelled (via `.return()` and `.throw()`), the desire to make `async function x` as ergonomic as `function* x` conflicts with the goal of disallowing the return value of async functions from being cancellable.
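
To make that concrete, here's a minimal, simplified sketch (this is not Ember or ember-concurrency code, and the endpoint is a placeholder) of why driving a generator as a task makes cancellation nearly free:

    // The caller drives the generator, so it can simply stop driving it
    // and call .return() to trigger any try/finally cleanup inside.
    function runTask(genFn) {
      const it = genFn();
      let cancelled = false;

      function step(prev) {
        if (cancelled) return;
        const { value, done } = it.next(prev);
        if (!done) Promise.resolve(value).then(step);
      }
      step();

      return {
        cancel() {
          cancelled = true;
          it.return(); // runs the generator's finally blocks
        }
      };
    }

    const task = runTask(function* () {
      try {
        const res = yield fetch('/slow-endpoint'); // placeholder URL
        yield res.json();
      } finally {
        console.log('cleanup runs whether we finished or were cancelled');
      }
    });
    task.cancel();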

In Ember's case, since generators already exist, it's hard for us to justify asking application developers to participate in an error-prone (and verbose) protocol that we could avoid by using generators-as-tasks instead. And that is likely the conclusion that we will ship (and the conclusion that ember-concurrency has already shipped).

For me, the bottom line is that we have very little budget to introduce new restrictions on `async functions`, because people can always choose to reject async functions and use generators instead (with more capabilities and less typing!). I think the cancellation token proposal is well over-budget from that perspective.


In my opinion, having a separate cancellation token that is "threaded through" the call stack is indispensable, because sometimes you need to do more than simply pass along the token that you've got - usually it involves linking several tokens together, e.g. linking it with another token representing the state of the component as a whole, as opposed to that particular call chain.

Now, this doesn't happen often, and most of the time you do indeed end up just passing the token along. But when you do need it, it allows for code drastically simpler than any workarounds.
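
A minimal sketch of what "linking" could look like, using an invented token type (illustration only; this is not the TC39 proposal's API, nor .NET's CancellationTokenSource):

    // Hypothetical token type, invented for illustration.
    class CancelToken {
      constructor() {
        this.cancelled = false;
        this._listeners = [];
      }
      cancel() {
        if (this.cancelled) return;
        this.cancelled = true;
        this._listeners.forEach(fn => fn());
      }
      onCancel(fn) {
        if (this.cancelled) fn();
        else this._listeners.push(fn);
      }
      // A token that cancels when ANY of its parents cancel.
      static any(...tokens) {
        const linked = new CancelToken();
        tokens.forEach(t => t.onCancel(() => linked.cancel()));
        return linked;
      }
    }

    const componentToken = new CancelToken(); // whole component torn down
    const requestToken = new CancelToken();   // just this call chain
    const token = CancelToken.any(componentToken, requestToken);
    token.onCancel(() => console.log('stop the work'));

Most calls just forward `token` unchanged; the linking step only appears at the boundaries where two lifetimes meet, which is the case described above.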

My takeaway from this is that first-class cancellation tokens are the right approach, but languages need some kind of syntactic sugar to eliminate, or at least reduce, the verbiage for the most common case of propagating it around.

(All of this is based on experience working with a heavily async/await codebase written in C# for the past few years.)


> My takeaway from this is that first-class cancellation tokens are the right approach, but languages need some kind of syntactic sugar to eliminate, or at least reduce, the verbiage for the most common case of propagating it around.

That is also my perspective, but I think the syntactic sugar cannot have much more overhead than `async function`.


The good thing is that it's also a pattern that is highly amenable to syntactic sugar, simply because of how regular it is.

Actually, come to think of it, it is a particular instance of a more common pattern where you receive some state, and propagate it to most (usually, all) further calls that expect it. Scala implicit parameters and variables cover this in the most generic way.


makes me think about enabling workers (or threads in node) to cancel promises from other workers or threads (serialization?)


After something like that I would leave the company, probably. If something you care about so deeply encounters such strong internal opposition, there is probably a misaligned design vision, and it's better to put your efforts where they are better accepted, because otherwise the same story is very likely to repeat. And you know... life is too short.


There are a lot of stakeholders in a large organization, and a change can sometimes elicit strong reactions from involved parties. In particular, in the JavaScript world, implementors are now also explicitly represented in the standardization process for new language features, since, given some of the preexisting warts, there can be nontrivial interactions, and many JS abstractions have holes in them. A lot of language features have been rejected for a lot of different reasons. To be fair, some really unfortunate language features have been _accepted_ that have had unforeseen interactions or implementation challenges. I could name half a dozen off the top of my head.

Disclaimer: I work on V8 but was not involved in this saga at all. I don't know which Googlers in particular Domenic is referring to in his comment. And of course I make no judgment on the particular value of cancelable promises.


I don't think this situation would be worth quitting one's job over. So you didn't get your way after arguing with your peers over it, it happens. As you say, life is short.


Once it becomes detrimental to your Mental Health, all options are on the table.


Depends. You may end up being "that guy with the stupid idea" for the rest of your time there.


Only if that's the only idea you ever have.



Still, the debate doesn't happen publicly. I don't know why, or who opposes it. Google has no official position on it. But someone undisclosed with sufficient power at Google opposes it, and doesn't share his point of view with the community... Sad day for open source.


It's typically written up in meeting notes and someone publishes them if it happens during a meeting. If it happens outside of a meeting it likely happened in GitHub or the ECMAScript mailing list.

The person(s) opposing have likely already shared their viewpoint with the community at different stages. Proposals are not simply created and championed with zero feedback from the ECMAScript folks until they're proposed at a meeting.


Serious question - why is cancelling a promise a reasonable use case? Doesn't cancelling potentially invite non-deterministic state into your program? If not, what would be the difference from throwing?


The presentation about the proposal linked in a comment above [0] and the proposal itself [1] explain the difference from an exception quite well.

[0] https://docs.google.com/presentation/d/1V4vmC54gJkwAss1nfEt9...

[1] https://github.com/tc39/proposal-cancelable-promises/blob/0e...


The use case I heard was if for example the user clicks to load a view, which launches a background request, but then changes their mind and clicks to load a different view, which launches a different background request.

The user no longer cares about the original request. You could just discard the data when it arrives, but what if it's an expensive request for the client, or server, or both? You'd want a way to opt out of all that extra load on the system.


I have been waiting for cancellable promises for a while, primarily for cancelling long-lasting fetch requests, which is a very common need.

(Not really sure how to parse the part about non-determinism and throwing being related to cancelling promises.)


What's a good existing way to deal with this use-case? Is there a useful library that wraps setTimeout / other logic to determine the quality of a user's internet connection in case of very poor connectivity (2G etc) ?


The way to handle this case is to use Promise.race with 2 promises. The first promise is your logic. The second promise is rejected after a timeout. For more detail, read the "never calling the callback" section from chapter 3 of the "You Don't Know JS" book on async: https://github.com/getify/You-Dont-Know-JS/blob/master/async...
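
A small sketch of that pattern (the URL and timeout are placeholders); note, as the reply below points out, that losing the race doesn't abort the underlying request, it only stops you from waiting on it:

    function timeout(ms) {
      return new Promise((_, reject) =>
        setTimeout(() => reject(new Error('Timed out after ' + ms + 'ms')), ms)
      );
    }

    Promise.race([fetch('/api/data'), timeout(3000)])
      .then(res => console.log('got a response in time', res.status))
      .catch(err => console.log('slow connection or failure:', err.message));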


But that doesn't actually cancel the request, and thus the data is downloaded anyway? Or am I missing something?


Thanks I'll check it out


In React Native, NetInfo has an API to detect network changes, and I use that to stop all my setIntervals and start again with a different set of args, including wait time before calling again.


This was deterministic (cooperative).

And yes, you can absolutely handle it by throwing an exception. But cancellation is such a common mechanism that you'd want to have 1) a standard exception type that can be used to indicate it, and that other async operations can handle in a composable fashion, and 2) a standard mechanism to request cancellation cooperatively, again, so that various layers can collaborate on handling a high-level cancellation request all the way to the lowest level like an I/O read.

And that's what this proposal was all about.
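
Roughly, the shape being described looks like this (names invented purely for illustration; the actual proposal defined its own Cancel type and cancel tokens):

    // (1) A recognizable "this was cancelled, not a failure" error type.
    class CancelError extends Error {
      constructor() {
        super('Operation was cancelled');
        this.isCancellation = true;
      }
    }

    // (2) A cooperative signal that lower layers check between chunks of work.
    async function copyChunks(chunks, shouldCancel) {
      for (const chunk of chunks) {
        if (shouldCancel()) throw new CancelError();
        await new Promise(r => setTimeout(r, 100)); // stand-in for real I/O
      }
      return 'done';
    }

    let cancelled = false;
    copyChunks(['a', 'b', 'c'], () => cancelled)
      .catch(err => {
        if (err.isCancellation) console.log('cancelled cleanly');
        else throw err;
      });
    cancelled = true; // higher layer requests cancellation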


A common case is typeahead completion. You have a request to the server in flight to get typeahead suggestions, but the user has entered more characters and triggered another request. Now you would like to ignore the old one. Since async responses are not guaranteed to arrive in the same order the calls were sent, it would be nice to cancel the old one instead of maintaining logic to ignore it.
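
That "logic to ignore it" tends to look roughly like this sketch (the endpoint and render helper are made up), which is exactly the bookkeeping cancellation would make unnecessary:

    let latestRequestId = 0;

    async function typeahead(query) {
      const requestId = ++latestRequestId;
      const res = await fetch('/suggest?q=' + encodeURIComponent(query));
      const suggestions = await res.json();
      // An older response can arrive after a newer one; drop it.
      if (requestId !== latestRequestId) return;
      renderSuggestions(suggestions); // hypothetical UI helper
    }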


I'm not really familiar with the backstory here but it sounds like domenic was being seriously pressured/coerced to drop the proposal. What happened?


It sounds to me like most of the other people were disagreeing with him.


It was a case of practicality versus the (for lack of a better word) perfectionist arguments from the other side.

I disagree with none of them but I think the discussion was a bit too intense.


Can't tell if "most" or "small group - but enough" people.


"vocal minority"


Funny thing is, they didn't stop it because it was Bad, but because some people at Google didn't want it.

(probably because it was Bad, but it's sad to see that one company has so much power)


Based on the comments in the linked thread, it sounds as if all participating companies have that same veto power. The justification seems to favor filtering out false positives over false negatives, in a sense. I suppose I can agree with that.


Yes.

Getting crap into the language is worse than not getting good stuff in.

But did they really veto here?

As I read a few days ago, the guy doing the proposal was simply forbidden by Google from working on it anymore.

I wasn't in favor of this proposal and think this is one of the better outcomes, but the way it came to be dropped was a bit strange.


This is what I'm seeing. I wasn't really a supporter of the feature... but I wasn't going to try and tear it down. Google has limited resources, so, since they didn't see any usefulness for their business model, they shot it down. They've instead chosen to throw some resources into evangelizing AMP - a technology that I hope dies quickly.


Cancellation can be implemented with another promise passed into the cancelable function as an argument. This way it is easier to see that an asynchronous function is cancellable. And the Promise implementation would become simpler.
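
A minimal sketch of that approach (the delay function is just a stand-in for any cancellable async operation):

    function delay(ms, cancelPromise) {
      return new Promise((resolve, reject) => {
        const id = setTimeout(resolve, ms);
        cancelPromise.then(() => {
          clearTimeout(id);               // stop the underlying work
          reject(new Error('cancelled')); // surface cancellation as rejection
        });
      });
    }

    let cancel;
    const cancelPromise = new Promise(resolve => { cancel = resolve; });

    delay(5000, cancelPromise).catch(err => console.log(err.message));
    cancel(); // logs "cancelled" almost immediately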


was it an actual mid-computation cancellation with collaboration from the VM?

I have done some heavy computations in JS, and it's a nightmare: your workers can be killed (I guess it's the browser saving resources) without any event escaping them, and WebGL always blocks the main thread (I had to chunk my computation myself).


No, it's for I/O ops. Think cancelling a fetch() like you can do with XHR.


Ah! That seems like the easy stuff.


But fetch() returns a promise, rather than a handler you can use to cancel the request. So some people wanted to add "cancel" as a third potential state of a promise.

Which smells a bit like a hack to me. Why stop at cancel? The Fetch API shouldn't return a promise, maybe?

But I'm assuming a lot of people gave it a lot of thought. Maybe it is common enough. Maybe it is horrible to implement in a performant way.

But so far, the new ES features I've seen should have been criticized more, not less. Look at the ugly mistakes that did make it:

    f => { pizza }
Is the block on the right a statement block or an object literal? It's a statement block, so if your lambda function returns an object literal then you need to wrap it in a statement block with a return statement. If it returns anything else, then you don't.

That's the horrible crap that actually gets through the process. Imagine how naive and broken the propositions must be that get killed?


> if your lambda function returns an object literal then you need to wrap it in a statement block with a return statement

Or you can just parenthesise the object literal:

    >>> (x => ({ x }))(23)
    Object { x=23 }
The grammatical ambiguity is unfortunate, sure, but the simplest work-around isn't so bad.


If one forgets the parentheses, you may or may not get an error, immediately or later, _somewhere_.

It isn't about the workaround -- it's about all the subtle bugs it will introduce in the edge-cases, for example:

     const f = () => {};
     f(); // undefined instead of an empty object


It is a statement block. You need to say

    f => ({ pizza })
to get an object literal. This isn't that hard to understand.


But it's easy to forget, just once. For example:

    const x = () => {};
    x(); // undefined instead of an empty object
This can lead to immediate syntax errors, eventual thrown errors, and silent semantic bugs.

Which reinforces the whole argument that they should think a bit harder about all the new features.



I'm trying to understand why Google people don't want cancellable promises?

Did I miss an explanation somewhere?


They probably want some form of cancellable promises, but think that the semantics and syntax in this proposal were bad. I agree with that.


Cases like these show how needed transparency is (see Mozilla). Hopefully this will cause Google to open up some more (although I doubt it).


They are being transparent; there is a scheduled meeting of everyone involved in January.

And using this comment to rant a bit: I hate the idea that just because information isn't announced immediately after someone finds out about something, you aren't being "transparent".

In this case, this was most likely discussed internally, and come January they will present the results on why it was canceled, giving insight into the why, and perhaps its replacement or their ideas moving forward.

Now, if come January all that's said is "it's canceled" and nothing more, then I'll be right there with you calling for transparency and more discussion of the reasons, but you need to give the people involved a chance to actually have that discussion.


I have a different view on 'transparency': something is 'transparent' if you can see the internals of that thing at any given moment. Time is a critical factor for transparency: What if, instead of the next meeting being in a month (where we hope this will be discussed), it was in a year? 2 years? 10?

People have finite attention spans. There's probably a well-intentioned, innocuous explanation as to why the process has been set up like this: I imagine the time delay is intended to give people time to consider the issue (reducing the chances of irrational emotional outbursts, increasing the chances of rational discussion).

However, introducing a delay between 'decision' and 'revelation' of the internal machinations behind the decision creates greater scope for people to act in bad faith. This seems at odds with what (I think) is the function of transparency.

Whether this is an acceptable trade-off is a separate issue.


The reason they have to have the announcement is because they discussed this and made the decision themselves in a non-transparent way.

Or if that's not your definition of transparent, then what we want is more than just mere transparency. We want open discussion. I'm getting sick and tired of "open source" only happening when Google says so.


But we already got our pitch forks out and everything!


The proposal has been withdrawn. Why would it be on the agenda in January?


I use Promises extensively, including cancellations (via the Bluebird library), and find the following to be strong reasons never to use cancellations, if they are to be implemented in a similar way to how Bluebird implements them:

1) Forward propagation side effects: canceling a promise doesn't mean rejecting it. It means that further .then() or .catch() clauses are not invoked. This breaks (imo) a fundamental tenet of promises.

2) Backwards propagation side effects: if you are waiting on a promise and cancel it, this causes the previous promises in the chain to suddenly stop processing, again without any rejection; it just stops.

3) Simple workaround: I don't think it is appropriate to backwards-propagate promise cancellation, but it is very easy to emulate cancellation for downstream users (those using your promise). Simply attach a "pleaseAbort=false" property to your promise, and if a downstream caller wishes to cancel, they set .pleaseAbort=true and you can stop your processing and return a rejected promise.
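
A sketch of that workaround (the URLs are placeholders; the property name is the one suggested above):

    function fetchAllPages(urls) {
      let abortRequested = false;
      const promise = (async () => {
        const results = [];
        for (const url of urls) {
          if (abortRequested) throw new Error('aborted by consumer');
          const res = await fetch(url);
          results.push(await res.json());
        }
        return results;
      })();
      // Expose the flag on the promise itself for downstream callers.
      Object.defineProperty(promise, 'pleaseAbort', {
        get: () => abortRequested,
        set: value => { abortRequested = value; }
      });
      return promise;
    }

    const p = fetchAllPages(['/page/1', '/page/2']);
    p.catch(err => console.log(err.message));
    p.pleaseAbort = true; // checked before the next chunk of work starts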


It would've certainly helped to avoid this: https://facebook.github.io/react/blog/2015/12/16/ismounted-a...


Can someone give a good use case for these that can't be easily solved with existing tools? I'm really struggling to find a good example. I seem to have this problem for a lot of new stuff introduced into Javascript...


Unlike an XMLHttpRequest which can be aborted, the new fetch() API does not provide a way for you to abort the request. This is because there is no standard way to cancel a Promise which is what fetch() returns.
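
For contrast, at the time of this discussion an XHR gives you a handle you can abort, while fetch() hands back only a Promise (the endpoint below is a placeholder):

    const xhr = new XMLHttpRequest();
    xhr.open('GET', '/slow-endpoint');
    xhr.onload = () => console.log(xhr.responseText);
    xhr.send();
    xhr.abort(); // stops the in-flight request

    fetch('/slow-endpoint')    // only returns a Promise...
      .then(res => res.text())
      .then(console.log);      // ...so there is nothing to call abort() on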


So.... we introduced Promises so that we could introduce fetch() that returns a Promise, only to discover that it meant we couldn't get all the other useful behavior of XMLHttpRequest. So while a proposal to add cancelable Promises would fix the inability to abort, we'd still need yet more additions to completely get back to functionality we had previously.


That wasn't the reason promises were introduced. An XMLHttpRequest can be synchronous, which is Bad™, and that is something explicitly not allowed by the Promises/A+ spec, which means it will never be something that fetch() could do. The advantage of promises is that they can be `await`ed in the context of an `async` function, which takes a lot of the cognitive load off of the developer.


Furthermore the `await` syntax is one of the reasons why cancelable promises is contentious.

There's no easy way to "cancel" an awaited promise without basically resorting to the .then() syntax.


Can't you use bluebird promises with fetch to add cancellation? I haven't tried it. I suppose you may want abort semantics but those appear to be impossible to implement in a generic way.


Why should you be satisfied with using tools? If the feature is compelling enough, it should be included in the standard library.


Because a tool can be replaced, it can be upgraded, it can have multiple competing implementations that will work across browsers and even languages. Tools can be easily deprecated, removed, changed, fixed, and improved. Tools can be made much simpler and faster by only supporting a single use case, or they can be slower and more complicated but solve them all.

Language features are much more set in stone, mostly because breaking backwards compatibility in a language implementation is not something to take lightly. You don't have the option of only supporting a subset of a feature for a massive speedup, it's all or nothing (for an example of this, see benchmarks between the native .forEach and alternate implementations which are much faster but cut some corners).

JS is still feeling the pains of mistakes made in its early days, and throwing more features which aren't fully thought out into the language is a recipe for disaster. (This does not mean that I think every feature added to JS is perfect, or even warranted. It's just the reasoning for not trying to add some features which could be better addressed with tooling.)


I might be talking past you here, but I think promises definitely qualify as a language feature for JS.

Promises get baked deep into API signatures.

Getting the tradeoff wrong leads to the horror of, e.g., the situation for years in C++ where every major library had its own competing string class. Even if std::string was not the best possible string class of all possible string classes, just being able to call all the things without wrapping or thunking or templating everything fixed a major source of pain.

We have already seen this in JS to some extent with various libraries offered in versions supporting native or q or bluebird promises. Fortunately JS is a lot more forgiving than C++ in this respect due to its dynamic nature.

Anyway, the fact that they will tend to appear somewhere in API signatures for any large library is why I support native promises as language feature, not tooling.


That's a really good point, and it's actually close to the reasoning that `class` stuff was added to JS.

Since everyone was making their own incompatible ones, they decided to standardize it even though it really shouldn't have existed in the language at all.

I don't know; personally I'm "neutrally against" them. I'm not going to be upset if they make it in, but I'm not going to personally use them, and I don't really feel they are needed, as the problems they solve can be better solved by other methods/architectures (some of which admittedly aren't in browsers yet).

That being said, I'm really hoping that the opponents of it come out in January and explain their reasoning.


Fair point. Thanks for your input.


To clarify, I was talking about existing tools in the standard library


Sounds like MIT vs Worse is Better


For anyone else left clueless by title: a proposed new feature for JavaScript, as in ECMA TC39, was "cancellable promises", trying to handle "cancellation" of async actions ("promises") by extending normal try-catch structures with a "canceled" case. Due to unspecified internal Google drama, this (presumably popular) proposal has been withdrawn.

Here are some slides with technical details of the proposal:

https://docs.google.com/presentation/d/1V4vmC54gJkwAss1nfEt9...


So, I'm not saying they were wrong or right, but it is interesting to see what happens when the "consensus" is a major corporation.

I'll make a prediction now. We'll be seeing things like this happening in the Microsoft code base in a few years. I'm a huge fan of Microsoft and their open source effort, but I'm not above believing that corporate interests win in the end.

EDIT: Thanks for the downvotes, care to explain where I'm wrong? I'm sharing something called an opinion.


Your opinion is wrong, because it's out of context.

Google, in this context, is the developer of one of the main implementations. It is also many other things, which do not matter. You might equally well say "what happens when the consensus is a six-letter word". You're being downvoted for dragging in irrelevant and inflammatory rubbish.

I don't know, but I wouldn't be surprised if the C++ committee has a similar consensus goal, possibly informal, and that gcc has an effective veto. It's a major implementation. Is that bad? No?


FTA:

> One of the important priorities in TC39 is to achieve consensus - which means that any employee of any member company can always block any proposal. It's far far better for anyone to be able to block, than to arrive at a spec that "not everyone" is willing to implement - this is, by a large large margin, the lesser evil, I assure you.


Maybe I just haven't been part of software development in a large enough company, but I don't think a vote of all-or-none is the best idea. This of course is just my opinion and it is based on ZERO proof. I feel like it should at least be majority rules.

My concern would be if Google would oppose something desperately needed by the community because the feature they are blocking for will benefit them in some way financially. I'm sure it's a hard case to make, but I am curious nonetheless.


Consider your hypothetical: a browser vendor doesn't want to implement some feature. Because of your proposed process change, the feature gets rammed through into the standard over that browser vendor's objections. They probably won't implement it (their objections remain, despite the political process). The standard is now a "standard", and we're right back to the browser wars again.


It's kind of funny to be in a world where Microsoft gets the benefit of the doubt (the 90s are rolling over in their graves).

> I'm a huge fan of Microsoft and their open source effort, but I'm not above believing that corporate interests win in the end.

You can save yourself some suspense: yes corporate interests win. That's a result of a business model where corporations serve faceless shareholders. You are either making money or you aren't. The corporation is a soulless entity that workers and money pass through.

This seems like pretty common human behavior, no?

On a less cynical note, this kind of behavior is why a language designed by a bunch of corporations (JS) seems less and less interesting. Good luck practically influencing development if you aren't embedded in an organization already.


Oh yeah, I'd count project.json in dotnet core to be an example of that already. The community was told it would be pulled eventually with no real involvement from the community.

EDIT: Or just robo-downvote me, lol instead of presenting a counterpoint.


It was not like they didn't have a reason. project.json was completely incompatible with the rest of their build chain, and changing the chain would be a colossal undertaking that would mean they had to support two different build systems to remain backwards compatible.

I'm not in favor of the msbuild system at all, it's not good. But it works, it's reasonable once you learn it (like most other build systems), and since they're footing the bill and have to keep the entire system in a coherent state, they decided what they did.

The only reason we saw it was that they've opened up their development process a lot earlier than they used to.


Totally agree, I think they realized what they'd done once they dove into the deep end. I won't criticize them for pulling that feature back; it was just the way they went about it. I can't be too hard on them, I'm sure they've learned a ton about maintaining source publicly since then.


Serious question, is there a build chain out there that anyone is really happy with?


Serious answer:

No.


project.json was pulled due to very strong negative feedback from the community. Not sure how you missed that, but it's actually an example of Microsoft changing course due to significant community involvement.


I'm willing to accept that I'm wrong, but can you provide the discussions around that? I did somehow miss it. The only discussions I saw were hand wringing from people about not being involved enough, and these weren't small threads.

Thank you for correcting me.


> when the "consensus" is a major corporation

What do you mean by that? If consensus is required, any member can veto a proposal, not just those belonging to the major corporations.


I probably should clarify. I mean when consensus is a group that is making decisions with a potential financial impact to their employer.

Google cancelled a proposal because reasons, which will be found out in January apparently.


"Unspecified internal Google drama" may well have been "unspecified internal Google technical arguments", yes?


In my head, I'm imagining them furiously whiteboarding architecture while someone is way over-sensitive that their ideas aren't being considered. Of course this probably isn't true.

I have an active imagination.


No, I suspect that may be clairvoyance.


I would say that's quite unlikely. Granted, I have never worked at Google, but it seems that employers have quite a difficult time gathering more than 1-3 employees per group who can disagree on technical details without pitching a fit and harboring a major grudge. My employer's culture has recently entered a nosedive triggered by just this thing. Hopeful, but not expectant, of a recovery.


> presumably popular

Proponents were definitely vocal about it, but I don't know how well that translates into actual popularity; at least not like async/await. Nor does it seem to be the type of thing where all sufficiently experienced experts agree. As far as I can tell, it's been a divisive topic among the intelligentsia since day one. Correct me if I'm wrong.



