JavaScript Coercions Grid (getify.github.io)
86 points by bfoks on March 30, 2019 | 40 comments



JavaScript gets a lot of flak for weird coercions, but I never found them a big source of errors compared to, say, more basic things like forgetting function arguments or getting them in the wrong order. Coercions are also much less of a problem in TypeScript, because coercions that change the type get flagged at compile time.
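For example (the compiler message is quoted from memory, so treat it as approximate):

  "5" * 2;  // plain JS happily coerces: 10
  // TypeScript rejects the same expression at compile time, roughly:
  //   error TS2362: The left-hand side of an arithmetic operation must be of
  //   type 'any', 'number', 'bigint' or an enum type.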


Agreed. These corner cases are weird only because there's basically no sensible thing to do in these situations that would be consistent with the rest of the rules. Since there's no real reason you'd be doing any of these weird coercions, I think it's pretty unlikely that they'd result in bugs. It's not fail-fast, but I don't see this stuff resulting in any subtly incorrect behaviors.


What about calling toString() on an object? I think serializing the object would be more useful than [object Object]. I'm sure JSON.stringify didn't exist originally, so it wasn't an option. I guess it must be for backwards compatibility, but I can't think of much of a use case. Maybe something somewhere uses it instead of typeof?
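For illustration, here's the default next to JSON.stringify, plus a sketch of how an object can opt in to a friendlier string form (the override is just an example, not a recommendation):

  const obj = { a: 1 };
  String(obj);          // "[object Object]" (Object.prototype.toString)
  JSON.stringify(obj);  // '{"a":1}'

  // An object can override toString to control its own coercion:
  obj.toString = function () { return JSON.stringify(this); };
  "" + obj;             // '{"a":1}'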


> there's basically no sensible thing to do in these situations

Are there any better options for dynamic languages when a weird coercion is going to be done? Throw an exception? Exit with an error? Convert to some special invalid_coercion value similar to NaN?

It's another reason static type checking (i.e. don't let the code run) is a no-brainer to me.
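For what it's worth, JS already does a bit of each of those, depending on the types involved:

  "abc" * 2;      // NaN -- a failed numeric coercion yields an "invalid" value
  undefined + 1;  // NaN
  Symbol() + "";  // throws a TypeError -- Symbols refuse implicit coercion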


>Throw an exception?

Seems like the best choice rather than plowing ahead with assumptions


I feel like it's fine for languages in a web context to take a "the show must go on" attitude as they do. Partially working webpages (slightly broken html/js/etc) are generally much better than a blank page/error message or none of the js working at all.


Dynamic languages are all about letting the caller decide what makes sense and what doesn't, rather than the callee. If you are too conservative with the input types you allow then it can create unnecessary coupling.
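A small sketch of that philosophy (the first() helper is made up for illustration): the callee assumes only indexability, and the caller decides which types are sensible to pass in:

  // Works for anything indexable: arrays, strings, typed arrays...
  function first(xs) {
    return xs[0];
  }

  first([10, 20]);                // 10
  first("hello");                 // "h"
  first(new Uint8Array([7, 8]));  // 7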


Forget to convert a string to a number, for example, and you'll get very surprising results from arithmetic expressions, e.g. (x+1)*2.

Discovering how the sort function works was also not fun.
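Both in a nutshell (assuming x arrived as the string "5", say straight out of a form field):

  const x = "5";  // e.g. the value of an <input> is always a string
  (x + 1) * 2;    // 102 -- "5" + 1 concatenates to "51", then * coerces it back to a number

  [1, 10, 2].sort();                 // [1, 10, 2] -- default sort compares elements as strings
  [1, 10, 2].sort((a, b) => a - b);  // [1, 2, 10]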


I'm not entirely sure why all of this even exists, it seems like it creates a lot of potential for bugs. I've been developing in JS for years and I've never intentionally used type coercion. If I ever needed to operate on multiple types I would always explicitly cast them to the correct type beforehand.


I think it's mostly just backwards compatibility at this point; unlike with server-side languages, you can't just bump the major version and let people switch to it at their own pace. No browser is motivated to make breaking changes because there's a chance that they could break existing sites, and from a user's perspective, all they'll see is that site Foo works in Browser A but not Browser B, so Browser A will presumably lose some market share.


>unlike with server-side languages, you can't just bump the major version and let people switch to it at their own pace.

That's what strict mode was for.


I'm not super knowledgeable about JS, but my guess is that most of the stuff described in the article probably doesn't happen under strict mode, right?


Correct.


Using variables in Boolean expressions (to check whether a value exists) is the one place I (and probably most JS developers) use type coercion, which can still bite you, especially if you have a zero that you expect to be treated as a present value.
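The classic shape of that bug, sketched (the names are made up):

  function label(count) {
    if (count) {                 // 0 is falsy, so a legitimate 0 takes the "missing" branch
      return count + " items";
    }
    return "no data";
  }

  label(0);     // "no data" -- probably not what was intended
  label(null);  // "no data"

  // Safer: test for the actual missing values
  function label2(count) {
    return count == null ? "no data" : count + " items";  // == null also catches undefined
  }

  label2(0);    // "0 items"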


I don't really get why `-0 + 0 == 0` is marked as a WTF. I guess getting -0 would be marginally better, but 0 is a perfectly fine answer.

I also think `true + 0 == 1` is perfectly reasonable, but I get that this might not be reasonable for everyone. All the other fixes look like genuine improvements.


I think -0 + 0 == 0 is correct IEEE 754 behavior too. Two numbers that only differ in the sign cancel out to +0 when added together.


Yep, it is +0 under default rounding direction. But +0 == -0 anyway.
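And since == and === can't see the difference, Object.is (or division) is how to actually observe the sign:

  -0 === 0;          // true
  Object.is(-0, 0);  // false
  1 / -0;            // -Infinity -- one place the sign leaks out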


The "WTF" label is not about correctness, it's about surprise/intuition. It doesn't really matter to me whether it's mathematically or IEEE correct. It's strange and inconsistent.

Yes, two numbers that differ only in sign are supposed to add up to 0, but... a number is never supposed to change (sign or magnitude) when you add 0 to it, and here it does... so it's a strange corner case that I think defies intuition.

Given the two precedents that are incompatible, I think far more people are likely to think "anything + 0 === anything" than they are to think "anything + (-anything) === +0". So that's why I marked it as a WTF.


> number is never supposed to change (sign or magnitude) when you add 0 to it

In the “real numbers”, zero doesn’t have a sign at all, and -0 and 0 mean precisely the same thing. Floating point is an approximation which needs to make some choices about edge cases, for the sake of practical uses (for instance it is useful to distinguish negative underflow from positive underflow, so there is an extra -0 value included).

The behavior that 0 - 0 or -0 + 0 produces 0 as output is not an unreasonable choice (it is what I would expect, as someone with a decent amount of mathematical experience). I would not expect very many people to have the “intuition” that -0 + 0 or 0 - 0 should produce -0 as a result, assuming they had any intuition at all about what should happen in this edge case.


In JS:

  -0 + 0;     //  0
  -0 + (-0);  // -0
  -0 - 0;     // -0

I claim that the `-0 + 0` case is the strange inconsistent one, so that's the reason for my WTF label.

----

Consider the counter-argument, that it's intuitive/correct because in math `X + (-X) = 0`:

  3 + (-3);   // 0
  -3 + (3);   // 0
  -0 + (0);   // 0
  0 + (-0);   // 0

It's true that this characteristic by itself is preserved, but here's where it falls apart:

  X + (-X) = 0    // -->
  X = 0 - (-X)    // -->
  X = 0 + X

That final statement should be true for all X, but as demonstrated above, it's not true for -0.
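Checking that last step directly in JS (Object.is is used because === can't tell the two zeros apart):

  const X = -0;
  Object.is(X, 0 + X);  // false -- 0 + (-0) is +0, not -0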


-0 is truly a mistake on the part of the IEEE committee. You can get it by dividing by -infinity; it's supposed to indicate "zero approached from the left" in this case, but it's not consistent: sqrt(-0) will give you -0 in most implementations.


1/–∞ == –0 seems like obviously the correct behavior in the context of IEEE floats.

I think if someone is careful it should be possible to make an implementation of complex square root on top of IEEE floats such that √(–a² + 0i) == ai, whereas √(–a² – 0i) == –ai, representing the two sides of a branch cut.

Yes, √–0 should be 0. File a bug against whatever implementation returned –0 for that one.


IEEE 754 says √–0 is –0 though.
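JS follows suit here (a quick check in any modern engine):

  Math.sqrt(-0);                 // -0
  Object.is(Math.sqrt(-0), -0);  // true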


Ah really? What is the purpose of that?


Not sure. Via https://stackoverflow.com/questions/19236117/what-numerical-... I found https://people.freebsd.org/~das/kahan86branch.pdf, but that's a more complex read than I'm prepared for right now ;)


That paper is about implementing branch cuts in a complex-valued function (e.g. complex square root), and doesn’t discuss real square roots.

In the context of that paper it seems to me that √(–0 – 0i) should be 0 – 0i and √(–0 + 0i) should be 0 + 0i, but under no circumstances should the result be –0 ± 0i, which is on the wrong branch.

The obvious extension to real-valued square root would be √(–0) == +0.


I made a PR for -0+0 based on your suggestion: https://github.com/getify/getify.github.io/pull/1


See also: JavaScript Equality Minesweeper: https://eqeq.js.org/


Turning +true into 1 and +false into 0 does not seem surprising or problematic. To me this seems like obviously the only correct behavior. Turning these into NaN would be a giant, terrible WTF for me.

The author might want to add !!foo as an additional type of coercion (to bool).
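For reference, those coercions side by side:

  +true;   // 1
  +false;  // 0
  !!"";    // false
  !!"0";   // true -- any non-empty string, even "0", is truthy
  !!{};    // true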


Don't worry, it looks scarier than it is


Brendan Eich never really understood the concept of equality.

https://dorey.github.io/JavaScript-Equality-Table/
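A few of the classics from that table:

  1 == "1";           // true
  null == undefined;  // true
  null == 0;          // false
  [] == "";           // true
  NaN == NaN;         // false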


Sometimes I feel like we are living in the wrong timeline.

In the right timeline, a deeply flawed language like JavaScript would never have been successful and we would use a decent application delivery mechanism that is platform-sensitive and not a hypertext document browser with a document model unsuitable for interactive apps.

But here we are: never bet against JavaScript.


> In the right timeline, a deeply flawed language like JavaScript would never have been successful

but in that same timeline, we had Java, which was touted to solve all of those problems, vs Javascript, which was touted to add slight interactivity. We took a step sideways instead, with Java offering a lousy user experience for what it promised and Javascript getting progressively better while promising nothing in the early days.

The other competitor, Flash, ended up flawed in its own right.

And here we are.


The trouble with Java in the browser wasn't the language per se; it was the external runtime. This was also the problem with Flash. It was basically like downloading and running a random exe from any site you visited.

From the start, Javascript was built into browsers, and even as it grew it was running in the context of a web page. By the time Javascript had grown large enough that people were looking for other languages, it had become too entrenched.

Besides, Javascript is good enough for most websites. It's only larger web apps where it becomes more of a problem.


> It was basically like downloading and running a random exe from any site you visited.

Which is pretty much where we are today, especially with wasm and bundles. We just went around the virtual machine and bytecode before coming back.

From Javascript's start it was embedded, but Java applets were pretty early as well. The core problem was direct interactivity with the browser, which Javascript had a small amount of initially, and which Flash/Java tried to subsume. The difference was how the browser itself was treated.


The key difference is in the context. "Embedded" Java and Flash still required an external runtime, but the browser provided them a surface to paint on. Later on browsers like Chrome tried sandboxing the Flash process but the trouble is Flash was built under the assumption it would have full user access, so that could only go so far. There's a reason news of Flash exploits used to regularly appear in the tech press.

Wasm was built from the ground up to be run in the context of a browser. It's also implemented by browser vendors so they can take responsibility for security. In essence, the difference is that it's made to be sandboxed from the rest of the system.


> The key difference is in the context. "Embedded" Java and Flash still required an external runtime,

But that was Netscape's initial decision. There was nothing stopping a browser vendor from embedding Java into the browser the same way. Microsoft ended up doing that for VBScript in IE. Lotus Notes was embedding Java on their clients in the mid 90s alongside LotusScript. In addition to that, you could even copy/paste a Java applet from the web into Lotus email, and it would run, although that was different from having Java embedded as a language.


Sun wanted Netscape to embed Java in the browser. Netscape wanted to make their own language. The plan was actually to embed both, along with a canvas component, but there wasn't enough time to do anything more than JS for that release; so Sun went with Java applets, and canvas would have to wait until HTML5.



It's the epitome of "Worse is Better" :)



