
This was probably the single worst design decision of JavaScript. It is the only language I know of that has two "absent value" types. If null was a billion dollar mistake, then undefined adds an order of magnitude to the problem. There is absolutely no reason for undefined to exist.

We've generally adopted strict mode. I think it is now time to come up with and adopt a new no-undefined mode. And yes, it should break the standard library. It is worth it.




In almost every case you test for `someVal != null` or `someVal == null` (which is almost the only case where `==` is preferable to `===`).
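
Something like this hypothetical helper captures the idea (`isAbsent` is just an illustrative name):

    function isAbsent(someVal) {
      // `== null` is true for both null and undefined, but for nothing
      // else: 0, '', NaN and false all compare unequal to null
      return someVal == null;
    }

    isAbsent(null);      // true
    isAbsent(undefined); // true
    isAbsent(0);         // false
    isAbsent('');        // false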

And on the other hand, SQL made the opposite mistake, where NULL means both "this field has an empty value" and "this field does not have a value".


Well, well, well... Not so long ago, I joined the opposing camp where everyone was arguing to drop null into the Mountain of Doom.

https://medium.com/@hbarcelos/why-i-banned-null-from-my-js-c...



typeof null === 'object' // true

Brendan Eich has elaborated on why the decision was made (whether or not it was a good one): null is an empty object pointer.

Idiomatically, I'd argue that `null` shouldn't exist in JavaScript, since `undefined` is the global "default empty" value.
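
Roughly speaking, every "nothing was provided" path in the language yields `undefined`, while `null` only shows up when something assigns it explicitly, e.g.:

    let x;             // declared but never assigned
    x;                 // undefined

    const o = {};
    o.missing;         // undefined (absent property)

    function f(a) { return a; }
    f();               // undefined (missing argument)

    [10][5];           // undefined (out-of-range index)

    const y = null;    // null only appears because we wrote it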


Please see https://x.com/BrendanEich/status/1273250867823538176 and the quoted thread in that post.


If there had been no mandate from Java, the big-brother language, there would only have been nil:

https://x.com/BrendanEich/status/1672314927401549824


> It is the only language I know of that has two "absent value" types.

It's not very common, but not unique either. Perl has `undef`, which works roughly identically and was later generalized into the "undefinedness" concept in Raku [1]. Ruby doesn't have an `undef` value, but it could have had one in an alternative universe [2]. Even more languages have multiple absent values which are not necessarily compatible with the null-undef distinction (e.g. Objective-C).

---

I think the separate `undef` value was mainly regarded as a solution to the apparent problem of detecting absence in general, for example the absence of an index or argument. Consider the following Python program:

    def foo(obj=None): ...
It is clear that the optional `obj` argument alone cannot distinguish `foo(obj=None)` from `foo()`. A common idiom is to use a private object in place of `None`:

    _NOT_GIVEN = object()
    def foo(obj=_NOT_GIVEN): ...
It is still possible to somehow obtain a reference to `_NOT_GIVEN` and therefore call `foo(obj=_NOT_GIVEN)`, which is indistinguishable from `foo()`, but why would you do that? `None` is sometimes a valid value for the optional argument, but `_NOT_GIVEN` is clearly designated to be invalid for it. Now rename `_NOT_GIVEN` and make it a language construct: voila, you've got `undef`.
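
For comparison, a minimal JavaScript sketch of the same check, where `undefined` already plays the role of that built-in sentinel (the `foo` here just mirrors the Python one for illustration):

    function foo(obj) {
      // a missing argument arrives as undefined, so undefined acts as
      // the built-in counterpart of Python's _NOT_GIVEN sentinel
      if (obj === undefined) {
        console.log('no argument given');
      } else if (obj === null) {
        console.log('explicitly passed null');
      } else {
        console.log('got a value:', obj);
      }
    }

    foo();     // no argument given
    foo(null); // explicitly passed null
    foo(42);   // got a value: 42
    // caveat: foo(undefined) is still indistinguishable from foo(),
    // just like foo(obj=_NOT_GIVEN) above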

`Undef` might have been a workable solution a decade ago, when we were still struggling with dynamically typed languages in general and systematic approaches were less common. Lua, for example, uses `nil` for both purposes: `t[key] = nil` is a valid way to remove a given key from the table `t` (with the caveat that it doesn't shift any subsequent keys if the key was an integer), and an excess argument is filled with `nil` [3]. This is painful from time to time; for example, a table of optional integers is not straightforward. `Undef` might have been a good compromise under this observation... if we didn't have algebraic/sum data types like we do today.
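
JavaScript objects have an analogous wrinkle, for what it's worth: assigning `undefined` to a property is not the same as removing it, and the difference is only visible via `in` or `hasOwnProperty`:

    const t = { a: 1 };

    t.a = undefined;    // the key is still there, now holding undefined
    'a' in t;           // true
    t.a;                // undefined

    delete t.a;         // actually removes the key
    'a' in t;           // false
    t.a;                // still reads as undefined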

[1] https://docs.raku.org/language/typesystem#Undefinedness

[2] https://stackoverflow.com/questions/6975266/what-is-the-unde...

[3] Lua even tried hard to remove any visible distinction between an actual `nil` and the real absence of a value! But it's still not perfect, and the discrepancy is much easier to detect from the C API.


Maybe I haven't worked with dirty enough Python, but no, I don't think defining a private object in place of `None` is a common idiom at all.

None signifies absence, and if you need to pass a special value, then you might define a special object for that.

You can't distinguish `foo(obj=None)` from `foo()` because they are the same thing. You probably want something like `foo(obj=UNSET)`.


I mean, it's common when the argument's domain is all possible Python objects and therefore `None` should be usable as a literal value. The requirement itself is not very common, but as far as I know, virtually all code with that requirement uses this idiom.



