A case study on strict null checks (figma.com)
166 points by rudi-c on Dec 17, 2020 | 112 comments



I'm a huge fan of strictNullChecks, but it is remarkable how much language features contribute towards making non-nullable types ergonomic. For instance, Rust has a lot of primitives that make Option a lot nicer to work with.

One that I love is `Option::ok_or` (or `Option::ok_or_else` for the lazily-evaluated variant). It takes an Option and converts it to a Result, turning None into an error. Combined with the `?` operator, it makes Option a lot easier to use. Compare:

   function parseFile(file: File | null) {
     if (file === null) {
       throw new Error("File must exist");
     }
     // TS now infers as File
   }
to:

   fn parse_file(file: Option<File>) {
      let file = file.ok_or(Error::new("File must exist"))?;
   }
Likewise if you want to apply a function to an Option if it is Some and pass along None otherwise, you can use `Option::map`:

   fn parse_file(file: Option<File>) {
      let parse_result = file.map(|f| parse(f));
   }

Indeed, it's a little interesting how libraries have adopted paradigms beyond the language. React is extremely expression-based, but that makes for a clunky combo with JS's primarily statement-based control flow. You see this in React developers' reliance on ternary operators and short-circuiting for control flow in JSX.

Of course, this is just JavaScript being a classical imperative language. Not much to do about that.


Is it crazy if I think that first example is the most readable of the three?

In fairness, I've done a lot of work in TS and exactly none in Rust, so this is totally biased, but at the very least, it seems like all you're getting in the later examples is two fewer lines of code, in return for assuming that the reader is familiar with 1) `Option<x>`, 2) `.ok_or`, and 3) the syntax of the third block, which I don't remotely get.

Genuine question, how comparable are these things to understanding "File | null" in TS, which I would consider day 1 learning?


> Is it crazy if I think that first example is the most readable of the three?

IMO, sort of!

All abstraction requires you to understand it at some level before you can quickly reason about it in code. But once you do, it allows you to reason about things at a higher level, rather than at a level where you have to focus on each detail individually. This is a net win for good abstractions that are (generally) simple and minimally leaky, but it can be a net loss for abstractions that are complicated.

You see this same conversation play out with functional looping constructs vs. imperative ones. Which is more readable?

    for (int i = 0; i < a.len(); i++) {
        a[i] = 0;
    }
    
    // vs.
    
    a.map! { 0 }
If you don't know what `map!` does, the former. And many people argue for this for the sake of "simplicity".

But when you understand functional iteration—which is generally a simple, non-leaky abstraction—the latter wins by a mile. And while you might look at this example and think "you're just saving a few lines of code", the latter not only completely eliminates entire classes of problems (off-by-one errors, slow calculation of `len()` for e.g., NUL-terminated strings, etc.) but knowing it also unlocks a bunch of additional useful tools like `reduce`, `filter`, and friends that reduce reams of boilerplate throughout your code while dramatically improving comprehensibility.
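To make that concrete, a small TypeScript sketch (the array methods are standard ES; the data is invented):

```typescript
// Invented data: order quantities, where 0 means "cancelled".
const orders = [4, 0, 7, 3];

// Imperative: index bookkeeping by hand.
let total1 = 0;
for (let i = 0; i < orders.length; i++) {
  if (orders[i] > 0) total1 += orders[i];
}

// Functional: no indices to get wrong, and filter/reduce compose.
const total2 = orders.filter((n) => n > 0).reduce((sum, n) => sum + n, 0);
```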

The same is true of `Option<T>` and `Result<T>`. They're wildly powerful and allow for rapid understanding of code without having to read and parse if-else branching to confirm that the logic is performing null checking or error handling (and more importantly, doing it correctly).

So you're not crazy for thinking the first example is the most readable of the three given your current knowledge. But you are crazy if you think the first example is preferable to learning the relatively simple abstractions of Option<T> and Result<T> which allow reasoning about things at a higher level and unlock extremely powerful tools in doing so.


A counterpoint: Option and Result are hard to read and unpleasant to work with. They are so annoying that the language designers extended the language itself to make them tolerable - do notation in Haskell, the '?' operator in Rust, guard-let in Swift.

A plain old for loop has so much to recommend it. You get powerful control flow constructs (break/continue/return) and obvious performance characteristics.

Can you tweak your function to stop filling the first time a zero is encountered? That's a simple one-line change with the for loop, but a puzzle for the functional iteration.


> A counterpoint: Option and Result are hard to read and unpleasant to work with. They are so annoying that the language designers extended the language itself to make them tolerable - do notation in Haskell, the '?' operator in Rust, guard-let in Swift.

Option and Result are tolerable without language extensions (as long as your language has first-class functions and parameterized types, which you want anyway) - https://fsharpforfunandprofit.com/posts/recipe-part2/ . do notation is a small, purely-syntactic piece of sugar that's usable for many different cases, not just error handling.
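As a hedged sketch of that claim in TypeScript terms: Result as a plain tagged union plus ordinary functions, no dedicated syntax (the names `ok`/`err`/`bind` are mine, not from the linked article):

```typescript
type Result<T, E> = { kind: "ok"; value: T } | { kind: "err"; error: E };

const ok = <T, E>(value: T): Result<T, E> => ({ kind: "ok", value });
const err = <T, E>(error: E): Result<T, E> => ({ kind: "err", error });

// "Railway-oriented" bind: runs f on success, short-circuits on error.
function bind<T, U, E>(r: Result<T, E>, f: (t: T) => Result<U, E>): Result<U, E> {
  return r.kind === "ok" ? f(r.value) : r;
}

// Example: halve only even numbers, threading the error through.
const half = (n: number): Result<number, string> =>
  n % 2 === 0 ? ok(n / 2) : err("odd");
const r1 = bind(ok<number, string>(8), half); // ok with value 4
const r2 = bind(half(3), half);               // stays err "odd"
```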

> A plain old for loop has so much to recommend it. You get powerful control flow constructs (break/continue/return) and obvious performance characteristics.

Those are all language extensions! You're talking about adding four keywords to the language, none of which are anywhere near as reusable as do notation.

> Can you tweak your function to stop filling the first time a zero is encountered? That's a simple one-line change with the for loop, but a puzzle for the functional iteration.

Different code should look different. map, reduce, fold, traverse, foldM are all different functions that do different things, but they're easy to work with because they're all normal functions that obey the rules of functions (and if you're ever confused you can just click through to the implementation in plain old code). Languages don't want to offer several different variants of "for" because it's a language keyword that has to be supported at the language level, but the result is that the "for" loop is far from simple - it does several different things depending on how exactly it's used, and you can't tell which except by going through the details every time.


Fair points - Haskell is all-in on these ideas, and agreed that do-notation has power well beyond Result and Option. I regret including do-notation in my critique, maybe list comprehensions instead.

I think we disagree on what "different code" ought to mean. I have a C function which multiplies a list of numbers; I make it immediately `return 0` if it hits zero. In C that's the same function, just optimized; in Haskell it's a breaking API change due to laziness. I suppose the languages reflect that difference.


I don't think it's about laziness; 99% of the time bailing out of a list operation early vs processing the entire list is a semantic difference that I want to be able to see when I'm reading the code. If what you want to do is really and truly just a performance optimization then the language runtime should be able to do it.


A for loop's greatest strength is its greatest weakness: you can do anything, including in most languages mutating the index variables (god forbid..!), deeply nesting with mutable data, and so on. I usually find I can understand a map quicker than a loop, because a map requires that the code be simpler in most cases.

Obviously, in the trivial case a loop is very easy to understand. In my experience, though, the trivial case usually involves iterating through the entirety of an array anyway, in which case the map is usually more concise.

YMMV! :)


Yeah, trivial cases are trivial everywhere.

I like your strength/weakness observation. But if you are mutating index variables, you have a hard case, and it probably will be easier to express with a manual loop than trying to shoehorn it into awkward functional constructs.

An example is the "discard elements by a predicate" function, aka Vec::retain in Rust, std::remove_if in C++. Rust implements this using a for loop. Maybe it can be done with functional constructs, but it would be harder to write and to understand.

https://doc.rust-lang.org/src/alloc/vec.rs.html#1105


But Vec::retain is pretty close to one of the fundamental functional constructs for containers (it's the in-place version of filter). The argument here is that others should use functions like retain instead of re-implementing that nasty indexing logic. But I don't think it should be surprising that array-like containers are going to have to involve some indexing logic at some level.


Most languages have something like:

    for (ElemType x : collection) {
        # use x here
    }
or:

    for (elem : collection) {
        // use here
    }
for dynamic languages or statically typed ones with lots of inference

Not sure what you can do with this that:

    collections.each(x -> {
        // use here
    })
can't.


Those look like Java loops, and I don't recall them being much more than sugar over a for loop (although presumably they're implementing some Iterable class and apply to more than just arrays).

Either way, like most things in functional programming, it's often about restricting your functions so that they're easier to reason about. There's nothing special about the map function really in any pragmatic sense except when used in conjunction with the other features a good language affords.

I'm also curious if you think something like:

  listOfListsmap : List (List Int) -> List (List Int)
  listOfListsmap =
    (List.map << List.map) (\n -> n + 1)
is easier to understand at a quick glance than a rough equivalent using those loops above:

  collections.forEach(x -> {
    // anything could happen here to x before the inner loop processes it,
    // and since we're probably dealing with mutable variables, the inner
    // loop can presumably access things I put here (which may or may not
    // be a problem, but is something you have to think about)
    x.forEach(y -> {
      // do something with y; we can do anything with x *and* y here
      y = y + 1;
    });
  });
knowing that << is the composition function.


> Can you tweak your function to stop filling the first time a zero is encountered? That's a simple one-line change with the for loop, but a puzzle for the functional iteration.

Yes, and that's exactly why you should use constructs like map when possible. With a manual loop you have to scan more code to verify that it's not doing something more complicated.


To drive parent's point home a bit, I skimmed both examples above when reading and without looking again I am very certain that the map example does not have any weird iteration semantics (eg "stop filling the first time a zero is encountered"), but I'm not sure that the for loop example is similarly 'normal' -- I'd have to check the condition again more carefully.


> Can you tweak your function to stop filling the first time a zero is encountered? That's a simple one-line change with the for loop, but a puzzle for the functional iteration.

In Rust:

  a.iter_mut().take_while(|i| **i != 0).for_each(|i| *i = 0);
And it's as fast as a hand-written for loop.


A counter-counterpoint: classic for loops are hard to read and unpleasant to work with. They are so annoying, that the language designers extended the language/standard libraries to make them tolerable - array.map in ECMAScript 5, for (x : y) in Java 5 and C++11, foreach in PHP 4, LINQ extensions in C# 3.

A plain old functor/monad has so much to recommend it. You get a uniform way to iterate/transform all kinds of collections, even those which don't support indexing. They can also be used without mutation and are easy to typecheck. In languages with higher-kinded types they can be abstracted over without knowing anything about how the underlying data is structured.


> A plain old for loop has so much to recommend it. You get powerful control flow constructs (break/continue/return) and obvious performance characteristics.

I spent 3.5 hours with a junior team member yesterday refactoring break and continue out of her loops. The code was so much more readable afterwards.


In my experience `map` and `filter` are easy to grok even for people inexperienced with FP, but `reduce` is always a headache to parse even if you know how to read it. A loop that accumulates some result is almost always better.


"reduce" (or fold) certainly takes a while to get used to, but once you gain some intuition for it, it is pretty reusable, and it avoids all the problems mentioned above, such as off-by-one errors.

Also, if you use map + reduce, it's totally trivial to parallelise the map part and have the main thread combine the partial results, since there is no shared mutable state; e.g., Java offers parallel streams that support this with basically zero effort. If you use loops, it can be quite hard to rewrite the code so that it is thread safe.


I usually like the FP style better, however that's not the best example. You're using a for loop rather than a foreach loop - admittedly C doesn't have foreach, but most other imperative languages have some variation of it, even BCPL had it.

foreach/map have the advantage of working based on array size information, while for needs to be told. That gives the FP version an unrelated advantage.

Switch to foreach, and none of the 'classes of problems' you mention apply to the imperative version of the code.


It is the most readable to you because you are used to null. If you're reading Rust, then it's a pretty good assumption that you'll be used to Option<x>.

Null is sloppy because it is the only way in many languages to express something like a union type (in Rust terms, an "enum"). So it gets overloaded to convey information that would be more accurately conveyed by a union type.

The Rust equivalent of null, Option, is just another enum and you can handle it with the standard enum tooling the language gives you such as pattern matching. It also makes you stop and think about what you are doing if you're returning it. In most cases, your intent can be more clearly expressed with an enum other than Option.
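A hedged TypeScript sketch of that point, using a hypothetical lookup where a discriminated union carries the *reason* a value is absent, which a bare null would erase:

```typescript
// Instead of `User | null`, a union that says *why* there is no user.
// (All names here are invented for illustration.)
type LookupResult =
  | { kind: "found"; name: string }
  | { kind: "notRegistered" }
  | { kind: "deactivated"; since: string };

function describe(r: LookupResult): string {
  // The compiler forces every case to be handled; a bare null would
  // collapse notRegistered and deactivated into one indistinguishable value.
  switch (r.kind) {
    case "found":
      return r.name;
    case "notRegistered":
      return "no such user";
    case "deactivated":
      return `deactivated since ${r.since}`;
  }
}
```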

This contrived example would be less likely to appear in Rust. Why is there a function called parseFile taking something that might not even be a file?

As an aside, the Rust code could also be written like this:

        fn parse_file(file: Option<File>) -> Result<(), String> {
            if let None = file {
                return Err("File must exist".into());
            }
            // ...
        }


> As an aside, the Rust code could also be written like this

Sadly it can't really, because Rust doesn't have control-flow based type refinement like TypeScript. In TS the type of `file` after the throw is File. In Rust it stays `Option<File>`, so you would have to unsafely unwrap it below the if block.


(Unwrap is not unsafe, and in fact, in this code, you would even know that it can never panic. That being said, you're not wrong that this is nicer in TS. You could invert the condition and add an else and it would be not too terrible. I'd still probably go for a ? style.)


Rust’s if-let construct handles that in a nice, explicit, and compact way.


Why use `if let None = file` instead of `if file.is_none()`?


I realize the shadowing might make the Rust example unnecessarily confusing. With overly pedantic names, it might look like:

    fn parse_file(file_from_input: Option<File>) {
      let file = file_from_input.ok_or(Error::new("File must exist"))?;
    }
What I like about the Rust version is that it explicitly unwraps the argument and assigns it to a new variable. In the TypeScript one, the if statement allows the inference algorithm to determine that `file` is a `File` and not a `File | null`. That's a testament to the TypeScript team's efforts, but it's a little less ergonomic (in my view) that variables can change their type without getting mutated or changed in any way.

For instance, if I were to open up this file in emacs with no language server, I'd get no type information at all. I'd have to trace over the file and act like the TypeScript checker, thinking "oh okay, so this null check ensures that file cannot be null, therefore it's inferred as File". This is clearly simple, but other cases aren't as easy. Whereas with the Rust code, I know that my argument, file_from_input, is an Option<File>. file_from_input.ok_or(Error::new(..)) makes it a Result<File, Error>. The `?` operator makes it a File. Each step produces a consistent type. At no point do I have to understand the inference algorithm to determine what the type may be.

That said it's totally cool if you find the TypeScript version more readable :D. It's not my place to say what's readable or not readable to you.


Agree. It's more about the usefulness of moving to the non-Optional type and the null safety in following lines, rather than the readability of the ok_or.


The former code is more equivalent to:

  let file = match file {
    Some(file) => file,
    None => panic!("file was null"),
  };


Which you'd never write in practice since `expect` exists


Can you say more here? I don't have the context.


This code is identical to

  let file = file.expect("file was null");
That is, the "expect" method does exactly this.


I like Rust from what I've tried. But I'd have preferred something like this:

  let file = file.unwrapOrPanicWith("file was null");
option.expect(message) doesn't semantically make sense.


There was some discussion about this when the method was named; you're not alone. In the end, it is what it is. You could define your own method like that if you really wanted to.


I read it as expect (prepare) for the worst!


> Is it crazy if I think that first example is the most readable of the three?

No. That's simply the difference between imperative and functional code. If you don't know the latter you could never understand this.

For the most part, the last example is equivalent to `let result = file && parse(file);`, but there are important differences to consider in JavaScript at runtime.


Interesting, okay, makes sense. Thanks for the explanation.


The block syntax is comparable to understanding File | null - it's day 1 learning in Rust. The part I'd say you're missing is that Option is just a plain old type and ok_or is just a plain old function, so if you don't know what they're doing you can always just click through to their definitions (which are in normal plain old code) and read them. Even if you don't do that, you know that they're not doing any magical control flow things. Whereas to understand the first example I have to understand what "if" does and how having a bunch of statements after each other works, and there's nothing to tell me that anywhere.
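To illustrate with a hedged TypeScript sketch: Option as a plain tagged union and okOr as a plain function you could click through to (the names are mine):

```typescript
type Option<T> = { some: true; value: T } | { some: false };

// Plain old function, no magical control flow: None becomes a thrown error.
function okOr<T>(opt: Option<T>, msg: string): T {
  if (opt.some) return opt.value;
  throw new Error(msg);
}
```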


> I'm a huge fan of strictNullChecks but it is remarkable how much language features contribute towards making non-nullable types ergonomic.

For sure, that was our experience as well. Without TypeScript's control flow analysis, it would be much less ergonomic to use and would probably lead to a lot of non-null `!` assertions everywhere. When writing correct code, you never notice that control flow analysis is there at all. That's a desirable property, though because it operates in the background, few people realize how much TypeScript innovates in this area over other mainstream languages.


Kotlin has this too, though it may or may not count as a mainstream language, depending on the definition.


Haskell does a nice job with this as well. There's a lot of machinery available for dealing with error handling, much of it through typeclasses.

    format :: Maybe Int -> String
    format = maybe "No Int Provided" show

    formatIfFormatterAvailable :: Maybe (Int -> String) -> Maybe Int -> Maybe String
    formatIfFormatterAvailable formatter int = formatter <*> int
The latter will work with any error handling type, not just Maybe.

    formatIfAvailable :: (Int -> String) -> Maybe Int -> Maybe String
    formatIfAvailable formatter int = fmap formatter int
And so on. Using a combination of functor, monad, and applicative typeclasses, you can get really ergonomic error handling. It can be a little confusing to see it at first, where during parsing you have expressions like

    data Entry = Entry Username Date Dollars
    parser :: Parser Entry
    parser = Entry <$> usernameParser <*> dateParser <*> dollarsParser
What the above is doing is "first try to parse a username, then try to parse a date, then try to parse a dollar amount, and if they all parse then return an Entry object with all that data". This is a lot easier than writing out something like 3 nested if statements, checking if any of the parsers returned null each time, or trying to use GOTOs or whatever.
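For contrast, a hedged TypeScript sketch of the nested-check version that the applicative style avoids writing out (the sub-parsers are invented for illustration):

```typescript
type Entry = { user: string; date: string; dollars: number };

// Hypothetical sub-parsers; null plays the role of a failed parse.
const parseUser = (s: string): string | null =>
  s.startsWith("@") ? s.slice(1) : null;
const parseDate = (s: string): string | null =>
  /^\d{4}-\d{2}-\d{2}$/.test(s) ? s : null;
const parseDollars = (s: string): number | null =>
  /^\d+$/.test(s) ? Number(s) : null;

// The nested-null-check version that `Entry <$> ... <*> ...` collapses.
function parseEntry(u: string, d: string, m: string): Entry | null {
  const user = parseUser(u);
  if (user === null) return null;
  const date = parseDate(d);
  if (date === null) return null;
  const dollars = parseDollars(m);
  if (dollars === null) return null;
  return { user, date, dollars };
}
```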


I don't think I would ever write code where a file argument to a parse-file would be null, even if that is possible.

A null instead of a file could occur if there is some API to open a file which returns null. That should be throwing something instead of returning a value that might not be handled.

If I did have some object which either holds an open file or else a nil to indicate there is no open file right now, I would carefully guard that this nil does not escape; i.e. that it's never passed into any functions that cannot work without a file, such as parse-file. All methods of that object which deal with that file have to have conditionals for the situations when it's missing.

Once parse-file has received nil instead of a file, it's game over. It might as well not even bother checking. The only reason to check for a nil in parse-file is if we can provide a better diagnostic for the situation compared to letting a lower level file I/O routine produce the error.

Checking for null at higher levels is a bad habit from C. In C, lower level API's and library functions often don't check for null pointers: they just dereference them or crash. A C version of parse_file has to check for a null stream, because getc(stream) will crash miserably.


> Checking for null at higher levels is a bad habit from C.

Or it's a way to allow the developer to decide how much they are willing to pay for checking, since the library doesn't do it itself.

I appreciate the sentiment that the ability to not check has caused many problems; I also appreciate using APIs that don't cost me conditionals when I know that the passed in value cannot be null.

Sometimes I get the sense that Rust tries to be a language that promises you can get the best of all worlds here, but every time I dig deeper, it seems that this is an illusion.


I ported some null-heavy Java code to C++. The source frequently returns an object, or null in case of error; e.g.,

    BigInteger bi = rational.asBigInteger();
    if (bi != null) {
        ...
    }
One of the classes is essentially a node in a graph, so in my first pass its objects were wrapped in a shared_ptr, which can be null. However, objects of the other two types are typically passed by value in C++, so I had to think about that. Exactly what std::optional was designed for, but do I want to require C++17, both in the implementation and the public interface? The syntax looks nice enough:

    if (auto bi = rational.asBigInteger()) {
        ... // bi is like a non-null pointer
    }
Java has java.util.Optional, but the source predates Java 8.

TypeScript's type guard approach is really clever. Once you've checked the value, you can use it like normal. No need for a wrapper class, or different syntax.


Interesting. I kind of like how Kotlin provides syntactic sugar for this with ?. and ?: (and !!, but you should avoid using that). I think TypeScript integrated a similar feature last year.

However, Kotlin goes one step further and adds smart casting and contracts to the mix that enable the compiler to take null checks into account when inferring the type of something: String and String? are two different types so calling e.g. .length on a String? is a compile error. However, if you do a if(!s.isNullOrBlank()), s becomes a String. That gets rid of a lot of ugly code. Works for type checks as well. With contracts, you can tweak this further. And with extension functions you can add functionality to nullable types as well or generic types. The standard library has a few of those included.

For example let is an extension function defined as inline fun <T, R> T.let(block: (T) -> R): R

So if you have a String? you can write val message = s?.let { "hello $it" }

That works for any nullable type and basically one of the idioms in Kotlin that lets you avoid having to do null checks.

Typescript has absorbed quite a few similar features in recent years but it is being held back by backwards compatibility with Javascript. The two languages are actually very similar, especially if you turn on the strict mode. But there's always this untyped mess behind the facade that typescript provides. Currently, I'm dabbling with kotlin-js and I'm actually liking that as an alternative.
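For what it's worth, the TypeScript analogues of Kotlin's `?.` and `?:` are optional chaining and nullish coalescing; a small sketch:

```typescript
// Rough TypeScript analogue of Kotlin's s?.let { ... } ?: fallback.
function greet(s: string | null): string {
  // ?. short-circuits to undefined on null; ?? supplies the fallback.
  return s?.toUpperCase().concat("!") ?? "nobody here";
}
```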


I'm not sure what these snippets are really supposed to show. This one:

    if (file == null) throw new Error("File must exist");
is literally one character less code than this one:

    let file = file.ok_or(Error::new("File must exist"))?;
and seems to be easier to 'read', but that's of course in the eye of the beholder.

More generally, accepting an optional file seems a bit bizarre; shouldn't the job of dealing with a missing file value be done by the caller?


The character count isn't the important point.

The point is that there's no way to use an `Option<File>` as a `File` without first checking for None. And once you do get a `File` you know that from then on it can't be null.

On the other hand, using a nullable `File` requires that you remember the null check. And that you remember it for every function that takes a `File`. Yes, that last part shouldn't be necessary with sufficient care but historically people do make mistakes that go unnoticed when refactoring or generally just editing code or reusing functions in large projects.


But it doesn't require "remembering" the null check since TypeScript will error when you try to access a member of a nullable value?


In typescript, the type of 'file' implicitly changes, which is more information to keep track of.

In the rust example, a new variable is explicitly introduced—though, granted, it shadows the old by using the same name—and every variable has a static type for the entirety of its lifetime.

That being said, I don't have a preference for either of those styles.


> More generally, accepting an optional file seems a bit bizarre;

Yes, but in languages where all types are nullable, you cannot opt out.


You don't really need language features, assuming your language has sum types, polymorphism and first class functions (and why bother using a language that's missing any of those). All you need is some functions, and you can write those yourself since they're just functions. https://fsharpforfunandprofit.com/posts/recipe-part2/


> You don't really need language features, assuming your language has sum types, polymorphism and first class functions (and why bother using a language that's missing any of those).

To be fair, this is basically saying "you don't need language features as long as you have these language features".


Sure, but those are general language features that you'd want to have for many other reasons anyway. You don't need any option/result-specific language features.


> You don't really need language features, assuming your language has sum types, polymorphism and first class functions

Aren’t those features?


My point is that Option::ok_or and Option::map aren't language features, they're just plain old functions written in the language; if they weren't there you could implement them yourself. You need some language features to implement them (e.g. you can't implement map without first-class functions), but those are general-purpose language features that you'd want to have anyway.


Tbf I find infix operators much more straightforward and readable. E.g.:

  let file = file |? Some_error
or

  let parse_result = parse <$> file
For the last two examples respectively.

EDIT: For completeness, the operators are

  let (|?) opt exn =
    match opt with
    | Some x -> x
    | None -> raise exn

  let (<$>) f opt =
    match opt with
    | Some x -> Some (f x)
    | None -> None


Infix operators are certainly more readable if you're familiar with the operators.

I imagine the possible ways of handling non-nullable types (and algebraic types as a whole) as a spectrum:

* TS style explicit if branches: Easy for beginners, annoying to experts

* Rust style methods on Option/Result/etc: Slightly confusing for beginners, somewhat elegant for experts

* Haskell/OCaml style infix operators: Confusing for beginners, elegant and easy for experts

Note that this is beginners to functional programming. Not necessarily beginners in programming as a whole. Plenty of smart people would get tripped up by a `<$>`


In my opinion, the problem with <$> is actually a problem with Haskell, which is that there's too damn many "operators" -- it's hard to keep track of them all. This also makes them hard to search for. When I search for "kotlin question mark colon", the first page of results is flooded with results about the Elvis operator (also a very memorable name for future searches, if I forgot). Searching for "haskell dollar sign in angle brackets" unearths nothing.

Something as common as option types (which is basically how nullable types are used in Kotlin) deserves special syntactic sugar, in my opinion. Granted many of Haskell's operators are also common, but I wish they'd been able to limit the variety a bit.


This is how you're supposed to search for Haskell operators:

https://www.stackage.org/lts-13.21/hoogle?q=%3C$%3E


If you prefer the "or throw" approach to handling null values, you can write a little helper to get something like it in TypeScript:

    function assert<T>(v: T | null | undefined, message?: string): asserts v is T {
      if (v === null || v === undefined) {
        throw new Error(message);
      }
    }

    function parseFile(file: File | null) {
      assert(file, 'File must exist');
      // TS now infers type of file as File
    }
Check out the TypeScript 3.7 release notes for more on assertion functions: https://www.typescriptlang.org/docs/handbook/release-notes/t...


I agree! At the very least, the `map` operator/monad/whatever is very much required when you have to go beyond optional chaining (with the safe navigation operator). I very much miss it in TypeScript. It's never going to happen, sadly.


To some extent Promises provide a similar API. I am not sure if JS needs Option. In many cases try/catch and Promises on boundaries of your code is enough.

In real life you would wrap async file operation in try/catch and then call parseFile once you know that you have data. The obvious benefit of try/catch is that it can catch unexpected errors (including logic/data errors).

IMO JS/TS provide a nice compromise between Go's checking errors everywhere and Rust's more complex abstractions.
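A hedged sketch of that boundary style (loadConfig is a stand-in for any async operation that can reject, not a real API):

```typescript
// Stand-in for an async load that can fail (not a real API).
async function loadConfig(path: string): Promise<string> {
  if (path === "") throw new Error("no path");
  return `contents of ${path}`;
}

// The boundary: one try/catch, and past it the data is known-good.
async function main(path: string): Promise<string> {
  try {
    const data = await loadConfig(path);
    return data.toUpperCase();
  } catch (e) {
    return "fallback";
  }
}
```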


TypeScript has just as few characters/lines as your Rust example:

    if (file === null) throw new Error("File must exist");
vs.:

    let file = file.ok_or(Error::new("File must exist"))?;
It's also easier to type (5 non-numeric characters vs. 8 in Rust), and IMO it's easier to read and requires less tribal knowledge. (Also could be just 4 non-numeric characters in TypeScript since the semi-colon is not required.)


  function ok_or<T>(x: T | null, msg: string): T {
    if (x === null) throw new Error(msg);
    return x;
  }


This is actually more similar to what is going on above:

  function ok_or<T>(x: T | null, msg: string): T | Error {
    if (x === null) return new Error(msg);
    return x;
  }
Thrown values (or exceptions if you like that term) are very much not type-safe in TypeScript.
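For instance, nothing in a function's signature says what it throws, and the caught value is just `unknown` under `--strict` (or `any` under older settings). A small sketch:

```typescript
function parse(s: string): number {
  const n = Number(s);
  // This throw is completely invisible to the type system.
  if (Number.isNaN(n)) throw new Error("not a number");
  return n;
}

try {
  parse("abc");
} catch (e) {
  // `e` is `unknown`: it could be an Error, a string, anything.
  // You have to narrow it manually before using it.
  if (e instanceof Error) console.log(e.message);
}
```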


Does TS have syntax sugar for returning errors, as Rust does? If not, I don't see what's interesting about your form.


Sorry for the late reply. It does not. My point was mostly that the TypeScript analogs are either more of a pain or less type safe.


> Of course this just JavaScript being a classical imperative language. Not much to do about that.

I think you could actually make JavaScript expression based very easily and completely backwards compatibly. Using a statement in expression position currently throws an error. So you would simply be widening the number of allowable programs.


> very easily and completely backwards compatibly.

Object literals, blocks, and ASI take that out of the realm of "very easily":

   var x = if (c) {} else { b: console.log("hi") }
If `c` is true, does this give you `null` from executing an empty block, or an empty object? If `c` is `false`, do you get an object with key "b" and value whatever `log()` returns, or is that a block with a labeled statement?


This specific problem was already dealt with during the introduction of arrow functions[1]. It makes sense to keep the same parsing semantics.

[1]: https://www.ecma-international.org/ecma-262/11.0/index.html#...


There is actually a Stage 1 proposal that introduces an expression that does _exactly_ that (because it's not as easy as just making `const x = for(let i of foo) { yield i; };` parse.)

https://github.com/tc39/proposal-do-expressions

https://babeljs.io/docs/en/babel-plugin-proposal-do-expressi...


Kotlin:

    function parseFile(file: File?) {
        val file = file ?: throw Exception("File must exist")
    }
(Not the only way to do it, but in general I really like the language's ergonomics).


I'm pretty sure a function declaration in Kotlin is fun, not function.


Yes, you're right. I copy-pasted from the previous example on my phone and missed that change.


Happens to everyone.

You have a nice day.


Isn't this code rather leaky?

What if, for instance, the file DOES exist but the current user doesn't have read permission.


The example code isn't a treatise on proper file handling; it's a minimal example of Option.


A third of the entire value proposition of TypeScript is the `strictNullChecks` flag. I'm glad they turn it on by default in new projects.


I was very curious about their “visualization tool”, but it seems to be just one simple script. Which is fine, because it probably did all they wanted. But I believe that there’s great potential for code visualization in many ways!


> In terms of catching errors, we can say that null errors no longer show up in our error dashboard

I really wish they had tried for better metrics on whether all this actually lowered their defect rate. Something a bit more quantitative than them no longer seeing null errors in their dashboards. At least give us some sense of the measure? How many null errors did they see before, and how many now? What about other kinds of errors? Did they see an uptick elsewhere as a result? What about developer productivity, was that impacted? Etc. This would have been a great opportunity to gather some real data.


I'm very curious what they mean by "Figma uses Redux, so we have models, actions and reducers". "Models" is not a term that is normally associated with Redux.

Note that we now recommend using the "ducks/slice file" pattern for organizing Redux logic for a given feature in a single file:

https://redux.js.org/style-guide/style-guide#structure-files...

which you basically get for free anyway if you're using our official Redux Toolkit package and the `createSlice` API (which generates action creators based on your reducers):

https://redux.js.org/tutorials/fundamentals/part-8-modern-re...

Obviously the Figma codebase has been around for a while so this isn't an immediate solution to their issue, but it definitely simplifies dealing with most Redux logic.


Good question! Looks like I might have slipped an internal term in there inadvertently. It just refers to type definition/interface declaration files for objects that are stored in the Redux store, we just happen to stick most of them in a folder that’s been called “models” for a long time.


In addition to counting dependents, you can also run the graph through PageRank to find the most impactful files to convert first. This strategy is useful when converting codebases from JS to TS too, because it's common to incorrectly type a file when it has a bunch of untyped dependencies, so getting the translation order right saves a lot of rework.
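As a sketch of the idea (the graph shape, damping factor, and file names here are invented for illustration), a plain power-iteration PageRank over the import graph might look like:

```typescript
// module -> modules it imports; every imported module is assumed to
// also appear as a key.
type Graph = Record<string, string[]>;

function pagerank(graph: Graph, damping = 0.85, iterations = 50): Record<string, number> {
  const nodes = Object.keys(graph);
  const n = nodes.length;
  let rank: Record<string, number> = {};
  nodes.forEach((m) => (rank[m] = 1 / n));

  for (let i = 0; i < iterations; i++) {
    const next: Record<string, number> = {};
    nodes.forEach((m) => (next[m] = (1 - damping) / n));
    for (const m of nodes) {
      const deps = graph[m];
      // Dangling node: its rank mass leaks, which is fine for a sketch.
      if (deps.length === 0) continue;
      // Rank flows from importer to imported, so heavily-depended-on
      // files accumulate the highest scores.
      for (const dep of deps) {
        next[dep] += (damping * rank[m]) / deps.length;
      }
    }
    rank = next;
  }
  return rank;
}

// utils.ts is imported by everything, so it should rank highest
// and therefore be converted first.
const graph: Graph = {
  "app.ts": ["store.ts", "utils.ts"],
  "store.ts": ["utils.ts"],
  "utils.ts": [],
};
const ranks = pagerank(graph);
```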


I was always a big fan of Optionals, but having learned Kotlin this year, I was pleasantly surprised at how much non-nullable types seem to remove the need for them. And non-nullable can be just as ergonomic as Optionals with the `?:` operator (or `??` in TypeScript). My only issue when working with `strictNullChecks` in TS is that `null` and `undefined` both exist (thanks JS...). I also wish TS adopted a `Type?` syntax like Kotlin that would signify `Type | null | undefined` and could be used anywhere, not just in function parameters and object types (like TS's current `key?: value` syntax).


I recommend just treating undefined and null as equivalent (since you can't control which one or ones third-party libraries will use). Everywhere you need to check for them do `!= null` or `== null` which is an idiomatic way to check for either.
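For example, loose equality against null matches exactly null and undefined, and none of the other falsey values:

```typescript
// `== null` is true for exactly null and undefined; unlike a truthiness
// check, it does NOT match 0, "", false, or NaN.
function isNil(v: unknown): boolean {
  return v == null;
}

console.log(isNil(null));      // true
console.log(isNil(undefined)); // true
console.log(isNil(0));         // false
console.log(isNil(""));        // false
```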


The tsdef library has a Nilable<T> type that means T | undefined | null. It's not so bad to type with a snippet like ?<tab> => Nilable<|>.
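If you'd rather not add a dependency, the alias itself is a one-liner (shown here with a hypothetical usage):

```typescript
// Equivalent to tsdef's Nilable: the value may be absent either way.
type Nilable<T> = T | null | undefined;

// Hypothetical usage: one alias covers both absence flavors.
function greet(name: Nilable<string>): string {
  return name == null ? "Hello, stranger" : `Hello, ${name}`;
}
```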


That snippet is a great idea, thanks!


This may be slightly off topic, but I'm wondering about this in C++. There is not_null<T *>, and some people even argue for using std::reference_wrapper<T> to avoid null dereferences. But isn't the root issue that the compiler allows dereferencing a pointer that's not guaranteed non-null? Wouldn't life be much easier if we tweaked things so that dereferencing a pointer is not allowed unless it's a not_null-wrapped one? What are people's thoughts on this?


Strict null checks have a dramatic positive improvement on code correctness.

Just want to point out that Python typecheckers (like mypy) do quite some sophisticated strict null checks when you use the 'Optional' type annotation.


> This website stores data such as cookies to enable necessary site functionality, including analytics, targeting, and personalization. By remaining on this website you indicate your consent.

This is illegal.


Only in some areas. Where are you browsing from? Inside EU or outside? To me it says

"This website stores information such as cookies to enable necessary web site functionality including analysis targeting and personalization. You can adjust your preferences at any time or accept the default preferences"

Settings

Marketing: OFF Personalization: OFF Analysis: ON

(Literally, in Swedish)

Denna webbplats lagrar data såsom cookies för att möjliggöra nödvändig webbplatsfunktionalitet, inklusive analys, inriktning och anpassning. Du kan ändra dina inställningar när som helst eller acceptera standardinställningarna. Integritetspolicy


EU law applies to EU citizens abroad, so unless you have a means of detecting the nationality (not the location) of the user, this is still illegal. Plus, even for EU visitors who decline consent at the prompt that you received, they still include heaps of tracking crap.


> EU law applies to EU citizens abroad...

Not in this case. The GDPR refers to residents of the EU, not citizens.


Why is null/undefined bad?


While I'm sure there are good academic arguments about why null is not a great language feature, I think the simplest way to explain the problem is to point to the number of bugs, incidents, outages, and failures that all seem to be related to it. Billions of dollars have been lost to null.

You cannot ask for perfect programmers who will never slip up. We're humans. People make mistakes, forget to check for null. So why not instead just make these kinds of issues impossible? Let the build process look for the mistake and block you from making it.


Because it’s very easy to write code that fails to check for null or undefined, which usually leads to errors since subsequent code often expects to find a non-null and non-undefined value.

The beauty of a type checker here is that it can check to make sure you properly handle the nullable/undefinable type.
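For instance, under strictNullChecks the compiler rejects an unchecked property access on a nullable value until you narrow it. A minimal sketch (hypothetical types and names):

```typescript
interface User { name: string }

function shout(user: User | null): string {
  // Accessing user.name directly here would be a compile error under
  // strictNullChecks: "Object is possibly 'null'".
  if (user === null) return "";
  // The check above narrows `user` to User, so this is now allowed.
  return user.name.toUpperCase();
}
```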


There’s another approach - where null takes on contextual meaning.

   List<Integer> f = null;
   f.add(1);
This is clearly a NullPointerException in Java but in a language with Nil punning, f is automatically a list containing 1.

    (conj nil 1)
    ;;=> (1)


Sure, I’m not asserting it’s the only way to handle nulls.

That said, for me personally, nil punning is uncomfortably close to the kind of weak typing (like in traditional JS) that can be catastrophic in large code bases.

However, I’ve never worked with a large code base in a Lisp - whereas I have with various statically-typed functional and non-functional languages - and I find static typing, particularly Option types, very valuable.


> That said, for me personally, nil punning is uncomfortably close to the kind of weak typing (like in traditional JS) that can be catastrophic in large code bases.

I think it would be totally fine and safe if “nil” is identical to an empty list in every scenario. In other words, nil is just a shorthand/synonym for an empty list.

Go kind of does that, for example. The only time it’s actually handled differently in my experience is in JSON serialization (which I personally believe was a terrible decision)


> I think it would be totally fine and safe if “nil” is identical to an empty list in every scenario

Yes, I agree. But even in Scheme, that's not the case. I'm not familiar enough w/Go to comment - I'll have to check it out.


I've only ever carved out solutions with stone knives and bearskins. I find stone knives very valuable.


Yes, noted caveman technology Haskell.

In terms of dynamic languages, I’ve also worked with a great deal of Python and JavaScript, with significantly less success on large codebases (which could be due to selection bias, admittedly).


This is also why capital letters are bad.

It's easy to write code which fails to check for a capital letter, which leads to errors when subsequent code requires a lower-case letter.

Capital letters are a billion dollar boondoggle.

It behooves languages to have a lower-case (or non-capital) character type which cannot be constructed with or assigned a capital letter.


I always interpreted that quote to mean that implicit nullability was the mistake. It's fine to have nulls, but not fine to have them pop up randomly anywhere.

Capital letters are fine, but not if they randomly pop up when you thought you were only working in lowercase, and now suddenly all your string comparisons need a .toLowerCase() on each side of the ==.


In all seriousness, after nulls, I think I've seen more issues caused by case-sensitivity in string comparisons than by any other class of error.

Domain names. Email addresses. Windows UPNs. Windows filenames... All manner of LDAP shit.


Must have been before Unicode. :)



Strict null checks don't change the language. They don't make the `null` feature of JavaScript go away; they just make it explicit in the type annotations. So `null` isn't bad. Languages that don't have `null` have something similar, like `Option`.

To me, that helps make the code more maintainable and that's the biggest benefit, even more than catching bugs. The process of maintaining code involves asking questions about the code. "What is this code for?" "Was this code written to handle <edge case>?" "Can this value, sent to me from another part of the system I'm not familiar with, ever be null?"

In a strict null check world, you can answer the last question with confidence just by looking at the type signature. Without strict null check, the types don't tell you anything about whether a variable can be null, so figuring that out can be anywhere between a distraction to a huge endeavor.
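A small illustration (hypothetical functions): the signature alone tells you whether null handling is needed, and the compiler holds you to it:

```typescript
// The type guarantees `title` is never null; no defensive check needed.
function titleLength(title: string): number {
  return title.length;
}

// The type documents that callers may pass null, and the compiler
// forces this code to handle it (here via ?. and ??).
function maybeTitleLength(title: string | null): number {
  return title?.length ?? 0;
}
```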


It isn't bad per se. The bad thing is when it would be a bug for a value to be null, but the language still allows it to be null.

By verifying that only the values for which it would be valid for it to be null are ever null, you can catch a large category of bugs automatically. You can catch even more bugs if the language forces you to check whether the value is null if you are dealing with a value that is allowed to be null.


It's one of those things that just allows for a lot of lazy code and/or unresolved corner cases.

Newer paradigms do varying degrees of 'forcing' you to handle the situation, with varying degrees of obsessiveness.

I don't quite think any of the solutions are very magically nice ... but we may still be yet one paradigm leap away from something a little more tight. Or, we may just have to live with Optionals forever.


Broadly speaking, it's not bad if your null or undefined value has a different type than the shadowed type. The two examples in statically typed languages I can think of are Rust and Zig; in dynamically typed languages I think of Elixir (which assigns it to an atom, and is a crash-fast language) and Ruby (nil is its own class).

Of the languages I have experience with, it's a major problem in C, Java, and JavaScript.

For javascript specifically there is also the issue of falsey discipline; there are a ton of falsey values so you could get tripped up in ways you forgot about when doing a null check.


In JavaScript, it's only the value undefined that is interchangeable with null, and only if you use == (two equal signs, vs. an explicit check with three equal signs), e.g. if (foo == null). So I would say it's extremely rare, but if you are feeling adventurous you could of course write if (foo), and it would match false, undefined, null, 0, the empty string, and NaN. That could be an issue if, for example, a variable is either a number (0) or null.
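A sketch of that number-or-null pitfall (function names are made up):

```typescript
// Buggy: `if (!count)` treats 0 the same as null.
function describe(count: number | null): string {
  if (!count) return "no data";
  return `${count} items`;
}

// Correct: only null (and undefined) mean "no data"; 0 is real data.
function describeFixed(count: number | null): string {
  if (count == null) return "no data";
  return `${count} items`;
}

describe(0);      // "no data"  (wrong: 0 is a real count)
describeFixed(0); // "0 items"
```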


In general, it's recommended to use NOT NULL in SQL as well. In practice, NULL causes many SQL statements to require 2 conditionals for logic instead of 1, introducing subtle bugs.

There are exceptions, but I haven't seen many. One useful one is if you want a unique index on a column and you can't use "", you can use NULL.

Another is if you need the full range of a column, plus indicating whether data was supplied or not. I see that more with applications involving money, whereas the empty string works fine for most tables used in SaaS and social applications.

Source: DBA.



