"Compare Rust where, unless you have explicitly used the unsafe features of the language, all pointers are guaranteed non-nil and valid. Instead of a function returning a nil pointer, it returns an `Option` type which is either `Some ptr` or `None`. The type system guarantees you have considered both possibilities so there's no such thing as a runtime null pointer dereference. Scala has a similar `Option` type, as does Haskell, calling it `Maybe`. In 2013 I don't want to still constantly check for a nil pointer, or have my program blow up at runtime if I forget."
So how is nil checking different from when I write a function that pattern matches against a Maybe in a way that I only match against the "Just" case?
We have been using Scala lately for a couple of web services that are now running in production at our startup.
The difference is huge, because Option[T] references are type-checked at compile time. Whenever a reference can be either Some(value) or None, you are made aware of it, and you are forced to either handle it (by giving a default value, or by throwing a better-documented exception) or simply pass the value along as-is and make it somebody else's problem.
Option[T] in Scala is also a monadic type, as it implements filter(), map() and flatMap(), which makes it really easy and effective to work with. Unlike "null", which isn't a value you can do anything with other than equality tests, None is an empty container whose element type you know at compile time, and in Scala it's also an object that knows how to do filter(), map() and flatMap().
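Rust's Option exposes the same combinators (its map/filter/and_then mirror Scala's map/filter/flatMap). A small sketch, with an invented `url_for` helper:

```rust
// Hypothetical helper: build a URL only if the port is usable.
fn url_for(port: Option<u32>) -> Option<String> {
    port.filter(|p| *p > 1024)                      // keep only unprivileged ports
        .map(|p| format!("http://localhost:{}", p)) // None short-circuits past this
}

fn main() {
    // Chain transformations without a single explicit null check.
    assert_eq!(url_for(Some(8080)), Some(String::from("http://localhost:8080")));
    // A privileged port is filtered out; every later step is simply skipped.
    assert_eq!(url_for(Some(80)), None);
    // A missing value falls through the whole pipeline to a default.
    let fallback = url_for(None).unwrap_or_else(|| String::from("http://localhost:80"));
    assert_eq!(fallback, "http://localhost:80");
}
```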
My code is basically free of NullPointerExceptions. This doesn't mean that certain errors can't still be triggered by null pointers, but those errors are better documented. What's better? A NullPointerException or a ConfigSettingMissing("db.url")?
Of course, truth be told, Option[T] (or Maybe, as it is named in Haskell) is only really useful in a static language. In a dynamic language such as Clojure, especially one with multi-methods or something similar, Option[T] is less useful. And before you ask: no, Go is not dynamic, and people saying that Go feels like a dynamic language don't really know what they are talking about.
>My code is basically free of NullPointerExceptions. This doesn't mean that certain errors can't still be triggered by null pointers, but those errors are better documented. What's better? A NullPointerException or a ConfigSettingMissing("db.url")?
Almost always it is a matter of 2 seconds to find the source of a nil pointer error. Given that I would almost never forward raw error messages to the user, I cannot really see a gain.
However, having a language that combines this Scala feature with Go's exception-free error handling would be awesome: a true solution that would make software run more reliably and with fewer crashes.
> Almost always it is a matter of 2 seconds to find the source of a nil pointer error
Either you're some kind of super-human, or your code bases are really tiny. Yes, you can usually figure out the trigger of a null pointer exception, but not the source that made it happen, and with complex software the stack trace can get a mile long ;-)
The biggest problem with null pointer exceptions is precisely that (1) they get triggered too late in the lifecycle of an app, (2) such errors are unexpected, non-recoverable and sometimes completely untraceable, and (3) you need all the help you can get in tracking and fixing them.
Either way, throwing better exceptions is just one side-effect of using Option[T], because in 99% of cases you end up with code that functions properly without throwing or catching exceptions at all. And you completely missed my point by focusing only on a small part of it, which is also the least significant benefit of Option/Maybe.
> However having a language that combines this Scala feature with Go's exception-free error handling
First of all, it's not a Scala-specific feature: the Maybe/Option type has been used and proven in other languages, such as Haskell and ML. That the Go language creators took no inspiration from these languages is unfortunate.
Also, people bitching about Exceptions have yet to give us a more reliable way of dealing with runtime errors. The only thing that comes closest to an alternative is Erlang and yet again, the Go language designers took no inspiration from it.
>Either you're some kind of a super-human, or your code bases are really tiny. Yes, you usually can figure out the trigger of a null pointer exception, but not the source that made it happen and with complex software the stack-trace can get a mile long ;-)
Ok even if the stack trace is 10 miles long, you just need to go to the end, right? :P
Anyway, so an exception gets thrown, and Scala forces you to explicitly throw an exception, am I right? How does the other case not crash your program unless you catch it?
>Also, people bitching about Exceptions have yet to give us a more reliable way of dealing with runtime errors. The only thing that comes closest to an alternative is Erlang and yet again, the Go language designers took no inspiration from it.
Go uses panic (Go-speak for exceptions) for really bad errors: out of memory, nil pointer dereference... You can catch them like in other well-known languages.
The only difference: your catch blocks aren't cluttered with handling for non-exceptional errors like a file not existing. You are forced to handle those explicitly. Why is this good? Outside of those truly exceptional errors, the state of your program is much easier to determine.
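Rust draws the same line described here: panics for truly exceptional faults, and an explicit value for expected failures the caller must handle. A sketch, with an invented `parse_port` wrapper:

```rust
use std::num::ParseIntError;

// An expected failure is encoded in the return type, so the caller
// is forced to deal with it explicitly instead of in a catch block.
fn parse_port(s: &str) -> Result<u32, ParseIntError> {
    s.parse::<u32>()
}

fn main() {
    match parse_port("8080") {
        Ok(p) => println!("port {}", p),
        Err(e) => println!("bad port: {}", e),
    }
    // A truly exceptional fault would use panic!(), which unwinds like
    // Go's panic and is not part of a function's normal contract.
}
```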
For Go? The biggest complaints about PL design are and have always been that its designers ignored or discarded the previous 30 years of PL (theoretical and practical both) when creating it.
Not just Go. But in Go a lot of problems could be solved by doing things the Erlang or ML way... then there's the new problems they've invented, like enforcing things that don't matter instead of things that do.
>then there's the new problems they've invented, like enforcing things that don't matter instead of things that do.
Never heard such complaints from users who have used it for a few months. I assume you are talking about unused imports and variables. Actually, that helps a lot, because it keeps your code clean and clear.
Pointers can be null; this is a well-known issue. So when you use pointers, you'd better check that they are not null. Even better is to have some kind of code convention or pattern that you follow to prevent this.
Still I don't understand why you would prefer MyCustomException to crash your catch-less program instead of NullPointerException.
Ok, you checked your pointer to see it isn't null. Then you passed it to function foo. Which passes it to function bar. Do foo and bar have to check it again?
That maintains dead code that will never fire, is untestable, and costs runtime. So you will probably not want to re-check the pointer at every point. However, the compiler doesn't help you here. If you ever decide to call foo or bar from any other point without the NULL check, then you will get a crash.
Type safety can solve this. It does not convert "NullPointerException" to "MyCustomException". It converts "NullPointerException" to a compile-time type error (expected Foo, got Maybe Foo. Or: Unhandled pattern in case statement: Nothing).
The trick is simply to differentiate between a pointer that is guaranteed to not be null and one that isn't. Then, disallow using a nullable pointer as a regular one and force a check.
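In Rust terms, that trick looks like this (a sketch; the `User`/`greet` names are invented):

```rust
// `&User` can never be null, so consumers need no check at all.
struct User {
    name: String,
}

fn greet(user: &User) -> String {
    // No null check needed, or even possible: the type guarantees validity.
    format!("hello, {}", user.name)
}

fn main() {
    // Only the boundary that produced an Option has to check, exactly once.
    let found: Option<User> = Some(User { name: String::from("alice") });
    if let Some(user) = &found {
        // After this single check, `user` is a plain `&User`; passing it to
        // greet() (or anything greet() calls) requires no re-checking.
        println!("{}", greet(user));
    }
}
```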
>Ok, you checked your pointer to see it isn't null. Then you passed it to function foo. Which passes it to function bar. Do foo and bar have to check it again?
I guess not. What I'm saying is therefore: if you use a language that allows a lot of stuff, you need to find a convention for your project. One may be: check for nil after assigning variables.
>costs runtime
By all means, no.
Anyway, looks like I need to try Scala and see for myself. (Scala installed: check. Hello World: check.)
Checking nil after assigning variables is not helpful for the reason you mentioned earlier: If you check for it and it is nil where it shouldn't be -- you're merely converting one runtime error (null exception) to another (different exception).
If however you use types to distinguish whether it can be nil or not, you simply eliminate the error completely at compile-time.
Glad you're checking it out!
I don't know Scala, I'm a Haskeller myself, but I believe it does get nulls more correctly. It might have bad old null in there too though because of Java interop.
> However having a language that combines this Scala feature with Go's exception-free error handling, would be awesome and a true solution that would make software run more reliable and with less crashes.
Do you realize that this is actually the case? Every library/API I have seen in Scala so far uses the appropriate abstractions like Option/Either/Try/Validation/... and restricts exceptions to the most exceptional faults.
But anyway, if I had to choose between Go's horribly broken approach of returning multiple values and exceptions, I'll choose exceptions every day. Exceptions are ugly, but at least they are not blatantly wrong like using tuples for error codes.
Indeed, you can end up with an indeterminate state. I'll tell you one thing: writing the if err != nil boilerplate in Go is isomorphic to writing try { ... } catch { ... } around each function call in a way that keeps your state clear. The difference with the former is that it reminds you to do this all the time.
> Indeed, you can end up with an indeterminate state.
No. I think that claim is hysterically funny considering that Go developers almost never check all FOUR states of Go's style of error handling.
The problem Go is solving here wouldn't even exist if they had designed/used a better language in the first place.
Maybe Go people should stop drinking so much Kool-aid, because they sound like all these Node.js-ninja-rock-star kids who think that they revolutionize asynchronous programming while they reinvent threads, badly.
There is a huge difference: null/nil is a valid value for a pointer, but Option[string] is not a valid value for a string argument, so the compiler forces you to deal with it.
How is that any different from always checking it? When you program in C, you essentially always have to check it, when you program in Scala/Haskell/etc you only have to check it once.
In Rust at least, pattern matching on enums must consider all possible cases. Option is just an enum, so it's a compiler error if you don't handle the None case.
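A quick sketch of that exhaustiveness check:

```rust
fn describe(x: Option<i32>) -> String {
    // Deleting the `None` arm below turns this into a compile error
    // ("non-exhaustive patterns: `None` not covered"), not a runtime crash.
    match x {
        Some(n) => format!("got {}", n),
        None => String::from("nothing"),
    }
}

fn main() {
    assert_eq!(describe(Some(3)), "got 3");
    assert_eq!(describe(None), "nothing");
}
```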
Ah, I see. I was confused by Haskell not doing that by default... at least the last time I wrote code on it.
Is there actual data that says that null pointers are actually causing bugs in production software? I always thought they are just a symptom of lazy programmers, and no language can fully protect against that.
Witness my awesome Haskell snippet of code:

    foo (Just x) = x + 2
    foo Nothing  = undefined
Ok, so it's somewhat better, since the laziness is now explicit and cannot happen as accidentally.
Anders estimates 50% of the bugs in C# and Java are due to null dereferences.
Your example illustrates the unsafety of undefined, not the unsafety of nulls/Nothing. And you can of course grep for use of partiality in Haskell code and get warnings about partiality in your own functions.
I've seen NULL causing a lot of trouble in production in every setting I've been.
It is very rare to see people mis-handling a Maybe value in Haskell, simply because you have to be explicit about ignoring the Nothing case.
Also, in Haskell, if you get a Maybe value it is a very clear indication that the Nothing case actually exists and you have to handle it. In C, C#, Java, Go, when you get a reference, it is unclear whether it could be null or not in practice. Checking for null when it isn't warranted is dead code you never test. Avoiding checking for null risks missing checks in cases you actually need to check. All of this is simply not a problem when the types don't lie.
I believe graue was referring to the case where you are not using such a type: there, you have a guarantee that the value is non-null. The purpose of the Some/Option/Maybe types, then, is to indicate when a value can be absent; outside that wrapper type it is guaranteed to be present. So code that requires a non-null value never has to confirm that that's the case. I think it is less about efficiency and more about not having to worry about it.
To me it's about the existence of the nil pointer itself. If there is a pointer, it is guaranteed to be valid by the type system. The other case (which would be represented by a nil or null pointer in other languages) is represented by the "None" type. There's no null pointer to dereference (and no way to blow up).