Java is still quite far from solving its NullPointerException issues. Kotlin, C#, and TypeScript have mostly solved it with their compile-time null checks. What these three languages have in common is that they need null to stay interoperable with older language versions (C#) or with related languages (Java, JavaScript). Languages like Rust, which don't even have null (or nil, as in Go), have finally solved the problem.
> Languages like Rust, which don't even have null (or nil, as in Go), have finally solved the problem.
I wouldn't quite phrase it that way, since it's nothing new. OCaml from the 1990s has no null value (you need to represent potentially missing values with Option). I suspect there are older examples than that.
You hit the nail right on the head.
I love how the null type is defined away from function signatures in TS. It's not entirely bug-free, but it gets pretty far!
I have no faith that Java will ever solve the problem after their bungled introduction of the Option type. Step 1 to migrating away from null would be to introduce a working Option type - one that could contain any valid Java value (which includes null for the time being) and where chaining behaviour worked the way you expect (i.e. that doesn't violate the monad laws, even when nulls are being thrown around). People from languages that have solved the problem told the Java folks about these issues, and were ignored, with the result that it's impossible to migrate an existing Java codebase to use Option, and I can't imagine the Java community having the appetite to introduce a non-broken Option that would allow moving off null.
It's not really clear from your description what's wrong with the Optional type. Once Java introduces value classes, I could imagine the Optional class being automatically transformed by the JIT into a nullable instance, removing all the overhead. Why would you want to keep null inside an Optional? That would be an absolutely unintuitive design.
The most basic example of the problem with null is: if you call Map.get(key) and get null back, you don't know whether that means the map doesn't contain a mapping for key, or it contains a mapping from key to null. Optional lets you solve this: if we added a Map.getOption(key) that returns an Optional then it will be None if the map doesn't contain a mapping for key, and Some(null) if the map contains a mapping from key to null. That way you don't have to rewrite the whole world to use optionals before you start getting the benefit: even if the code that's populating the map doesn't know about optional and is still using null, you've still solved your problem.
Instead, with the implementation that we got in Java, if you tried to write this Map.getOption then if your map was ever used with old code that used nulls then it would break. So there's no way to ever start migrating to using options.
There are other problems with null, but they mostly boil down to the same issue: different code uses null to represent two different things, and so gets confused about what it means. (The other problem is that you can't look at a value and know that it can't be null, but optional definitely can't solve that until the whole ecosystem migrates to it).
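To make the Map.get ambiguity concrete, here's a minimal Java sketch; the getOption discussed above stays hypothetical precisely because the real java.util.Optional can't hold null:

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class MapNullDemo {
    public static void main(String[] args) {
        Map<String, String> m = new HashMap<>();
        m.put("a", null);

        // Ambiguous: both calls print null, for different reasons.
        System.out.println(m.get("a")); // mapping present, value is null
        System.out.println(m.get("b")); // no mapping at all

        // A Map.getOption would want Some(null) for "a" and None for "b",
        // but Optional.of(null) throws NullPointerException, and
        // Optional.ofNullable collapses null to empty, so the two cases
        // become indistinguishable all over again:
        Optional<String> a = Optional.ofNullable(m.get("a"));
        Optional<String> b = Optional.ofNullable(m.get("b"));
        System.out.println(a.equals(b)); // true
    }
}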
When you're calling `Map.get(key)`, you get null both for the case when the value is null and for the case when the mapping is missing. If you need to distinguish those cases, you have to use `containsKey(key)`. That's questionable design for sure, and it's generally recommended to avoid null values. But Optionals have nothing to do with that.
If you ask me, they should have introduced another class, something like NullableOptional, similar to Optional, for those use cases. I don't agree that putting `null` in every optional just because of a few bad APIs is a good idea, because people use Optional exactly to avoid dealing with nulls.
> That's questionable design for sure and it's generally recommended to avoid null values. But Optionals have nothing to do with that.
On the contrary, a selling point of Optionals is that they let you avoid that problem. Like I said, you could have a Map.getOption(key) function that unambiguously tells you whether the value is present or not, because it only ever returns None for absence, and will always return Some(value) (which might be Some(None) or Some(null), but those can be distinguished from None) if the value is present.
> I don't agree that putting `null` in every optional just because of few bad APIs is a good idea, because people use Optional exactly to avoid dealing with nulls.
Optional should be a transparent, consistent class that can contain any value that's valid in the language. Maybe Java will eventually reach a point where null can be considered invalid, but until that day, having Option ban putting null in it because it's old and deprecated makes about as much sense as if ArrayList banned you from making lists that contained deprecated classes.
From your blog (5.5 years using Scala) it seems that null is a problem in Scala:
> Scala avoids the biggest pitfall of multiple inheritance elegantly, by only allowing one parent to have a constructor; classes may only be the first parent, and traits can't have constructors that take arguments. Unfortunately this doesn't mean they don't need to be initialized; in particular, vals in a mixed-in trait will be null if you try to access them in an earlier constructor.[2] As with any null, in the worst cases you may not get an error until much later on.
Null is not the problem there, null is the symptom. You see exactly the same problem if the val is a primitive type (which can't ever be null): if you access an Int from a mixed-in trait in an earlier constructor, that Int will be 0. This is a real issue, and one where I think Java has actually made the better design choice (interfaces may contain method implementations and constants but not fields - of course they had the benefit of Scala's experience when making that decision), but it's not about null.
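The same trap is easy to reproduce in plain Java, no traits required; a minimal sketch (Base/Derived are made-up names):

class Base {
    Base() {
        // Calling an overridable method from a constructor runs the subclass
        // override before the subclass's fields have been initialized.
        System.out.println(describe());
    }
    String describe() { return "base"; }
}

class Derived extends Base {
    int count = 42;
    String label = "ready";

    @Override
    String describe() {
        // Executed during Base's constructor: count is still 0, label still null.
        return label + "/" + count;
    }
}

public class InitOrderDemo {
    public static void main(String[] args) {
        new Derived(); // prints "null/0", not "ready/42"
    }
}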
Well, what I meant is that they're solving the unhelpful messages :) The root of the problem is still there. Only languages without Null and everything that entails have truly solved the problem.
It might look more pleasant, but it doesn't "solve" anything, only shifts it in a different place.
Go is in the middle, because structs have no null value, only a zero value; the only things that can be nil are pointers, which I personally feel should be used as little as possible.
In Java you're passed a reference to an object. Might it be null? Can it be null? Who knows! Mostly you're just going to ignore the possibility and hope for the best.
In Rust you're passed a reference to an object. Can it be null? No, it simply can't! There does not exist a magic "null" value for references. To represent the possible absence of a value, you explicitly encode it into the type system by using `Option<T>`.
Option types don't have to be boxed and are specifically optimized in Rust and, if I remember correctly, in Haskell. The Nothing is represented much like a null pointer, i.e. with a special value that can't be dereferenced.
Java would also refuse to compile using an Optional<T> where T is expected.
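For example, a short sketch of that refusal (length is a made-up helper):

import java.util.Optional;

public class TypeMismatchDemo {
    static int length(String s) { return s.length(); }

    public static void main(String[] args) {
        Optional<String> maybe = Optional.of("hi");
        // length(maybe);  // does not compile: Optional<String> is not a String
        System.out.println(length(maybe.orElse(""))); // unwrap explicitly first
    }
}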
Java and Rust are practically the same regarding the difference between Optional<T> and T (Java), and Option<T> and T (Rust). Optional<String> in Java has the `.get()` method, which corresponds to the `.unwrap()` method in Rust. And in both languages these are different types than String.
But in Java you can have an Optional<String> that's null, not just empty. So that's three states - null, wrapping null, or wrapping a value.
I'd say the big difference is that in Rust you can have places that refuse nulls (e.g. using T instead of Option<T>), so you have well-defined points of control/checking. That, combined with the difficulty of recovering from a panic (the equivalent of a null pointer exception when trying to unwrap a missing value) and the Result type, is Rust's real strength wrt. missing values.
I think we are saying the same thing, from slightly different perspectives. What I'm saying is that because an Optional can itself be null, it can be used where it is not actually that type, without a type error.
Yes, the difference is a combination of types being non-null by default and compiler enforced checking.
This doesn't require an option type. Option types are actually quite awkward compared to language integration. Kotlin doesn't use an Option type yet still delivers all the same benefits as Rust/Haskell's approach, with benefits (e.g. zero overhead, integrated syntax). Note that Option type overhead isn't merely about heavyweight syntax and the need for the compiler to successfully scalarise: using language integration around standard null pointers means the CPU spends no time on checking the 'maybe' branch. If the type system wasn't violated and a value is present, it's used immediately without any additional instructions or code bloat. If it's not then that will trigger a page fault at some point in the CPU pipeline and transfer control to an exception handler. If the type system indicates nullability and you need to actually do more than just unwrap it, then you have to take a branch of course but it's fully inlined and cheap.
Doesn't that approach make the "nothing branch" extremely expensive? Makes sense in scenarios where you really always expect Something and not Nothing. In practice, Options are very often used where something is optional, and you can expect to get Nothing a good percentage of the time. Using exceptions in such cases seems silly.
Yes, if you expect nothing, you test for it, obviously. The problem is when you want high-quality errors for the case where you didn't expect it and you e.g. cast away the nullness, or it was cast away for you due to language interop.
I know what an Optional is, the problem is that this pattern of wrapping stuff in Optionals seems to appear a lot, which means that in practice you have to do this little dance like you would in Java or, as others pointed out, try your luck with unwrap() and hope it goes well. In practice, it doesn't make coding much different.
You seem to forget all the cases in Java where you didn't expect null to happen, but for some reason something was null and you get a NPE. Sure, if you check all values for null already the difference to now is not very big, but no one I know does this. Everyone has some intuition whether something "can" be null or not. Rust (and other languages without null) formalize the notion of "this cannot be null", so you don't have to depend on your intuition anymore - and cannot be wrong about it.
Regarding unwrap: If you tell the compiler "I know what I'm doing" and you don't know what you are doing then no one can help you. It's the same as putting casts everywhere when the compiler yells at you about incompatible types: If you lie to the compiler it will do your bidding, but your code will run into errors at runtime that you could have prevented at compile time. Again: No one can help you here.
The key difference is that in your second example you can just forget about the else branch, while in Rust you must explicitly handle the None case, otherwise it won't compile.
No that's not the key difference, because you can just call `unwrap()` and forget about it, which is effectively the same footgun as other languages.
The key difference is that now the type system can tell you the truth. In other languages when you have a reference to a type T, null is considered a valid T but you cannot treat it as one or everything blows up. Meanwhile in rust when you have a reference to T, you don't have a secret (not)T that invisibly breaks everything.
> No that's not the key difference, because you can just call `unwrap()` and forget about it, which is effectively the same footgun as other languages.
Are you sure? Each line of Java or JavaScript can be equivalent to multiple `unwrap()` calls. You can't see them because the compiler is generating them [0], but they are there, and there are many hundreds of thousands (or maybe even millions, including dependencies) of them in most Java projects.
Here is an example: company.board.members.size() is equivalent to company.unwrap("NPE").board.unwrap("NPE").members.unwrap("NPE").size(), and every single line in Java is like that. You can't opt out. You can't tell your coworkers to stop using "unwrap" during code reviews. The entire language forces you to do this in every single line that involves a reference.
[0] Well, it's actually just trapping the 0 address but it's equivalent.
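Incidentally, decoding those implicit unwraps is exactly what JDK 14's helpful NullPointerExceptions (JEP 358) do. A sketch, with made-up Company/Board classes; in 14 the feature is opt-in via a flag, and the exact message wording may differ:

// Run with: java -XX:+ShowCodeDetailsInExceptionMessages NpeChainDemo
public class NpeChainDemo {
    static class Board { java.util.List<String> members; } // members stays null
    static class Company { Board board = new Board(); }

    public static void main(String[] args) {
        Company company = new Company();
        // Each dot below is an implicit "unwrap"; with the flag enabled the
        // NPE message pinpoints which one failed, along the lines of:
        //   Cannot invoke "java.util.List.size()" because "...members" is null
        System.out.println(company.board.members.size());
    }
}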
Please do not quote posts out of context in order to make your argument seem stronger.
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
I am well aware of the issues in other popular languages. You and I agree that there is an implicit `unwrap()` call in each dereference operation.
The post I was responding to claimed that in Rust you cannot leave null/None values unhandled. This is false because `unwrap()` exists and more importantly is in fairly common use.
The distinction is not whether you can (at your own peril of course) assume a value is not null. Rust lets you do that.
As you point out however, it does force you to be more explicit. Why is that? It's because the concept of `None` is not hidden from the type system.
The reason that your Java examples work that way is not because `unwrap()` is implicit, it's because the type system doesn't know about nulls in the first place.
> The key difference is that in your second example you can just forget about the else branch, while in Rust you must explicitly handle the None case, otherwise it won't compile.
Do you really think that `unwrap()` is meaningfully different from "forget[ting] about the else branch"? Does it require you to "explicitly handle the None case"?
My goal here was not to get mired in semantic arguments that ignore the context of the discussion. It was merely to highlight that Rust lets you assume that things won't be null the same as any other language, and that the important difference is that rust lets you be clear about what can and cannot be null.
Yes, it absolutely is different. The naive code will not have unwrap() in it, and it will fail to compile. At that point the developer can still mistakenly use unwrap() to unblock themselves, but it's still opt-in, not opt-out.
If you're implying that coders use unwrap() pre-emptively, I don't think there's any evidence for that. Besides, you can't just slap it on every value you have - it must actually be a result type! - so even if it's applied incorrectly in advance, that still requires some conscious consideration.
This is not at all the same as forgetting an "else" - or, far more often, forgetting the "if" altogether.
I think this is the point of contention. Languages with some Option type and no null are not the same as any other language. Yes, you can ignore error handling, but you cannot forget it.
IMO handling forgetfulness is the job of the compiler and ties back into my overall point; that the meaningful distinction is the type system's awareness of what can or cannot be null.
I believe that claims that Rust forces you to not "forget about the else branch" are actively harmful to language adoption, I've seen many people use `unwrap()`'s existence as evidence that Rust's approach to empty value types doesn't actually provide any useful benefits.
TypeScript is another one where null and undefined are unique singleton types, which are encompassed under the catch-all `any` type but, if strictness is turned on, are rejected by any other type matching.
doThingA(param: any) // you can pass anything including null and undefined
doThingB(param: object | number | string | boolean | bigint | symbol) // you can pass anything except null and undefined
I get the thrust of your argument (that the expressiveness of the type system is what matters here), and it makes sense to me.
I'm not sure I agree with this though:
> Do you really think that `unwrap()` is meaningfully different from "forget[ting] about the else branch"? Does it require you to "explicitly handle the None case"?
I haven't used Rust, but digging up the source shows unwrap is indeed just a `match` that panics on None.
The fact that "you can put thing inside a function and call that function" doesn't mean you "don't have to explicitly do thing". We end up with a pretty useless standard for "explicitly doing thing" if putting thing in a function doesn't count.
It solves it by making sure you handle it at compile time.
In Java you tend not to know where or when null checks have happened or will happen. So you tend to sprinkle null checks all over your code just in case someone somewhere forgot to check.
The difference is that the choice of Option is deliberate in the parts of the code where None is a valid value. In Java you have no first-class way to indicate that null isn't an expected value for some variable, so the type checker can't block it. In Rust, the type checker won't let None slip in accidentally if you don't use Option.
I can't speak for Rust, but in Go accessing properties of nil objects and dealing with 0-values for non-pointer values can cause bugs in certain contexts where you "let your guard down".
func getFoo() (*Foo, error)

func main() {
    foo, err := getFoo()
    if err != nil {
        // be an adult and handle your error
    }
    foo.Work()
    // panic because whoever implemented getFoo returned a nil object
}
Also worth noting: dealing with booleans in JSON is wonky, because boolean values default to false even if that JSON property wasn't set over the wire. Thus you end up unmarshaling to a pointer to a boolean, which then requires nil guards throughout your codebase when working with that unmarshaled object...
It means that you've told the type system what it needs to support you.
If your function tells the type system you need a reference to a type, you get a reference to that type. There's no hidden timebomb in there where you might get something that supposedly is that type but can't be treated as that type.
This lets you write your functions to accept nulls where appropriate, and require some form of unwrapping at the call site otherwise. It tells you when you've missed something and it lets you refactor without having to constantly repeat all of your checks, once you know something is valid the type system reflects that for you.
In short if you have a T in rust, you know that it's not secretly a (not)T that is implicitly allowed anywhere a T can be.
>It might look more pleasant, but it doesn't "solve" anything, only shifts it in a different place.
It shifts it right into your face. You're forced to do something instead of pretending that everything could be potentially null. Since the only places where you can actually encounter "null" are documented you are no longer wasting your brain cells on the happy path where you are guaranteed to never see a null value when you don't expect it. Instead you can free up all that cognitive capacity to actually write correct code that handles null values.
For the case where something really could be absent, you're right; it's the same amount of code. The difference is that in a language with null you have to write that code for every single function parameter, whereas in Rust you can have required parameters that are actually required and only need to check the genuinely optional cases (which is what, maybe 5% of the time?). Make invalid states unrepresentable.
Using the `match` method, you can guarantee that in the `some()` branch, v is not null, and will never be null.
In the `if == null` method, the `something` reference is null, but if you have functions that also reference `something` down the line, you're going to have to check it again for null-ness.
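The contrast carries over to Java's Optional too; a rough sketch (takesNonNull is a made-up helper):

import java.util.Optional;

public class ScopeDemo {
    static void takesNonNull(String s) {
        // Can only assume s is non-null if every caller was disciplined.
        System.out.println(s.length());
    }

    public static void main(String[] args) {
        Optional<String> maybe = Optional.of("hello");

        // The check and the binding are one operation: inside the lambda,
        // v is guaranteed present and can't be forgotten downstream.
        maybe.ifPresent(v -> takesNonNull(v));

        // With a bare nullable reference, the type system doesn't record
        // that the check happened, so callees have to re-check.
        String something = "hello";
        if (something != null) {
            takesNonNull(something);
        }
    }
}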
Rust does in fact have a ? operator [0], but it works slightly differently than Kotlin's in that it returns early if a None/Result<Err> is encountered instead of just resolving to that None/Result<Err>. If that isn't what you want, then Option::map() is a fine substitute.
The function that can panic still exists, but the ? operator itself won't invoke a panic. Sort of like how you can still get NullPointerExceptions in Kotlin using !!, but standard practice is to use a safer option.
Exception/panic-free code is fairly straightforward; you just need to have APIs return Option<_>/Result<_>/other error code equivalents to indicate failure. The problem with that is that it introduces some amount of friction that will probably make some programmers unhappy.
If you want to avoid returning Option<_>/Result<_>/other error code equivalents, you need some way to prove your inputs cannot trigger an invalid result, and that is a very difficult problem.
Panicking is the right thing to do: fail-fast is much better than silently continuing in an invalid state. If the None state is valid then don't call unwrap() (which you should almost never use, because if None were not a valid state then why were you using an Option in the first place?); call a function that lets you handle both cases, e.g. unwrap_or.
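Java's Optional has the same split, for what it's worth (get() plays the role of unwrap(), orElse that of unwrap_or):

import java.util.Optional;

public class UnwrapDemo {
    public static void main(String[] args) {
        Optional<String> none = Optional.empty();

        // Fail-fast, like unwrap(): throws NoSuchElementException on empty.
        // Only appropriate when absence would be a bug.
        // String s = none.get();

        // Handle-both-cases, like unwrap_or: supply a default for the
        // empty case instead of crashing.
        System.out.println(none.orElse("default")); // prints "default"
    }
}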
It's sometimes the right thing to panic, in which case use the !! operator to cast away the nullness and get an NPE at that point (now a helpful one as the new Java feature is really a JVM feature!)
With this new feature I'd argue the Java/Kotlin world now has the best handling of optionality of any language, anywhere:
• Pleasant, concise syntax for handling optionality. Not bolted on with an option type.
• Highly efficient translation to machine code. No boxing, no unnecessary branching in the unwrap/!! case.
• Excellent interop with older code written in OOP languages that don't represent nulls in the type system.
• But that older code can be annotated with @NotNull and @Nullable to retrofit the information; within Java IntelliJ will statically analyse these and add warnings where an NPE/panic would occur, when such code is used by Kotlin the annotations cause the types to be correctly derived as nullable or non-null.
• The large wealth of libraries that want to explore object graphs continues to work without being distracted by option/maybe types (e.g. serialisation, UI binding)
• On the rare occasions where you do need to unwrap and you got it wrong, you now get an error message that breaks down the sub-expression where the null pointer occurred, so there's no incentive to break up long chains of dereferences just to get better debugging in case of failure.
That's a featureset around optionality that's hard to match.
I have to disagree. This is an improvement, but it still leaves Kotlin behind where languages like OCaml were 20+ years ago, because nullable types don't compose the way you'd expect. (If you write generic code that uses T?, your code will break when T is itself a nullable type).
> • Pleasant, concise syntax for handling optionality. Not bolted on with an option type.
Optionality is not special enough to be worth a special case in the type system, IMO. Maybe some syntactic sugar could be worthwhile, but not a magic type that behaves differently from other types (which is what Kotlin's ? is) - you need it to behave like a normal type so that you can write and reuse generic code. Arrow-kt can't implement functions that work with nullable values, and has to implement its own Option type instead.
> • Highly efficient translation to machine code. No boxing, no unnecessary branching in the unwrap/!! case.
Getting the semantics right is the important thing; it doesn't matter how fast the code runs if it's broken. It's still possible to compile a well-behaved option type into something unboxed in the cases where it can be represented that way (look at what Rust does, where the option is "packed" if 0 is not a valid value for the inner type, but still does the right thing for Option<Option<T>>, Option<Int> and so on).
The amount of time I've spent eye-tracing code looking for the possible culprits of a NullPointerException and then debugging it later... This feature should have been there from day one.
It's a huge feature. I can't count the number of times it would have saved me a lot of time. It should really be back-ported to Java 8 and Java 11.
I wonder about records - they would be great for simplified implementation of immutability, but they seem to not provide a way to copy with a subset of modified fields - like in Scala:
case class Person(
    firstName: String,
    lastName: String,
    age: Int
)
you could create an instance like this:
val emily1 = Person("Emily", "Maness", 25)
and then create a new instance by updating several parameters at once, like this:
// emily is married, and a year older
val emily2 = emily1.copy(lastName = "Wells", age = 26)
You'd probably have to manually write a Builder in the Person record class. Then the callsite might look something like this:
var emily2 = emily1.newBuilder()
    .lastName("Wells")
    .age(26)
    .build();
I think the inner Builder pattern is common enough in immutable java object implementations that they might want to include that fully in the Record class generation at some point. Most immutable object libraries that I've seen include a lot more functionality than Records, like builders.
This has many advantages, one being that it provably and declaratively creates a copy, which is not the case with the builder. In fact the obsession with methods (and hence the implied state) is what makes Java a terrible misfit for functional paradigms such as immutability.
JS is not any better in that regard; that simple example ignores nested objects, where the outer one is copied and the inner ones are referenced (unless you use triple dots all the way down - tedious and error-prone, or even simply inefficient if done by some recursive algorithm).
The syntax is cool though
I don't understand what you mean by provably. The Java example does the exact same thing as the JS version: create a shallow clone with a different age. Do you mean the builder might do something different?
// Java
var emily2 = emily1.toBuilder().age(30).build();
As far as adding the spread operator to Java, I think you'd have to require a Spreadable interface to use it or limit usage of the spread operators to records. It's not clear to me that the expressive power is worth it over a builder.
If a tool analyzed the JS AST, it could conclude (with fairly trivial analysis) that emily2 is a shallow clone of emily1. No such guarantee is possible with the Java version.
Builders are really only useful for small programs or classes that aren’t broadly used.
Otherwise, at some point you need to add a field to Person, and then the compiler can’t tell you about all the places where your code fails to initialize the field.
Worse, some people pass builders around.
At that point, the problem of checking that you’ve passed a legal set of arguments when constructing objects is as hard as the halting problem.
You can write tests to make sure your program is typesafe, but that takes time away from writing more useful tests.
I find Builders most useful when you've got a lot of optional arguments. For mandatory arguments (and most arguments should be mandatory), you should definitely use constructors instead, so that your tool kit can tell you when they change.
Optional arguments with reasonable defaults don't present a problem for refactoring. It does make it hard to spot the cases where the reasonable defaults don't work. One trick is to temporarily add it to the constructor, review all the errors, and then remove the constructor arg.
Java lacks default arguments for functions, which makes that particular API infeasible without adding support for such. They could generate "withFoo" methods or even builder types, but that would butt up against developer preference, so it's probably best to leave it up to developers or a third-party library.
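A sketch of what hand-written "withFoo" methods on a record look like today (Person and its withers are illustrative; nothing here is compiler-generated):

public class RecordCopyDemo {
    // The record gives you the constructor, accessors, equals/hashCode and
    // toString; the "with" methods below must still be written by hand.
    record Person(String firstName, String lastName, int age) {
        Person withLastName(String lastName) {
            return new Person(firstName, lastName, age);
        }
        Person withAge(int age) {
            return new Person(firstName, lastName, age);
        }
    }

    public static void main(String[] args) {
        var emily1 = new Person("Emily", "Maness", 25);
        // Married, and a year older (each wither allocates an intermediate copy):
        var emily2 = emily1.withLastName("Wells").withAge(26);
        System.out.println(emily2); // Person[firstName=Emily, lastName=Wells, age=26]
    }
}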
Kotlin has a very similar feature in its data classes, FWIW.
This goes against the very idea of method signatures in Java an overloads based on them. An attempt to add it will likely bring in way, way more backwards-compatibility pain than any benefits are worth.
The JVM has all the machinery required to do this right, though: invokedynamic.
Kotlin provides named arguments and a copy method on data classes/records that uses them. You do indeed have binary compatibility issues with them, but it's not a fundamental problem. It's just that it sits at the intersection of:
- Problems that only affect library writers, rarely a high priority for language designers (Java being an exception)
- Obscure JVM knowledge and new techniques
- Poor Android JVM holding back the ecosystem by discouraging any bytecode techniques that post-date Java circa 2010.
You can have named parameters and backward compatibility with method signatures, Groovy does that [1].
OK, it cheats a little bit, because named params are actually syntax sugar for Maps. But other languages do the same (e.g. Python), and it is very useful too. And if you can type-check the map (as in TypeScript), the functionality is equivalent to named params.
Python has real keyword parameters, not just a syntax sugar. They're just presented as a dict when you want to see them all. And you can splat that dict when calling.
Ruby before 2.7 has a syntax sugar where keyword parameters are the same as a trailing hash. But that's changing as well.
Does Kotlin have anything to let you treat data classes in a generic way like you can do with Shapeless in Scala? That's where the real power comes IME - e.g. look at Circe where you can derive JSON encoders/decoders for arbitrarily nested case classes at compile time, so you have zero boilerplate but still get compile-time type safety that stops you from accidentally trying to serialise a file handle or semaphore.
Looks like it's doing annotation processing at compile time? The trouble with that is that you end up with "magic" code like calling methods that don't appear to exist, and in order to be able to work on the serialisation itself (e.g. to add a new format) you have to understand the "magic", because there isn't a good intermediate abstraction. The great advantage of Shapeless is that all the "magic" is in the library itself and it spits out a generic representation of your data classes in record form; you can implement a new serialisation format in 100% vanilla Scala without having to worry about any macros or annotations or anything.
It's not annotation processing, it's a compiler plugin with IDE support. So pretty deeply integrated. It avoids some of those issues.
The kotlinx.serialisation design is pretty nice. It's basically what you describe. The compiler generates generic code that can call into a variety of "codecs" but they don't have to actually be serialisation specific. So there are codecs for JSON, protobufs, CBOR etc but you can also do arbitrary object graph transformations with the framework. A deep clone being a simple hello-world type example. I've never used Shapeless but it sounds pretty similar in some ways.
> It's basically what you describe. The compiler generates generic code that can call into a variety of "codecs" but they don't have to actually be serialisation specific. So there are codecs for JSON, protobufs, CBOR etc but you can also do arbitrary object graph transformations with the framework. A deep clone being a simple hello-world type example.
So what does the value that gets passed into your codec look like? What makes Shapeless tick is that it can represent records generically at the type level; records will have a type like
type Book = 'author ->> String :: 'title ->> String :: HNil
that you can actually break down and do meaningful operations on (e.g. you could run it through a function that counts the lengths of strings and get something of type 'author ->> Int :: 'title ->> Int :: HNil, in a fully type-safe way). I'd be impressed if someone managed to encode those kinds of types into Kotlin - e.g. I don't think Kotlin has singleton types, so being able to compare field names at the type level is probably impossible?
Kotlin has this too, but Java does not have named params.
It would be a good addition IMO (think Rect(left, top, ...)), especially combined with default params.
Alternatively, they could automatically generate a builder class for records.
As always in Java, I guess somebody will duct-tape an annotation-based solution on top of the language to solve this.
which is just ceremonial boilerplate that obscures the actual logic of what the class is doing (an IDE can generate the constructor for you, but the person maintaining your code still has to take the time to comprehend it). You can do the more complicated things in cases where you need to, but it's important to keep the simple case simple.
"switch expressions", "text blocks". I haven't used Java for 15 years but it's one of the most mature languages out there. I'm surprised it's only just getting these pretty standard features now.
I don't think switch expressions are pretty standard at all.
Switch statements are standard, and Java has had them for a long time. Switch expressions are different in that the switch "statement" now returns a value that can be assigned to a variable.
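A minimal sketch of the difference, using java.time.DayOfWeek (the classic letter-counting example from JEP 361):

import java.time.DayOfWeek;

public class SwitchDemo {
    public static void main(String[] args) {
        DayOfWeek day = DayOfWeek.WEDNESDAY;

        // Statement form: the value is produced as a side effect,
        // and a forgotten break silently falls through.
        int n;
        switch (day) {
            case MONDAY: case FRIDAY: case SUNDAY: n = 6; break;
            case TUESDAY: n = 7; break;
            case THURSDAY: case SATURDAY: n = 8; break;
            default: n = 9;
        }

        // Expression form (standard in Java 14): the switch itself yields
        // the value, with no fall-through and checked exhaustiveness.
        int m = switch (day) {
            case MONDAY, FRIDAY, SUNDAY -> 6;
            case TUESDAY -> 7;
            case THURSDAY, SATURDAY -> 8;
            case WEDNESDAY -> 9;
        };

        System.out.println(n + " " + m); // 9 9
    }
}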
it's always moved slowly on purpose... one of the language's greatest features is backwards compatibility and stability
it's purposefully dead simple. this is why many of us switched to alternative JVM langs... Java can't keep up with the innovation other langs have without breaking its philosophy of slow-moving/backwards compat
Not all features have anything to do with backwards compatibility. I agree that features should be added with caution, because new features can interact with existing features in non-trivial ways that introduce a lot of edge cases. But it's hard to see why a feature like text blocks would introduce more complexity.
Because once a feature is introduced it needs to be supported for eternity, every feature has to do with backwards compat. The language designers need to make sure the features are exactly what we want and need before introducing something half baked.
At one point “simple” was not really a fair assessment of Java, especially if you had to do multithreaded / concurrent code back when we had broken primitives in the Java 3-5 days.
It’s moving along a lot faster now thankfully but if it was going this fast 20 years ago we may not have needed Kotlin or Groovy
I guess one of the reasons was abundance of hardware platforms and operating systems back then. Threads were not very native to UNIXes. Nowadays it's basically x86_64 and ARM with Linux/Windows which are much more mature.
I think the issue being referred to was the lack of a rigorous memory model. But that wasn't a failing of Java. Java was one of the first to get such a model, most languages/runtimes have never even tried at all.
Looks like a bug. But anyway, it should be easily detectable by static analysis and reported as a warning, so not a big deal in practice, even if working as intended.
"14.21. Unreachable Statements
It is a compile-time error if a statement cannot be executed because it is unreachable.
This section is devoted to a precise explanation of the word "reachable." The idea is that there must be some possible execution path from the beginning of the constructor, method, instance initializer, or static initializer that contains the statement to the statement itself. The analysis takes into account the structure of statements. Except for the special treatment of while, do, and for statements whose condition expression has the constant value true, the values of expressions are not taken into account in the flow analysis."
I still don't understand why reachability changes the binding of a name to a local variable vs. a field. Identifier resolution should happen before reachability analysis.
Is there some bug report or mailing list thread with a reaction from the Java developers?
(again, reachability analysis of unrelated code changes semantics.)
The problem is that this IS defined behaviour - the scope of the instanceof-assigned variable is dependent on whether or not the taken if-statement is provably exiting.
This is intended to allow
{
    if (!(obj instanceof String s)) return;
    // s exists now.
}
IMO they should treat it exactly like they treat uninitialized variables.
void test1(boolean b) {
    String s;
    if (b) {
        return;
    } else {
        s = "test";
    }
    System.out.println(s);
}
This code compiles.
void test2(boolean b) {
    String s;
    if (b) {
        if ("a".equals("a")) {
            return;
        }
    } else {
        s = "test";
    }
    System.out.println(s);
}
This code does not compile. But that does not mean that println will try to resolve s to something else. I think they should have gone a similar route, where the declared variable would be available for the entire lexical block where the `if` was used, but definitely assigned only inside the matched branch. Usage in other code would error with "variable might not have been initialized", consistent with how it works now.
Of course, that would require shadowing the previous declaration for consecutive `if`s. But it would be much more obvious and understandable. Actually, the whole construction would be just syntactic sugar, almost expressible with current Java constructs:
/*
if (o instanceof String s) {
    System.out.println(s.length());
}
//System.out.println(s.length()); // variable s might not have been initialized
if (o instanceof Number s) {
    System.out.println(s.intValue());
}
*/
String s;
if (o instanceof String) {
    s = (String) o;
    System.out.println(s.length());
}
//System.out.println(s.length()); // variable s might not have been initialized
Number s$; // no variable shadowing in Java now, but it could work
if (o instanceof Number) {
    s$ = (Number) o;
    System.out.println(s$.intValue());
}
//System.out.println(s$.intValue()); // variable s$ might not have been initialized
and it would be directly expressible if Java allowed variable name shadowing, which is a good thing, as proven by Go and Rust (although that would be an incompatible change for old code; allowing shadowing only for pattern variables would not be, because old code has no pattern variables).
Of course I did not think about this problem for too long and probably missed something important, so that's just my 2 cents. I guess, developers took that path for a reason.
Basically they want the following code to work:
String s;
void test(Object o) {
    if (o instanceof String s) {
        System.out.println(s); // pattern variable s
    } else {
        System.out.println(s); // this.s
    }
}
and I'd argue that this code should not compile! It's bad code. If the developer wants to use `this.s`, they should explicitly write that.
It's a discussion of a simple use case where the variable is only available inside the positive if match (actually pretty logical behaviour). It does not touch the real source of confusion: reachability analysis of unrelated code changing name resolution.
With switch, records, and (soon?) sealed interfaces, Java will be more pleasant than ever.
Would I ever choose it if I were in charge of a project? Probably not. But it is nice that the language is incorporating these proven features that make a huge difference. It'll make working in Java when I'm not in charge much nicer.
I've been doing mostly Java work in my career and I'm using more JS these days on the side. I'm envious of the rest, spread, and deconstruction functionality in ES6. Really, really, envious.
Syntax is not that different from Java. Someone who knows Java could be productive in 1-2 days. But it sounds like you agree that your leadership in that scenario was not rational.
One thing included in JDK 14 that will help alleviate this problem for certain types of applications (e.g. desktop apps) is the jlink/jpackage tools that make it much easier to ship an application with a bundled JDK that installs and behaves like a regular native application. If you bundle the JVM with your shipping app, you can always update to the latest JDK version.
People deliberately chose to stay on some old version. Java backwards compatibility is awesome. The only questionable move was Java 9, when they removed a lot of classes from the standard library and introduced modules, but even that migration was not so hard.
Some people just don't want to invest ANY money to improve their code. They just want to release new features. They would use Java 1 on Windows NT 4 if they could.
It's good to see Java becoming better and iterating faster.
However, I find it weird that hosted languages like Kotlin, Clojure, or Scala are more approachable for me, since they work fine on JDK 8, like "libraries". They're much easier to upgrade than upgrading the JDK itself.
Why is the cast even necessary? Isn't the cast only part of the type checker and thus unnecessary with a smarter type checker? For example TypeScript can do it, if you have code following an if condition that checks the type of the variable, you can use that said variable as if it were of that type without any further assertions necessary, eg.
let x: unknown;
if (typeof x === "string") {
    console.log(x.charCodeAt(0));
}
Two reasons: current convention and backwards compatibility. Java developers are simply used to the cast; it is a commonly occurring pattern (even if out of necessity).
The backwards compatibility part:
class Foo {
    void foo(Object o) { }
}

class Bar extends Foo {
    void foo(Integer i) { }
}

...

Foo x = ...;
Integer i = 42;
if (x instanceof Bar) {
    x.foo(i);
}
Currently this calls Foo.foo; with a smart cast it would call Bar.foo, possibly breaking existing code.
That's not correct. Java method dispatch is always dynamic, so x.foo() will always call the foo() method on whatever concrete type x actually is. The declared type of the variable that holds the reference to that object doesn't matter, nor does casting (with a couple exceptions, none of which apply here).
But don't trust me, try it!
| Welcome to JShell -- Version 11.0.5
| For an introduction type: /help intro
jshell> class Foo { void foo() { System.out.println("Super"); } }
| created class Foo
jshell> class Bar extends Foo { void foo() { System.out.println("Sub"); } }
| created class Bar
jshell> Foo f = new Bar();
f ==> Bar@58651fd0
jshell> Bar b = new Bar();
b ==> Bar@5419f379
jshell> f.foo();
Sub
jshell> b.foo();
Sub
jshell> ((Foo)b).foo();
Sub
jshell> class Foo { void foo(Object o) { System.out.println("Super"); } }
| created class Foo
jshell> class Bar extends Foo { void foo(Integer i) { System.out.println("Sub"); } }
| created class Bar
jshell> Foo f = new Bar();
f ==> Bar@7f9a81e8
jshell> Integer i = 1;
i ==> 1
jshell> f.foo(i)
Super
jshell> ((Bar)f).foo(i);
Sub
Your statement is wrong, in the following sense: at runtime, the call to x.foo() always starts by looking at the runtime type (which in this case is Bar, or some subclass of it); otherwise overriding methods would not work.
This kind of flow analysis is pretty new, IMHO. C# supports it, e.g. for nullable checks. But I think Java and C# were designed a bit too early to have this kind of flexibility with variable types. Also consider that changing the type has other consequences, like invoking other virtual methods (e.g. C#'s new operator on methods), which can be confusing because the type change above is not explicit.
PS: it just came to my mind that this is a breaking change (in C#) due to the new-operator example, so it will never come (for C#). Maybe Java does not have this issue, but most likely it has a similar problem.
This is an awesome feature for typescript. Love their work there.
It starts to get more complex than you might think. Suppose I declare x to be of interface type A and then pattern match it to a disjoint interface type B. Now unless I re-express that with a cast or a new variable name, it needs to be expressed as some kind of intersection type, and this may need to extend into the bytecode and VM.
It might even be an intersection between an object and an interface type, and for historical reasons those have different bytecodes for invoking methods!
Flow typing is really cool, but it can make things way more complex.
This blog post doesn't cover all the interesting changes, at least for me. [0]
For me the most interesting upcoming changes are
JEP 352: Non-Volatile Mapped Byte Buffers,
JEP 345: NUMA-Aware Memory Allocation for G1,
and JEP 370: Foreign-Memory Access API (Incubator).
Especially the FMA API [1]: its examples seem the most promising, but it ships with Panama and I'm not sure about its maturity yet: https://github.com/zakgof/java-native-benchmark
I wonder where GraalVM stands in this picture: besides being polyglot, it has AOT and auto-vectorization features, and I don't know if those are already shipped, or will be shipped, with OpenJDK.
LocalDate is appropriate for things like user interfaces, where you're modelling an intuitive/vague concept of date-ness only meaningful in some wider human context, or where the actual time at which that day starts just doesn't matter or is unknown. For instance it may be a good type to use for annotated historical events, where the day the event happened is the most accurate you can get.
For a bank transaction where they're international by nature and usually need to be ordered temporally against each other, it's an inappropriate type. The time zone in which the date should be interpreted is important. But really for transactions you'd be better off using Instant, at least internally. Time matters too.
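A quick illustration of why, using java.time types (the timestamp is made up):

import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

public class DateVsInstantDemo {
    public static void main(String[] args) {
        // LocalDate: a calendar date with no time zone attached. Fine for a
        // "vague" human-scale fact like a birthday or a historical event.
        LocalDate signing = LocalDate.of(1776, 7, 4);

        // Instant: an unambiguous point on the global timeline. This is what
        // lets international transactions be ordered against each other.
        Instant txnTime = Instant.parse("2020-03-17T23:30:00Z");

        // The same instant falls on different calendar dates depending on the
        // zone, which is exactly why LocalDate alone can't order transactions.
        System.out.println(txnTime.atZone(ZoneId.of("UTC")).toLocalDate());        // 2020-03-17
        System.out.println(txnTime.atZone(ZoneId.of("Asia/Tokyo")).toLocalDate()); // 2020-03-18
    }
}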
Would be great if Records were actually bonafide value types lol as in not allocated on the heap. Otherwise it’s just nice syntactic sugar like data classes in Kotlin
It's been 6 years since Valhalla was announced; why it's still a prototype befuddles me. If anything, getting value types out the door would bring significant performance improvements with the lowest barrier to entry for current Java/JVM programmers.
How do I propose a language change? One thing I'd really like to see is optional type declarations in lambdas to bulletproof them. Take this simple lambda:
List<String> names = new ArrayList<>();
names.stream().filter(String name -> "Bob Vance".equals(name)).findFirst().get();
By adding the String type declaration, a whole host of bugs in really complicated Lambdas can be eliminated and found easier when the original types and lists are being shuffled around
Apparently, immutable classes in functional streams are the new hotness, and nobody uses POJOs anymore. They'll slap pattern matching on top of that to complete the circle.
I completely agree as I, myself, see very few instances where I could use records in my code. But properties to remove getter/setter boilerplate? That would remove thousands of LoC. But it's not hot.
It's an easy and zero-boilerplate way to have immutable data-only structures with some useful tidbits like structural comparison, meaningful hashcodes etc.
I say this as a C# developer that would desperately want them supported in C# and was super pissed off when they were discarded from C# 8.0 (I actually need them right now, they would save me a whole day of typing today)
> The new license makes it almost impossible to use Java without some form of payment to Oracle.
Do you have a source for this claim? It sounds a bit extreme, to say the least, and I had the impression OpenJDK was licensed using fairly standard terms.
It doesn't help that multiple Oracle/Sun folks—including people like McNealy—said under oath that they don't believe that the licensing permits you to make commercial use, even if you opt for the GPL version.
I don't know what you're referring to, but FSF does not allow the GPL be used in such a way that the four freedoms are compromised by the licensor imposing additional restrictions.
First, you didn't describe an exception; you described additional restrictions. But now you're pivoting to talk about exceptions.
These are fundamentally different things. One enlarges the set of actions a recipient is free to do relative to what vanilla GPL allows. This is permitted (and in the case of the classpath exception, endorsed) by FSF. The other attempts to shrink the size of that set by denying the user things that the GPL would otherwise allow. The FSF simply does not permit the GPL to be used in that combination (and there would be extreme contrast in your last sentence and the failure to recognize the FSF's say in this).
And secondly, you've yet to substantiate your claim that Java was ever distributed with such GPL-modifying restrictions.
How about a straightforward response, rather than trying to change the subject again?
What's more, I've seen this interview multiple times. Listening to Gosling stutter and be coy is not illuminating in the least. He has no idea how to answer the question he was asked, much less what's being discussed here now.
Sun as copyright holder had the right to constrain Java's usage as they wanted, and embedded deployment wasn't covered.
Naturally it is hard for anyone to link to anything Sun, given what happened with their assets and Internet presence.
Is that a substantiated argument? Maybe not, but it doesn't change the fact that Google screwed Sun, didn't bother to rescue it when it went down, and now we have Java and Android Java.
I guess FSF is happy with the outcome then, since it is allowed to tank companies.
> Sun as copyright holder had the right to constraint Java's usage
Sure. But what they don't have is domain over the GPL.
I won't respond to the rest of your comment, which has nothing to do with the claim you made to kick off this branch of discussion and is just another attempt to change the subject (with what is an opinion, not a "fact").
Glad to hear that; I thought I remembered reading something like that back when the license change was first announced, but I wasn't sure if something else had changed in the meantime.