New Features in Java 14 (oracle.com)
231 points by pjmlp on March 3, 2020 | hide | past | favorite | 183 comments



"Helpful NullPointerExceptions": One of the biggest pain points of Java, probably the biggest. Good that it's being finally solved;


Java is quite far away from finally solving NullPointerException issues. Kotlin, C#, and TypeScript have mostly solved it with their compile-time null checks. Common to these three languages is that they need null to stay interoperable with older language versions (C#) or with related languages (Kotlin with Java, TypeScript with JavaScript). Languages like Rust that don't even have null (or nil, as in Go) have finally solved the problem.


> Languages like Rust that don't even know null (or nil as in Go), have finally solved the problem.

I wouldn't quite phrase it that way, since it's nothing new. OCaml from the 1990s has no null value (you need to represent potentially missing values with Option). I suspect there are older examples than that.


I would tentatively phrase it like that. It's not that Rust is original in that regard, but rather that it's effective through actually being adopted.


Scala does a pretty good job too; with the help of build plugins you can avoid null pollution (which normally comes from Java libraries) pretty well.


Build plugins - do you mean wartremover or something else?


Yes: Wartremover, Scalastyle, Codacy, and Scalazzi all help with this to varying degrees.


You hit the nail right on the head. I love how the null type is defined away from function signatures in TS. It's not entirely bug-free, but it gets pretty far!


I have no faith that Java will ever solve the problem after their bungled introduction of the Option type. Step 1 to migrating away from null would be to introduce a working Option type - one that could contain any valid Java value (which includes null for the time being) and where chaining behaviour worked the way you expect (i.e. that doesn't violate the monad laws, even when nulls are being thrown around). People from languages that have solved the problem told the Java folks about these issues, and were ignored, with the result that it's impossible to migrate an existing Java codebase to use Option, and I can't imagine the Java community having the appetite to introduce a non-broken Option that would allow moving off null.


It's not really clear from your description what's wrong with the Optional type. When Java introduces value classes, I could imagine the Optional class being automatically transformed by the JIT into a nullable instance, removing all overhead. Why do you want to keep null inside an Optional? That would be an absolutely non-intuitive design.


The most basic example of the problem with null is: if you call Map.get(key) and get null back, you don't know whether that means the map doesn't contain a mapping for key, or it contains a mapping from key to null. Optional lets you solve this: if we added a Map.getOption(key) that returns an Optional, it would be None if the map doesn't contain a mapping for key, and Some(null) if the map contains a mapping from key to null. That way you don't have to rewrite the whole world to use optionals before you start getting the benefit: even if the code that's populating the map doesn't know about optionals and is still using null, you've still solved your problem.

Instead, with the implementation that we got in Java, if you tried to write this Map.getOption then if your map was ever used with old code that used nulls then it would break. So there's no way to ever start migrating to using options.

There are other problems with null, but they mostly boil down to the same issue: different code uses null to represent two different things, and so gets confused about what it means. (The other problem is that you can't look at a value and know that it can't be null, but optional definitely can't solve that until the whole ecosystem migrates to it).
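The Map.get ambiguity is easy to reproduce with a plain HashMap (a minimal sketch; the class name is just illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class MapNullDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("a", null); // an explicit mapping to null

        // Both lookups return null, for different reasons:
        System.out.println(map.get("a")); // null (key mapped to null)
        System.out.println(map.get("b")); // null (no mapping at all)

        // Today, telling the two cases apart requires a second lookup:
        System.out.println(map.containsKey("a")); // true
        System.out.println(map.containsKey("b")); // false
    }
}
```

A Some(null)-capable getOption would collapse those two lookups into one unambiguous call.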


When you call `Map.get(key)`, you get null both for the case when the value is null and for the case when the mapping is missing. If you need to distinguish those cases, you have to use `containsKey(key)`. That's questionable design for sure and it's generally recommended to avoid null values. But Optionals have nothing to do with that.

If you ask me, they should have introduced another class, something like NullableOptional, similar to Optional, for those use-cases. I don't agree that putting `null` in every optional just because of a few bad APIs is a good idea, because people use Optional exactly to avoid dealing with nulls.


> That's questionable design for sure and it's generally recommended to avoid null values. But Optionals have nothing to do with that.

On the contrary, a selling point of Optionals is that they let you avoid that problem. Like I said, you could have a Map.getOption(key) function that unambiguously tells you whether the value is present or not, because it only ever returns None for absence, and will always return Some(value) (which might be Some(None) or Some(null), but those can be distinguished from None) if the value is present.

> I don't agree that putting `null` in every optional just because of a few bad APIs is a good idea, because people use Optional exactly to avoid dealing with nulls.

Optional should be a transparent, consistent class that can contain any value that's valid in the language. Maybe Java will eventually reach a point where null can be considered invalid, but until that day, having Option ban putting null in it because it's old and deprecated makes about as much sense as if ArrayList banned you from making lists that contained deprecated classes.
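For concreteness, both of the JDK's Optional factory methods refuse to represent the Some(null) state under discussion, which is a quick way to see the design choice being debated:

```java
import java.util.Optional;

public class OptionalNullDemo {
    public static void main(String[] args) {
        // Optional.of rejects null outright:
        try {
            Optional.of(null);
        } catch (NullPointerException e) {
            System.out.println("Optional.of(null) throws NPE");
        }

        // Optional.ofNullable silently collapses null to empty, so a
        // "present but null" state (Some(null)) simply cannot exist:
        Optional<String> o = Optional.ofNullable(null);
        System.out.println(o.isPresent()); // false
    }
}
```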


From your blog (5.5 years using scala) it seems that null is a problem in scala:

Scala avoids the biggest pitfall of multiple inheritance elegantly, by only allowing one parent to have a constructor; classes may only be the first parent, and traits can't have constructors that take arguments. Unfortunately this doesn't mean they don't need to be initialized; in particular, vals in a mixed-in trait will be null if you try to access them in an earlier constructor.[2] As with any null, in the worst cases you may not get an error until much later on.


Null is not the problem there, null is the symptom. You see exactly the same problem if the val is a primitive type (which can't ever be null): if you access an Int from a mixed-in trait in an earlier constructor, that Int will be 0. This is a real issue, and one where I think Java has actually made the better design choice (interfaces may contain method implementations and constants but not fields - of course they had the benefit of Scala's experience when making that decision), but it's not about null.


You can get those compile time checks with PMD. It plugs into all major Java IDEs and build tools.


Well, what I meant is that they're solving the unhelpful messages :) The root of the problem is still there. Only languages without Null and everything that entails have truly solved the problem.


C++ has had non-null references for decades.


> Languages like Rust that don't even know null (or nil as in Go), have finally solved the problem.

I'm in no way a Rust expert, but I see a lot of code with Optionals, something like

    match sth {
      Some(v) => do_something,
      None => nothing_to_do,
    }
This looks awfully a lot like

    if (something == null) {
       nothing_to_do
    } else {
       do_something
    }
It might look more pleasant, but it doesn't "solve" anything, only shifts it in a different place.

Go is in the middle, because structs have no null value, only a zero value; the only things that can be nil are pointers, which I personally feel should be used as little as possible.


In Java you're passed a reference to an object. Might it be null? Can it be null? Who knows! Mostly you're just going to ignore the possibility and hope for the best.

In Rust you're passed a reference to an object. Can it be null? No, it simply can't! There does not exist a magic "null" value for references. To represent the possible absence of a value, you explicitly encode it into the type system by using `Option<T>`.


It just changed from having a bottom null type for all types to everything being a boxed Option type. It's equivalent.

The actual difference is the reliance on pattern matching and the compiler enforcing coverage on those languages.


Option types don't have to be boxed and are specifically optimized in Rust and, if I remember correctly, in Haskell. The None/Nothing case is represented much like a null pointer, i.e. with a special value that can't be dereferenced.


Not in Haskell, FYI. Maybes are boxed.


In Java, the Optional<T> type blows up the same way a NullPointerException does if you try to use the Optional's value when it contains none.
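Strictly speaking, the JDK throws NoSuchElementException rather than an NPE here, but the fail-at-use shape is the same (a minimal sketch):

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class EmptyGetDemo {
    public static void main(String[] args) {
        Optional<String> empty = Optional.empty();
        try {
            // Java's rough analogue of calling Rust's unwrap():
            empty.get();
        } catch (NoSuchElementException e) {
            System.out.println("get() on an empty Optional throws: " + e);
        }
    }
}
```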

I agree. In Java at least the semantics are practically identical.

I can't comment on Rust. I suspect it does a slightly better job.

Crystal is another language where there is no null. It's quite cool.


Rust refuses to compile, because an Option<T> type is a different type than T, so if you try to use it like one, you get a type error.


Java would also refuse to compile using an Optional<T> where T is expected.

Java and Rust are practically the same regarding the differences between Optional<T> and T (Java), and Option<T> and T (Rust). Optional<String> in Java has the `.get()` method, which corresponds to the `.unwrap()` method in Rust. And both are different types from String in their respective languages.

But in Java you can have an Optional<String> that's null, not just empty. So that's three states - null, empty, or wrapping a value.

I'd say the big difference is that in Rust you can have places that refuse nulls (e.g. using T instead of Option<T>), so you have well-defined points of control/checking. That, combined with the difficulty of recovering from a panic (the equivalent of a null pointer exception when trying to unwrap a value that's missing) and the Result type, is Rust's real strength wrt. missing values.
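A sketch of that extra state (lookup is a hypothetical method, not a real API): because Optional is itself an ordinary reference type, nothing in Java stops the reference from being null:

```java
import java.util.Optional;

public class NullOptionalDemo {
    // Hypothetical method illustrating the problem: nothing in Java
    // prevents returning null where an Optional is expected.
    static Optional<String> lookup(boolean buggy) {
        return buggy ? null : Optional.of("value");
    }

    public static void main(String[] args) {
        try {
            // NPEs before we can even ask whether a value is present:
            lookup(true).isPresent();
        } catch (NullPointerException e) {
            System.out.println("the Optional itself was null");
        }
    }
}
```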


I think we are saying the same thing, from slightly different perspectives. What I'm saying is that because an Optional can itself be null, it can be used where it is not actually that type, without a type error.


Huh, what? You don’t need or want most values to be potentially absent.


Yes, the difference is a combination of types being non-null by default and compiler enforced checking.

This doesn't require an option type. Option types are actually quite awkward compared to language integration. Kotlin doesn't use an Option type yet still delivers all the same benefits as Rust/Haskell's approach, with additional advantages (e.g. zero overhead, integrated syntax).

Note that Option type overhead isn't merely about heavyweight syntax and the need for the compiler to successfully scalarise: using language integration around standard null pointers means the CPU spends no time on checking the 'maybe' branch. If the type system wasn't violated and a value is present, it's used immediately without any additional instructions or code bloat. If it's not, that will trigger a page fault at some point in the CPU pipeline and transfer control to an exception handler.

If the type system indicates nullability and you need to actually do more than just unwrap it, then you have to take a branch of course, but it's fully inlined and cheap.


Doesn't that approach make the "nothing branch" extremely expensive? Makes sense in scenarios where you really always expect Something and not Nothing. In practice, Options are very often used where something is optional, and you can expect to get Nothing a good percentage of the time. Using exceptions in such cases seems silly.


Yes, if you expect nothing, you test for it, obviously. The problem is when you want high-quality errors for the case where you didn't expect it and you e.g. cast away the nullness, or it was cast away for you due to language interop.


Integrate PMD and enable its errors as broken builds, done.


I know what an Optional is, the problem is that this pattern of wrapping stuff in Optionals seems to appear a lot, which means that in practice you have to do this little dance like you would in Java or, as others pointed out, try your luck with unwrap() and hope it goes well. In practice, it doesn't make coding much different.


You seem to forget all the cases in Java where you didn't expect null to happen, but for some reason something was null and you get a NPE. Sure, if you check all values for null already the difference to now is not very big, but no one I know does this. Everyone has some intuition whether something "can" be null or not. Rust (and other languages without null) formalize the notion of "this cannot be null", so you don't have to depend on your intuition anymore - and cannot be wrong about it.

Regarding unwrap: If you tell the compiler "I know what I'm doing" and you don't know what you are doing then no one can help you. It's the same as putting casts everywhere when the compiler yells at you about incompatible types: If you lie to the compiler it will do your bidding, but your code will run into errors at runtime that you could have prevented at compile time. Again: No one can help you here.


The key difference is that in your second example you can just forget about the else branch, while in Rust you must explicitly handle the None case, otherwise it won't compile.


No that's not the key difference, because you can just call `unwrap()` and forget about it, which is effectively the same footgun as other languages.

The key difference is that now the type system can tell you the truth. In other languages, when you have a reference to a type T, null is considered a valid T but you cannot treat it as one or everything blows up. Meanwhile in Rust, when you have a reference to T, you don't have a secret (not)T that invisibly breaks everything.


> No that's not the key difference, because you can just call `unwrap()` and forget about it, which is effectively the same footgun as other languages.

Are you sure? Each line of Java or JavaScript can be equivalent to multiple `unwrap()` calls. You can't see them because the compiler is generating them [0], but they are there, and there are many hundreds of thousands (or maybe even millions, including dependencies) of them in most Java projects.

Here is an example: company.board.members.size() is equivalent to company.unwrap("NPE").board.unwrap("NPE").members.unwrap("NPE").size(), and every single line in Java is like that. You can't opt out. You can't tell your coworkers to stop using "unwrap" during code reviews. The entire language forces you to do this in every single line that involves a reference.

[0] Well, it's actually just trapping the 0 address but it's equivalent.


Please do not quote posts out of context in order to make your argument seem stronger.

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

I am well aware of the issues in other popular languages. You and I agree that there is an implicit `unwrap()` call in each dereference operation.

The post I was responding to claimed that in Rust you cannot leave null/None values unhandled. This is false because `unwrap()` exists and more importantly is in fairly common use.

The distinction is not whether you can (at your own peril of course) assume a value is not null. Rust lets you do that.

As you point out however, it does force you to be more explicit. Why is that? It's because the concept of `None` is not hidden from the type system.

The reason that your Java examples work that way is not because `unwrap()` is implicit, it's because the type system doesn't know about nulls in the first place.


When you write unwrap(), you're handling them - by forcing a panic, but it's still a choice that you make.


I was responding to the following post:

> The key difference is that in your second example you can just forget about the else branch, while in Rust you must explicitly handle the None case, otherwise it won't compile.

Do you really think that `unwrap()` is meaningfully different from "forget[ting] about the else branch"? Does it require you to "explicitly handle the None case"?

My goal here was not to get mired in semantic arguments that ignore the context of the discussion. It was merely to highlight that Rust lets you assume that things won't be null the same as any other language, and that the important difference is that Rust lets you be clear about what can and cannot be null.


Yes, it absolutely is different. The naive code will not have unwrap() in it, and it will fail to compile. At that point the developer can still mistakenly use unwrap() to unblock themselves, but it's still opt-in, not opt-out.

If you're implying that coders use unwrap() pre-emptively, I don't think there's any evidence for that. Besides, you can't just slap it on every value you have - it must actually be an Option or Result type! - so even if it's applied incorrectly in advance, that still requires some conscious consideration.

This is not at all the same as forgetting an "else" - or, far more often, forgetting the "if" altogether.


I think you agree with me but are convinced that we do not agree because of minor differences in wording.


> the same as any other language,

I think this is the point of contention. Languages with some Option type and no null are not the same as any other language. Yes, you can ignore error handling, but you cannot forget it.


IMO handling forgetfulness is the job of the compiler and ties back into my overall point; that the meaningful distinction is the type system's awareness of what can or cannot be null.

I believe that claims that Rust forces you to not "forget about the else branch" are actively harmful to language adoption, I've seen many people use `unwrap()`'s existence as evidence that Rust's approach to empty value types doesn't actually provide any useful benefits.


TypeScript is another one where null and undefined are unique singleton types, which are encompassed under the catch-all `any` type but, if strictness is turned on, are rejected by any other type matching.

    doThingA(param: any) // you can pass anything including null and undefined
    doThingB(param: object | number | string | boolean | bigint | symbol) // you can pass anything except null and undefined


Isn't unwrap(x) just

    match x {
        Some(v) => v,
        None => panic!("die painfully"),
    }
?

If so, the None is still handled. Dying painfully is an escape hatch out of the type system in any language that lets you do it.


I was responding to the following post:

> The key difference is that in your second example you can just forget about the else branch, while in Rust you must explicitly handle the None case, otherwise it won't compile.

Do you really think that `unwrap()` is meaningfully different from "forget[ting] about the else branch"? Does it require you to "explicitly handle the None case"?

My goal here was not to get mired in semantic arguments that ignore the context of the discussion. It was merely to highlight that Rust lets you assume that things won't be null the same as any other language, and that the important difference is that Rust lets you be clear about what can and cannot be null.


I get the thrust of your argument (that the expressiveness of the type system is what matters here), and it makes sense to me.

I'm not sure I agree with this though:

> Do you really think that `unwrap()` is meaningfully different from "forget[ting] about the else branch"? Does it require you to "explicitly handle the None case"?

I haven't used Rust, but digging up the source shows unwrap is indeed just a `match` that panics on None.

The fact that "you can put thing inside a function and call that function" doesn't mean you "don't have to explicitly do thing". We end up with a pretty useless standard for "explicitly doing thing" if putting thing in a function doesn't count.


If you don't care about the else case you can use if let: https://doc.rust-lang.org/rust-by-example/flow_control/if_le...


It solves it by making sure you handle it at compile time.

In Java you tend not to know where or when null checks have happened or will happen. So you tend to sprinkle null checks all over your code, just in case someone somewhere forgot to check.


The difference is that the choice of Option is deliberate in the parts of the code where None is a valid value. In Java you have no first-class way to indicate that null isn't an expected value for some variable, so the type checker can't block it. In Rust the type checker won't let None slip in accidentally if you don't use Option.


I can't speak for Rust, but in Go accessing properties of nil objects and dealing with 0-values for non-pointer values can cause bugs in certain contexts where you "let your guard down".

    func getFoo() (*Foo, error)

    func main(){
        foo, err := getFoo()
        if err != nil {
            // be an adult and handle your error
            return
        }

        foo.Work()
        // panic because whoever implemented getFoo returned a nil object
    }
Also worth noting: dealing with booleans in JSON is wonky because boolean values default to false, even if that JSON property wasn't set over the wire. Thus you end up unmarshaling to a pointer to a boolean, which then requires nil guards throughout your codebase when working with that unmarshaled object...


Rust does not have nil objects nor zero values.

The equivalent rust code would fail to compile.


Go also has https://golang.org/doc/faq#nil_error, so interface != nil doesn't ensure it's safe to call any of its methods.


That's a huge improvement.

It means that you've told the type system what it needs to support you.

If your function tells the type system you need a reference to a type, you get a reference to that type. There's no hidden timebomb in there where you might get something that supposedly is that type but can't be treated as that type.

This lets you write your functions to accept nulls where appropriate, and require some form of unwrapping at the call site otherwise. It tells you when you've missed something and it lets you refactor without having to constantly repeat all of your checks, once you know something is valid the type system reflects that for you.

In short if you have a T in rust, you know that it's not secretly a (not)T that is implicitly allowed anywhere a T can be.


>It might look more pleasant, but it doesn't "solve" anything, only shifts it in a different place.

It shifts it right into your face. You're forced to do something instead of pretending that everything could be potentially null. Since the only places where you can actually encounter "null" are documented you are no longer wasting your brain cells on the happy path where you are guaranteed to never see a null value when you don't expect it. Instead you can free up all that cognitive capacity to actually write correct code that handles null values.


For the case where something really could be absent, you're right; it's the same amount of code. The difference is that in a language with null you have to write that code for every single function parameter, whereas in Rust you can have required parameters that are actually required and only need to check the genuinely optional cases (which is what, maybe 5% of the time?). Make invalid states unrepresentable.


Using the `match` approach, you can guarantee that in the `Some(v)` branch, v is not null, and will never be null.

In the `if == null` approach, you've ruled out null for the `something` reference locally, but if you have functions down the line that also take `something`, you're going to have to check it again for null-ness.


This specific pattern can be re-written as

  sth.map(|v| do_something);
incidentally.


And you can write sth.unwrap() in Rust, and then it panics when it is None.

Kotlin solves it much better. sth?.do_something() ?: nothing_to_do


Rust does in fact have a ? operator [0], but it works slightly differently than Kotlin's in that it returns early if a None/Result<Err> is encountered instead of just resolving to that None/Result<Err>. If that isn't what you want, then Option::map() is a fine substitute.

[0]: https://doc.rust-lang.org/edition-guide/rust-2018/error-hand...


But there is still a function that panics.

Java has solved memory safety. Now the task is to make sure that code that can panic will not compile.


The function that can panic still exists, but the ? operator itself won't invoke a panic. Sort of like how you can still get NullPointerExceptions in Kotlin using !!, but standard practice is to use a safer option.

Exception/panic-free code is fairly straightforward; you just need to have APIs return Option<_>/Result<_>/other error code equivalents to indicate failure. The problem with that is that it introduces some amount of friction that will probably make some programmers unhappy.

If you want to avoid returning Option<_>/Result<_>/other error code equivalents, you need some way to prove your inputs cannot trigger an invalid result, and that is a very difficult problem.


Panicking is the right thing to do: fail-fast is much better than silently continuing in an invalid state. If the None state is valid, then don't call unwrap() (which you should almost never use - if None was not a valid state, why were you using an Option in the first place?); call a function that lets you handle both cases, e.g. unwrap_or.
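Java's Optional offers the same pair of choices, for comparison: get() fails fast like unwrap(), while orElse plays the role of unwrap_or (a minimal sketch):

```java
import java.util.Optional;

public class OrElseDemo {
    public static void main(String[] args) {
        Optional<String> missing = Optional.empty();
        // The Java analogue of Rust's unwrap_or: handle the empty case
        // with a fallback instead of failing fast via get().
        String value = missing.orElse("default");
        System.out.println(value); // default
    }
}
```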


It's sometimes the right thing to panic, in which case you use the !! operator to cast away the nullness and get an NPE at that point (now a helpful one, as the new Java feature is really a JVM feature!).

With this new feature I'd argue the Java/Kotlin world now has the best handling of optionality of any language, anywhere:

• Pleasant, concise syntax for handling optionality. Not bolted on with an option type.

• Highly efficient translation to machine code. No boxing, no unnecessary branching in the unwrap/!! case.

• Excellent interop with older code written in OOP languages that don't represent nulls in the type system.

• But that older code can be annotated with @NotNull and @Nullable to retrofit the information; within Java IntelliJ will statically analyse these and add warnings where an NPE/panic would occur, when such code is used by Kotlin the annotations cause the types to be correctly derived as nullable or non-null.

• The large wealth of libraries that want to explore object graphs continue to work without being distracted by option/maybe types (e.g. serialisation, UI binding)

• On the rare occasions where you do need to unwrap and you got it wrong, you now get an error message that breaks down the sub-expression where the null pointer occurred, so there's no incentive to break up long chains of dereferences just to get better debugging in case of failure.

That's a featureset around optionality that's hard to match.


I have to disagree. This is an improvement, but it still leaves Kotlin behind where languages like OCaml were 20+ years ago, because nullable types don't compose the way you'd expect. (If you write generic code that uses T?, your code will break when T is itself a nullable type).

> • Pleasant, concise syntax for handling optionality. Not bolted on with an option type.

Optionality is not special enough to be worth a special case in the type system, IMO. Maybe some syntactic sugar could be worthwhile, but not a magic type that behaves differently from other types (which is what Kotlin's ? is) - you need it to behave like a normal type so that you can write and reuse generic code. Arrow-kt can't implement functions that work with nullable values, and has to implement its own Option type instead.

> • Highly efficient translation to machine code. No boxing, no unnecessary branching in the unwrap/!! case.

Getting the semantics right is the important thing; it doesn't matter how fast the code runs if it's broken. It's still possible to compile a well-behaved option type into something unboxed in the cases where it can be represented that way (look at what Rust does, where the option is "packed" if 0 is not a valid value for the inner type, but still does the right thing for Option<Option<T>>, Option<Int> and so on).


Option<T> explicitly states the value may be none.


I wish C# would do the same (all the other features seem to be in C# already if not borrowed from C#).


It's just better error messages (where within expression, not just which line).


The amount of time I've spent eye-tracing the code looking for the possible culprits of a NullPointerException and then debugging it later... This feature should have been there from day one.


It's a huge feature. I can't count the number of times this feature would have saved me a lot of time. It should really be back-ported to Java 8 and Java 11.


I wonder about records - they would be great for a simplified implementation of immutability, but they don't seem to provide a way to copy with a subset of fields modified - like in Scala:

  case class Person (
    firstName: String,
    lastName: String,
    age: Int
  )
you could create an instance like this:

  val emily1 = Person("Emily", "Maness", 25)
and then create a new instance by updating several parameters at once, like this:

  // emily is married, and a year older
  val emily2 = emily1.copy(lastName = "Wells", age = 26)


You'd probably have to manually write a Builder in the Person record class. Then the callsite might look something like this:

    var emily2 = emily1.newBuilder()
                    .lastName("Wells")
                    .age(26)
                    .build();

I think the inner Builder pattern is common enough in immutable java object implementations that they might want to include that fully in the Record class generation at some point. Most immutable object libraries that I've seen include a lot more functionality than Records, like builders.
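A sketch of such a hand-written builder on a hypothetical Person record (field names borrowed from the Scala example above; records require Java 16, or Java 14 with preview features enabled):

```java
// A record with a manually written inner Builder, approximating
// Scala's case-class copy(...) for a subset of fields.
public record Person(String firstName, String lastName, int age) {

    public Builder toBuilder() {
        return new Builder(firstName, lastName, age);
    }

    public static final class Builder {
        private String firstName;
        private String lastName;
        private int age;

        Builder(String firstName, String lastName, int age) {
            this.firstName = firstName;
            this.lastName = lastName;
            this.age = age;
        }

        public Builder lastName(String lastName) { this.lastName = lastName; return this; }
        public Builder age(int age) { this.age = age; return this; }
        public Person build() { return new Person(firstName, lastName, age); }
    }

    public static void main(String[] args) {
        var emily1 = new Person("Emily", "Maness", 25);
        // emily is married, and a year older
        var emily2 = emily1.toBuilder().lastName("Wells").age(26).build();
        System.out.println(emily2); // Person[firstName=Emily, lastName=Wells, age=26]
    }
}
```

Note the boilerplate: every record component has to be repeated three times in the Builder, which is exactly what generated builders would eliminate.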


Why not do what JS does?

  const emily2 = { ...emily1, age: 30 };
This has many advantages, one being that it provably and declaratively creates a copy, which is not the case with the builder. In fact the obsession with methods (and hence the implied state) is what makes Java a terrible misfit for functional paradigms such as immutability.


JS is not any better in that regard; that simple example ignores nested objects, where the outer one is copied and the inner ones are referenced (unless you use triple dots all the way down - tedious and error-prone, or even simply inefficient if done by some recursive algorithm). The syntax is cool, though.


I don't understand what you mean by provably. The Java example does the exact same thing as the JS version: create a shallow clone with a different age. Do you mean the builder might do something different?

    // Java
    var emily2 = emily1.toBuilder().age(30).build();
As far as adding the spread operator to Java, I think you'd have to require a Spreadable interface to use it, or limit usage of the spread operator to records. It's not clear to me that the expressive power is worth it over a builder.


If a tool analyzed the JS AST, it could conclude (with fairly trivial analysis) that emily2 is a shallow clone of emily1. No such guarantee is possible with the Java version.


Builders are really only useful for small programs or classes that aren’t broadly used.

Otherwise, at some point you need to add a field to Person, and then the compiler can’t tell you about all the places where your code fails to initialize the field.

Worse, some people pass builders around.

At that point, the problem of checking that you’ve passed a legal set of arguments when constructing objects is as hard as the halting problem.

You can write tests to make sure your program is typesafe, but that takes time away from writing more useful tests.


I find Builders most useful when you've got a lot of optional arguments. For mandatory arguments (and most arguments should be mandatory), you should definitely use constructors instead, so that your toolkit can tell you when they change.

Optional arguments with reasonable defaults don't present a problem for refactoring. It does make it hard to spot the cases where the reasonable defaults don't work. One trick is to temporarily add it to the constructor, review all the errors, and then remove the constructor arg.
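A minimal sketch of that split, with hypothetical names: mandatory arguments go through a constructor (or static factory) so the compiler flags every call site when they change, and only defaulted optional ones go through the builder:

```java
// Hypothetical config type: mandatory args in the factory method,
// optional ones with reasonable defaults in the builder.
public class HttpClientConfig {
    private final String host;     // mandatory
    private final int port;        // mandatory
    private final int timeoutMs;   // optional
    private final boolean useTls;  // optional

    private HttpClientConfig(String host, int port, int timeoutMs, boolean useTls) {
        this.host = host;
        this.port = port;
        this.timeoutMs = timeoutMs;
        this.useTls = useTls;
    }

    // Adding a mandatory field here breaks every call site at compile time.
    public static Builder builder(String host, int port) {
        return new Builder(host, port);
    }

    public String host() { return host; }
    public int timeoutMs() { return timeoutMs; }

    public static final class Builder {
        private final String host;
        private final int port;
        private int timeoutMs = 30_000; // reasonable default
        private boolean useTls = true;  // reasonable default

        private Builder(String host, int port) {
            this.host = host;
            this.port = port;
        }

        public Builder timeoutMs(int ms) { this.timeoutMs = ms; return this; }
        public Builder useTls(boolean tls) { this.useTls = tls; return this; }

        public HttpClientConfig build() {
            return new HttpClientConfig(host, port, timeoutMs, useTls);
        }
    }
}
```

Usage: `HttpClientConfig.builder("example.com", 443).timeoutMs(5_000).build()` - renaming or adding a mandatory argument surfaces as a compile error everywhere.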


Java lacks default arguments for functions, which makes that particular API infeasible without adding support for such. They could generate "withFoo" methods or even builder types, but that would butt up against developer preference, so it's probably best to leave it up to developers or a third-party library.

Kotlin has a very similar feature in its data classes, FWIW.


> Java lacks default arguments for functions, which makes that particular API infeasible without adding support for such

I really wish Java would add both default arguments for functions, and also support for passing function arguments by name.

One of these days...


This goes against the very idea of method signatures in Java and overloads based on them. An attempt to add it will likely bring in way, way more backwards-compatibility pain than any benefits are worth.

Writing simple methods with few arguments helps.


The JVM has all the machinery required to do this right, though: invokedynamic.

Kotlin provides named arguments and a copy method on data classes/records that uses them. You do indeed have binary compatibility issues with them, but it's not a fundamental problem. It's just that it sits at the intersection of:

- Problems that only affect library writers, rarely a high priority for language designers (Java being an exception)

- Obscure JVM knowledge and new techniques

- Poor Android JVM holding back the ecosystem by discouraging any bytecode techniques that post-date Java circa 2010.


You can have named parameters and backward compatibility with method signatures, Groovy does that [1].

OK, it cheats a little because named params are actually syntactic sugar for Maps. But other languages do the same (e.g. Python) and it is very useful too. And if you can type-check the map (as in TypeScript), the functionality is equivalent to named params.

[1] http://docs.groovy-lang.org/latest/html/documentation/#_name...


Python has real keyword parameters, not just syntactic sugar. They're just presented as a dict when you want to see them all. And you can splat that dict when calling.

Ruby before 2.7 has syntactic sugar where keyword parameters are the same as a trailing hash. But that's changing as well.


Does Kotlin have anything to let you treat data classes in a generic way like you can do with Shapeless in Scala? That's where the real power comes IME - e.g. look at Circe where you can derive JSON encoders/decoders for arbitrarily nested case classes at compile time, so you have zero boilerplate but still get compile-time type safety that stops you from accidentally trying to serialise a file handle or semaphore.


There is the currently experimental kotlinx.serialization library that generates e.g. JSON codecs at compile time. Not sure if that fits the bill.


Looks like it's doing annotation processing at compile time? The trouble with that is that you end up with "magic" code like calling methods that don't appear to exist, and in order to be able to work on the serialisation itself (e.g. to add a new format) you have to understand the "magic", because there isn't a good intermediate abstraction. The great advantage of Shapeless is that all the "magic" is in the library itself and it spits out a generic representation of your data classes in record form; you can implement a new serialisation format in 100% vanilla Scala without having to worry about any macros or annotations or anything.


It's not annotation processing, it's a compiler plugin with IDE support. So pretty deeply integrated. It avoids some of those issues.

The kotlinx.serialisation design is pretty nice. It's basically what you describe. The compiler generates generic code that can call into a variety of "codecs" but they don't have to actually be serialisation specific. So there are codecs for JSON, protobufs, CBOR etc but you can also do arbitrary object graph transformations with the framework. A deep clone being a simple hello-world type example. I've never used Shapeless but it sounds pretty similar in some ways.

https://github.com/Kotlin/kotlinx.serialization


> It's basically what you describe. The compiler generates generic code that can call into a variety of "codecs" but they don't have to actually be serialisation specific. So there are codecs for JSON, protobufs, CBOR etc but you can also do arbitrary object graph transformations with the framework. A deep clone being a simple hello-world type example.

So what does the value that gets passed into your codec look like? What makes Shapeless tick is that it can represent records generically at the type level; records will have a type like

    type Book = 'author ->> String :: 'title ->> String :: HNil
that you can actually break down and do meaningful operations on (e.g. you could run it through a function that counts the lengths of strings and get something of type 'author ->> Int :: 'title ->> Int :: HNil, in a fully type-safe way). I'd be impressed if someone managed to encode those kinds of types into Kotlin - e.g. I don't think Kotlin has singleton types, so being able to compare field names at the type level is probably impossible?


Kotlin has this too, but Java does not have named params. It would be a good addition IMO (think Rect(left, top, ...)), especially combined with default params.

Alternatively, they could automatically generate a builder class for records.

As always in java, I guess that somebody will duct tape an annotation based solution on top of the language to solve this.


It is very cumbersome not to have 'copy' or an equivalent; that's why a lot of functional languages have similar things:

F#: let updated = { old with prop = newValue }

Elixir: update = %{ old | prop: newValue }

Even JavaScript has similar stuff: const update = { ...old, prop: newValue }


Clojure: (assoc old :key new-value)

just a fn in the end


>val emily1 = Person("Emily", "Maness", 25)

The assumption of inputs based on the order of the class items frightens me!


You can use named parameters if you're worried:

  val emily1 = Person(firstName="Emily", lastName="Maness", age=25)
Scala's approach to constructors is a good example of convention over configuration; in Java 99.9% of classes have a constructor like:

    this.firstName = firstName;
    this.lastName = lastName;
    this.age = age;
which is just ceremonial boilerplate that obscures the actual logic of what the class is doing (an IDE can generate the constructor for you, but the person maintaining your code still has to take the time to comprehend it). You can do the more complicated things in cases where you need to, but it's important to keep the simple case simple.
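Java 14's records (in preview) target exactly this ceremony; the single header line below generates the constructor, accessors, equals, hashCode, and toString:

```java
// Record (preview in Java 14): this one header line replaces the fields,
// the canonical constructor, accessors, equals, hashCode, and toString.
record Person(String firstName, String lastName, int age) {}

public class RecordDemo {
    public static void main(String[] args) {
        Person emily = new Person("Emily", "Maness", 25);
        System.out.println(emily.firstName());                               // Emily
        System.out.println(emily.equals(new Person("Emily", "Maness", 25))); // true
    }
}
```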


Lombok solves this for Java.


Using Lombok ends up being just as much effort as using a different JVM language, and the rewards are smaller.


The learning curve is way less steep, and you can introduce it into an existing project with zero impedance mismatch.


Yes, but in teams where there is no hope of switching to another JVM language like Kotlin, Lombok can reduce boilerplate by enormous amounts.

It's a compromise.


Lombok is really not that complex, and it's not close to using a different language.


I would try out Immutables before Lombok. It generates code using the standard annotation processor instead of modifying OpenJDK's AST.


"switch expressions", "text blocks". I haven't used Java for 15 years but it's one of the most mature languages out there. I'm surprised it's only just getting these pretty standard features now.


I don't think switch expressions are pretty standard at all.

Switch statements are standard, and Java has had them for a long time. Switch expressions are different in that the switch "statement" now returns a value that can be assigned to a variable.
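A small sketch of the difference, assuming an illustrative `size` mapping:

```java
public class SwitchExprDemo {
    // Old statement form: assign in every branch, remember the breaks.
    static int statementForm(String size) {
        int n;
        switch (size) {
            case "S": n = 1; break;
            case "M": n = 2; break;
            default:  n = 3; break;
        }
        return n;
    }

    // Java 14 expression form: the switch itself is the value, no fall-through.
    static int expressionForm(String size) {
        return switch (size) {
            case "S" -> 1;
            case "M" -> 2;
            default  -> 3;
        };
    }

    public static void main(String[] args) {
        System.out.println(expressionForm("M")); // 2
    }
}
```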


it's always moved slow on purpose... one of the language's greatest features is backwards compatibility and stability

it's purposefully dead simple. this is why many of us switched to alternative JVM langs ... Java can't keep up with the innovation other Langs have without breaking its philosophy of slowmoving/backwards compat


To be fair, it moved much faster after the Oracle acquisition.

The Java 6 era of no changes whatsoever was slow as molasses. The joke back then was that even C++ got lambdas before Java.


Anonymous inner classes sucked royally. Java 7 rocked my world when it dropped!


Not all features have anything to do with backward compatibility. I agree that features should be added with caution, because new features can interact with existing features in non-trivial ways that introduce a lot of edge cases. But it's hard to see why a feature like text blocks would introduce more complexity.
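For reference, the feature under discussion: a text block mostly removes quoting and concatenation noise compared to the escaped equivalent:

```java
public class TextBlockDemo {
    // Text block (preview in Java 14): incidental indentation is stripped
    // and embedded double quotes need no escaping.
    static String json() {
        return """
                {
                  "name": "Emily"
                }""";
    }

    public static void main(String[] args) {
        String escaped = "{\n  \"name\": \"Emily\"\n}"; // old-style equivalent
        System.out.println(escaped.equals(json()));     // true
    }
}
```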


Because once a feature is introduced it needs to be supported for eternity, every feature has to do with backwards compat. The language designers need to make sure the features are exactly what we want and need before introducing something half baked.


They failed a few times (checked exceptions, raw types, plenty of JDK classes like Vector, Date, LinkedList and so on), yet we survived.


At one point “simple” was not really a fair assessment of Java, especially if you had to do multithreaded / concurrent code back when we had broken primitives in the Java 3-5 days.

It’s moving along a lot faster now thankfully but if it was going this fast 20 years ago we may not have needed Kotlin or Groovy


I guess one of the reasons was abundance of hardware platforms and operating systems back then. Threads were not very native to UNIXes. Nowadays it's basically x86_64 and ARM with Linux/Windows which are much more mature.


I think the issue being referred to was the lack of a rigorous memory model. But that wasn't a failing of Java. Java was one of the first to get such a model, most languages/runtimes have never even tried at all.


That was before and this is now. Java is adding things quite frequently now.


negative instanceof is a disaster

I posted a bit about it here http://www.benf.org/other/cfr/java14instanceof_pattern.html

it was first noted here: https://twitter.com/tagir_valeev/status/1210431331332689920

but the main thing is that if the 'taken' conditional is guaranteed to exit, then scope hiding happens, if not, not.

But in java

if (true) { throw new Exception(); }

is not guaranteed to exit.

So: https://github.com/leibnitz27/cfr_tests/blob/master/src_14/o...

In case you don't want to run, as of java (build 14-ea+34-1452) this prints:

Fred

WIBBLE

  public class InstanceOfPatternTest10 {
    static String s = "WIBBLE";

    public static void test(Object obj) {
        if (!(obj instanceof String s)) {
            throw new IllegalStateException();
        }
        System.out.println(s);
    }

    public static void test2(Object obj) {
        if (!(obj instanceof String s)) {
            if(true) {
                throw new IllegalStateException();
            }
        }
        System.out.println(s);
    }

    public static void main(String ... args) {
        test("Fred");
        test2("Fred");
    }
}


Looks like a bug. But anyway, it should be easily detectable by static analysis and reported as a warning, so not a big deal in practice, even if working as intended.


Nope - the if condition is not considered in flow analysis. Read the end:

https://docs.oracle.com/javase/specs/jls/se8/html/jls-14.htm...

"14.21. Unreachable Statements It is a compile-time error if a statement cannot be executed because it is unreachable.

This section is devoted to a precise explanation of the word "reachable." The idea is that there must be some possible execution path from the beginning of the constructor, method, instance initializer, or static initializer that contains the statement to the statement itself. The analysis takes into account the structure of statements. Except for the special treatment of while, do, and for statements whose condition expression has the constant value true, the values of expressions are not taken into account in the flow analysis."


I still don't understand why reachability changes whether a name binds to the local variable or the field. Identifier resolution should happen before reachability analysis.

Is there some bug or mail list thread with reaction from Java developers?


I haven't raised mine, as I consider it to be a refinement of the bug noted in https://twitter.com/tagir_valeev/status/1210431331332689920 (I don't know if the Java devs have responded to that.)

(again, reachability analysis of unrelated code changes semantics.)

The problem is that this IS defined behaviour - the scope of the instanceof-assigned variable is dependent on whether or not the taken if-statement is provably exiting.

This is intended to allow

  {
    if (!(obj instanceof String s)) return;

    // s exists now.
  }
But it's not been thought through.


IMO they should treat it exactly like they treat uninitialized variables.

    void test1(boolean b) {
        String s;
        if (b) {
            return;
        } else {
            s = "test";
        }
        System.out.println(s);
    }
This code compiles.

    void test2(boolean b) {
        String s;
        if (b) {
            if ("a".equals("a")) {
                return;
            }
        } else {
            s = "test";
        }
        System.out.println(s);
    }
This code does not compile. But that does not mean println will try to resolve s to something else. I think they should have gone a similar route, where the declared variable would be available for the entire lexical block where `if` was used, but initialized only inside the matched branch. Usage in other code would error with "variable might not have been initialized", consistent with how it works now.

Of course that would require shadowing the previous declaration for consecutive `if`s. But it would be much more obvious and understandable. Actually, the whole construction would be just syntactic sugar, almost expressible with current Java constructs:

        /*
        if (o instanceof String s) {
            System.out.println(s.length());
        }
        //System.out.println(s.length()); // variable might not have been initialized
        if (o instanceof Number s) {
            System.out.println(s.intValue());
        }
         */
        String s;
        if (o instanceof String) {
            s = (String) o;
            System.out.println(s.length());
        }
        //System.out.println(s.length()); // variable might not have been initialized
        Number s$; // no variable shadowing in Java now, but it could work
        if (o instanceof Number) {
            s$ = (Number) o;
            System.out.println(s$.intValue());
        }
        //System.out.println(s$.intValue()); // variable might not have been initialized
and it would be directly expressible if Java allowed variable name shadowing, which is a good thing as proven by Go and Rust (although that would be an incompatible change for old code; allowing shadowing only for pattern variables would be compatible, because old code does not have pattern variables).

Of course I did not think about this problem for too long and probably missed something important, so that's just my 2 cents. I guess, developers took that path for a reason.

Basically they want the following code to work:

    String s;
    void test(Object o) {
        if (o instanceof String s) {
            System.out.println(s); // pattern variable s
        } else {
            System.out.println(s); // this.s
        }
    }
and I'd argue that this code should not compile! It's bad code. If developer wants to use `this.s` he should explicitly write that.



It's a discussion of a simple use case where the variable is only available inside the positive if match (actually pretty logical behaviour). It does not touch unreachable code modifying scoping, which is what really causes the confusion.


Ooops


With switch, records, and (soon?) sealed interfaces, Java will be more pleasant than ever.

Would I ever choose it if I were in charge of a project? Probably not. But it is nice that the language is incorporating these proven features that make a huge difference. It'll make working in Java when I'm not in charge much nicer.


One question about sealed:

Does anyone know how it will play with type parameters in the interface? I know in Scala, you can simulate GADTs with that.

    sealed interface Expr<A> {}
    record IntExpr(Integer i) implements Expr<Integer> { }
    record StringExpr(String i) implements Expr<String> { }
Will that be legal? And will switch be able to do its magic and carry that type variable through?


I've been doing mostly Java work in my career and I'm using more JS these days on the side. I'm envious of the rest, spread, and deconstruction functionality in ES6. Really, really, envious.


What would stop you from switching to Kotlin, which has those features and many more?


I was in a Java shop at one point where leadership said Kotlin would be too hard for the engineers to learn (they were all mostly fairly junior).


Syntax is not that different from Java. Someone who knows Java could be productive in 1-2 days. But it sounds like you agree that your leadership in that scenario was not rational.


I'm not sure, I never tried Kotlin.

I like the look of it but I know Java pretty well so it's been hard to not just use that for my personal stuff :p


I would be the only one on my team who would be using Kotlin. I didn't know that Kotlin had that though.


I feel like Java has a particular problem in common with Android - they both keep getting better, but most people are stuck on some old version.

The difference is that smartphones last at most 5 years, but a program that a business depends on lasts forever.


One thing included in JDK 14 that will help alleviate this problem for certain types of applications (e.g. desktop apps) is the jlink/jpackage tools that make it much easier to ship an application with a bundled JDK that installs and behaves like a regular native application. If you bundle the JVM with your shipping app, you can always update to the latest JDK version.


People deliberately chose to stay with some old version. Java backwards compatibility is awesome. The only questionable move was Java 9, when they removed a lot of classes from the standard library and introduced modules, but even that was not so hard to migrate.

Some people just don't want to invest ANY money to improve their code. They just want to release new features. They would use Java 1 on Windows NT 4 if they could.


To be fair, many users would do just fine with an Amiga 1000, given what they do with their computers.


It's good to see Java is becoming better and iterates faster.

However, I find it odd that hosted languages like Kotlin, Clojure or Scala are more approachable for me, as they work fine on JDK 8, like 'libraries'. They're much easier to upgrade than upgrading the JDK itself.


> Pattern Matching for instanceof

Why is the cast even necessary? Isn't the cast only part of the type checker and thus unnecessary with a smarter type checker? For example TypeScript can do it, if you have code following an if condition that checks the type of the variable, you can use that said variable as if it were of that type without any further assertions necessary, eg.

    let x: unknown;

    if (typeof x === "string") {
      console.log(x.charCodeAt(0));
    }


Two reasons - current convention and backwards compatibility. Java developers are simply used to the cast, it is commonly occurring pattern (even if it is from necessity).

The backwards compatibility part:

  class Foo {
    void foo(Object o) { }
  }
  class Bar extends Foo {
    void foo(Integer i) { }
  }
  ...
  Foo x = ...;
  Integer i = 42;
  if (x instanceof Bar) {
    x.foo(i);
  }
Currently this calls Foo.foo, if there was a smart cast it would call Bar.foo, possibly breaking existing code.

EDIT: Added method parameter.


That's not correct. Java method dispatch is always dynamic, so x.foo() will always call the foo() method on whatever concrete type x actually is. The declared type of the variable that holds the reference to that object doesn't matter, nor does casting (with a couple exceptions, none of which apply here).

But don't trust me, try it!

    |  Welcome to JShell -- Version 11.0.5
    |  For an introduction type: /help intro
    
    jshell> class Foo { void foo() { System.out.println("Super"); } }
    |  created class Foo

    jshell> class Bar extends Foo { void foo() { System.out.println("Sub"); } }
    |  created class Bar

    jshell> Foo f = new Bar();
    f ==> Bar@58651fd0

    jshell> Bar b = new Bar();
    b ==> Bar@5419f379

    jshell> f.foo();
    Sub

    jshell> b.foo();
    Sub

    jshell> ((Foo)b).foo();
    Sub


That's right, I forgot a method parameter:

  jshell> class Foo { void foo(Object o) { System.out.println("Super"); } }
  |  created class Foo

  jshell> class Bar extends Foo { void foo(Integer i) { System.out.println("Sub"); } }
  |  created class Bar

  jshell> Foo f = new Bar();
  f ==> Bar@7f9a81e8

  jshell> Integer i = 1;
  i ==> 1

  jshell> f.foo(i)
  Super
  
  jshell> ((Bar)f).foo(i);
  Sub


Your statement is wrong, in the following sense: at runtime, the call to x.foo() always starts by looking at the runtime type (which in this case is Bar, or some subclass of it) - otherwise overriding methods would not work.

https://docs.oracle.com/javase/specs/jls/se7/html/jls-15.htm...


Yes, I forgot a method parameter there.


This kind of flow analysis is pretty new IMHO. C# supports it e.g. for nullable checks. But I think Java and C# were designed a bit too early to have this kind of flexibility with variable types. Also consider that changing the type has other consequences, like invoking other virtual methods (e.g. the C# new operator on methods), which can be confusing because the type change above is not explicit.

PS: it just came to my mind that this would be a breaking change (in C#) due to the new operator example, so it will never come to C#. Maybe Java does not have this issue, but most likely they have a similar problem.

This is an awesome feature for typescript. Love their work there.


It starts to get more complex than you might think. Suppose I declare x to be of interface type A and then pattern-match it to a disjoint interface type B. Now unless I express that with a cast or a new variable name, I need to express it as some kind of intersection type, and this may need to extend into the bytecode and VM.

It might even be an intersection between an object and an interface type, and for historical reasons those have different bytecodes for invoking methods!

Flow typing is really cool, but it can make things way more complex.


This blog post doesn't cover all the interesting changes, at least for me. [0]

For me the most interesting upcoming changes are

JEP 352: Non-Volatile Mapped Byte Buffers, JEP 345: NUMA-Aware Memory Allocation for G1, and JEP 370: Foreign-Memory Access API (Incubator). Especially the FMA API [1] examples seem most promising, but it ships with Panama and I am not sure about the maturity yet: https://github.com/zakgof/java-native-benchmark

I wonder where GraalVM stands in this picture: besides being polyglot, it has AOT and auto-vectorization features, and I don't know if those are already/will be shipped with OpenJDK

[0] https://jaxenter.com/java-14-update-news-163585.html [1] https://openjdk.java.net/jeps/370


Wow - raw text but I still have to escape \s, \w and friends in a regex. Call me underwhelmed.
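That complaint is accurate: backslash is still an escape character inside text blocks, so regex metacharacters must be doubled just as in ordinary string literals. An illustrative (deliberately simplified) email pattern:

```java
import java.util.regex.Pattern;

public class RegexTextBlockDemo {
    // Backslash is still an escape character inside text blocks, so the
    // regex class \w and the literal dot must be written \\w and \\. here too.
    static final Pattern EMAIL = Pattern.compile("""
            \\w+@\\w+\\.com""");

    public static void main(String[] args) {
        System.out.println(EMAIL.matcher("bob@example.com").matches()); // true
    }
}
```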


BankTransaction amount is a double? transactionDate is a LocalDate?

Please nobody copy paste that class.


What's the problem with using LocalDate to store a date (without time)?


There's no timezone.

LocalDate is appropriate for things like user interfaces, where you're modelling an intuitive/vague concept of date-ness only meaningful in some wider human context, or where the actual time at which that day starts just doesn't matter or is unknown. For instance it may be a good type to use for annotated historical events, where the day the event happened is the most accurate you can get.

For a bank transaction where they're international by nature and usually need to be ordered temporally against each other, it's an inappropriate type. The time zone in which the date should be interpreted is important. But really for transactions you'd be better off using Instant, at least internally. Time matters too.
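A small sketch of why the zone matters: the same `Instant` falls on different `LocalDate`s depending on the zone used to interpret it:

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

public class TransactionTimeDemo {
    public static void main(String[] args) {
        // One unambiguous moment on the global timeline.
        Instant t = Instant.parse("2020-03-03T23:30:00Z");

        // The "date" of that moment depends entirely on the zone chosen:
        LocalDate inLondon = t.atZone(ZoneId.of("Europe/London")).toLocalDate();
        LocalDate inTokyo  = t.atZone(ZoneId.of("Asia/Tokyo")).toLocalDate();

        System.out.println(inLondon); // 2020-03-03
        System.out.println(inTokyo);  // 2020-03-04
    }
}
```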


"Java 14 is scheduled for release on March 17"

So it didn't quite arrive yet.


You know, candidate/preview releases have been available for some time, if you can't wait to test these new features:

https://jdk.java.net/14/


So, (if I could write LaTeX in here I would)

The limit as JavaVersion -> Inf = Kotlin?

I won’t complain.


Related from a few weeks ago: https://news.ycombinator.com/item?id=22237145


Would be great if Records were actually bona fide value types lol, as in not allocated on the heap. Otherwise it's just nice syntactic sugar like data classes in Kotlin


All in good time, value types is in progress. Available to try here: https://wiki.openjdk.java.net/display/valhalla/Minimal+Value...


It’s been 6 years since Valhalla was announced; why it’s still in prototype befuddles me. If anything, getting value types out the door would bring significant performance improvements and the lowest barrier to entry for current Java/JVM programmers.


How do I propose a language change? One thing I'd really like to see is optional type declarations in Lambdas to bulletproof them. Take this simple Lambda

   List<String> names = new ArrayList<>();
   names.stream().filter(String name -> "Bob Vance".equals(name)).findFirst().get();

By adding the String type declaration, a whole host of bugs in really complicated Lambdas can be eliminated and found easier when the original types and lists are being shuffled around


Can't you do .filter((String name) -> "Bob Vance".equals(name)) already?


Oddly enough I'm having trouble finding references around this, even though it most definitely compiles.
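For reference, a self-contained sketch of the parenthesized form (the JLS calls these explicitly typed lambda expressions):

```java
import java.util.List;
import java.util.Optional;

public class TypedLambdaDemo {
    public static void main(String[] args) {
        List<String> names = List.of("Bob Vance", "Phyllis");

        // Explicitly typed lambda parameter: requires parentheses, and
        // fails to compile if the stream's element type later changes.
        Optional<String> bob = names.stream()
                .filter((String name) -> "Bob Vance".equals(name))
                .findFirst();

        System.out.println(bob.orElse("not found")); // Bob Vance
    }
}
```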


I just learned something today.


Seems like Java is trying to catch up with Kotlin


Pattern Matching for instanceof. Non-Volatile Mapped Byte Buffers. Helpful NullPointerExceptions. Switch Expressions (Standard). Packaging Tool (Incubator). NUMA-Aware Memory Allocation for G1. JFR Event Streaming. Records (Preview).

Everything looks promising to me.


Can't understand the need for records when C# solves the problem of boilerplate with regular classes and some syntactic sugar.


Apparently immutable classes in functional streams are the new hotness and nobody uses POJOs anymore. They'll slap pattern matching on top of that, to complete the circle.

I completely agree, as I myself see very few instances where I could use records in my code. But properties to remove getter/setter boilerplate? That would remove thousands of LoC. But it's not hot.

Fashion-driven development, that is.


It's an easy and zero-boilerplate way to have immutable data-only structures with some useful tidbits like structural comparison, meaningful hashcodes etc.

I say this as a C# developer who would desperately like them supported in C# and was super pissed off when they were dropped from C# 8.0 (I actually need them right now; they would save me a whole day of typing today)



hopefully yes, I'm following the Records v2 issue on github :D


Who cares about the new features? The new license makes it almost impossible to use Java without some form of payment to Oracle.


Ugh, false. This document goes into quite some detail explaining the situation and all the many options you have to not paying Oracle: https://docs.google.com/document/d/1nFGazvrCvHMZJgFstlbzoHjp...


> The new license makes it almost impossible to use Java without some form of payment to Oracle.

Do you have a source for this claim? It sounds a bit extreme, to say the least, and I had the impression OpenJDK was licensed using fairly standard terms.


It's mostly FUD: it's only relevant for the Oracle JDK; it doesn't apply to OpenJDK (or the other open distributions by other orgs).


It doesn't help that multiple Oracle/Sun folks—including people like McNealy—said under oath that they don't believe that the licensing permits you to make commercial use, even if you opt for the GPL version.


At the time Google screwed Sun, the GPL version did not cover the deployment into embedded platforms, only desktop and servers.

OpenJDK license is another matter.


I don't know what you're referring to, but FSF does not allow the GPL be used in such a way that the four freedoms are compromised by the licensor imposing additional restrictions.


Except that there are plenty of dual licenses with GPL-exception clauses and Java was one of them back then.

It is up to the courts and copyright holder to decided what to do with their IP.


First, you didn't describe an exception; you described additional restrictions. But now you're pivoting to talk about exceptions.

These are fundamentally different things. One enlarges the set of actions a recipient is free to do relative to what vanilla GPL allows. This is permitted (and in the case of the classpath exception, endorsed) by the FSF. The other attempts to shrink that set by denying the user things that the GPL would otherwise allow. The FSF simply does not permit the GPL to be used in that combination (and there is an extreme contrast between your last sentence and the failure to recognize the FSF's say in this).

And secondly, you've yet to substantiate your claim that Java was ever distributed with such GPL-modifying restrictions.


Well, I let Gosling speak about Google's then

https://www.youtube.com/watch?v=ZYw3X4RZv6Y&feature=youtu.be...


How about a straightforward response, rather than trying to change the subject again?

What's more, I've seen this interview multiple times. Listening to Gosling stutter and be coy is not illuminating in the least. He has no idea how to answer the question he was asked, much less what's being discussed here now.

Can you substantiate your claim or not?


Sun as copyright holder had the right to constrain Java's usage as they wanted, and embedded deployment wasn't covered.

Naturally it is hard for anyone to link to anything Sun, given what happened with their assets and Internet presence.

Is that a substantiated argument? Maybe not, but it doesn't change the fact that Google screwed Sun, didn't bother to rescue it when it went down, and now we have Java and Android Java.

I guess FSF is happy with the outcome then, since it is allowed to tank companies.


> Sun as copyright holder had the right to constraint Java's usage

Sure. But what they don't have is domain over the GPL.

I won't respond to the rest of your comment, which has nothing to do with the claim you made to kick off this branch of discussion and is just another attempt to change the subject (with what is an opinion, not a "fact").

This will be my last comment here.


Glad to hear that; I thought I remembered reading something like that back when the license change was first announced, but I wasn't sure if something else had changed in the meantime.


Well we all know that's not true, player.. :)

https://en.wikipedia.org/wiki/OpenJDK


Use the OpenJDK versions. It’s like CentOS instead of RHEL.


That only applies to the official java builds from Oracle. Distro maintainers can always compile their own openjdk binaries.



