Maybe creative destruction is for the best. The JVM platform is much nicer than Java the language. We've learned a lot since Java and the JVM were created. The death of Java might be a nice opportunity for some Spring cleaning.
Off the top of my head:
Making IntelliJ's @Nullable annotation a core part of the language (I would suggest spelling "Nullable" as "*" to avoid freaking out the C++ converts) would greatly improve static type safety. Let's stop repeating Tony Hoare's Billion Dollar Mistake.
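For concreteness, here's a minimal sketch of what the annotations buy you today as a plain library (org.jetbrains.annotations); the class and method names are invented for the example. In the hypothetical language this would be enforced by the type system rather than by the IDE's inspections:

    import org.jetbrains.annotations.NotNull;
    import org.jetbrains.annotations.Nullable;

    class UserLookup {
        // The return type admits null, so callers get warned if they
        // dereference the result without checking it first.
        static @Nullable String findNickname(@NotNull String userId) {
            return userId.equals("42") ? "towel" : null;
        }

        static int nicknameLength(@NotNull String userId) {
            String nick = findNickname(userId);
            // IntelliJ flags a bare nick.length() here unless it's guarded:
            return nick == null ? 0 : nick.length();
        }
    }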
Call me a heretic, but for most real-world uses (particularly financial calculations) IEEE 754r decimal floating point is a better default than IEEE 754 binary floating point, and it's less confusing for inexperienced programmers. Excel does some ugly, ugly tricks to try to hide binary floating point artifacts from display, but the artifacts still show up from time to time. Even experienced programmers often mistake binary floating point artifacts for software bugs.
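Here's the kind of artifact I mean, in plain Java; BigDecimal isn't IEEE 754r decimal, but it stands in for decimal arithmetic well enough for the comparison:

    import java.math.BigDecimal;

    public class FloatArtifacts {
        public static void main(String[] args) {
            // Binary floating point: 0.1 has no exact base-2 representation,
            // so ten additions don't land on 1.0.
            double sum = 0.0;
            for (int i = 0; i < 10; i++) sum += 0.1;
            System.out.println(sum);        // 0.9999999999999999
            System.out.println(sum == 1.0); // false

            // Decimal arithmetic gives the answer people expect.
            BigDecimal d = BigDecimal.ZERO;
            for (int i = 0; i < 10; i++) d = d.add(new BigDecimal("0.1"));
            System.out.println(d);                                 // 1.0
            System.out.println(d.compareTo(BigDecimal.ONE) == 0);  // true
        }
    }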
UTF-16 also fools many programmers into thinking it's a fixed-width representation, and Java's APIs force Strings to at least pretend that they're using UTF-16 internally. Chars should be 32-bit, since 16 bits aren't enough to represent all Unicode codepoints. Iterator-like String APIs are often more appropriate than Java's array-like String APIs, and give implementors more design flexibility.
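A small example of the mismatch, assuming a Java 8+ JDK (codePoints() arrived in Java 8). U+1D11E is the musical G clef, which sits outside the Basic Multilingual Plane:

    public class CodePoints {
        public static void main(String[] args) {
            // UTF-16 encodes U+1D11E as a surrogate pair: two Java chars.
            String clef = "\uD834\uDD1E";
            System.out.println(clef.length());                          // 2
            System.out.println(clef.codePointCount(0, clef.length()));  // 1

            // The array-like API hands back half a character (a high surrogate)...
            System.out.println((int) clef.charAt(0));                   // 55348

            // ...while the iterator-style API yields whole codepoints.
            clef.codePoints().forEach(cp ->
                System.out.println("U+" + Integer.toHexString(cp).toUpperCase())); // U+1D11E
        }
    }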
These days, even a lot of embedded systems are beefy enough that a register-based or SSA-based bytecode is a better choice than a stack machine. (For small embedded systems, I'm sure a few companies would spring up with cut-down standard libraries and tools to generate either native code or a stack machine representation from the standard bytecode.) Bytecodes that manipulate NaN-tagged values would also make the VM a better target for dynamically typed languages.
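For anyone who hasn't run into NaN-tagging: the trick is to store real doubles verbatim and hide every other value in the payload bits of a quiet NaN that ordinary arithmetic never produces. A rough sketch in Java (the class name and bit layout are mine, and note that longBitsToDouble isn't guaranteed to preserve NaN payloads on every platform, which is exactly why a real VM does this in native code):

    public class NanBox {
        // Exponent all ones plus bit 50 marks "boxed value, not a number".
        // Hardware-generated NaNs (0x7FF8... / 0xFFF8...) leave bit 50 clear,
        // so they never collide with boxed values.
        private static final long BOX_MASK = 0x7FFC_0000_0000_0000L;

        static double boxInt(int i) {
            return Double.longBitsToDouble(BOX_MASK | (i & 0xFFFF_FFFFL));
        }

        static boolean isBoxedInt(double d) {
            return (Double.doubleToRawLongBits(d) & BOX_MASK) == BOX_MASK;
        }

        static int unboxInt(double d) {
            return (int) Double.doubleToRawLongBits(d);
        }

        public static void main(String[] args) {
            double v = boxInt(-7);
            System.out.println(Double.isNaN(v));  // true: it's "just" a NaN
            System.out.println(isBoxedInt(v));    // true
            System.out.println(unboxInt(v));      // -7
            System.out.println(isBoxedInt(3.14)); // false: a plain double
        }
    }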
We know that CSP and actors are less bug-prone concurrency abstractions than forcing developers to work directly with threads, but Java chose threads to make C++ programmers feel comfortable.
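A rough sketch of the message-passing discipline using nothing but the JDK, with a BlockingQueue standing in for a CSP channel (the names are mine; it's still threads underneath, but the two "processes" share no mutable state and only exchange messages):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ChannelDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> channel = new ArrayBlockingQueue<>(16);

            Thread producer = new Thread(() -> {
                for (int i = 1; i <= 3; i++) {
                    try {
                        channel.put("job-" + i); // blocks if the channel is full
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });

            Thread consumer = new Thread(() -> {
                for (int i = 0; i < 3; i++) {
                    try {
                        System.out.println("got " + channel.take()); // blocks until a message arrives
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }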
The use of type erasure in Java's generics was a compromise needed to maintain compatibility with old class files. Safety and performance would be improved by starting out with a VM that supports reified generics.
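A small demonstration of what erasure throws away, runnable on any recent JDK:

    import java.util.ArrayList;
    import java.util.List;

    public class ErasureDemo {
        public static void main(String[] args) {
            List<String> strings = new ArrayList<>();
            List<Integer> ints = new ArrayList<>();

            // After erasure both are plain ArrayList at runtime, so the VM
            // can neither recover nor enforce the type arguments.
            System.out.println(strings.getClass() == ints.getClass()); // true

            // The compiler only warns here; the VM happily stores an Integer
            // in a "List<String>", and the failure surfaces later, elsewhere.
            List raw = strings;
            raw.add(42);
            // String s = strings.get(0); // ClassCastException at this distant line
        }
    }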
I'm not familiar with @Nullable (just looking it up now), but I have become quite fond of Guava's Optional<T>. It's a bit wordy, but it seems like a great example of how to make the type system work for you by finding your mistakes as early as possible.
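Something like this, using com.google.common.base.Optional (the class and method names are invented for the example):

    import com.google.common.base.Optional;

    public class OptionalDemo {
        // The return type now says "there may be no nickname" instead of
        // relying on callers to remember that null is a possibility.
        static Optional<String> findNickname(String userId) {
            return "42".equals(userId)
                    ? Optional.of("towel")
                    : Optional.absent();
        }

        public static void main(String[] args) {
            // Wordy, yes, but the empty case is right there in the
            // signature, so you have to handle it deliberately.
            String nick = findNickname("7").or("anonymous");
            System.out.println(nick);                            // anonymous
            System.out.println(findNickname("42").isPresent());  // true
        }
    }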
> Chars should be 32-bit, since 16 bits aren't enough to represent all Unicode codepoints.
This would be a lot better, but it's still an oversimplification that will cause problems. Thinking "chars are 32 bits" ignores the fact that, in Unicode, a "char" (that is, a codepoint) and a single character you see on the screen (that is, a glyph) are not the same thing: some codepoints don't map to glyphs at all, such as bidirectional markers, [1] and some modify existing glyphs, such as combining forms. [2] And Zalgo waits for people who forget about combining forms. [3] Zalgo hungers, mortal. [4]
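A small Java illustration of the codepoint/glyph gap, using the combining acute accent (the example strings are mine):

    import java.text.BreakIterator;
    import java.text.Normalizer;

    public class Graphemes {
        public static void main(String[] args) {
            String composed = "\u00E9";   // é as a single codepoint
            String combining = "e\u0301"; // e + COMBINING ACUTE ACCENT

            // Same glyph on screen, different numbers of codepoints...
            System.out.println(composed.codePointCount(0, composed.length()));   // 1
            System.out.println(combining.codePointCount(0, combining.length())); // 2

            // ...and not equal until you normalize.
            System.out.println(composed.equals(combining));                      // false
            System.out.println(Normalizer.normalize(combining, Normalizer.Form.NFC)
                    .equals(composed));                                          // true

            // Counting what a user would call "characters" needs a grapheme
            // iterator, not codepoint arithmetic.
            BreakIterator it = BreakIterator.getCharacterInstance();
            it.setText(combining);
            int graphemes = 0;
            while (it.next() != BreakIterator.DONE) graphemes++;
            System.out.println(graphemes); // 1
        }
    }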
The underlying 'problem' is that Unicode is the first text processing standard that's actually complex enough to be useful for more than one language. Its complexity reflects how complex the real world is.
We could be seeing a second life for Java rather than this disgusting shuffle of the undead.