While Java's slow and cautious evolution frustrates developers, it still arguably demonstrates longer-term thinking than the constant accrual of features in its contemporaries such as JavaScript and C#.
That isn't to say the designers of JavaScript and C# don't think carefully about the addition of new features; indeed, wasn't it Anders Hejlsberg who made the point about all newly proposed features starting with negative points?
But Java's recent and upcoming additions show _taste_: adopting a single lambda notation that fits smoothly with the surrounding class-based OOP paradigm via SAM types (rather than magically-generated-type-style lambdas succeeding two overlapping "delegate"/"event"-like features); implementing (sadly somewhat-incompatible) modules that can curtail unbounded reflection, opening the door to greater reasonability and performance in the future; proposing Project Loom to avoid the async/sync API split that hit Python, C#, and even Kotlin Coroutines; and now Valhalla, which was quietly mulled over for years before arriving at a very reasonable solution that considers myriad angles.
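To make the SAM point concrete, here's a minimal Java sketch (class and variable names are mine, purely illustrative): a lambda is just a terse implementation of any single-abstract-method interface, so it slots straight into APIs that predate Java 8.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class SamExample {
    public static void main(String[] args) {
        // A lambda targets any single-abstract-method ("functional") interface,
        // so pre-lambda types like Runnable and Comparator accept them directly.
        Runnable task = () -> System.out.println("running");             // implements Runnable.run()
        Comparator<String> byLength = (a, b) -> a.length() - b.length(); // implements Comparator.compare()

        task.run();
        System.out.println(List.of("kotlin", "java", "clojure").stream()
                .sorted(byLength)
                .collect(Collectors.toList()));  // [java, kotlin, clojure]
    }
}
```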
I like this approach to language design and think it bodes well for the language's future. It's a sweet spot between being necessarily conservative, dealing with developers' real-world problems, and giving time to mull over new language feature designs and not just implementing as soon (and haphazardly) as possible to please vocal developers.
I would agree with this if C# didn't definitively show that you can have a successful enterprise/"LTS" language that doesn't move at a glacial pace and spend several years doing little to nothing (pre-Java 8).
C# implemented features that Java only just got, much earlier and in much more useful forms (see: generics, lambdas), without being "haphazard" about it.
-
To me, tasteful is what C# did: breaking changes when needed, but only when so much value was added that no one could be upset with the result.
It takes way more effort, and way more carefulness, than "we're going to move at a glacial pace and provide half-hearted features in the name of backwards compatibility" (again, see: lambdas and generics).
I won't argue over particular features because clearly it's a matter of personal preference and, working on OpenJDK, I'm biased, but a couple of points:
Since Java was introduced in 1995, Microsoft's software framework has gone through about three or four drastic backward-incompatible "generations", depending on how you count. Most Java code written in 1995 will run, with little or no change, on the current JDK 13, and would compile on JDK 13 with only minor changes.
Leaving aside the JDK 7 years (2006-2011) when Sun was struggling and then going through an acquisition, the rate of innovation on the Java platform is quite high. It's just that we emphasize other parts of the platform more than the Java language because that's where Java has always believed most of the value is. In the past five years Java has seen an exceptional new low-overhead, extendable profiling capability (JFR), groundbreaking new GCs (ZGC), and what is probably the biggest breakthrough in compilation technology of the past decade (Graal). Those are all huge developments, none of them affecting the frontend language in any way. An innovative, state-of-the-art runtime programmed with an intentionally conservative language has been Java's strategy from the beginning -- James Gosling called it a wolf in sheep's clothing (https://youtu.be/Dq2WQuWVrgQ) -- and it has worked quite well. One could argue that that's the only way to sell a wolf to such an industry. Indeed, despite some grumblings on HN, it seems like most software developers prefer it this way.
I have no love lost for Java the language, I find it cumbersome as hell to work in and as such all of my JVM work is done in Kotlin, but I appreciate having conservative and well-thought-out evolution of platform features that ultimately affect all JVM languages, seeing as Java The Language is heavily tied into JVM semantics.
Quite frankly, even with all the new features Microsoft has been putting into C#, they've been prone to ignore the existing implementations in F#, leading to a clusterfuck of an ecosystem where I can't use many C# libraries in my language of choice due to conflicts in implementation. I don't have that on the JVM; I know I can use a Java library in Kotlin, Scala, Clojure, hell even Ceylon, without huge headaches because they all share the same primitives.
Let other JVM languages incubate new language features; Java needs to be where they are eventually stabilized and standardized in how they work on the platform.
>Since Java was introduced in 1995, Microsoft's software framework has gone through about three or four drastic backward-incompatible "generations", depending on how you count. Most Java code written in 1995 will run, with little or no change, on the current JDK 13, and would compile on JDK 13 with only minor changes.
That's exactly what's wrong with Java.
That's why we got "functional interfaces" instead of proper lambdas.
That's why we got half baked generics.
That's also exactly what I'm praising C# for.
They've paid that price 4 times, and every time they added so much value that it was easily worth it to their users.
Ask the average C# developer if they'd trade those generations away in exchange for Java's half-baked lambdas and generics, and you'd get a resounding "no way in hell".
And C# stayed relevant in the enterprise despite it; I've never seen an enterprise say "we're choosing Java over C# because four times since 1995 they decided to add incredibly powerful language features and broke backwards compatibility".
The places that couldn't handle the changes needed are also the ones that would keep running an old runtime (and old JVM if they're using Java, because any upgrade = risk, backwards compat claims or not) as long as they want, security be damned.
.NET has had an insane amount of "under the hood" innovation in all that time as well, by the way; I'm speaking to the programming languages, not the runtimes and platforms that enable them. It's fairly straightforward to compare language features because they started in such similar places; comparing runtimes is a much less direct comparison.
Whether that's "wrong" or "right" is a matter of opinion. But it's hard to argue with the fact that many more people seem to prefer Java's approach to Microsoft's. As to .NET's "insane under the hood" innovation, the platform is similar to what Java was about 10-15 years ago in terms of GC, JIT compilation and monitoring. .NET prefers changing the language, while Java prefers changing the platform. The two just have a different DNA. I've always felt that while there's a clear bottom-line "business" advantage to improving performance or observability, language features are almost always questionable because we can't find any evidence they make a real difference in practice. The main difference it seems to make is in the number of developers who vocally love or vocally hate a particular feature, and in giving a feeling of "vibrancy" to developers who perhaps sometimes care about their code more than about the program it's part of.
I’d recommend spending some more time with .NET Core and comparing its performance characteristics (as of .NET Core 3.1). I think you’ll find it compares quite favorably, with recent advances like Span making a big difference. While it’s accurate to say the CLR has been behind on the desktop for a long while, the rekindled effort around advancing the runtime in .NET Core has paid off in spades and has only really just begun. It sounds like your impression is rooted in the desktop CLR, which is effectively frozen for compatibility reasons.
I'm sure it has, but in the meantime Java is making great strides as well on multiple important fronts -- low-latency GCs, a next-gen JIT, AOT compilation, native ffi, low-overhead in-production profiling and lightweight concurrency, and the main problem with Microsoft's technologies remains that they break compatibility every five-six years or so. Now it's .NET Core, and it wouldn't be a bad bet to guess that in five years it will be something else. We're at a point where it's so hard to make changes with a big bottom-line impact (arguably there weren't too many such advances in the past twenty years) that very few actually justify breaking compatibility, and companies know that. So while smaller software can afford switching from one technology to the other, big "important" stuff requires a compatibility lifespan of at least ten if not fifteen years, something that Microsoft has never been good at. It's possible that with their focus shifting more to the backend with their cloud offering that would change, but it appears that their cloud strategy is to support all software platforms rather than focus on their own. That's why we see Microsoft now hiring engineers, as well as getting some CLR engineers, to contribute to Oracle's OpenJDK [1].
Well, you're certainly entitled to that opinion :)
As I said, I encourage you to explore .NET Core and its recent advances, especially now that the big runtime improvements made in recent years have proliferated throughout the standard library and new language features. While it's fair (and correct) to say that the Java runtime is ahead due to many years of focused improvements, I think you'll find the difference quite a bit smaller than it was back when .NET was focused mainly on Windows desktop software.
I believe you regarding .NET Core, but I think that Microsoft shifting their focus to a backward-incompatible development platform (not Windows, on which they have a very good compatibility record) every five years or so is more than just opinion. I think that their record on that front speaks for itself. If you were a developer who always uses the current flagship MS development platform to develop a piece of software that you first wrote in 1999, by now you'd be on your fourth significant rewrite. TBF, Java also made one such misstep, with JavaFX Script in 2009, but that was corrected and reverted.
That may be your impression. I'm only speaking from experience here, where we have done things like investigate issues with the first public release of a compiler for a week to ensure behavior is still correct when used in a newer environment.
But it's important to distinguish where we are now - .NET Core being explicitly cross-platform, vendor neutral, and OSS - from where .NET was. .NET Core is not tied to a specific product or initiative (e.g., Silverlight) that could go down due to other market forces and take that flavor of .NET with it. This is the position that Java has been in for a long time. Perhaps you or others you've known have been burned by buying into a Windows Mobile Flavor of the Month only to see it canned a few years later, so it's not unreasonable to think that the same could happen here. But at the same time, .NET Core has been going for 5 years now, and over that span of time it's only become more compatible with existing APIs and runtime environments, not less.
Haha, the way you just handwave away the last few years of .NET/CLR/Roslyn/DLR changes means I can see this is a waste of my time, and I will stop checking for replies.
But uh yeah... what percentage of Java developers are still on Java EE 7, 8?
The "business advantage" is in things not changing. If you compare languages by business advantage then Java 1 wins because "we haven't had to waste money upgrading since the 90s!"
I'm going for something a little more meaningful, developer ergonomics, which you hand-wave away as not being valuable to a business (protip: if your developers get to care about their code that's a good thing, you'll find you can get better developers if you look for the ones who do)
Putting aside which platform has better developer ergonomics -- something that's obviously very subjective -- you think that linguistic features that have not been found to make any significant impact are "more meaningful" than things that make software more valuable just because you feel very strongly about that? Also, seeing that I've been a professional developer for more than a couple of decades now, I hope you'll forgive me if I ignore your "pro" tip. Programmers who care about code are, indeed, often better than those who care about nothing at all, but if you want to hire good developers, you'll hire those who care about the product more than about code. They'll be much more expensive, though.
It's perfectly possible to separate the language changes from the mindless API/platform/ecosystem churn.
.NET Core finally considers Linux, the server OS, which brings it to parity with the JVM. Will this be the final rewrite? Who knows.
Plus there is a lot of churn in Java land too (new libraries, rewrites of old libs, streams, libs becoming unmaintained, etc.), just big corps don't ever do any real maintenance.
I would probably opt to use Scala or Typescript instead of C# anyway. And Rust if there's some performance critical part.
The big difference is that many .NET Framework libraries won't work on .NET Core, and even fewer on Linux, because they depend on C++/CLI, Win32 interop or COM.
Then those corporations start to evaluate a rewrite to portable .NET Core and come to the conclusion that several third party components also don't work on .NET Core and they need to go shopping.
In the same process, they find out that said components actually have a perfectly working counterpart for Java since Java 1.4 or something.
And a new RFP goes out.
As for what to opt for, platform languages are always the best bet long term.
As long as the platform is relevant on the market they stick around; guest languages come and go, and tend to be insignificant outside the platform where they have guest status, as they would have to bootstrap a whole new platform experience.
Sorry, I worded my comment imprecisely. I meant the CLR and language feature developments can and should be looked at separately from the platform/ecosystem stuff.
I'm aware that Core is different, and that MS only ported a subset of their APIs, and thus many libs need to be completely rewritten, especially those that were already just wrappers/helpers for more Win32-specific things.
And this wrapping of platform-specific lower-level libs happens in other ecosystems too. Python users run into this quite a lot, Rust devs too, etc. And many early Java programs were basically platform specific, even if the JRE itself was not.
These C# issues arguably demonstrate haphazard additions that didn't align with good taste:
* Too many overlapping concepts for referring to code by value and defining them inline: events, delegates, anonymous delegates, and lambdas.
* Lambdas that generate magic types rather than slotting into SAM types. This works great for functional languages, sure, but doesn't fit well into a class-based OOP language.
* `ref` and `out` parameters to appease archaic COM APIs.
* Tuple unpacking, which makes sense independently but then bizarrely tries to integrate with `out` parameters.
* A nice "ex nihlo" object literal syntax that shoots itself in the foot by undermining immutability due to requiring settable fields. (TBF, later versions fixed this IIRC.)
* Inline functions in methods in a language that already has lambdas and methods; it's just bloat.
* `as` casting that yields nulls. Did it really need a whole new syntax just to handle null more concisely specifically for casting?
* `partial` classes. Encouraging even more code generation with features like this is a questionable idea and wasn't necessary in other languages.
* `dynamic`. Even if there are use cases for it, it's strange in an otherwise static language with a top-level Object type anyway. I'm saying this as someone who's perfectly happy with fully dynamic languages like Erlang and Lisp. I just don't see the point of adding it to C# specifically.
* C-style enumerations.
* Properties. Invoking side-effects on something that is syntactically indistinguishable from an attribute read is a bad idea. Auto-generation of Java's verbose getters would be long overdue, but the call site shouldn't look the same, à la C#.
* Extension methods. They seem quite ad hoc compared to statically adding operations to existing types in other languages, like imported traits in Rust or typeclasses in Haskell.
* Proposed "shapes". Looks like a good idea by itself but will overlap too much with default method implementation and other existing mechanisms.
* Interfaces prefixed with `I`. Not strictly a language problem but an ecosystem one. I shouldn't need to know what _sort_ of type I'm dealing with, that's '90s Hungarian notation.
* Nullable reference types. Getting rid of null is good, but this proposal became confusing. They mentioned opting in assembly-wide for a while but there was then a conversation about having it just warn in some cases. I need to read the latest literature around this, but it seemed less elegant than Java just adding a monad-like Optional type and not adding loads of special-case operations with question marks everywhere.
Despite this, I still admire a lot of the design decisions behind C#. LINQ was great. `async/await`, despite my belief that it's inferior to Goroutines/Project Loom/Erlang processes, was still a great innovation from Midori at the time. Value types were obviously right to be implemented early on. Assemblies were a good idea. Private-by-default rather than package accessibility was a nice touch, as was the `override` keyword. The C# team are smart people who know what they are doing!
As an aside, I used to be firmly against erased generics, but reading more about the tradeoffs from the likes of Gilad Bracha has caused me to reconsider.
I disagree with so many points in your list I don't even know if it's productive to start listing them.
From implying that C# lambdas are not infinitely more useful, powerful, and intuitive than "functional interfaces"
To the issues implying Properties are bad compared to the completely and utterly ridiculous situation in Java (and C#-style properties are exactly how they're implemented in Kotlin, by the way).
And your "critique" of "dynamic", do you even know what it's for or how it works? Go read up on what the "DLR" is
You seem to have fundamental issues with the fact that C# goes for being a useful language for its users over being Java. Half your reasons are literally "it doesn't do it how Java does" or "it does X but Java doesn't so X is bad". The other half are complaints about insanely powerful features C# has that have minor issues that in no way take away from the fact that they're incredibly powerful and useful features.
-
To me, the moment you're trying to defend Java generics and type erasure vs C#, which paid the price early and has reaped the rewards for years, you should already know you're on the wrong side of things...
> To me, the moment you're trying to defend Java generics and type erasure vs C#, which paid the price early and has reaped the rewards for years, you should already know you're on the wrong side of things...
Those erased generics that you call "half baked" are the reason why language interop works so much better in Java than in .NET. The combination of subtype polymorphism and parametric polymorphism means choosing a variance strategy. If you reify generics with subtypable arguments, that means you must bake a variance strategy into the platform -- which is what .NET has done -- even though languages have very different ones (in particular, untyped and typed languages will have different variance). So on the Java platform, the Java language, Kotlin, and Clojure all have different variance, yet they can share data structures with no runtime conversion. The cost of this platform compatibility is exactly three very minor annoyances [1] in the Java language. For the price of these three minor annoyances, the reward Java has reaped is a large polyglot ecosystem that's a favorite of language implementors. The Java language, too, the one with those actual three minor inconveniences, is also much more popular than C#. So overall, I think it is hard to argue that C#'s rewards from reified generics are greater than Java's from erased generics.
It really doesn't take much of an effort to defend Java's choice once you know what the ramifications are and what the results have been. Partly because of the mistake of reified generics, .NET is de-facto a one-language platform. The language, like most programming languages, is fine, but given Java's growing emphasis on being one of the best runtimes for Python, Ruby and other languages as well, it's very clear that the two platforms have very different goals. Reifying subtypable generics is a good choice for a one-language platform but a bad choice for a polyglot one.
[1]: In order of decreasing annoyance: no overloading by generic argument, no `new T[]`, no `instanceof List<String>`. All three are very minor concerns, and the last is possibly even an antipattern.
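To make those three concrete, here is a small sketch of what the Java language disallows because of erasure (hypothetical class names; the commented-out lines are the ones that won't compile):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureLimits<T> {

    // 1. No overloading by generic argument: both methods erase to process(List),
    //    so declaring them together is a compile error.
    void process(List<String> strings) { }
    // void process(List<Integer> ints) { }   // error: same erasure as process(List<String>)

    // 2. No `new T[]`: the element type isn't available at runtime.
    // T[] buffer = new T[10];                 // error: generic array creation
    Object[] buffer = new Object[10];          // the usual workaround

    static void inspect(Object o) {
        // 3. No `instanceof List<String>`: only the raw/wildcard type can be tested.
        // if (o instanceof List<String>) { }  // error: illegal generic type for instanceof
        if (o instanceof List<?>) {            // this compiles; the element type is unchecked
            System.out.println("some kind of list");
        }
    }

    public static void main(String[] args) {
        inspect(new ArrayList<String>());
    }
}
```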
As someone who writes a lot of Kotlin for a living, something like 80% of the improvements Kotlin brings that I use on a day to day basis are features to give it the same level of ergonomics as C#... like reified generics...
-
And your comment that .NET is a de-facto one-language platform makes it sound like you've never heard of the DLR (or F# and VB for that matter).
To me the biggest reason DLR languages are not as big as things like JRuby is that C# is a pretty damn good language. There's much less value in trying to cobble together existing languages and subpar runtimes when the de-facto language is modern, developing at a steady clip, and "delightful" to use.
Those languages with under 10% of Java's market share would make up about half of .NET's. Everything looks small in comparison when you're as incredibly successful as Java (some of those languages you find so insignificant are about as big as Go, much bigger than Rust, and probably 10x as big as Elixir or Haskell). I just find it funny to argue that second-tier products (in terms of popularity) know more about "value" than leading ones. And none of that changes Java's focus and strategy as a polyglot runtime. Java is already on its way to being the best Python runtime, and it's getting competitive with the very top JS runtimes out there.
You can argue over language preference, as some programmers do, all you like. I have very different preferences from yours, and many other programmers have preferences different from both of us, and that's OK. You say you prefer programming Java in Kotlin rather than the Java language? That's perfectly fine and part of Java's strategy for the past 20 years. The Java language is intentionally conservative because it seems many more people like conservative, slow-changing languages, but the Java platform will make sure that it runs Clojure, Kotlin, JS, Python and Java language programs as well as anything.
Ugh, I started reading this before I realized it's the same person who thinks C#'s under the hood improvements in the last decade can be handwaved away.
Yeah, 10% of Java's market share is not larger than C#'s.
If it was, those languages would be showing up on Github's survey above C# as well, they all consider Java independently not as a combination of all JVM languages.
-
You've contorted this conversation so ridiculously out of shape, and now you're beating the horse you laid on it.
My original comment was a rebuttal to this comment:
> While Java's slow and cautious evolution frustrates developers, it still arguably demonstrates longer-term thinking than the constant accrual of features in its contemporaries such as JavaScript and C#.
If you read the whole comment, it was not about the JVM, and it was not about confusing this issue with "oh yeah well the language sucks but that's so you can run Python on its runtime".
No idea why you are so insistent on making it about anything but the actual language called Java, not Clojure or Kotlin or JS or Python.
Because Java is both the name of a platform and the name of a language for that platform, and from its original design, the platform has been the main focus. Clojure, Kotlin and the Java language are all Java platform languages. And you'll just have to come to terms with the objective fact that other developers might disagree with your subjective language preferences. In fact, statistics would suggest that most of them would (as they would with any of us; I don't think a majority of developers would agree with any single language preference ranking). Developers know that most other developers disagree with them over language preference. That's why I'd rather speak of the platform than the language. Clearly we have different language preferences -- as most developers do -- and there is no right and wrong there.
Ugh, no, Java is not the same as the Java Virtual Machine; no one calls them by the same name, and no one is confusing those but you.
When Github says your project has Java they don't mean your project has Clojure or JRuby.
When Tiobe says Java has X market share they don't mean JVM does.
They are not Java platform languages, they are JVM platform languages.
If you can't even talk about this using simple base definitions of the two languages that I've literally never seen argued against until today, it only compounds why I said having this conversation with you is not worth my time.
As I said, I work on OpenJDK, and the JVM constitutes less than 25% of the Java platform software (JDK). The Java language constitutes about 2% of the codebase, and Kotlin and Clojure make use of over 95% of it. They use the JVM, the Java core libraries (thanks to erased generics), and the selection of Java's debugging, profiling and monitoring tools that make up the JDK (not to mention their extensive use of third-party Java ecosystem libraries that aren't a part of the core platform). They are most definitely Java platform languages (although they're not only Java platform languages; e.g. Kotlin is also an Android language), even if colloquially many refer to the platform as "the JVM" although the JVM is only a small, yet obviously very important, part of it. Java is the name of both a programming language and the platform it is part of, and sometimes, for the sake of brevity, I too would refer to the platform as "the JVM." But as someone working on Java (not so much the language, but the platform), I try to use the more precise, more correct terminology, and I guess I'll just have to try and live with your dismay.
Well then, I guess I'm going to have to learn to live with your disbelief as well.
Java is the name of both a software platform as well as a programming language for that platform. Languages that target that platform are often called "JVM languages," but really they make use of almost all of Java (the platform) rather than just the JVM (although that degree varies: Kotlin makes use of the platform almost as much as the Java language; Clojure makes use of somewhat less, yet more than Scala or Ceylon).
You're mangling the conversation and then trying to force an issue with the conversation you mangled... again.
Please show me where Github, Tiobe, or any other survey says "Java" instead of "Clojure" or any "JVM language".
That's what my comment was referring to (rather, a small part of it was and you latched onto the chance to derail the conversation to wax poetic about semantics again)
You're the only one trying to talk about a platform.
Github indexes a ".java" file with Java code as a Java language file and a ".clj" file with Clojure code as a Clojure language file.
Java is the name of a language as well as the name of a platform that contains it, in addition to a VM, core libraries and a wide selection of tools, all as integrated but distinct components (those living in Britain will find such dual meanings familiar, even though they recognize they might be confusing to outsiders). You yourself have talked about "under the hood improvements" to C#, meaning the .NET platform, not just the C# language. The discussion of reified generics applies to the platform, not the language -- indeed, reification has hurt interop on .NET precisely because it occurs in the runtime (VM and standard libraries) rather than just the language -- and this very article about project Valhalla applies to both. Being languages that target Java (the platform), Kotlin and Scala would benefit from Valhalla just as much as the Java language. They would benefit not only from the changes to the JVM, but also from the relevant changes to the other parts of the Java platform that they rely on, like the core libraries and serviceability tools.
Now, I don't mind at all you calling the Java platform "the JVM," as many do, but using the more accurate terminology does not distract from the discussion, even though the terminology in itself is not very important. It serves to highlight Java's design and strategy from its inception, as you can see in the video I linked to: the intention and strategy all along have been to have a platform made of integrated but distinct components, with a state-of-the-art VM and a conservative language. To this day, the Java VM is state-of-the-art while the Java language is intentionally conservative. .NET's design philosophy and strategy are just different, also intentionally so. To ignore all that is to ignore how Java's designers see it and maintain it, and misses the point of what Java is. It also misses the central elements of Java's strategy, which has made it so successful.
Your original point was something about how you don't like Java's erased generics (and general language evolution). I explained how they fit in with, and are indeed the best choice for, Java's design philosophy, which, despite being drastically different from .NET's, has proven exceptionally successful -- namely one based on a platform made of several integrated but very much distinct components, with a fast-innovating state-of-the-art VM and a slow-innovating conservative language. I think it was rather straightforward, but if you've found it masterful ¯\_(ツ)_/¯
FWIW, I could have produced an equally-long list of Java flaws. I'm not terribly attached to either language. I just wanted to challenge the meme that "C# is a better Java" I hear on the Internet every other week.
"Infinitely" more useful seems like a stretch. Both languages made valid design decisions with their lambdas: autogenerated types means not every variant of lambda needs a backing interface, but it also means that the types are a world unto themselves and not integrated with well-established interfaces and abstract classes in the way that Java lambdas are, resulting in conflicting mechanisms for passing code to adhere to a requirement. I find the unification of SAM types and lambdas to be elegant in a class-based OOP language (and actually preferable to the Smalltalk/Ruby block model too), but it's clearly subjective.
My criticism of properties hiding side-effects as attribute reads is mostly derived from one of the earliest books about the CLR; was it "The CLR via C#"? I'll have to check. The critique isn't Java-inspired; getters have the same problem of course. The point is that a getter is a method call, so you expect potential side-effects. You don't expect side-effects from a property read, although C# does tend to use capital letters for properties, to be fair.
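A contrived Java sketch of that concern (the names are made up for illustration): the getter below does hidden work, but at least the call site `config.getValue()` is visibly a method call, whereas a C#-style property read looks like plain field access.

```java
public class LazyConfig {
    private String value;   // loaded on first access
    private int loads;      // side effect: tracks how often we (re)load

    // The caller writes config.getValue() -- visibly a method call, so a
    // reader at least suspects work (I/O, caching, counters) may happen here.
    public String getValue() {
        if (value == null) {
            loads++;
            value = loadFromDisk();   // hidden work behind an "accessor"
        }
        return value;
    }

    private String loadFromDisk() {
        return "pretend this came from a file";
    }

    public static void main(String[] args) {
        LazyConfig config = new LazyConfig();
        System.out.println(config.getValue());
        System.out.println("loads = " + config.loads);  // 1
    }
}
```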
While on the topic of CLR, I realise the DLR exposed dynamic typing primarily to make the CLR a better target for dynamic languages; I was arguing that exposing that up to C# wasn't necessary. C# is the flagship CLR language, sure, but that doesn't mean it must expose _every_ feature of it. Java also added `invokedynamic` for similar reasons but didn't feel the need to expose it to its flagship language directly in language syntax.
pron covered the nuances around generic type erasure in another comment better than I did, so I won't reiterate. Like you, I still mildly prefer reified generics over erasure as a language user, but the points raised by pron and Bracha are absolutely real. Runtime type checks and default values on generic parameters seem like antipatterns, so I'm glad Java doesn't support those _specific_ features of reification even if I like a lot of the other parts.
> Nullable reference types. Getting rid of null is good, but this proposal became confusing. They mentioned opting in assembly-wide for a while but there was then a conversation about having it just warn in some cases. I need to read the latest literature around this, but it seemed less elegant than Java just adding a monad-like Optional type and not adding loads of special-case operations with question marks everywhere.
Nullable Reference Types (NRTs) is released, so it's important to talk about what exists in an LTS form today rather than something from a draft proposal.
Firstly, there's the surface-level stuff. Reference types can be explicitly marked as `foo?` to indicate to the compiler that the type is nullable. Mismatches are warnings to ensure backwards compatibility, since billions of lines of perfectly valid code today can't just start emitting errors across an entire codebase.
But the far more interesting side of NRT isn't that, but the compiler analysis that goes into it. It's an incredibly advanced and thorough flow-based typing system that catches numerous complicated scenarios, and a system that can be (and is) improved over time without incurring a risk of a breaking change. This analysis is equally applied to the existing nullable value types, so it's a unified model.
The other interesting side of NRT is that it isn't a one-and-done feature. There's a long rollout period where the .NET ecosystem adopts this way of dealing with reference types, and to do so there need to be tools for component authors and application developers to adopt it incrementally and at their own pace. Everything in the design is incredibly deliberate and well thought-out, with numerous past designs considered along the way (such as a "sidecar" format for annotations and a mechanism for managing updates to that in parallel with a package or framework!). It is imperfect, but perhaps the best that can be done given the constraints a 20 year old language imposes.
That said, I really wish the world could be different. Since I prefer (and work on) a typed functional language where `null` isn't much of a problem due to a different core language design, the incredible amount of engineering effort that went into NRT for C# feels slightly strange to me. But my only reasonable alternative to not making progress on this problem is "just use a different language", which most developers do not find reasonable.
Additionally, the pedantic side of me doesn't feel that Java's Optional is in any way reminiscent of monadic programming. Java simply lacks numerous features to enable this style of programming in a way that the majority of Java developers would utilize.
Thanks for the clarification on the final NRT behaviour. Just to say, my point about "didn't align with good taste" was itself lacking taste. I'm sure each of those features I critiqued made sense as they were proposed at the time and were just considering different use cases and tradeoffs.
Did it drop the idea of assembly-wide opt-ins to stricter behaviour, meaning all NRTs can be reasoned about in the same way without considering a configuration flag somewhere like PHP? That does sound like an improvement.
Doing the change gradually, without breaking existing code or requiring potentially-ecosystem-breaking opt-ins, does seem eminently sensible and user-friendly. I was being too harsh on C# here. Java's `Optional` doesn't even warn about itself being null, for example. You need static tooling and code analysis for that. C#'s solution does at least try to solve that, albeit at the cost of more complexity.
I said "monad-like" rather than "monadic" for that reason, but arguably it doesn't even go far enough to be considered monadic-like. Certainly this article would agree: https://blog.developer.atlassian.com/optional-broken/
> Did it drop the idea of assembly-wide opt-ins to stricter behaviour, meaning all NRTs can be reasoned about in the same way without considering a configuration flag somewhere like PHP? That does sound like an improvement.
There are three levels of opt-in/out:
* Source directives, for opting in/out a single scope (typically a whole file, but it can be as small as a single method)
* Project file directive via MSBuild property, for opting in/out a single project/assembly
* The MSBuild property can be set in a Directory.build.props file, opting in/out all projects in a directory and its children
In the long term, this property will likely be implicitly set to true for new projects (and perhaps all projects in the very long term), but for now everything is opt-in.
Personally, I think that one of the biggest areas for tooling improvement is to better recognize when you're in a nullable-enabled context. It's pretty obvious when you look at source code, but when you've got a million LoC codebase and only a subset uses it, making developers aware of that is certainly a challenge.
The other interesting thing to consider here is that even if the C# compiler freezes all future improvements to NRT analysis, there is a level of breaking changes that will occur over time as packages and frameworks adopt the feature and rev their versions. This transformation won't be without pain for existing codebases, and some may never adopt the feature (especially if they're the kind of codebase that doesn't really modernize). Interesting times ahead.
Fair enough; I do miss keyword and default arguments from C# when using Java. A long list of overloads and argument forwarding to fake default arguments gets old.
It’s interesting to look at this celebratory narrative and the fact that Valhalla took over 5 years (!?). And contrast with the rise of go and AWS Lambda. And C++, where the polished parts of boost got added to the language behind some flags (so the old language was preserved). And the growth of C++ despite the complete absence of a competitive, platform agnostic packaging & distribution solution.
Wait, why do we care about the JVM?? This whole article appears to address concerns that, while fundamental to programming, are also fundamentally irrelevant to shipping products.
> ...complete absence of a competitive, platform agnostic packaging & distribution solution
Well, to be fair, it's more like there are two (or so) good cross platform ones at the moment, but there isn't one so popular to be de facto. And there isn't one that is standard.
Besides, the most popular packaging and distribution systems for native code might still be the popular Linux distros. Packaging and distributing precompiled binaries with native ABIs (not targeting interpreters or runtimes) is much more complicated. Folks have decided to value other things than portability of packages for now. That could change in the future.
While that is interesting to hear, is there a single book that brings someone who hasn't used Java in the last 5 years up to speed, like Bjarne Stroustrup's 'A Tour of C++' does for C++?
There might be, but unfortunately I don't know it. I learnt Java initially from the first edition of Just Java, read from various other sources about the 1.5 and 1.6 additions it didn't cover, and then learned new features from OpenJDK proposals from 1.7 onwards as they were released.
The biggest impact of the 5 years for working developers has probably been:
* The Streams API and lambdas allowing the usual `map`/`filter`/`reduce` transformation pipelines like many other languages (lazy, unlike JS's `Array` methods; parallelism considered via "spliterators", unlike Python).
* Default methods in interfaces, _not_ making them traits, but arguably more "trait-like". They allowed many useful methods to be added onto existing types without needing to bifurcate the common APIs.
* A REPL. Probably not a big deal for a professional developer, who likely uses IntelliJ's "Evaluate Expression" feature already for a similar feature, but pretty useful for education I guess.
* Better at being stuffed inside Docker containers with resource limits. Also, max permgen size is no longer a problem (it was a separate memory limit you had to set when invoking Java programs, which was annoying). Just generally less annoying to deploy in modern infrastructures.
* APIs like `Files.lines` that mean you don't need to recite War and Peace just to do a buffered operation across the lines of a file. Generally more ergonomic APIs that finally make basic operations sane (a sketch pulling a few of these together follows after this list).
* Type inference with `var`. Basically the same as C#'s `var` or Go's `:=`.
* More native-executable-producing tooling: jlink, jpackage, etc. The dream of ubiquitous, shared Java runtimes installed everywhere didn't really work out, and they seem to have realised that.
* Ecosystem moving away from XML. It's still there, just less common. Runtime annotations are used more heavily, which I'm honestly not sure is a massive improvement. Hopefully more lambda-heavy APIs will reduce the use of runtime annotations; see Spring WebFlux's functional handlers versus annotation-based controllers for a comparison.
There's a lot more, but I'd argue it won't be as immediately visible to the working developer as those points. Modules were important for the ecosystem, but most devs probably aren't worrying about that day-to-day. Gradle and Maven still dominate the building/packaging side.
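As promised above, a small sketch pulling a few of these together -- `var`, `Files.lines`, and a stream pipeline. The log file name and format are made up for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.stream.Collectors;

public class ModernJavaSketch {
    public static void main(String[] args) throws IOException {
        // `var` infers the obvious types; Files.lines gives a lazy Stream<String>
        // without the old BufferedReader/FileReader ceremony. "access.log" is a
        // placeholder path for illustration.
        var path = Path.of("access.log");
        try (var lines = Files.lines(path)) {
            Map<String, Long> hitsPerPath = lines
                    .filter(line -> !line.isBlank())
                    .map(line -> line.split(" ")[0])  // pretend the first token is the request path
                    .collect(Collectors.groupingBy(s -> s, Collectors.counting()));
            hitsPerPath.forEach((p, n) -> System.out.println(p + " -> " + n));
        }
    }
}
```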
The .NET CLR (VM) has had proper generics and value types since 2002. Java did not implement generics until 2004 and to this day does not have user-defined value types. There was an opportunity to do it right in 2004 before this became "baked in".
There is really no good excuse. Many developers have abandoned java due to the glacial progress with both the language and the VM. It is legacy now.
The main benefit (IMO) of Valhalla would be for other JVM languages (such as Kotlin) to make the implementation of value types more efficient.
(I use both the CLR and the JVM in my work, but it is clear to me that the CLR is superior on many fronts.)
Well, some companies still write many of their new projects in the Java language (not to mention use the Java platform), like Apple, Amazon, Alibaba, Google, Netflix, and, of course, banks, governments, airports, militaries, hospitals, utility companies, factories, robotic warehouses [1], and most Fortune 500 companies.
To make your comment more explicit: the parent states that Java is legacy now. But given your examples above, that would mean that Java and many other languages are legacy now.
Of course any language could be legacy in the future, but historically the probability that one of those future legacies is a hot new language now is at least equal to that of an established language currently still widely used to write significant production software.
The initial concept was "value types" and we wanted a project name that was evocative of "value". Place names (even for mythical places) are good names since they cannot be trademarked. Hence, "value" => "val" => "Valhalla".
Because Java has slowly been dying? ;-) No, I don't think that's why (and Java is still being used a lot, I guess), but that was what jumped to mind first thing when I read "Project Valhalla". Not the most PR-friendly name!
A tangentially related question to generics: does anyone think Java will ever get higher-kinded types? It would be super useful to have a better way of expressing higher-level abstractions.
Java will not get HKTs in yours and my lifetime. What might get HKTs is Kotlin. There's a proposal brought forward by the Arrow people that has attracted a lot of attention.
If that does not succeed, the Arrow folks are working on a meta compiler (think Template Haskell, but with IDE support) that should get you HKTs and union types.
I wonder if they plan to support nested inline types, such as arrays inside objects, or arrays in arrays. Every time I try to design a layout like that I end up having to compute values to initialize various offsets with. Arrays may have a length prefix, or objects may need some metadata in the header. Tagged unions may need to initialize tags.
Reading the doc, I believe this is supported with these restrictions: (1) value classes may only contain other value classes, and (2) value classes cannot contain instances of themselves (no circularity).
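One way to see why the no-circularity rule is forced, sketched in today's Java: self-reference is fine for ordinary classes because reference-typed fields are just pointers, but a flattened value type would have to embed its own layout inside itself.

```java
// Today's Java: a class may contain a field of its own type because that
// field is a reference (a pointer), so the object's layout stays fixed-size.
class Node {
    int value;
    Node next;   // fine: just a pointer to another Node (or null)
}

// A flattened value/inline type could not do this: its layout would have to
// contain itself, giving an unbounded size -- hence the no-circularity
// restriction described in the Valhalla docs. Nesting *different* value types
// (e.g. a Line flattening two Points) avoids that problem.
```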
> With the multi-level memory caches and instruction-level parallelism of today’s CPUs, a single cache miss may cost as much as 1000 arithmetic issue slots – a huge increase in relative cost.
Does this mean that the value of these additions is partially diminished because of Spectre and Meltdown vulnerability fixes?
I think it might actually end up the other way around. For example, primitives are going to be able to have methods now. In the end, the numbers just aren't there for certain types of computations without them. And people have been complaining about them missing for years.
Also generic specialization-- which will allow generics over primitive types (and value types). That'd eliminate one major source of pain for performance-sensitive code (no more writing your own collection classes if you need to, for example, contain a bunch of ints).
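For a sense of the pain being referred to: today a generic collection of `int` boxes every element, which is why people hand-roll primitive collections or fall back to `int[]`. A rough sketch of the contrast (illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

public class BoxingSketch {
    public static void main(String[] args) {
        // Today: List<int> is illegal, so each element becomes a heap-allocated
        // Integer with an object header and a pointer indirection.
        List<Integer> boxed = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            boxed.add(i);               // autoboxing on every add
        }

        // The usual workaround for performance-sensitive code: drop generics
        // entirely and use a primitive array (or a hand-rolled int collection).
        int[] flat = new int[1_000_000];
        for (int i = 0; i < flat.length; i++) {
            flat[i] = i;
        }

        // Valhalla's specialized generics aim to let the first form lay out
        // like the second, without the boxing.
        System.out.println(boxed.size() + " boxed vs " + flat.length + " flat");
    }
}
```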