Project Jigsaw: Complete (mreinhold.org)
184 points by artbristol on Sept 24, 2017 | 85 comments



I don't follow Java too closely and the word "module" is so hopelessly generic that I had a hard time understanding what the article was talking about. It links to this document [1] though, which is a long read, but does an excellent job of describing all the ins and outs of the module system.

(tl;dr Packages can now declare themselves as modules instead of simple namespaces, and they get to choose exactly what they export and what other modules they require. Projects can now resolve their types from within modules with a module path instead of on a per-type basis from the class path. It's a huge step forward for Java modularity and dependency management.)
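(To make the tl;dr concrete, here's the shape of a module descriptor. The module and package names below are hypothetical, not from the article:)

```java
// module-info.java -- a hypothetical library module
module com.example.orders {
    requires java.sql;               // dependency, resolved on the module path
    exports com.example.orders.api;  // the only package other modules can see
    // com.example.orders.internal is not exported, so it stays encapsulated
}
```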

I have a hard time being too impressed because more modern languages are doing this out of the gates and with far fewer hacks and more out-of-the-box tooling (e.g. Rust), but acknowledge just how monumental of an effort it was to design the new system and then build it into Java's core in a way that's mostly backwards-compatible. Java's going to be around for a long time, and this is a big long-term gain for the ecosystem.

I'd really love to see languages like Ruby tackle this next. It's sorely needed.

---

[1] http://openjdk.java.net/projects/jigsaw/spec/sotms/


> ...because more modern languages are doing this out of the gates and with far fewer hacks and more out-of-the-box tooling (e.g. Rust)

Haha...I love Rust, but you must not have been following the recent saga over modules in the community. Two things became abundantly clear:

1) There are definite issues with the current way that Rust does modules and they create problems for newcomers that don't know all the intricacies.

2) There's very little consensus in the community about the best way to fix those problems.

Initially, a couple of different, somewhat major proposals were made that would largely overhaul the system. Over the course of a few iterations, those were whittled down to a few, much smaller changes that mostly keep the current system but remove some of the stumbling blocks. It's a credit to the Rust team that they've handled it in such an open manner, but it's also creating a bit of a "design by committee" feel that's probably going to create something that everyone can live with and very few will think is close to perfect.


The problems the Rust module system has and the changes being made to it are purely syntactic. Rust has already solved, and isn't changing its solution to, the kinds of problems Java is solving with Jigsaw.

FWIW, I don't agree with the "design by committee" assessment on the new system, either. There was certainly a lot of committee-flavored input, but it was only used for brainstorming and gathering requirements. The actual RFC, especially at this point, is pretty coherent and put together by only one or two people.


> it's also creating a bit of a "design by committee" feel that's probably going to create something that everyone can live with and very few will think is close to perfect.

I wish that the term "design by committee" didn't have the stopping power it has. Rust's current module system absolutely wasn't designed by committee, and if there's one complaint that I'd levy against it it'd be "overengineered", which is typically what people seem to tend to expect from systems designed by committees!

Personally, whether or not something is designed by a committee is orthogonal. What matters more is whether that something is well-integrated with the sibling systems that it is integrated with (read: unsurprising), and whether its design exhibits good taste (obviously, openly, wantonly subjective, but perhaps we can usefully say that a design can be judged to have good taste if it is so judged by people who think that its sibling systems also have good taste (so read: consistent)).

In truth, there probably is a positive correlation between things that are designed by committee and things that violate the two principles above; it's hard to get lots of people to commit to a consistent vision, especially if that involves serious tradeoffs. It's part of why languages with BDFLs tend to be considered at least coherent, if not elegant: it's easy for a committee of one to have a vision consistent with itself. But where this pejorative doesn't necessarily apply is when the committee is small, close-knit, and all share the same values.

When it comes to Rust, the committee in question is the language team, which is just seven people (in contrast to the dozens of commenters on this issue, who are there to provide perspective and arguments, not to cast votes). Of these, the team lead (Niko Matsakis) is someone that I personally trust to have excellent taste; I feel the same about the RFC author, Aaron Turon, who is also on the language team. And, for better or worse, teams choose their own new members, which gives good taste the chance to propagate; this runs the risk of stagnation (lessened, hopefully, by the RFC process), but also avoids the fractured nature of committees assembled from far afield.

For the record, the module system is exactly one of those things that I've long felt is subpar about Rust (though still better than headers, obviously--that's not a debate, it's a massacre). I haven't had the chance to play with the revised modules RFC yet, but we'll have a lot of experience with it before it potentially gets stabilized, and from what I've read it does look like it ultimately improves consistency and reduces surprises when judged against the rest of the language, which makes me optimistic. I'd love to have to find something new about Rust to whine about. :)


For the record, I wasn't talking about the overall module system with my "design by committee" description, just the recent proposed changes. Whether there were only 7 votes or not, it definitely felt like there was an effort to appease as many of the people who commented on the issues as possible, and that wasn't a small number. Don't get me wrong...I think the community that has developed around Rust is awesome and somewhat unique for a technical project when it comes to the respect and civility that gets exhibited. But I think the module system is an area where having a talismanic figure capable of hearing input, deciding on a design aesthetic and then pushing the entire community to buy into it would yield superior results to a make-everyone-happy approach. And it feels like the changes that will be coming to Rust are more of the latter and limited to just the things that almost everyone agreed upon.

The difficulty is that I think the Rust process works better for changes that aren't as disruptive or fundamental to the language. The RFC process is not only open and fairly welcoming to people outside the core team, it's also a great educational resource. I've learned a lot about language design from reading the RFCs and the accompanying comments. So I don't think I'd trade the Rust model for a BDFL model. I just wish something more like Aaron's original proposal (the second original proposal, that is...the one that got turned into the most recent one) was being implemented, even though I disagreed with a good chunk of it.


Yeah, design by committee isn't a bad thing. See e.g. Common Lisp which, being defined by an ANSI standard, shows how great things can be when you have a committee made of smart people who inform their work by carefully evaluating what other smart people before them tested in the field.


    (setf (readtable-case *readtable*) :invert)


Yes. What about it?

It's a useful feature to have, as it makes the Lisp reader more flexible. See the summary of the discussion by the committee here: http://clhs.lisp.se/Issues/iss286_w.htm


> I have a hard time being too impressed because more modern languages are doing this out of the gates

The problem with Java is that it's an already established and very popular platform and it's been so for the past decade at least. When starting from scratch, it's easy to just throw it all away and start fresh, it's easy to fix mistakes that were made in the past.

The irony though is that we still have a hard time learning from history. Just look at Go. Sometimes this industry feels like Groundhog Day, the movie.

Also, no platform or language that I know of has gotten "modules" right, with Java being one of the platforms that has gotten closest to fixing the problem actually (by means of OSGi). To see why modules are still a big problem and why we won't have a fix for the foreseeable future, I invite you to watch this keynote by Rich Hickey: https://www.youtube.com/watch?v=oyLBGkS5ICk


OSGi is a bottomless pit of pain and despair. Never again will I work on any project built on OSGi modules (well, every man has his price, but you know what I mean).


I agree, OSGi sucks, but it's one of the few attempts at fixing the problem with compatibility breakage in dependencies.


> I have a hard time being too impressed because more modern languages are doing this out of the gates and with far fewer hacks

Well of course - it's easier to build something like this into a clean-slate language isn't it? It's harder to build it into an existing language and VM spec with an incomprehensibly large volume of existing code to be compatible with. It's backwards to say it's not impressive because someone else with zero constraints to work with also managed it.


> Well of course - it's easier to build something like this into a clean-slate language isn't it? It's harder to build it into an existing language and VM spec with an incomprehensibly large volume of existing code to be compatible with. It's backwards to say it's not impressive because someone else with zero constraints to work with also managed it.

I hope my original comment was relatively clear on this, but sure, it's a big accomplishment and will be a quantum leap for the ecosystem. However, given that languages elsewhere have had better systems for quite some time now, it's not like they're advancing the state of the art in dependency management.

But again, it wasn't an easy thing to do (a very hard thing even) and everyone involved deserves major felicitation.


I guess the real question is, why do you find advancing the state of the art to be more impressive or deserving of acclaim than integrating improvements to existing systems?

Both activities seem valuable and complementary to me -- there's no point in advancing the frontier if nobody will bother to make those advances practically useful.


They are advancing Java itself. Those who work with Java on large projects (which is the Java use case) will find modules useful. There is a huge community around Java, and even more people who use it but don't count themselves members of any community.


I think the distinction GP makes is between being impressed by the concept of a module system itself, and being impressed by being able to bolt such a system onto an existing language.

A module system in and of itself is not impressive these days. Getting one into Java is.


Can someone with an idea about this post other languages that actually attempt to modularize their own API? I'm not aware of any.


Elixir is an application that runs on the Erlang VM. Libraries can be brought in and started as their own applications, with their own lifecycles. I'm not familiar enough with the specifics of the Rust way of things, but it sounds like this is sort of similar if you squint a little bit?

Observer screenshot from a blank IEx session: https://puu.sh/xI3re/3701a0c64e.png


> I have a hard time being too impressed because more modern languages are doing this out of the gates and with far fewer hacks and more out-of-the-box tooling

I'm not so sure about that. Java (both the language and the VM) is relatively unique in its mix of static (mostly in the sense of being typed) and dynamic (all linking is dynamic; code is often loaded, unloaded and generated at runtime; reflection and general introspection of the runtime is pervasive). It's a blend of, say, ML, Smalltalk and Erlang. The module system is therefore more impressive than it may look at first glance. While not all the tooling is there yet, through the use of what it calls "layers" it makes it possible to dynamically load plugins or even upgrades to components in the system, each depending on different versions of the same library, while enforcing customizable levels of isolation between them. Whether there are any conflicts requiring a separation of "layers" can also be determined at runtime by introspection.
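A small sketch of that runtime introspection (boot layer only; building a child layer for plugins additionally needs `Configuration.resolve` and a module path of its own, which is omitted here):

```java
public class LayerDemo {
    public static void main(String[] args) {
        // Every Java 9+ process has a boot layer; java.base is always in it.
        ModuleLayer boot = ModuleLayer.boot();
        System.out.println("java.base present: "
                + boot.findModule("java.base").isPresent());

        // Enumerate the modules actually resolved into this runtime.
        boot.modules().forEach(m -> System.out.println(m.getName()));
    }
}
```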


Other than (its botched version of) parametric polymorphism (due to how easily it can be circumvented, defeating the entire point to it), what ML-influenced features does Java actually have? There is no global type reconstruction, there are no abstract types, and “computation as expression evaluation” is a royal pain in the rear hole in Java.


Oh, I simply meant any safe, typed language, and that was the first that came to mind.

Having said that, Java is a typed, safe OO language, and that alone, I contend, already makes it close in spirit to ML (in particular, its structures/signatures). I think interfaces (and the ability to override methods like `equals`) are pretty close to abstract types. I also strongly disagree with your description of Java's parametric polymorphism being "botched", and certainly not that its "entire point" is defeated. That it can be circumvented is what makes code and data sharing between different languages with different variance models possible (compare to how badly that's done in .NET). So it's simply a matter of what requirements are more important to you.

I can understand those that think that Java not being fully static or fully dynamic may defeat the whole purpose of what they like in their preferred approach. But I think that Java combines static and dynamic aspects in a rather unique and novel way, and the new module system is not different, combining in an interesting way both static and dynamic aspects.


> Having said that, Java is a typed, safe OO language, and that alone, I contend, already makes it close in spirit to ML (in particular, its structures/signatures).

The whole point to structures and signatures is abstract types. Mere value hiding can be achieved with let, which isn't exactly the pinnacle of typing: Scheme has it.

> I think interfaces (and the ability to override methods like `equals` are pretty close to abstract types.

Please do tell how you would make two or more instances of an abstract type in Java, in such a way that:

(0) The client is aware that the two instances have a common type.

(1) The client is not aware of the representation.

If it's not possible, then Java doesn't have abstract types, period.

> That it can be circumvented is what makes code and data sharing between different languages with different variance models possible

And unsafe. (No, memory safety alone isn't safety.) If it's going to be unsafe, then I better at least get my money's worth in terms of performance, which is why low-level languages like C and Rust are the only ones worth FFI'ing to.

> I can understand those that think that Java not being fully static or fully dynamic may defeat the whole purpose of what they like in their preferred approach.

Java is fully dynamic. “Type checking” in Java is basically a mandatory linter.


> Please do tell how you would make two or more instances of an abstract type in Java, in such a way that...

By having two implementations of a common interface, like `ArrayList` and `LinkedList` both implementing `List`.

> And unsafe.

It seems like you're defining "safe" to be precisely what the languages you like provide, no more and no less. I can say that ML and Haskell are unsafe because they don't statically forbid erroneous behavior at runtime, like a sorting function that doesn't sort (or doesn't terminate). Java has no undefined behavior. In fact, it is completely unknown how much safer -- if at all -- ML is than Java in practice.

> If it's going to be unsafe, then I better at least get my money's worth in terms of performance, which is why low-level languages like C and Rust are the only ones worth FFI'ing to.

Of course it's a matter of specific requirements, but I think you get more than your money's worth in terms of performance in Java, and having decades of experience writing huge multi-MLOC programs in both Java and C++, I'm convinced that it takes significantly less effort to get a well-performing large Java app -- especially if it's concurrent -- than a C++ app, even though you could surpass Java's performance given considerable additional effort. In any event, I think that the success of the JVM shows that many people find supporting it to be worth it.

> Java is fully dynamic. “Type checking” in Java is basically a mandatory linter.

This is simply untrue. Java is mostly type safe. If you have a variable of type `Foo` in your program, it cannot reference an object of a type that is not `Foo` at runtime. You are right that this does not extend to generic types, but only if -- 1. you've intentionally tried to circumvent the type, or 2. you've fallen victim to an obscure bug that was found recently, and is very hard to reproduce accidentally.
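A minimal sketch of point 1, intentionally circumventing a generic type: the raw-type assignment draws an unchecked warning from the compiler, and the damage only surfaces later as the `ClassCastException` discussed above:

```java
import java.util.ArrayList;
import java.util.List;

public class HeapPollution {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List raw = strings;   // raw type: this is the intentional circumvention
        raw.add(42);          // heap pollution: an Integer inside a List<String>

        try {
            // The erased checkcast inserted by the compiler fires here,
            // far away from the line that actually caused the problem.
            String s = strings.get(0);
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println("caught ClassCastException");
        }
    }
}
```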


> By having two implementations of a common interface, like `ArrayList` and `LinkedList` both implementing `List`.

That's not an abstract type, it's an object type. An abstract type has a single representation, determined by the type's implementor, which is hidden from the rest of the program. OTOH, objects with the same type may have different internal representations, determined by whoever constructs the object, just as in your example.

> It seems like you're defining "safe" to be precisely what the languages you like provide, no more and no less.

Safety is defined in terms of the language's semantics, not my personal preferences: are meaningless operations ruled out or not? (We can get technical and say that Java does, in fact, assign a meaning to invalid casts: to throw ClassCastException. But very few people would consider that a useful meaning: if you run into it, your program plainly has a bug.)

> Java is mostly type safe.

Then it isn't.


> An abstract type has a single representation, determined by the type's implementor, which is hidden from the rest of the program.

So, like:

    <T> void foo(A<T> a, B<T> b) { ... }
You don't know what `T` is, but you know that `a` and `b` are parameterized by the same type.

> Then it isn't.

That it's not completely typesafe (Scala isn't, either, BTW) doesn't mean that it's not completely safe. The vague notion of safety is not defined by the arbitrary notion of type safety, which heavily depends on the type system. TCL is typesafe, but few would say it's safer than Java.

> But very few people would consider that a useful meaning: if you run into it, your program plainly has a bug.

There are many more plainly incorrect behaviors that aren't prevented even when the language is 100% type safe (depending on the type system, and the effort required to encode the correctness conditions) -- as Java prior to generics was, BTW. So if you want to define "safe" as "typesafe", that's fine, but given that ML's and Java's type systems are of similar (not identical, but similar) richness (i.e., they are both simple type systems with parametric polymorphism), and given that you don't actually run into `ClassCastExceptions` in Java unless you choose to do stuff that can get you into that sort of trouble (which would mean ignoring compiler warnings), I think the two languages offer a similar level of safety.


> So, like (...) You don't know what `T` is, but you know that `a` and `b` are parameterized by the same type.

Abstract types are existentially quantified, whereas generic type parameters are universally so.

> Scala isn't, either, BTW

Scala isn't type safe either, period.

> So if you want to define "safe" as "typesafe", that's fine, but given that ML's and Java's type systems are of similar (not identical, but similar) richness

Actually, Java's type system is more complicated, since it has subtyping, as well as a limited form of first-class existentials (wildcards) that Standard ML doesn't have, yet somehow Java buys me less safety than Standard ML. This is what happens when engineers design programming languages.

> and given that you don't actually run into `ClassCastExceptions` in Java unless you choose to do stuff that can get you into that sort of trouble (which would mean ignoring compiler warnings), I think the two languages offer a similar level of safety.

There are no runtime type errors at all in Standard ML.


> Abstract types are existentially quantified

Oh, you mean something like this?

    interface A<T> {
        T x();
        void y(T t);
    }

    void foo(List<A<?>> as) { // every element may have a different instance of T
        for (A<?> a : as)
            bar(a);
    }

    <T> void bar(A<T> a) {
        T t = a.x();
        a.y(t);
    }

> There are no runtime type errors at all in Standard ML.

There are no runtime type errors at all in TCL, either. That doesn't mean that the language is safer. "Safety" and "type safety" are not the same. Type safety has its virtues, but I would argue that Java's type safety is good enough; I would take it along with its dynamic capabilities over SML for many projects -- it's just a tradeoff that buys you a lot of other stuff. There's a price you pay for dynamic runtime capabilities and for deep cross-language interop (e.g. a Clojure list is a Java List), and that's a price worth paying in many cases, considering that it's not high at all.

> This is what happens when engineers design programming languages.

I guess engineers prioritize things based on what they're actually worth to developers (or, at least, according to how much they believe they're worth to their developers) as opposed to using an arbitrary metric for some intrinsic quality, whose relationship with actual benefit isn't at all clear. Java sacrifices a tiny bit of type safety for interop and various dynamic capabilities. I wasn't aware that there's been some major discovery showing that this is the wrong tradeoff to make.


> Oh, you mean something like this?

Kind of, but, with wildcards, you still don't get the ability to unpack the existential for arbitrary further use. You can only unpack the existential within a single method, which is unnecessarily restrictive. If you unpack the same existential package twice (say, because you need to do it in two different methods, neither of which calls the other), then, as far as the type checker cares, that produces two different abstract types. Programming with abstract types in Java is a pain, which is why (understandably) Java programmers don't do it.
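A sketch of that restriction (the `Box` interface here is made up for illustration): a single generic method captures the wildcard once and can round-trip the hidden type, while two separate uses of the same `Box<?>` capture it as two unrelated fresh types and won't typecheck together:

```java
interface Box<T> {
    T get();
    void put(T t);
}

public class CaptureDemo {
    // One unpacking: within this method, T consistently names the hidden type.
    static <T> void roundTrip(Box<T> b) {
        b.put(b.get());
    }

    static void use(Box<?> b) {
        roundTrip(b);       // compiles: a single capture of the wildcard
        // b.put(b.get());  // rejected: get() and put() capture '?' separately
    }

    public static void main(String[] args) {
        Box<String> b = new Box<String>() {
            private String v = "hi";
            public String get() { return v; }
            public void put(String t) { v = t; }
        };
        use(b);
        System.out.println(b.get());   // still "hi" after the round trip
    }
}
```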

> There are no runtime type errors at all in TCL, either.

Yeah, but you don't get basic things such as integers, which is even worse.

> There's a price you pay for dynamic runtime capabilities and for deep cross-language interop (e.g. a Clojure list is a Java List)

Anyhow, the only thing I wouldn't want to do in Standard ML is manipulate arrays, mainly because runtime bounds-checking is unnecessary when you get your array algorithms right, but Standard ML requires it anyway. Interoperability between two memory-safe languages is completely useless to me.

> and that's a price worth paying in many cases, considering that it's not high at all.

Any price is high when you get nothing in exchange for it.


> Programming with abstract types in Java is a pain, which is why (understandably) Java programmers don't do it.

OK, but I never said Java is ML. The example I provided is not uncommon.

> Interoperability between two memory-safe languages is completely useless to me.

Fair enough, but it's clearly not useless to many people.

> Any price is high when you get nothing in exchange for it.

I think that polyglotism and dynamic capabilities are a good deal, and I think that many others find it at least as useful as complete type safety, but, of course, that depends on your needs and personal preferences.


>> Please do tell how you would make two or more instances of an abstract type in Java, in such a way that...

>By having two implementations of a common interface, like `ArrayList` and `LinkedList` both implementing `List`.

This is not, I believe, what he is talking about. Here is a silly example in SML; it has 3 things which all share a type t: a list of t pairs, a vector of t's, and a function from a pair of t's to t.

    functor Foo (type t
                 val pairs : (t * t) list
                 val f : t * t -> t) =
    struct
        val things : t vector = Vector.fromList (List.map f pairs)
    end

One thing to note is that t is never exported/returned, and thus the resulting structure exports t vector but not t.

We could export t in a few different ways:

    type t = t    (* export it; let its binding be known *)
    type t        (* it's a type, but what it is bound to is not known *)


With the raging debate going on in the Rust community about its module system, it is clear they are still far from having a well-understood and usable module system for the wider Rust community.


The meaning of “module” is pretty concrete and specific. A module is:

(0) A unit of encapsulation: Modules can export abstract types whose representation is hidden from the rest of the program.

(1) A unit of type-checking: Modules can be type-checked without knowing anything other than the interface of their dependencies.

(2) A unit of translation: Modules can be translated to target machine code separately from each other. (That being said, whole-program compilation is fine if you want it. The semantics of the module system shouldn't make it mandatory, though.)

Most programming languages don't have module systems at all. And most programming languages that do have module systems, have module systems that suck. Including Rust.


>I have a hard time being too impressed because more modern languages are doing this out of the gates and with far fewer hacks and more out-of-the-box tooling

I'm curious, is there another language that allows you to streamline its runtime by choosing only what modules your app needs?


This was already done in Lisp and Smalltalk to trim down images for production delivery.

Other than that, the majority of compiled languages do remove unused library code when static linking.


>I'd really love to see languages like Ruby tackle this next. It's sorely needed.

Could you expand on that? Why Ruby?


not OP, but ruby's current "modules" are just namespaces.

What jigsaw and rust call modules are isolated components that you can plug and connect in various configurations, and that declare public interfaces both as dependencies and as exported symbols. Consider a module configuration like

  module java.sql {
    requires public java.logging;
    requires public java.xml;
    exports java.sql;
    exports javax.sql;
    exports javax.transaction.xa;
  }
So they are actually slightly closer to rubygems + bundler, but ruby is always "promiscuous", i.e. code from one library can trivially mess with code from other gems, while some isolation would be nice.

Also, by default ruby does not namespace things by package/module/gem so it's easy to have collisions, while in a more robust system this would/should be under the control of the calling code.


I think it's going to take me a while to learn about the module system in Java 9, I'm probably going to bank on watching other projects do it first to see if any best practises emerge as developers get used to the new way of working.

One thing I will say though is that Java 9 has brought a definite improvement to Java applications that run on my Raspberry Pi 3, I think because this got included: http://openjdk.java.net/jeps/297

One application used to take 55s to start up, now it takes around 15s!


> 15s

Still sounds ridiculous. Assuming this is how much time it takes to initialize the VM, I wonder if it would be possible to have a pool of VM processes that are already running and ready to accept application code.


It's a spring boot application so comes with the baggage of initialising Spring + Spring MVC, that took a long time on JRE 8


I don't know if it's still a problem with Spring, but at a previous job about 8 years ago I was able to cut our application's start time from about 80 seconds to ~12 by writing a custom implementation of the logic that Spring used to find bridge methods to determine the proper annotations. We profiled the startup and found that over 80% of the time was being spent in reflection to resolve those bridge methods and that could be done significantly faster by using asm to load the classfile bytes and resolve the bridge method by looking directly at the invokevirtual call. Spring avoided this approach because it could fail in situations where the SecurityManager prevented filesystem access.

I don't know if that's still the case, but it always bugged me that Spring would slow down the >99% case by so much just to accommodate the <1% case.
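For readers who haven't met bridge methods: when an override narrows the return type, javac emits a hidden synthetic forwarder, and resolving those forwarders back to the "real" method is what the reflection (or asm) work described above was doing. A tiny illustration:

```java
import java.lang.reflect.Method;

public class BridgeDemo {
    static class Base {
        Object value() { return "base"; }
    }

    // Covariant return type: javac also generates a synthetic
    // "Object value()" bridge in Derived that forwards to this method.
    static class Derived extends Base {
        @Override
        String value() { return "derived"; }
    }

    public static void main(String[] args) {
        for (Method m : Derived.class.getDeclaredMethods()) {
            System.out.println(m.getReturnType().getSimpleName()
                    + " value() -> bridge=" + m.isBridge());
        }
    }
}
```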


Dave Syer looked at this topic recently[0]. Most of the time spent in loading isn't due to reflection or annotation processing: it's proportional to the number of classes loaded.

That said, Spring is fairly enthusiastic about pulling stuff. As he says:

> Since more beans mean more features, you are paying at startup for actual functionality, so in some ways it should be an acceptable cost. On the other hand, there might be features that you end up never using, or don’t use until later and you would be willing to defer the cost. Spring doesn’t allow you to do that easily. It has some features that might defer the cost of creating beans, but that might not even help if the bean definitions still have to be created and most of the cost is actually to do with loading and parsing classes.

I imagine that they'll keep poking at it, especially since they're working on Spring Cloud Function[1], where cold starts for single invocations are common.

[0] https://github.com/dsyer/spring-boot-startup-bench/tree/mast...

[1] http://cloud.spring.io/spring-cloud-function/


> most of the cost is actually to do with loading and parsing classes

It boggles my mind that loading and parsing even a few thousand classes is something that a human can perceive as "dog slow" when carried out by a quad-core 1.2 GHz CPU.


My understanding is that it's I/O bound, that the JVM more or less fetches each class from the JAR file or disk individually.

I am not sure if that's an implementation decision or whether it's imposed by the JVM specification. At some point I want to tinker with OpenJ9 to find out.


From my own experience, the class scanning for finding all annotations is usually the bottleneck.

I believe that the last version of Spring (5) has a way to do the scanning at compile time now.


> I wonder if it would be possible to have a pool of VM processes that are already running and ready to accept an application code

This was tried with Drip and the earlier Nailgun. The idea doesn't work as well as you'd think for several practical reasons (the JVM is very lazy, so a JVM that does nothing until it has user code to run does just that - nothing).


>Assuming this is how much time it takes to initialize the VM

Why would you assume that?


Bad assumption.


For ARM, always use Oracle VM. It's around 15x faster for my application.


Please read http://openjdk.java.net/jeps/297: "The contribution from Oracle provides full C1 and C2 support for ARM, putting it on par with other architectures."

So from now on the OpenJDK will be as fast as Oracle VM on ARM!


You could probably cut that down even more by "building" your own jvm with a restricted set of modules.


The improvement is good, of course, but isn't the whole thing a bit of a straw man? The previous version taking 55 seconds on a Pi 3 seems utterly unacceptable to me, it's actually a pretty capable processor, and even a 15 second startup is pretty slow. Why put up with that when there are modern options (Go and Rust spring to mind) which would do so much better?


Because not everyone cares about the startup time of an application on a platform that is 10 times slower than an Intel CPU from 2012, when the application runs for more than a minute?

Also, why are you even considering Go as modern? The best compiled alternative to Java is D: it's possible to disable the stop-the-world pause for specific threads by detaching them from the runtime, which makes it useful for realtime workloads while still letting you take advantage of the GC to boost your productivity when determinism is not important.


I don't think that's what 'straw man' means.


It's a website that's for personal use (a dashboard), I don't update it that often.


I think it's very exciting they were able to get the number of discrete modules up to 26. That's extremely impressive to me given the age and complexity of Java! I went to a talk a few years back from one of the contributors to this project (forget the name, sorry!) and, if my memory serves, they had significantly fewer discrete modules at that time -- perhaps 6?

Anyhow, I think this is a great step forward for Java.


Jigsaw was what held up the Java 9 release, due to some members rejecting the proposal. It was changed and revoted upon, but it's hard to follow what was modified to make it acceptable.

Are the design goals and constraints for modules documented anywhere? I'd really like to know how they ended up in their current form.


> Jigsaw was what held up the Java 9 release, due to some members rejecting the proposal.

That was only a few weeks. If you check the official release schedule http://openjdk.java.net/projects/jdk9/ Jigsaw was supposed to be feature complete a year before that vote happened. Jigsaw was simply not close to ready when it was merged to master. And yes, Oracle still claims it hit every milestone on that schedule.

> It was changed and revoted upon, but it's hard to follow what was modified to make it acceptable.

http://openjdk.java.net/projects/jigsaw/spec/minutes/2017-05...

http://openjdk.java.net/projects/jigsaw/spec/minutes/2017-05...

http://openjdk.java.net/projects/jigsaw/spec/minutes/2017-05...

Not much really. Basically Oracle called their bluff. At that point it was really too late, as Jigsaw was already merged to master and more or less touched every part of the JDK.

> Is there anywhere the design goals and constraints for modules are? I'd really like to know how they ended up in their current form.

Not really, it was really just Oracle making things up as they went. For the past ten years Mark Reinhold would give talks at Java conferences about what the module system coming in the next Java version was supposed to do, and every year the content was different. For example, they have no good explanation for why they mashed the module declaration into a .class file; basically everybody except Oracle thinks it's a bad idea, but they just went with it anyway. They claim "reliable configuration" as a goal, but without versioning that's meaningless.


> They claim "reliable configuration" as a goal but without versioning that's meaningless.

Versioning is not currently built into Jigsaw, but Jigsaw provides all the building blocks necessary to support it in third-party tools.

> For the past ten years Mark Reinhold would give talks at Java conferences about what the module system coming in the next Java version was supposed to do, and every year the content was different.

Plus the entire development was done in the open, in an open source project with an active mailing list and frequent prototypes.


> Versioning is not currently built into Jigsaw, but Jigsaw provides all the building blocks necessary to support it in third-party tools.

If I need third-party tools to validate my dependency graph at compile time and at runtime to ensure "reliable configuration" then why do I need Jigsaw at all? That's the current state of affairs with Java already, Jigsaw provides no value.

> Plus the entire development was done in the open, in an open source project with an active mailing list

Ostensibly. There is a difference between having an open mailing list (where people could give feedback and get a superficial thank you before getting ignored) and a read-only repository, and actual open development. Quite frankly, I find it telling that in 2017 we have to stress that there was an open mailing list and a read-only repository. The most important thing, the decision process, was completely opaque and all Oracle/Sun internal.

> and frequent prototypes.

Which weren't useful because none of the frameworks, libraries and tools would support it.


> Jigsaw provides no value.

I think you should read more about Jigsaw.

> I find it telling that in 2017 we have to stress that there was an open mailing list and a read-only repository.

We don't need to stress it, just to correct an incorrect report.

> The most important thing, the decision process, was completely opaque and all Oracle/Sun internal.

This is simply not true, as a quick look at the mailing-list archive would show, as would the fact that the project took 9 years.

> Which weren't useful because none of the frameworks, libraries and tools would support it.

They were welcome to support them. I'm not sure what it is that you want, exactly.


Me guessing: I see at least two constraints that don't work well with a text file. The current format allows user-defined annotations, and the compiler finds all the packages of a module and automatically inserts them into the module-info (you only declare the exported packages, not all the packages).
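As a sketch of what that looks like (module and package names here are hypothetical), a declaration that a plain text format would struggle to express:

```java
// module-info.java -- compiled by javac into module-info.class
@Deprecated  // module declarations may carry annotations such as @Deprecated
module com.example.lib {
    // Only exported packages are declared by hand; tooling can record
    // the module's full package list in the compiled module-info.class.
    exports com.example.lib.api;
}
```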


I'm still not sure of the practical benefits. It seems to be purely for IoT and nothing else. Am I correct?

Edit: To add to my question, jar hell was solved by tooling which generates your classpath for you, so it hasn't been an issue for me in years. And they refused to add versioning, so you still need to use those tools. Strong encapsulation just means they disabled reflection access to non-public members, which I consider a regression in functionality. Either way, it's not a particularly useful feature to me; in fact it breaks some of my code. So I'm left with being able to have small JVMs which don't bundle the full standard lib, which seems to be mostly of use for memory-constrained environments like IoT. But I'm actually hopeful there are some bigger practical benefits I'm not thinking of, so I'd love to hear from people who know more about Jigsaw.


Encapsulation is another one.

Before Java 9, the only way to hide private APIs was to have everything in the same JAR.

If a library is split across multiple JARs, then private APIs meant only to be used internally by the library become exposed to everyone, and there will always be someone making use of them, even if they're explicitly marked as internal.

This is nothing new to Java, other languages e.g. Ada, Delphi, .NET and even Go have this kind of visibility concept.
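A minimal sketch of how this looks with Jigsaw (all names made up): only exported packages are accessible from other modules, so internals stay hidden even if the library ships as several JARs:

```java
// module-info.java for a hypothetical library module
module com.example.library {
    exports com.example.library.api;  // the public API, visible to all modules
    // com.example.library.internal is deliberately not exported:
    // even its public classes are inaccessible from outside this module
}
```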


.NET does have the assembly-level attribute "InternalsVisibleTo", which declares another assembly as a friend assembly that can use internal APIs from the first assembly. So if you have e.g. MyLib.Core and two impls on top, say MyLib.Windows and MyLib.OSX (where everyone would use only one of the impl libs but always the core lib, which is why it was split), then you could make internal API in the core lib visible ONLY to the Windows/OSX libs by using InternalsVisibleTo, and users of the library could not see that internal API.
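Jigsaw's rough analogue of InternalsVisibleTo is a qualified export, which exposes a package only to named friend modules (module names here are hypothetical):

```java
// module-info.java for a hypothetical core module
module com.example.core {
    exports com.example.core.api;  // visible to every module
    // internal API visible only to the platform-specific impl modules
    exports com.example.core.internal to
        com.example.windows, com.example.osx;
}
```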


Yep, that is what I meant by mentioning .NET in "This is nothing new to Java, other languages e.g. Ada, Delphi, .NET and even Go have this kind of visibility concept."


Right. Thought you meant "the file/assembly is the module, no way around it" was the concept.


I see. Why would you split a library across many JARs, though? What are the advantages? Is it again IoT, so that if you only use part of a JAR you don't need to have the full thing loaded?


In the organization I work for, we're contemplating a move from Oracle JDK to OpenJDK, when we move from 8 to 9. I'm still nervous about OpenJDK from some historical scary stories of things not working well in OpenJDK or at least different. I've not found much on the subject, or even on OpenJDK as a functioning open source project. (Or is it just a code drop of the Oracle JDK?) If anyone has good insights, or links to some realistic comparisons, I'd be thrilled.


Note: Oracle is my employer, the following statements are my own — (insert Safe Harbor notification)

Good news regarding any trepidation about the differences between the runtimes: recently we announced that there will no longer be a difference between OpenJDK and OracleJDK [1]

To quote: “The Oracle JDK will primarily be for commercial and support customers once OpenJDK binaries are interchangeable with the Oracle JDK (target late 2018)”

[1] https://blogs.oracle.com/java-platform-group/faster-and-easi...


I think this is for people that had issues w/ 'jar hell'. I did not.


The biggest pain for me was that it felt like there was no good way to create a library with some internal structure in the form of packages without exposing parts of your internal implementation as public.

When Jigsaw was delayed I was playing with the idea of creating a project called shitty-jigsaw, which would just merge all of the packages in a module and change the access modifiers to be more restrictive according to some module definition.


Would shading have done the job? Not to say that's a _good way_, but it _is_ a way.


Sure you can use shading to move everything you don't want in your public API into some other package, or you can use something like ProGuard to obfuscate everything but your intentional public API. However those things will still be public.

The only way to not have those things visible for the user of your library is to only use one package in your implementation.


It's really good for encapsulation, and you can do services without dependency injection frameworks.
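Presumably this refers to `uses`/`provides` declarations resolved through `java.util.ServiceLoader`. A hedged, self-contained sketch (the `Greeter` service and all names are made up; with no provider on the path, the loader is simply empty):

```java
// In a consuming module's module-info.java you would declare:
//     uses com.example.Greeter;
// and in a providing module's module-info.java:
//     provides com.example.Greeter with com.example.impl.EnglishGreeter;
import java.util.ServiceLoader;

interface Greeter { void greet(); }

public class GreeterClient {
    public static void main(String[] args) {
        // ServiceLoader performs the lookup; the module system wires
        // providers declared with 'provides' to consumers that 'uses' them.
        // With no provider present, the count is 0.
        long n = ServiceLoader.load(Greeter.class).stream().count();
        System.out.println("providers found: " + n);
    }
}
```

No container, no classpath scanning: the provider set is declared in the module descriptors and resolved by the runtime.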


Could you please elaborate on what kind of design you're referring to here? What about dependency injection do you see changing so vastly?


Does it have any positive impact on startup time for simple programs if you are not loading large chunks of the JVM?


Yes, especially since in the future you can actually compile modules ahead of time as libraries and load them. At the moment this is experimental and probably only works for java.base.


Will this resolve the slow start problem in serverless apps?


It's a step. Still 10x slower start I suspect compared to node, but much better "warm" performance.


Nope, it makes things (a little bit) worse because now you also have to start up a module system.


> start up a module system

i thought jigsaw was about compilation artifacts. are you saying there is a (nontrivial) runtime component? what does it do?


> i thought jigsaw was about compilation artifacts.

Nope, jigsaw doesn't affect compilation artifacts other than an additional module-info.class present in a .jar file.

There is an intentionally completely unspecified, Oracle/OpenJDK-proprietary extension (yes) called jlink that generates different artifacts. The artifacts are unspecified, only work with Oracle/OpenJDK, and the licensing restrictions for redistribution are unclear.

> are you saying there is a (nontrivial) runtime component? what does it do?

1. Module definitions have to be parsed into memory structures to build a runtime model of the module system.

2. The runtime model built in step 1 has to be validated.

3. Every access has to be checked to see whether it is permitted by the module system, based on the information from step 1. If it is not permitted, a decision has to be made about what should be done; this can include figuring out whether it is the first such access, which needs additional memory structures (this is the default). See [1] for more information.

Probably some additional things that I forgot about.

[1] http://mail.openjdk.java.net/pipermail/jigsaw-dev/2017-June/...
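As an aside, the runtime model built in step 1 can be inspected through the `java.lang.ModuleLayer` API (Java 9+); a small sketch:

```java
public class BootLayerDemo {
    public static void main(String[] args) {
        // The boot layer is the runtime model the JVM built at startup
        // from the module declarations it resolved.
        ModuleLayer boot = ModuleLayer.boot();
        System.out.println("resolved modules: " + boot.modules().size());
        // java.base is always part of the boot layer
        System.out.println(boot.findModule("java.base").isPresent()); // true
    }
}
```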


I remember this from... 11 or so years ago, I think. I was making a Java-applet-based game, and the slow startup time and big JVM download were an issue among people making such games (which, by the time I got into them, were dying - although they'd soon get a short-lived shot in the arm thanks to Minecraft, but that was later). There were some posts on the javagaming.org forums from people who were talking with Sun (this was before Oracle - Sun at the time was actually interested in uses of Java in gaming and talked with devs, including smaller/indie developers) about a new improvement for the VM to modularize the packages, so that only the things that were required would be loaded (faster startup) and the default JVM download would be much smaller (sort of comparable to Flash), which was a big issue at the time for those making web-based games (that niche for Java games would soon die with the introduction of Flash Player 9 and ActionScript 3, although it took a while for people to actually upgrade).

I remember it was released at some point, but in the end it didn't amount to much, since most of the JVM's actual implementation relied on everything else and the "optional" bits were only a small part, so it hardly helped with applets at all. And by that time Java applet gaming was dead (it never reached Flash gaming's heights, but for a few years in the early/mid 2000s it was still something you could make a living off - if you didn't expect much).

I never heard much about it after that, and I personally moved away from Java as Flash became dominant on the web game scene, and soon after with Oracle being a dick to everyone. The only reason I used Java was NetBeans' GUI editor for my tools, but with Lazarus [0] becoming very stable by that time and NetBeans looking and behaving weirdly on my then brand-new iMac, I abandoned Java for good (the only active Java project I still have is a tilemap editor [1], but only if you stretch "active" to "hack on it a little every few years").

I also never liked Java much after Java 1.4 - it moved further away from its "simple language" roots, and I liked how everything used to be solved in the library instead of by adding extra stuff to the language. Well, that, and things like generics felt very "bolted on", while annotations like `@Override` felt like unnecessary verbosity (and were ugly syntax-wise).

My favorite Java was probably around 1.1 or so (I have a book on that too); sometimes I think I should make a new language and VM modeled after it, but I lose interest quickly - I think I have a bunch of parsers for Java-1.1-like languages lying around in my backups, from every time I get that urge :-P.

[0] http://www.lazarus-ide.org/ [1] http://runtimeterror.com/rep/mapas



Thanks for the good memories. I remember those days of Java 1.4 startup times and the first talks of modularization coming out. It was so exciting! Then 1.5 came and startup speeds improved a lot. I recently gave Java another try with a Kotlin project and am fairly impressed. I haven't worked with Java in years; I've mostly been in Python recently.



