
I have misgivings about making interop with native code easier.

In the Node, Python, and Ruby ecosystems, native-code dependencies are a horror show of brittle builds. The amount of my life that has been wasted on stupid build issues is significant (damn you, nokogiri).

The JVM has been a relative sea of tranquility. The ecosystem is so large, and JNI so unpleasant, that everything important has been built JVM-native. Builds just work, even when you walk away for two years.

I don't want native code in my projects, and I fear this will encourage it.




Note that FFM requires that you explicitly allow the use of native code with --enable-native-access (and soon JNI will, too: https://openjdk.org/jeps/8307341). As JEP 454 (https://openjdk.org/jeps/454) states:

> To allow code in a module M to use unsafe methods without warnings, specify the --enable-native-access=M option on the java launcher command line. Specify multiple modules with a comma-separated list

The warnings will become errors in a later release.

This means that no library you're using (even some transitive dependency) can call native code without you, the application author, knowing about it. This restriction on native code is part of the work on "integrity by default": https://openjdk.org/jeps/8305968
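A minimal sketch of what that looks like on the launcher command line (the module and class names here are made up):

  # grant native access to two named modules
  java --enable-native-access=com.example.app,org.example.codecs \
       -p libs -m com.example.app/com.example.Main

  # code on the class path lives in the unnamed module, so grant it access with ALL-UNNAMED
  java --enable-native-access=ALL-UNNAMED -cp app.jar com.example.Main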


Oh no, not more command line flags to allow functionality that always worked before! That same decision with modules and reflection in Java 9 is why so much stuff is stuck on Java 8 still.


It has nothing to do with why so much stuff is stuck on Java 8. In fact, one of the goals of those flags is to prevent that from recurring. 99% of the problems migrating from JDK 8 are due to low-level libraries depending on internal implementation details that were never subject to backward compatibility and that changed in JDK 9, which was a huge release (https://openjdk.org/projects/jdk9/). Libraries became intentionally non-portable, and applications didn't know that they were made non-portable by the libraries they were using, often inadvertently (transitive dependencies). Now, whenever a library does something that bypasses the spec, i.e. the portion that's covered by backward compatibility, it cannot do it without the application knowing about it and approving it. An application that sticks to the spec and enjoys Java's compatibility, portability, and safety doesn't need to add any flags.


Developers should understand if a dependency uses C bindings. But why bother end users with a command line flag?

Perhaps there is a manifest entry/flag so the command line flag is not required?


1. Application developers frequently don't know how the libraries they're using work, and often don't even know what libraries they're using because they're transitive dependencies (even a version update of a library can pull in a new transitive dependency). If a library imposes any kind of risk on the application, the application has to acknowledge that it accepts it.

2. The last word is given to the application, and the point is that libraries must not make decisions that have a global impact on behalf of the application. That's why an executable JAR, i.e. an application, can grant such permission in its manifest but library JARs cannot.

See more here: https://openjdk.org/jeps/8305968, https://openjdk.org/jeps/8307341
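For an executable JAR, my understanding of the current JEPs is that the grant goes in the manifest, roughly like this (attribute name worth double-checking against the JDK release you target):

  Main-Class: com.example.Main
  Enable-Native-Access: ALL-UNNAMED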


> an executable JAR, i.e. an application, can grant such permission in its manifest

No, it can't. If it could, the change indeed wouldn't be a problem. Oracle has removed --illegal-access=permit completely, and --add-opens doesn't let you specify wildcards.


The question I answered was about --enable-native-access, not --add-opens. --add-opens indicates an actual problem in the program that has to be fixed, so of course we don't allow wildcards. You can still specify that in the manifest even without wildcards, though.


The point is that they were literally breaking changes for the sake of being breaking changes, rather than for any legitimate need.


That is untrue, which you can tell from the simple fact that lots of stuff broke on JDK 9 even though none of the access control restrictions were turned on yet, while far fewer applications broke in JDK 16 or 17, when we finally turned on the access restrictions by default. The fact that stuff broke because of non-portable libraries shows there was a very legitimate need to stop libraries from making applications non-portable without their consent (and there are other motivations: https://openjdk.org/jeps/8305968).


But how is that connected to the concern expressed?

Problem: If the JVM ecosystem starts to depend more on native code it gets less pleasant to work with.

Answer: There is this command line flag that is used to activate the FFI.

There doesn't seem to be any relation between these two things, unless you're trying to imply that because of the flag people just won't use the FFI at all and will write pure Java instead. Which would be strange, as the justification for investing so much into Panama in the first place is the expectation that it will be used, because native code is getting more important rather than less.

There are two ways to avoid the JVM ecosystem becoming crappy due to native code:

1. Encourage the development of pure Java libraries.

2. Make native code work better.

Panama tries to do (2) but it's got a lot of missing functionality that the scripting lang ecosystems nail. How do you compile a Java library that uses native code? Python/Node/Ruby worlds know how, but OpenJDK ignores the question. How do you ship a Java library that uses native code? Python/Node/Ruby worlds know, but OpenJDK ignores the question. These problems have been pointed out before, and the Panama guys just say they're out of scope.

The community hasn't come together to solve these problems either. Maven/Gradle are too disorganized to come up with answers in the absence of any leadership from OpenJDK. Everyone reinvents the wheel when it comes to loading shared libs from JARs. It's a mess and nobody is solving it.
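For what it's worth, the wheel everyone reinvents is some variation of the extract-then-load hack below. This is a hand-rolled sketch, not any particular library's code; the resource layout and class name are made up:

  import java.io.IOException;
  import java.io.InputStream;
  import java.nio.file.*;

  final class NativeLoader {
      // Extract a bundled native library from the JAR into a temp file and load it.
      static void loadBundled(String name) throws IOException {
          // e.g. /native/linux-amd64/libfoo.so inside the JAR (layout invented here;
          // every library picks its own convention, which is part of the mess)
          String resource = "/native/"
                  + System.getProperty("os.name").toLowerCase().replace(' ', '-')
                  + "-" + System.getProperty("os.arch")
                  + "/" + System.mapLibraryName(name);
          try (InputStream in = NativeLoader.class.getResourceAsStream(resource)) {
              if (in == null)
                  throw new UnsatisfiedLinkError("no bundled library at " + resource);
              Path tmp = Files.createTempFile("native-", "-" + System.mapLibraryName(name));
              Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
              tmp.toFile().deleteOnExit();
              System.load(tmp.toAbsolutePath().toString());
          }
      }
  }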

So that leaves (1), encourage the development of pure Java libs. This also isn't working. People write in C/C++/Rust because that way everyone can access the functionality regardless of what ecosystem they work in, so such projects get the biggest collection of stakeholders. That's more important than implementation language, so Java is losing badly everywhere. Pure Java libraries are never best in class anymore (they once sometimes were), which is how you end up with situations like this:

https://blogs.oracle.com/developers/post/open-sourcing-jiphe...

> We’ve chosen to base our products on OpenSSL because it is the most open and most widely used cryptographic toolkit on the planet. At Oracle, we make extensive use of the OpenSSL 3.0 FIPS 140 provider to operate in regulated markets. We like the OpenSSL FIPS provider so much we decided to build a Java cryptography toolkit on top of it called Jipher. By converging on a single toolkit (OpenSSL) we reduce our attack surface, simplify security patching, achieve assembly-optimized performance, and help our customers meet regulatory compliance requirements.

It means the first use of Panama in the wild is to replace the built-in, memory-safe Java SSL stack with OpenSSL, a giant pile of C with a long history of memory safety vulns, and it's Oracle itself doing this because OpenSSL is "the most widely used" library and nothing else matters. They even say replacing Java SSL with OpenSSL will reduce the attack surface.

If even the well funded OpenJDK developers working in Java can't outcompete the poorly funded OpenSSL devs working in an unproductive language, then what chance do other Java library devs have? None at all! Native code will always win because it will be faster and have a bigger community, and in the end there's safety in numbers.

This is very sad. It means Java's future is as a collection of thin wrappers around bug-prone C APIs accessed via Panama, which, as you are well aware, pron, will make Loom virtually useless as it cannot work with native code by design.

A long steady decline is the most likely path here, in which Java libs get hollowed out to try and keep up with the functionality and performance of native libs (e.g. http/3), and so Java devs steadily lose the benefits of the JVM. There are things Java could do to avoid this fate but the OpenJDK people are never going to do them.


> How do you compile a Java library that uses native code? Python/Node/Ruby worlds know how, but OpenJDK ignores the question.

Because you compile it like any other Java library.

> How do you ship a Java library that uses native code? Python/Node/Ruby worlds know, but OpenJDK ignores the question.

It doesn't. jmod files and the jmod tool were added in JDK 9 precisely for that (and other things, too).

> If even the well funded OpenJDK developers working in Java can't outcompete the poorly funded OpenSSL devs working in an unproductive language, then what chance do other Java library devs have? None at all!

People aren't really rewriting OpenSSL in Python or JS, either, but high-level languages are many times more popular than low-level ones [1], and you're misreading the normal behaviour of a fragmented market: when it's worth it to compete, and over what. I don't think any language today has a healthier library ecosystem than Java. That's not to say it's as good as I wish it were, but no one seems to do it better.

There is, however, a problem with build tools not supporting newer JDK features (that are coming faster now than before) and we'll need to do something about that.

[1]: https://www.devjobsscanner.com/blog/top-8-most-demanded-prog...


>> How do you ship a Java library that uses native code? Python/Node/Ruby worlds know, but OpenJDK ignores the question.

>It doesn't. jmod files and the jmod tool were added in JDK 9 precisely for that (and other things, too).

I have never seen a lib package native dependencies using jmod. It's not even clear to me how this would be done. Everyone I'm familiar with bundles binaries inside the jar. Can you point to an example? I would love to learn a better way of doing this.


That's what JavaFX does (https://openjfx.io/openjfx-docs/#modular), but the problem is that Maven and Gradle don't support that well, which is why few libraries do that.

The idea is that an application is best deployed with a custom runtime created with jlink (all Java runtimes these days are created by jlink, so every Java program uses jlink whether it knows it or not, but few use it directly because Maven and Gradle don't support it well, and so few applications use jlink well), and jmod files are jlink's input from which it generates the image.

Anyway, build tools aside, a library can be distributed as a jmod file, consuming it with jlink places the native libraries in the right place, and that's about it.
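Roughly, and from memory (so the exact flags are worth checking against the jmod and jlink docs), the intended flow looks like this, with all names as placeholders:

  # package classes plus native libraries into a jmod
  jmod create --class-path libfoo.jar \
              --libs natives/linux-x64 \
              libfoo.jmod

  # consume it with jlink; the native libraries end up inside the generated runtime image
  jlink --module-path mods:$JAVA_HOME/jmods \
        --add-modules com.example.app \
        --output myapp-runtime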


JavaFX doesn't use jmods, not really. Try following the tutorials for getting started and see for yourself: you end up using custom JavaFX build system plugins that download jar versions of the modules, not jmods. Also some alternative JDK vendors pre-ship JavaFX in their JDK spins to avoid people having to use jlink.

I've worked with jlink extensively. The problems are greater than just Maven and Gradle, which at any rate both have quite sophisticated plugins for working with it. There was just a major error at the requirements analysis phase in how this part of Java works. Problems:

1. You can't put jmods on the module path, even though it would be easily implemented. They can only be used as inputs to jlink. But this is incompatible with how the JVM world works (not just Maven and Gradle), as it is expected to be able to download libraries and place them on the class/module path in combination with a generic runtime in order to use them. Changing that would mean every project getting its own sub-JDK created at build time, which would explode build times and disk space usage (jlink is very slow). So nobody does it, and no build system supports it.

2. When native code gets involved they are platform-specific, even though (again) this could have been easily avoided. Even if JVMs supported jmods just like jars, no build system properly supports non-portable jars because the whole point of a jar is that it's portable. It has to be hacked around with custom plugins, which is a very high-effort and non-scalable approach.

3. jlink doesn't understand the classpath at all. It thinks every jar/jmod is a module, but in reality very few libraries are usable on the module path (some look usable until you try it and discover their module metadata is broken). So you can't actually produce a standalone app directory with a bundled JVM using jlink because every real app uses non-modularized JARs. You end up needing custom launchers and scripts that try to work out what modules are needed using jlink.

4. The goal of the module system was to increase reliability, but it actually made it lower because there are some common cases where the module system doesn't detect that some theoretically optional modules are actually needed, even in the case of a fully modularized app. For example apps that try to use jlink directly are prone to experiencing randomly broken HTTP connections and (on desktop) mysterious accessibility failures, caused by critical features being optional and loaded dynamically, so they have to be forced into the image using special jlink flags (see the sketch after this list). This is a manhole-sized trap to fall into and isn't documented anywhere; no warnings are printed. You are just expected to ship broken apps, find out the hard way what went wrong and then fix it by hand.
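The workaround in point 4 looks something like this in practice (module names from memory; verify against your own app with jdeps and testing):

  # jdk.crypto.ec: without it, TLS handshakes that need EC ciphers can fail at runtime
  # jdk.accessibility: without it, assistive-technology support silently breaks on desktop
  jlink --module-path mods:$JAVA_HOME/jmods \
        --add-modules com.example.app,jdk.crypto.ec,jdk.accessibility \
        --output runtime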

At some point the JDK developers need to stop pointing the finger at build tool providers when it comes to Jigsaw adoption. It's not like Maven and Gradle are uniquely deviant. There are other build systems used in the JVM world and not one of them works the way the OpenJDK devs would like. They could contribute patches to change these things upstream, or develop their own build system, but have never done it so we end up with a world where there are ways to distribute libraries that work fine if you pretend people never moved beyond 'make' and checking jars into version control.


> You can't put jmods on the module path, even though it would be easily implemented.

Yes, it could be easily implemented, although things would work nicely even without that. You make it sound as if there would have been proper build tool support if that were the case, but JARs can be easily put on the module path and still build tools don't properly support even that yet.

> When native code gets involved they are platform specific

Well, yeah. Native code is platform specific, and that's one of its main downsides and why most libraries don't (and shouldn't) use it. But when it is used, it's used for its upsides.

> At some point the JDK developers need to stop pointing the finger at build tool providers

All of the problems you mentioned could only be solved by build tools, but we're not pointing fingers in the sense that we blame build tools for things being bad. After all, modules have been very successful at allowing us to do things like virtual threads and FFM and remove things like SecurityManager. Build tools providers can have their own priorities, just as we do. But if you want to enjoy your own modules or 3rd party modules then you'll need good support by build tools.

> They could contribute patches to change these things upstream, or develop their own build system, but have never done it

Yet. There were more urgent things to work on, but maybe not for long.


Gradle will put JARs on the module path if the app itself is a module. If JMODs were an extension to the JAR format rather than a new thing, and if there was no such thing as the module path (modules were determined by the presence of their descriptor), then nothing new would be needed in build systems and everything would just work.


No, working with modules requires the ability to place some JARs on the classpath and some on the module path, arbitrarily. Any kind of automatic decision regarding what should be placed on the classpath and what on the module path misunderstands how modules are supposed to work. The -cp and -p flags don't and aren't supposed to differentiate between different kinds of JAR contents. If they did, you're absolutely right that the different flags wouldn't be needed -- the JDK could have looked inside the JAR and automatically said whether it is supposed to be a module or not; the reason they are needed is because that's not what the flags mean.

A module in Java is really a different mode of resolving classes and applying various encapsulation protections. If x.jar is a JAR file, `-cp x.jar` means "apply certain rules to the resolution and access protection of the classes in the JAR" while `-p x.jar` means "apply different rules for those same classes". In most situations both `-cp x.jar` and `-p x.jar` would work for the same x.jar, applying those different rules, regardless of whether x.jar declares module-info or not. The decision of which rules need to be applied belongs to the application, not the JAR itself; being a module or not is not something intrinsic to the JAR, it's a rule chosen by the application.
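Concretely, both of the following are valid for the very same artifact; what changes is the set of rules applied, not anything in the JAR (x.jar and the names are placeholders):

  java -cp x.jar com.example.Main       # class-path rules: one big unnamed module, everything accessible
  java -p x.jar -m x/com.example.Main   # module-path rules: readability and exports are enforced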

It's a little like choosing whether or not to run a program in a container. You can't look at the executable and say this should run in a container or not. The decision of whether to set up a container is up to the user when configuring their setup. module-info basically means: if the user sets up a container to run me, then these are hints to help configure the dockerfile. In an even stronger sense, a module is similar to a class loader configuration; some classes may work better with some classloader configurations than with others, but ultimately the decision on the setup is not something intrinsic to the classes but to the application that loads them, and the same goes for modules.

So having the build tool or the JDK guess which rules need to apply to which classes makes as much sense as having them guess the proper heap and GC configuration the application needs -- you can have an okayish default, but for serious work the app developer needs to say what they want. The JDK makes it very easy; build tools make it very hard.


> There is, however, a problem with build tools not supporting newer JDK features

This is an important point.

From my experience in enterprise IT, the build pipeline is roughly at JDK-8 level in terms of generating deployable artifacts. I have contorted my maven build config to invoke jlink on builder nodes to make standalone artifacts that won't need a JDK on target machines. But it is not optimal for a few reasons:

1) I have to manually track which JDK modules I need to include in the build.

2) It's hard to get a new JDK installed on the build machines in the first place, let alone the dot upgrades to keep it at the latest bug-fix level. Their reasoning is that the build machine is just there to invoke the maven build; only the target machines need the latest patched JDKs.

3) From JDK-8 they are now jumping directly to docker/kubernetes builds, which are fine but a totally different direction if I am just looking to make a no-fuss standalone Java application to run on a Linux VM.

As one can see, some of these are organizational issues, not necessarily technical ones. But as a single developer with no power to force IT infra teams to support alternate workflows, I am basically stuck with JDK-8 workflows.

What is missing in this setup is a way to tell maven: I want to generate a standalone Java app using jlink for a Linux ver.N target, so gather up all the modules, the java binary, and the other tools required based on the information in this pom file, without relying on the builder's JDK version. This should be able to run on the builder machine so it stays integrated with the current build pipeline.


P.S.

Oracle's Java Platform Group (JPG), the group behind OpenJDK, had nothing to do with Jipher, but the number of people who contributed to OpenSSL in the past year alone is roughly the same number as the people working on Project Loom, and rewriting OpenSSL from scratch would have obviously required more people than those required for OpenSSL maintenance, so it would have been a project roughly the size of Loom. If it were to actually do the job, it would also require a commitment to long-term maintenance (something that's often missing from various rewrite attempts). I don't think it's necessarily the best use of resources, at least by us, and I don't think it's sad at all that we're investing our resources in things with better bang-for-the-buck. I think that projects like Loom and Panama and Leyden have a much better ROI than an OpenSSL rewrite.

Also, most native libraries work perfectly fine on virtual threads and with no degradation in performance. The only thing to watch out for is frequent blocking in native code.


But Java already has a cryptography stack (including TLS) in pure Java shipped as part of the JDK, which has been maintained for years. So manpower isn't the issue; the problem is that Java code can't be easily used by anything else. The awkwardness of JNI cuts in both directions.

It could be solved if there was a way to generate a shared library from Java code with an auto-generated reasonable-ish C API. Then people could take the Java SSL code (+bouncy castle or whatever) and implement the OpenSSL API. Ditto for lots of other libs. But as no such thing exists, everyone gathers around the C impl because that way you get the most stakeholders.


> The awkwardness of JNI cuts in both directions.

On the consumer side FFM solves that; not awkward anymore! On the host side, well, even Go libraries (and I mention Go because I think Go is considered by some as being lower level than Java for reasons passing understanding) aren't easily shareable by other languages.
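To give a sense of how small the consumer side is now, here is roughly what calling a libc function through FFM looks like. This is a sketch from memory of the API finalized in JDK 22, so details may be slightly off:

  import java.lang.foreign.*;
  import java.lang.invoke.MethodHandle;

  // Call C's strlen via FFM; run with --enable-native-access=ALL-UNNAMED
  public class StrlenDemo {
      public static void main(String[] args) throws Throwable {
          Linker linker = Linker.nativeLinker();
          MethodHandle strlen = linker.downcallHandle(
                  linker.defaultLookup().find("strlen").orElseThrow(),
                  FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
          try (Arena arena = Arena.ofConfined()) {
              MemorySegment cstr = arena.allocateFrom("hello");   // NUL-terminated copy in native memory
              System.out.println((long) strlen.invokeExact(cstr)); // prints 5
          }
      }
  }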

> It could be solved if there was a way to generate a shared library from Java code with an auto-generated reasonable-ish C API.

But why is that a problem worth spending our time on considering that C programmers probably won't appreciate hosting a JVM for that, and that Java is more popular than C as it is? No language is trying to control everything these days. The anomalous and short era of single language dominance is over, at least for a while. Java isn't the one dominant language, but neither is any other. Spending effort to go after a relatively small market share seems like a miscalculation. The market of popular open-source C libraries is both very important and minuscule. It would be like saying that iOS should focus its resources on going after the home-brew Raspberry Pi market.

> But as no such thing exists, everyone gathers around the C impl because that way you get the most stakeholders.

Why is that a problem, though? Libraries that are shared across languages and runtimes are very important, but their number is also very small. Big and foundational are two very different things in software, and we're in an era of specialisation, not consolidation. Don't confuse "a problem" with "a problem worth solving".


I think Java is just so insanely big that this Java-pureness will stick, and JNI/Panama will only be used when absolutely necessary. This is a unique trait thanks to the way the language grew.


Bazel streamlines the build process with native code (C++/Rust) in Python, making the experience a breeze. However, the drawback is that you would need to transition to Bazel.


I'm not sure if it's still true, but I believe back at Google the interop with C/C++ was even more integrated - if I'm not mistaken the "java" binary got recompiled from scratch (but thanks to caching, not much of a hit), and your C/C++ code got compiled in with it - so no ".so" loading - you still have a single binary (and I think the .jar also went at the end of the file).

Or maybe that was for python (e.g. compile CPython along with any ffi modules as one) + .zip file with classes.

In any case, even if the above was not 100% true, it's doable with a system like bazel/buck/etc., and still allows for smooth incremental development (say, in the default "fastbuild" .so files may still be used locally, but in "opt" you get the full static link, or choose via a --config)


My main issue with bazel is that adopting it means giving up good editor integrations for anything besides JVM (maybe C/C++ as well, I haven't touched that). And even then, only IntelliJ has good support.

I think a lot of smaller (read: most) companies adopt bazel without realizing this. You will pay dearly in terms of developer experience in exchange for the benefits bazel purports to offer.


There are some workarounds. In my company we have a script that generates IDE files automatically. It uses `bazel query` and `bazel build` to find all external and non-native dependencies, and then generates IDE config and files for rust, java, python, etc.

Positives: your IDE experience becomes native and you don't need to interact with bazel for normal IDE workflows. Negatives: you need to re-run the generation script whenever anything non-native changes. Also need to deal with bazel build files etc. for the git workflow, obviously.

We're a small company, and this method has worked great, and we have a pretty complex build with python accessing rust and java (via jni), and java accessing rust and c (via jni).


Yes, that's true. To this day, using bazel + IntelliJ, I can't jump into some header files from other @repos// (it could be that I'm on Windows, and this doesn't work there). Need to give it a try on Linux/OSX.


Every time you touch anything ML-related you need it. Data scientists not only love their Python but also don't usually use libraries that have JVM bindings. I have not noticed much improvement in this area since 2018 and for some reason djl.ai is not popular in the wild.

Other than that, I can think of only really niche use cases (e.g. calling ffmpeg, query engines using the C++ Arrow implementation, likely something trading-related).


But native code dependencies in those languages usually work similarly to JNI in java, where you write wrapper code in c or c++ to convert from native apis to the interpreter's interface. So interfacing with native code isn't fundamentally easier than for java. Python does have ctypes, but that isn't what libraries like numpy use, probably because of performance.

I think there are a couple of other reasons why Java doesn't have the same problems with brittle builds. One is that for python and ruby and to a lesser extent node, it is more often necessary to use native code to get desirable performance, so there are more cases where native code is used. Another is that in the JVM ecosystem, it is more common for packages to be distributed in binary form rather than as sources, so you don't run into problems compiling dependencies, because you download a package that already has the compiled library in it.


> it is more common for packages to be distributed in binary form rather than as sources, so you don't run into problems compiling dependencies

Not sure what you mean here. Java bytecode is similar to source code, just easier for machines to parse, also easy to decompile. Mainstream Java is an interpreted language (requires JVM), not compiled.

Don't see the difference from packaging, say, python code into a zip archive.


I mean that java libraries that do use native code don't ship packages with C source code that the user is expected to compile locally, as is common in python, ruby, and node. But instead they typically have a jar that includes a pre-compiled binary library, or several to support multiple platforms. So as long as you are using one of the supported platforms, it just works. But if you are on a platform that doesn't have a pre-compiled library, you will probably need to build the package yourself.


It's relatively common in Python for C/C++ code to be shipped pre-compiled as part of binary wheels (for at least the most common architectures). You do still end up falling back to sdists if your Python version or OS don't line up with any of the supplied binary wheels from the author, e.g. take a look at the pre-compiled set for pydantic-core https://pypi.org/project/pydantic-core/#files


Gotcha, yes, in my experience as long as a python library installs as a prebuilt wheel (.whl), then it's ok. If it tries to compile on my machine (setup.py) -- it's never able to.


Overall I agree.. I like not having native code.. I run the JVM so I don't have to worry about platform-specific stuff. But this would be more so an issue in the Java 8 world where your uberjar needs to have lib blobs for every platform it may be run on

In the Java 11+ world you are no longer in the build-once-run-everywhere model and you unfortunately are semi-required to build platform specific bundles. And adding a C++ build step here wouldn't be a disaster - in theory. You could actually do it cleanly/safely with CMake + Hunter + platform-specific toolchain files. Unfortunately the C++ world hasn't converged on CMake/Hunter, so if you're using an uncommon/weird lib then it might take work to make it work.

But someone could then in theory clone your project in 20 years running BSDinux on a RISC-VI chip and it should build and run


> you unfortunately are semi-required to build platform specific bundles

Can you elaborate on this? It seems like a build tool specific issue to me. In Clojure, most projects use Leiningen (a build tool) and uberjars regardless of the target Java version will bundle native libraries for every platform you run on. You can exclude some of the libraries from your uberjar, but it's a manual process. This has an obvious drawback (e.g. depending on the SQLite JDBC driver bloats your JAR with dozens of native libraries), but it's still very much "build once, run everywhere".

The closest I've been to "platform specific bundles" was dealing with a Gradle project where the developer hard-coded a target platform into a property, thus making it impossible to bundle libraries for multiple platforms.


Right, both lein and depstar (or whatever the latest deps.edn one is) make uberjars. And this works great on Linux - which I'm pretty sure is the dev platform for almost all Clojurists. Linux distros basically always come with an OpenJDK JRE/JVM. But installing a JRE is actually discouraged by the Java people. You will notice on java.com you can't actually download a Java 11/17/+ JRE. You can only get a Java 8 JRE!

You're supposed to use `jpackage`. It basically packages up the uberjar with all the necessary pieces of the JRE to run the uberjar (so a pared down JRE) and does platform specific bundling like adding meta data and creating an installer (so your application can have a place to save settings and whatnot between runs, have an icon in the Start menu/desktop and stuff like that)
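For reference, the invocation is something like this (flags from memory, so check `jpackage --help` on your JDK; names are placeholders):

  # app-image: a runnable directory with a launcher, no installer
  jpackage --type app-image --name myapp \
           --input target --main-jar myapp.jar --main-class myapp.core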

You can still do uberjars and get a JRE for Windows/Mac from third parties like Adoptium, but it's very unergonomic and looks kinda sketch (b/c it doesn't look official)

My own anecdotal experience:

I distributed a JavaFX/Java-17 app as an uberjar. My rationale was:

- I didn't need any app persistence between runs, so I just wanted a small double-clickable ".exe/bin" equivalent that people could download and try out (no one wants to install some random app from online just to try it out).

- `jpackage` doesn't allow you to make a simple double-clickable .exe (it can make something confusingly called an app-image, which is not a Linux AppImage - but it's similar!)

- I had no way to do testing on Mac b/c I don't own an Apple machine. So I don't wanna be generating Mac executables I can't even test.. At least with the uberjar I can be semi-confident if it runs on my laptop it'll run on a Mac later (sorta true)

The end result was a disaster.. 90% of user issues were from people who would go to java.com, "install Java" (ending up with a Java 8 JRE!) and then the app would just silently not run. The JRE doesn't produce any friendly error or warning saying anything about versions - it just silently fails. I have in bold on my landing page YOU NEED A JAVA 17+ JRE. No use.. people would still do it wrong


Yeah, java isn't really used for consumer apps and this is one of the reasons. The ones that do evade this problem by bundling their own JVM.


No, consumer java desktop applications were already uncommon when java 9 came out.


More broadly, non-Electron apps have been uncommon for a while. I just don't work in the webspace and I need to write code that's relatively performant b/c it involves some numbercrunching. I understand it's all possible with Electron.. but it's a bit daunting to have a client-server web stack and multiple languages with some stuff in a local backend.. It's all a bit beyond me

JavaFX was a nice solution that's crossplatform and near-native. It has a very nice react-like library in Clojure called `cljfx`

https://github.com/cljfx/cljfx/


You're doing it wrong. Like way wrong.

Use jlink. It creates a double-clickable .exe from your JDK installation, pared down to a JRE for distribution. Works on Linux and Windows.


could you give me a link on how it's done? I looked into this extensively and there was no way to bundle an .exe. It was 2-3 years ago so maybe things have changed


jlink only creates a runtime (the parts of the JDK that are actually used by your app), but there's a relatively new command called jpackage for creating installers: https://docs.oracle.com/en/java/javase/21/jpackage/packaging...

EDIT: I see now that you already know jpackage, but you don't want an installer. In that case you can use launch4j, which just wraps a jar in an exe: https://launch4j.sourceforge.net/


okay, Launch4j is new to me - but this seems to only work for Windows

I find the whole situation a bit silly, bc obviously it's creating an executable under the hood somewhere. The official tools just don't expose it to you.

I think what I could do is to run the installers on all target systems and then pull out the executables manually. I just think that'd be a bit difficult/annoying to make as part of a CI system


I only used Launch4j on Windows, but in the downloads you can find Linux and MacOS versions as well. At the bottom of the webpage it explains that it can be built on even more platforms, if you have MinGW binutils 2.22 (windres and ld only) on those platforms.

If I remember correctly, you can't just pull out the exe created by jpackage, because it doesn't contain the runtime. The installer installs both the runtime and the exe. The exe created by Launch4j also doesn't include the runtime, but Launch4j is better at finding a system-wide runtime, and can direct the user to a specific download site (such as adoptium.net) instead of java.com. If you want to have JUST a single exe, then I think GraalVM's native image is the only option.


Hmmm, yeah I should really just try GraalVM. JavaFX finally supports it. I just remember it was a bit unclear how to hook it up with Clojure/deps.edn but I'm sure the tooling has evolved since I last looked

Some day they'll integrate cosmopolitan builds with Graal native and we'll come full circle to cross-platform executables haha

Edit : https://github.com/oracle/graal/issues/4854


I think in your case it is actually javafx that makes packaging harder than necessary.

Not affiliated, but hydraulic conveyor is quite good at packaging stuff like this to a single exe.


> In the Java 11+ world you are no longer in the build-once-run-everywhere model and you unfortunately are semi-required to build platform specific bundle

That feels like an overstatement to me. You only need to build something platform specific if you want to _ship a JRE_; a normal JAR works as usual if you expect users to have java or are happy with providing a standard JDK. Sure, jpackage seems nice, but all the commercial end-user software I see bundles a JRE only for Windows and hands everyone else a nice fat jar.


Golang followed the same path; no dynamic linking and everything was 're'-created in Go. It's effective.

For FFM won't C access be isolated to the JAR? Maybe it's not normal to get pre-built JAR files?


It's effective... Until you need sqlite, or lmdb, or something like this.



