Hacker News
Mold – A really fast linker (github.com/rui314)
117 points by netr0ute on Jan 14, 2023 | 58 comments




Thanks! Macroexpanded:

Mold linker: targeting macOS/iOS now requires a commercial license - https://news.ycombinator.com/item?id=34141912 - Dec 2022 (74 comments)

Mold linker may switch to a source-available license - https://news.ycombinator.com/item?id=33584651 - Nov 2022 (206 comments)

Mold linker creator considers changing the license - https://news.ycombinator.com/item?id=33495528 - Nov 2022 (19 comments)

Mold/macOS is 11 times faster than Apple's default linker to link Chrome - https://news.ycombinator.com/item?id=31769699 - June 2022 (116 comments)

Using the mold linker for fun and 3x-8x link time speedups - https://news.ycombinator.com/item?id=31604772 - June 2022 (41 comments)

Using the mold linker for fun and 3x-8x link time speedups - https://news.ycombinator.com/item?id=31592678 - June 2022 (1 comment)

Mold 1.0: the first stable and production-ready release of the high-speed linker - https://news.ycombinator.com/item?id=29568454 - Dec 2021 (65 comments)

Mold: A Modern Linker - https://news.ycombinator.com/item?id=26233244 - Feb 2021 (122 comments)

Mold: A Modern Linker - https://news.ycombinator.com/item?id=25410312 - Dec 2020 (1 comment)


I'd be curious to see how fast mold can link Blender.

Which would also be a good test of how good a "drop-in replacement" mold is, given that the Blender build process isn't trivial.


It seems odd to me that we still require .o files at all.

Given the final program needs to be storable in RAM + Virtual Memory it surprises me that we still need the intermediate step of pushing to the file system only to then immediately reopen and merge those files.

Does someone have more info on this? Is the reason just legacy, or some ideological "single responsibility" thing?


At the highest level of performance needs, if you want to parallelize the build across machines, you're going to need some kind of storage abstraction for the intermediate files, since translation units are the easiest place to split a build.

In some places, where builds are particularly optimized, there are special distributed filesystems just for object files; in that case it's not even necessarily true that the object files are backed by disk.

Being backed by disk locally mainly helps with incremental builds: you can change one file and recompile only the translation units that depend on it. Disk/FS caching presumably absorbs most of the redundant I/O, and I think most of the benefit of building on tmpfs winds up being having the source itself in RAM.

Edit: also it's worth noting that many compilers can output object files which can be linked by other linkers, allowing you to mix output from different compilers in some circumstances.


Thank you, the distributed-compile use case is a fun one to think about. I can see why it would require some intermediate set of bytes to shuffle across the network.

> Being backed by disk locally mainly helps for incrementally building so that you can change one file and only recompile intermediate files for translation units that depend on this file.

For the local incremental use case I’d love to see a more stateful compiler instead: one that could change the bytes of the binary in place, e.g. by giving all functions some additional “empty padding”. Modifications could then be patched directly into the binary as needed, until some defragmentation process creates the final output binary.


Don't quote me on this, but I do believe MSVC does exactly this by default. The option is /INCREMENTAL.

That said, it still accomplishes this using an on-disk datastore.


There are actually quite a few reasons.

The first thing to point out is that reading a recently-written file isn't all that expensive: it's in the filesystem cache anyway, so the reads bypass the disk.

Moreover, the filesystem is actually a decent database for multiprocess communication. If every translation unit is compiled into an independent file, which is then combined into a single final output, there is no need to build any complex locking mechanisms or the like and you still get to take advantage of the embarrassingly parallel nature of compiling.

Incremental compilation is an incredibly important tool. If you make a small change to one file, it's frequently not necessary to rebuild most of the code. Making the output of individual file compilations work in a way that allows incremental compilation to happen requires basically building .o files--and there's very little savings to be had by not emitting them to disk.
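That per-object incremental story is exactly what make-style rules encode: each `.o` depends only on its own source, so touching one file rebuilds one object plus the final link. A minimal sketch with hypothetical file names:

```make
# Only the .o whose source changed is rebuilt; the rest are reused from disk.
OBJS = greet.o main.o

prog: $(OBJS)
	$(CC) $(OBJS) -o $@

%.o: %.c
	$(CC) -c $< -o $@
```

Without the `.o` files persisted on disk, make would have nothing to compare timestamps against between invocations.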

Finally, I'll note that very frequently, debug builds of large applications cannot fit in RAM. Debugging symbols bloat builds tremendously, especially in intermediate object form (since many symbols end up needing to be duplicated in every single .o file). A debug build of a large application may take up 80GB in disk space, to build a binary that (without debug symbols) would be perhaps 100MB in size.

Keeping everything in RAM just isn't feasible at scale.


> Given the final program needs to be storable in RAM + Virtual Memory it surprises me that we still need the intermediate step of pushing to the file system only to then immediately reopen and merge those files.

You don't. I recommend you learn about unity builds, and how major build systems support toggling them at the project and subproject level.

The main reason most people haven't heard of the concept, and most of those who have don't bother with it, is that a) you have little to nothing to gain from them, b) you throw incremental builds out of the window, and c) you ruin internal linkage and can thus introduce hard-to-track errors.
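A unity build essentially concatenates translation units before compiling, and the internal-linkage hazard in (c) is easy to demonstrate. A sketch with hypothetical file names, assuming a POSIX `cc`:

```shell
# Each file has its own file-local (static) helper with the same name.
cat > a.c <<'EOF'
#include <stdio.h>
static void log_msg(void) { puts("a"); }
void run_a(void) { log_msg(); }
EOF
cat > b.c <<'EOF'
#include <stdio.h>
static void log_msg(void) { puts("b"); }
void run_b(void) { log_msg(); }
EOF

cc -c a.c && cc -c b.c   # separate compilation: fine, each static is private

cat a.c b.c > unity.c    # unity build: one big translation unit
cc -c unity.c 2>/dev/null \
  || echo "unity build fails: log_msg is defined twice in one TU"
```

With separate `.o` files the two `static` helpers never see each other; merged into one translation unit, they collide.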

Also, it makes no sense to argue from the fact that released software must fit in memory to conclusions about how the software should be built. At most you have arguments about code bloat and premature optimization.


Well, you might know that all those source files belong to exactly one executable (or library), but how would the compiler know?


You could combine the build system (e.g. make), the linker, and the compiler into one app that knows everything and does it all in memory. It could be slightly faster but much less flexible.


Isn't that what msbuild does for .NET (Core and 5+, not Framework)? The `obj` directory contains some intermediate stuff, but not object files.


What about the case of different compiler generated object files from different languages / object file formats being combined? (without the need to have access to each unique compiler generating the .o file)


Seems like it’d be useful to have .o files on disk for incremental compilation.


We make use of lots of distributed compiles at work, which .o(bj) files are useful for, but for a local compile it might be useful to keep things in memory - if all of the objects fit. The objects for the code base I work on won’t fit in RAM, so they’ll get paged to disk at some point during linking anyway.


What problem are you trying to solve by keeping .o files in a process’s memory without spilling to disk?


To reduce the latency of compile-link-test


I don’t think writing to and reading from disk is a significant portion of this loop. You’d want to measure the cost and thus maximum benefit. SSDs are fast. There isn’t much upside here.


The problem may be disk speed, but I agree it may not. The problem I see is the dropping of state.

Specifically, to produce a .o file the compiler has already read through and created indexes of module::function/struct names, their layouts, dependencies, etc.

My understanding is that the .o files need to be re-parsed by the linker to recreate these indexes, layouts, and dependencies. Especially with LTO, I'd imagine there would be additional inlining work (the kind of thing the compiler is already good at).

This is all just wasted time, including the IO bottlenecks - even if those are marginal.


Builds can be much faster if you don't need to recompile all the modules into .o files.


Build directory in a tmpfs ram disk works really well.


I always wanted to use mold in the build system I did last year: the metrics are fucking impressive.

There were subtle differences from gold and lld that I didn’t have time to chase down, but it seems like the future.


I can't say I've noticed much of a difference myself, maybe 20% faster at best.


20% sounds about right on the figures I’ve heard, but on a big build that is massive.


For me linking only happens once, while most of the time is actually building stuff.

Also the speedup is only for Debug builds, since Release ones use LTO anyway, and no amount of linker magic makes that fast.


Debug builds are in the iteration critical path. Improving the rate at which you improve your software is, well, you know the equation :)


But were you comparing just the linking step? Or the entire compilation process?


Just the linking step, of course.


1/5 of the way toward theoretical perfection seems pretty good.


If you speed it up by 1/5 ten times, is it 1/5 of the way to perfection each time? If yes I think that's an exaggerated way of measuring, if no then what was special about the particular baseline you picked?

I think you need to use a log scale for this. It's a step or two toward perfection, but perfection is infinite steps away.


Note that there have been some license controversies with mold before, namely that they wanted to make all outputs AGPL, not simply mold itself [0]; it seems they have since walked this policy back, however [1].

> Open-source license: mold stays in AGPL, but _we claim AGPL propagates to the linker's output_. That is, we claim that the output from the linker is a derivative work of the linker. That's a bold claim but not entirely nonsense since the linker copies some code from itself to an output. Therefore, there's room to claim that the linker's output is a derivative work of the linker, and since the linker is AGPL, the license propagates. I don't know if this claim will hold in court, but just buying a mold license would be much easier than using mold in an AGPL-incompatible way and challenging the claim in court.

Regardless of their current stance, this type of policy change on a whim led me to remove mold from any of my systems, since I don't want all of my code in the future to automatically become AGPL, even by accident.

[0] https://bluewhalesystems.blogspot.com/2022/11/mold-linker-ma...

[1] https://github.com/rui314/mold#license


In fairness, this was just a proposal (from someone who clearly has more knowledge of engineering than law). They got feedback that this isn't how the AGPL works, and decided to go with a commercial license for the macOS version instead. Which is annoying for me, as I was hoping to use mold on macOS and the monthly subscription seems a bit steep for a linker, but it seems like a perfectly reasonable license.


> this type of policy changes on a whim

Ehh...

"I want to share another idea in this post to keep it open-source [..] Let me know what you guys think" is not a "policy change on a whim". It's an idea. It was not "walked back" on, because it was ... just an idea.

Your comment is a horrible misrepresentation of what's actually in the post.


So just use mold during development and another linker for release.
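One way to wire that up, as a sketch assuming CMake 3.13+ and a compiler that understands `-fuse-ld=mold` (clang 12+ or gcc 12+; mold also ships a `mold -run <build command>` wrapper for toolchains that don't):

```cmake
# Use mold for everything except Release builds; release binaries
# are linked with the toolchain's default linker.
if(NOT CMAKE_BUILD_TYPE STREQUAL "Release")
  add_link_options(-fuse-ld=mold)
endif()
```

Developers get the fast iteration loop, while shipped binaries never pass through the AGPL-licensed linker.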


Which do you use for testing?


Personally I would use mold for the dev cycle tests and then in CI or prerelease tests use the other linker


wow. that borders on extortion... "you should really buy a license... who knows, otherwise your code might turn into AGPL, good luck with the expensive court fees..."


What the heck? You can get an AGPL linker for free, or pay for a non-AGPL one. This type of thing is standard in OSS. Selling a product isn't extortion.


It is unusual for the products of compilers and linkers to be infected by the original license. See: gcc.


I don't recall seeing a single developer tool where the output was anything other than fully under the copyright of the author of the input files (or fully liberally licensed in the case of additional code objects).


That aspect is unusual, but I don't see how that affects whether or not it would be license extortion.


Selling is not extortion, no. That's not what I claimed. But did you read the quote?

> mold stays in AGPL, but _we claim AGPL propagates to the linker's output_ [...] I don't know if this claim will hold in court, but just buying a sold license would be much easier than using mold in an AGPL-incompatible way and challenging the claim in court.

Sounds like protection money to me


Problem with my open-source startup: https://docs.google.com/document/d/1kiW9qmNlJ9oQZM6r5o4_N54s... Worth a read and I'm sure Rui would appreciate ways to resolve this.


This whole thread is old. He already split the Mac version into a separate commercial product.


If you were a big corporation who doesn't want to buy a license but speed up dev time, couldn't you just let your devs use mold for development to have faster recompilation cycles, but link the official binaries with some other, slower linker?


Why is the AGPL such an issue?


It depends on whether you find it acceptable that the work you created gets assigned the AGPL license. However, this problem was already solved at this point, so it's a non-issue these days.

One practical problem, even for developers who are fine with licensing their code under whatever open source license is easiest, is that not all open source licenses are AGPL-compatible. Take for example the Mozilla Public License, which is inherently incompatible with the AGPL because of the terms it imposes; this means that any project using MPL-licensed libraries could no longer be linked with mold.

License incompatibilities can be a huge pain (see: ZFS + Linux). If you develop software for yourself this isn't a problem, but if you intend to distribute your software this becomes more of an issue.

This is probably also the main reason why normal linkers/compilers don't impose licenses on the produced work.


It's not an issue for linking AGPL code.

The issue was that they wanted to claim that the AGPL was contagious: that by using mold, your outputs would also be required to become AGPL.


Normally the tools people use don’t dictate the license they must use for their code.


The point of copyleft is to dictate the licence you must use, if you wish to (roughly speaking) link with the copyleft-licensed work. There are plenty of libraries that you cannot use if you wish to distribute your program without making its source-code available.

The unusual thing here is that the creators of a linker are apparently trying to have the copyleft licence propagate to code that is input to the linker. Others have pointed out that GCC has exceptions for this kind of thing, despite that it is released under a strong copyleft licence (GPLv3+).


No, the point of Copyleft is for you to not restrict the freedoms you got when you used the software when distributing it to others. You can use Copylefted software in any way to your heart's content in combination with whatever other software you want, you can just not distribute it using a more restrictive licence.


This level of detail is incidental to my point, hence roughly speaking. Even the GNU folks summarise copyleft essentially as I have. [0]

Also, your account of copyleft is still incorrect. It's true of the GPLv2 and GPLv3 licences but not true of all copyleft licences. The AGPLv3 licence, which is the one relevant here, doesn't apply only on distribution.

[0] https://www.gnu.org/licenses/copyleft.en.html

edit I think I was mistaken in putting propagate to code that is input to the linker, though. As lokar's comment points out, it's instead about the output of the linker.


It's not, as long as it's contained to just the linker itself (which it is now). That's not why I stopped using it, though; I did so because it seems they don't understand much about the AGPL and licensing in general, and could change their license terms at any point to say something like "we claim AGPL propagates to the linker's output", which is itself a very legally tenuous claim.


Tenuous or not, I believe GCC explicitly has a licensing exception that states that compiler output is not considered a derivative work of the compiler, and thus need not also be licensed under the GPL. So the GNU/FSF folks at least thought it was a concerning enough legal idea to explicitly account for it.

Not sure we can say that a linker is the same as a compiler in this sense, but if so, maybe it is indeed worrisome.


It does not look like binutils (including ld.gold) has that exception[0], so I don't think the FSF would agree.

[0] https://sourceware.org/git/?p=binutils-gdb.git;a=blob;f=READ...


If I recall correctly, it is the standard libraries (such as libstdc++) that need this exception, not the compiler itself.


That exception exists because compilers have a tendency to leave little bits of themselves in the code they compile. For example, if you're compiling for a target that doesn't have a division instruction, you're going to be using a compiler-provided division routine that gets combined with your source code. And that routine is clearly part of the compiler's source code.

The standard compiler license exception (this applies to LLVM too, e.g.) says that any such code that gets combined with your application code doesn't count. Note that it's still a potential license violation to use that code elsewhere (say, using those routines in another compiler).

This isn't a concern for linkers, because linkers don't really provide any code of their own; everything is provided by the compiler as a compiler or language support library. The largest code a linker might add to your program is probably the PLT stub code, at most a couple of instructions long.


That only claims the output is AGPL, not the code you feed into it.



