I always take this as a reminder that the quest for performance is rarely over. LLD was specifically designed to be fast, as was Gold, which came before it. But Mold blew them both away.
This is one of my favourite article series, and it was full of eye-openers for me. I also think no other resource, on the internet or elsewhere, has all this information in one place. I really wish Ian had made a book out of it.
This is much closer to the metal of a modern ELF implementation. Linkers and Loaders is excellent for background and covers a wider spectrum, but it was already something like a decade old when this series was written.
I've printed all 20 chapters to PDF and now have my own book of it. My only wish is that Ian had made a single-page view with all the chapters available...
I can see why linkers were created, especially in a time of constrained memory. But given an abundance of memory in modern systems, are linkers even necessary anymore?
As a matter of fact, aren't these shared libraries a supply chain attack vector (e.g., the xz attack that was thwarted earlier this year)?
I know it's fashionable to use flatpak, Docker, etc. but I'd still rather not have 30 instances of Gtk running for every GUI app I decide to run. Consider that we still run on Raspberry Pi, etc.
> aren’t these shared libraries a supply chain attack vector
Not any more than the apps themselves. If you're downloading a static binary you don't know what's in it. I don't know why anyone trusts half the Docker images that we all download and use. But we do it anyway.
I think what you mean by an "instance of Gtk" is a copy of the Gtk library in memory?
That's not how flatpak works; identical libraries will share the same file on disk and will only be loaded once, just like non-flatpak apps. And because Gtk is usually part of the runtime most apps will use one of a few versions.
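You can actually watch this sharing happen. Here's a minimal Linux-specific sketch (the file name maps_demo.c is made up): every process that uses a given shared library maps the same file from disk, and the kernel backs the read-only pages with the same physical memory rather than copying them per process.

    /* maps_demo.c -- print this process's libc mappings.
     * Run two copies side by side: both show the same library file
     * mapped from disk; the read-only pages are shared, not duplicated.
     */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *maps = fopen("/proc/self/maps", "r");
        if (!maps) { perror("fopen"); return 1; }
        char line[512];
        while (fgets(line, sizeof line, maps))
            if (strstr(line, "libc"))   /* keep only the libc entries */
                fputs(line, stdout);
        fclose(maps);
        return 0;
    }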
Somehow the compiler either needs to have the whole program in one single go (every last source file at the same time, all with exactly the same build options), or there needs to be a way to combine the results of multiple compilation steps.
Even with modern LTO, the compiler doesn't typically see _all_ of the program's files at the source level, just many of them; the C library and the C++ standard library, for instance, are usually built separately.
So as long as various languages don't build the entire program in a single compilation and assembly step, we will need something that combines the results.
That's the linker.
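To make that concrete, here's a minimal sketch of separate compilation (the file names and the square function are made up for illustration): the compiler leaves an unresolved reference in one object file, and the linker patches it with the definition from the other.

    /* util.c -- compiled on its own:  cc -c util.c  produces util.o */
    int square(int x) { return x * x; }

    /* main.c -- compiled on its own:  cc -c main.c  produces main.o.
     * The compiler only records an unresolved reference to `square`;
     * the linker resolves it when combining the objects:
     *   cc main.o util.o -o prog
     */
    extern int square(int x);  /* defined in util.c, fixed up at link time */

    int main(void) { return square(7); }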
Even building everything statically doesn't eliminate the need for the runtime linker, unless one hard-codes the exact address where a program can run.
That runs counter to security measures like ASLR.
> Even building everything statically doesn't eliminate the need for the runtime linker, unless one hard-codes the exact address where a program can run. That runs counter to security measures like ASLR.
You could make the program position independent (use only relative addressing) and do without a linker for that limited use case.
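A minimal sketch of that idea (assumes Linux and a typical cc driver; the file name is made up). This only illustrates the relative-addressing part; a full dynamic PIE still goes through the runtime loader at startup, as the parent comment notes.

    /* pie_demo.c -- build as a position-independent executable, e.g.
     *   cc -fPIE -pie pie_demo.c -o pie_demo
     * Code addresses are formed relative to the instruction pointer,
     * so the binary can load anywhere; under ASLR the printed address
     * changes from run to run.
     */
    #include <stdio.h>

    int main(void) {
        printf("main is loaded at %p\n", (void *)main);
        return 0;
    }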
1. Static linking is still linking; you still need a linker to combine multiple object files into a single executable.
2. The mindset that memory and CPU are in abundance is, IMHO, one of the reasons that user experience is not visibly improving over the years despite orders-of-magnitude faster hardware.
Do you want a build that changes a single line of code to complete in less than half an hour? If yes, then you need something in the vein of a linker to handle less-than-whole-program units of compiled code.
https://www.mediafire.com/folder/b8fdqx7eqcpdl/linker
or
https://0x0.st/Xycy.azw3
https://0x0.st/Xyct.epub
https://0x0.st/Xycv.mobi
https://0x0.st/Xycw.pdf