Interestingly, they want to do the opposite of what Microsoft did with their C++ compiler and linker. To enable "whole program optimization", Microsoft's compiler writers added intermediate-code analysis, optimization and code generation to the linker, which lets it, for example, inline calls to small functions that aren't declared as inline and that come from a different module, when that is beneficial. So their linker can now do both classical linking (what the Go authors have as their goal now) and the "linker does the code optimization, including cross-module optimization, and the code generation" style.
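To make the kind of optimization I mean concrete, here is a minimal Go sketch (hypothetical package and import path, just an illustration of the idea, not anything either toolchain actually promises):

```go
// file: shapes/shapes.go -- hypothetical helper package
package shapes

// Scale is tiny and carries no "inline" annotation (Go has none anyway).
// A whole-program-optimizing linker could still inline calls to it from
// other modules once it sees the final program.
func Scale(x, factor float64) float64 {
	return x * factor
}
```

```go
// file: main.go -- a different module calling into shapes
package main

import (
	"fmt"

	"example.com/demo/shapes" // hypothetical import path
)

func main() {
	// With link-time (whole-program) optimization this cross-module call
	// could be replaced by the multiplication itself; with classical
	// linking it stays a plain call into the shapes object file.
	fmt.Println(shapes.Scale(3.0, 2.5))
}
```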
Did Go ever do cross-module optimizations in its linker? Do the Go authors really find there's no need for that, now that they want to restrict the new object code format to contain only "generated code"?
Go's gc toolchain (as opposed to gccgo) tends to emphasize compile speed over producing optimal executables, so they may be flat-out deciding to leave some potential whole-program optimizations on the table for link speed--pure speculation, though. (And I don't know re: your question.)
Thanks. In my top post, when I asked about cross-module optimizations, I naturally meant "compiler-produced, pre-linking object" modules, but after the discussion I recognized that I had totally missed that "already compiled objects loaded for execution" (i.e. .so or .dll files) are also an important feature for a language that intends to support seriously big projects.
Are there any plans for making ".so" modules in native Go?
> Are there any plans for making ".so" modules in native Go?
Not from the main team, no. They support compiling `.a` library files that can be statically linked, but the authors of Go generally consider dynamic linking to be harmful.[0]
However, because Android development requires `.so` files, the goandroid project[1] contains patches to make Go support shared libraries.
Compile time is a big deal when you have millions of lines of code that constantly change and very expensive devs that are spending large amounts of time waiting.
The infrastructure Google uses to distribute and track builds across datacentres around the world, just to make the compile times of large applications bearable, isn't exactly trivial.
Optimal executables aren't that important most of the time. Hardware is cheaper than dev time, and the gain from 'optimal' isn't much, given the compile-time trade-off.
The ideal scenario would be a "--full-optimize" flag or something that you use for the production build, while for casual dev you get quick compilation. Probably easier said than done.
>Compile time is a big deal when you have millions of lines of code that constantly change and very expensive devs that are spending large amounts of time waiting.
1) Sure. Most of us don't have millions of lines of code, though. So why should we care, and/or trade other stuff, for compile-time improvements?
2) That's only the case when you don't have a module loading system and have to build everything every time.
The language authors obviously disagree with it being an "idiotic" engineering choice. Your choices of tradeoffs don't match theirs, and engineering is never as simple as "I'm right, you're wrong", as you well know.
Yeah, I was being facetious with my original reply, and frankly I get your point. I was pointing out, however, that you _can_ fork it to make it work the way you'd prefer. If you truly don't want to (or can't) do that, then open a ticket or send something to a mailing list.
If this really matters to you, do something about it that can make an actual difference.
I'm trying to contribute to the Chromium project and believe me, when you're waiting over an hour to compile 1 day's worth of patches, you begin to dream of faster compile speeds :-)
I think having some sort of optional 'really fast compile' vs. 'optimal performance' build is the ideal - fast-cycle development, then on deploy build something that runs fast. I think gc Go vs. gccgo is potentially a model for this :-)
Firefox build times are much faster than that. My MacBook Pro can build Firefox in 12 minutes, but other people can build everything in less than 8 minutes! :)
> I'm trying to contribute to the Chromium project and believe me, when you're waiting over an hour to compile 1 day's worth of patches, you begin to dream of faster compile speeds :-)
But that is mostly a problem because incremental compiling in C++ is difficult for well-known reasons. Incremental compiling is well-supported in many other languages (e.g. Java) and is usually very fast. So, the issue of compilation time is IMO overstated by Go proponents.
Besides that, tools such as JRebel and DCEVM provide true hotswapping. So developing, e.g., a web service in Java generally does not have a visible compile-and-deploy cycle at all.
> I think having some sort of optional 'really fast compile' vs. 'optimal performance' build is the ideal
Why not have really fast compiles and JIT compilation when needed to make it fast (Java, C#, F#), or just JIT compilation (JavaScript)? Of course, the trade-off is that you have to carry around a VM, but the JVMs and JS VMs are ubiquitous.
> But that is mostly a problem because incremental compiling in C++ is difficult for well-known reasons. Incremental compiling is well-supported in many other languages (e.g. Java) and is usually very fast. So, the issue of compilation time is IMO overstated by Go proponents.
Actually, Chromium's ninja [0] build setup [1] is really awesome and does what it can with incremental building, but it's obviously limited in what it can do; it doesn't seem to take very much to trigger a very big rebuild. It's a definite help, though.
(I don't seem to be able to reply to your comment directly)
>> As I've already said to someone else, this is about linking, not compilation. The rest of your comment is irrelevant to this discussion.
> And I am reacting to your grandparent, who was talking about compilation.
I'm fairly certain that the slow compile time is a combination of `compile + link`, and that the linking is probably a big part of the equation as well, just like it is in C and Go.
> But that is mostly a problem because incremental compiling in C++ is difficult for well-known reasons. Incremental compiling is well-supported in many other languages (e.g. Java) and is usually very fast. So, the issue of compilation time is IMO overstated by Go proponents.
As I've already said to someone else, this is about linking, not compilation. The rest of your comment is irrelevant to this discussion.
...or selling a shortcoming as a feature. What if someone picks up pcc[1] and claims it is better than the competition because it makes C compilation lightning-fast?
Of course, Go is a language that is more amenable to quick compilation. But it would be more interesting to see a solution that provides good optimizations and improves programmer productivity. Go doesn't really have an answer to that (except: use gccgo), while the competition does (e.g. JIT compilers).
Been a while since I looked at ELF but it would sure be nice if it used, say, a well-defined ELF subset to make use of the many ELF tools out there already.
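For what it's worth, here's a short sketch of what that buys you: Go's own standard library can already parse ELF, so an ELF-subset object format would be inspectable with a few lines like these (assumes an ELF input file; nothing here is specific to Go's new format):

```go
package main

import (
	"debug/elf"
	"fmt"
	"log"
	"os"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <elf-object-file>", os.Args[0])
	}

	// debug/elf handles relocatable objects, executables and shared libs alike.
	f, err := elf.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Dump section names and sizes, much like `readelf -S` or `objdump -h`.
	for _, s := range f.Sections {
		fmt.Printf("%-24s %8d bytes\n", s.Name, s.Size)
	}

	if syms, err := f.Symbols(); err == nil {
		fmt.Printf("%d symbols\n", len(syms))
	}
}
```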