I suspect it's because it's an unusual case and doesn't get as much attention as the native gcc code. That is compounded by the need to "compute like an ARM" for the constants, which leads to some emulation.
But the worst part is those damned autoconf scripts. They very cleverly probe the attributes of your x86 by compiling and running code during the build process, and then make decisions about how the code should run on your ARM. They are a never-ending sink of human effort. Best to just build on a machine where they will get the right answers without you fiddling with them.
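To make that concrete: if you do tell configure it's a cross build, the invocation looks something like this (the triplets and compiler name are just examples). The catch is that any check which has to actually run a test program then has to fall back to a guess or be answered by hand, which is where the endless fiddling comes in.

    ./configure --build=x86_64-pc-linux-gnu --host=arm-linux-gnueabi \
        CC=arm-linux-gnueabi-gcc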
The way I solved the autoconf problem was to have one ARM machine do the actual building and a cluster of powerful x86 machines running distcc with cross-compilers. That way it builds really fast and the package build system thinks it's a native build, without any problems.
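Roughly, the setup looks like this (host names, subnet, and toolchain paths are made up, and how you get the helpers' "gcc" to resolve to an ARM cross compiler will depend on your distro):

    # On each x86 helper: install an ARM cross toolchain and make the plain
    # "gcc"/"g++" names sent by the ARM client resolve to it for distccd.
    mkdir -p /opt/armshim
    ln -s /usr/bin/arm-linux-gnueabi-gcc /opt/armshim/gcc
    ln -s /usr/bin/arm-linux-gnueabi-g++ /opt/armshim/g++
    PATH=/opt/armshim:$PATH distccd --daemon --allow 192.168.1.0/24

    # On the ARM box: list the helpers and build normally, so configure and
    # the package build system still think it's an ordinary native build.
    export DISTCC_HOSTS="fast1 fast2 fast3 localhost"
    make -j12 CC="distcc gcc" CXX="distcc g++"

Nothing on the ARM side changes except CC/CXX, so autoconf and the package tooling never notice anything unusual.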
Honestly, I'm a little surprised this would be news. Doesn't everyone have an ARM cluster or a distcc-type setup like the above?
Edit: Back when Macs were PPC I did the same trick and had a handful of x86 Linux boxes with Apple's gcc set up for cross-compiling, running with distcc. Made the OS X builds run much faster.
If by "unaffected" you mean "correct", then yes, as long as it is set up correctly for cross-compilation (I mean the compiler and assembler, which are unused on the target box in non-LTO mode).
But with GCC LTO, distcc will only distribute the parsing of the source code, while the optimization and code generation will be done on the target box, so the speedup from distcc will be much smaller (LTO makes the ratio of parallelizable to non-parallelizable work much lower).
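Concretely, with flags along these lines the helpers only hand back intermediate bytecode objects, and the expensive whole-program work lands in the final link on the target box (the job count is just an example):

    make -j12 CC="distcc gcc" CFLAGS="-O2 -flto" LDFLAGS="-O2 -flto"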
GCC LTO partitions the work; if it can interact with distcc, it can distribute optimization too, at the cost of some missed optimizations. I don't know whether that does the right thing in GCC 4.6.0.
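From what I can tell the partitioning is driven by flags like these at link time (I'm not certain how well they play with distcc in 4.6.0):

    # Run up to 8 parallel ltrans (code generation) jobs over the partitions;
    # -flto-partition picks how the program is cut up.
    gcc -O2 -flto=8 -flto-partition=balanced *.o -o prog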
autoconf scripts are generally OK. We cross-compile over a hundred packages for the Fedora Windows cross-compiler project[1], and it's not the autoconf-based projects that cause any problems. It's the people who roll their own half-assed build systems that are the problem.
If things are configured correctly, and you have an oracle to produce the answers that would be generated if you were running locally, then it can work. A lot of the common stuff works pretty well, but a lot of people's code fails miserably too.
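With autoconf, the "oracle" is usually a pre-seeded cache: you supply the answers that a run-on-target test would have produced, e.g. (the cache variable names here are only examples):

    ac_cv_func_mmap_fixed_mapped=yes \
    ac_cv_sizeof_long=4 \
    ./configure --build=x86_64-pc-linux-gnu --host=arm-linux-gnueabi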
The slow part would be fixing the oceans of OSS code that doesn't cleanly configure/compile in a cross-compilation environment, rather than the compilation process itself.
Not sure, but compiling packages can be I/O-bound rather than CPU-bound, and is fairly easy to split up.
Rather than one fast multicore server, this solution gets them a lot of separate systems, each with dedicated disk, memory, etc. Also, rebooting and wiping each time has security benefits.
The alternative, roughly equivalent solution would be a bunch of VMs on one host, which would probably run into memory or I/O bandwidth contention quickly.