Compiling LLVM for what purpose? Hacking on it? Producing an optimized build? Installing a build on a user's machine? You should be more specific.
The default clean build works just fine. On my machine I can clone the Git repo, run `mkdir build && cd build && cmake ../llvm && make -j64`, and end up with working binaries after 5 minutes.
The build takes 37 GB by default here, and the binaries under bin/ are immediately usable. If you want a smaller build with shared libraries, just add a flag. Want to throw in sub-projects such as LLD or Clang? Just another flag. It's all very well documented.
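As a sketch, a trimmed configuration along those lines might look like this (the flag names are standard LLVM CMake options; the checkout layout assumes the llvm-project monorepo):

```shell
# From a fresh llvm-project checkout, configure a smaller build:
mkdir build && cd build
cmake ../llvm \
  -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS="clang;lld" \
  -DLLVM_TARGETS_TO_BUILD="X86" \
  -DLLVM_LINK_LLVM_DYLIB=ON
ninja
```

`LLVM_ENABLE_PROJECTS` pulls in the sub-projects you need, `LLVM_TARGETS_TO_BUILD` skips backends you won't use, and `LLVM_LINK_LLVM_DYLIB` produces one shared libLLVM instead of dozens of static archives, which shrinks both disk usage and link times.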
You missed the part where you upgrade your RAM to at least 128 GB, up the swap to 512 GB, and use gold as your linker, because the OOM killer will still rear its head for ld even under those extreme measures.
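If link-time memory is the bottleneck, a common workaround is to cap concurrent link jobs and switch linkers at configure time (both are real LLVM CMake options; the link-job cap requires the Ninja generator):

```shell
cmake ../llvm \
  -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_USE_LINKER=lld \
  -DLLVM_PARALLEL_LINK_JOBS=2
```

With `LLVM_PARALLEL_LINK_JOBS=2`, compiles still fan out across all cores but at most two link steps run at once, which bounds peak RAM far below what `-j64` would otherwise hit during the final links.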
I don't understand why anyone would use ld or gold in 2022. Every distro has shipped lld for years, and it links in seconds. ld is the default to avoid breaking compatibility, not because it's good; it makes about as much sense to use as ed(1). For reference, on an 8C/16T CPU from 2016 an initial LLVM build takes ~10-12 minutes here; not instant, but not the end of the world either.
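For anyone who wants to try lld on an ordinary project before committing to it, the compiler driver can be pointed at it directly (the `-fuse-ld` flag is supported by clang and by modern gcc; hello.c is a placeholder):

```shell
# Link with lld instead of the system default ld:
clang -fuse-ld=lld -O2 -o hello hello.c
```

Because the switch is a single driver flag, it is easy to flip per-project in CI and fall back to ld for the rare build that breaks.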
lld is not 100% compatible with ld; switching to it usually works, but occasionally breaks the build of some software. That's especially true for complex cases involving linker scripts and unusual binary layouts with various sections. (Those cases also tend to be hard to debug.)
I for one maintain the official toolchains for QNX. I've been trying to port LLVM, and it's not possible to bootstrap LLVM using LLVM because it hasn't been ported yet.
I don't understand; LLVM would be on the host machine under some Linux or Windows on x86 where the development happens, not on the QNX hardware anyway, no? Unless for some reason one would want to port Mesa to have LLVMPIPE or something like that, but I have a hard time imagining the use case.
My experience with QNX is with it being used for automotive stuff, so everything was cross-compiled from normal Linux systems. The little bit of work I did with it really did not make me think it was a usable workstation OS (although the speed was extremely appreciable).
Sorry, what are you talking about? I have built ponylang and built LLVM with a couple of patches from source on my 2018 MacBook Air. Just don't pass -j64. The first build will take a couple of hours, but that is par for the course for a project of LLVM's size and complexity.
64 cores, and it takes 5 minutes? That’s… not awesome. Most people do not have 64 (presumably high performance) cores lying around for the purpose of building dependencies.
Is there a service that rides on top of AWS that offers distcc (or whatever equivalent), or a pure remote compile with source upload and nothing else? I'd rather pay them than deal with AWS.
I personally work on something written in C and we use bog-standard gcc. But when you have 10+ variations to build, tests to run, etc., the parallelism you can get from a bunch of cores makes an enormous difference.
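As a sketch of the distcc fan-out the earlier comment asks about (the CC-wrapper invocation is distcc's documented usage; the host names are made up):

```shell
# Spread compile jobs across a small farm of build hosts:
export DISTCC_HOSTS="localhost buildbox1 buildbox2"
make -j24 CC="distcc gcc"
```

The `-j` value can exceed the local core count since most compile jobs run remotely; linking and configure steps still happen on the local machine.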
> 10 minutes of a c6i.32xlarge costs less than a dollar at on-demand rates in the us-east-2 AWS region.
That assumes you have an AWS account. There are legitimate reasons to not have an AWS account; for instance, you might not want to risk accidentally racking up huge bills, or you might not have access to a payment method AWS accepts, or you might want to avoid using services under USA jurisdiction (due to GDPR concerns or similar).
Build time. If you only need one or two components (clang, clang-tools-extra) and trim down the number of target architectures, it's not so bad. But if you want most of the components and architectures enabled, and tested with a Release-with-asserts build...
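A Release-with-asserts configuration like the one described is usually spelled like this (flag names from LLVM's CMake options; `check-all` is the standard test target):

```shell
cmake ../llvm \
  -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_ASSERTIONS=ON \
  -DLLVM_ENABLE_PROJECTS="clang;clang-tools-extra"
ninja check-all
```

`check-all` builds everything the tests depend on and then runs the full regression suite, which is where most of the wall-clock time goes once many components and targets are enabled.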
The build parallelism scales pretty well though so if you want some white noise in the background, by all means. It's kind of nice for keeping your feet warm sometimes.
Building things on Windows is by itself a nightmare in most cases. I use WSL (1, not 2, though either should work) and it builds just fine. I recommend going that route if you want to do Linux-y development but still keep Windows.
This is all very exciting. I'm guessing we'll see some pretty major performance improvements for Rust code in addition to all the newly supported architectures.
I don't see why the GCC codegen would end up with major performance improvements - my understanding is that GCC and LLVM are basically on par with performance, with some workloads being a tiny bit better in one or the other. Is that wrong?
GCC still comes out ahead overall, but the results are chaotic from workload to workload.
LLVM seems to have better defaults for branch-y compiler code, but GCC has a lot of tricks up its sleeve that LLVM hasn't got.
FWIW, though, I think GCC is really playing with fire when it comes to its future: the mailing-list approach to development deters new contributors, the testing infrastructure is quite arcane, etc.
GCC should be an aspirational tool that people want to contribute to, but aside from a few core contributors it feels like it's basically on life support in some regards. And I say that as someone who actively tries to use it.
Its core competencies are pretty good, but its legacy (mostly Stallman's, to be blunt) has made it very brittle at its extremities, e.g. LTO explodes on me way too often.
> FWIW, though, I think GCC is really playing with fire when it comes to its future: the mailing-list approach to development deters new contributors, the testing infrastructure is quite arcane, etc.
And LLVM is better? At the start of last year I found and fixed a small bug in LLVM[1]. I submitted the fix via their Phabricator instance and it was approved by project members after a few days. As per the contributor guide, I then asked for a project member to commit my patch to the LLVM repository. This is because in Phabricator there does not appear to be an automatic way to commit approved changes. A project member has to manually take the patch and apply it.
A few weeks ago I tried to see what happened to my patch and found that it had never been applied. So basically my work and that of the reviewers was simply lost because of their infrastructure.
Up until now, the Rust compiler has been finely tuned for LLVM. I would expect the GCC backend to perform worse initially, then improve over time depending on how much use it gets.
Besides the GCC work, I'm actually also hoping for the rustc_codegen_cranelift work to land one day!