
Compiling llvm is a nightmare



Compiling LLVM for what purpose? Hacking on it? Producing an optimized build? Installing a build on a user's machine? You should be more specific.

The default clean build works just fine. On my machine I can clone the Git repo, do `mkdir build && cd build && cmake ../llvm && make -j64`, and end up with working binaries after 5 minutes.

The build takes 37 GB by default here, and the binaries under bin/ are immediately usable. If you want a smaller build with shared libraries, just add a parameter. Want to throw in sub-projects such as LLD or Clang? Just another parameter. It's all very well-documented.
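For example, roughly (a sketch based on the documented LLVM CMake options; the exact flags depend on what you want):

    # shared-library build with Clang and LLD enabled as sub-projects
    mkdir build && cd build
    cmake ../llvm \
        -DCMAKE_BUILD_TYPE=Release \
        -DBUILD_SHARED_LIBS=ON \
        -DLLVM_ENABLE_PROJECTS="clang;lld"
    make -j$(nproc)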


> `make -j64` and end up with working binaries after 5 minutes.

I think the problematic part for most people lies in the first two words in that quoted sentence.


`mkdir build && cd build && cmake ../llvm -GNinja && ninja`

better?


You missed the part where you upgrade your RAM to at least 128 GB, up the swap to 512 GB, and use gold as your linker, because the OOM killer will still raise its head for ld even under those conditions.


I don't understand why anyone would use ld or gold in 2022. Every distro has shipped lld for years and it links in seconds. ld stays the default only to avoid breaking compatibility, not because it's good; it makes about as much sense to reach for as ed(1). For reference, on an 8C/16T CPU from 2016 an initial llvm build takes ~10-12 minutes here... not instant, but not the end of the world either.
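Trying it is a one-flag change in most setups (a sketch; `-fuse-ld=lld` is a standard Clang driver flag, and `LLVM_USE_LINKER` is a documented LLVM CMake option; the file names are placeholders):

    # link an ordinary project with lld instead of the default ld
    clang -fuse-ld=lld -o myprog main.o util.o

    # or, when building LLVM itself, ask CMake to use lld for the link step
    cmake ../llvm -GNinja -DLLVM_USE_LINKER=lld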


lld is not 100% compatible with ld; switching to it usually works, but occasionally breaks the build of some software. That's especially true for complex cases involving linker scripts and unusual binary layouts with various sections. (Those cases also tend to be hard to debug.)


But it's really well tested for building llvm and clang, so that suits the use case this thread is focused on.


When LLVM doesn't support your platform :(

Granted, that means no Rust support today so probably not relevant to most people reading this thread, but yeah I'm still salty :|


... which platform is that?


I for one maintain the official toolchains for QNX. I've been trying to port LLVM, and it's not possible to bootstrap LLVM using LLVM because it hasn't been ported yet.


> it's not possible to bootstrap LLVM using LLVM

I don't understand; LLVM would run on the host machine, under some Linux or Windows on x86 where the development happens, not on the QNX hardware anyway, no? Unless for some reason one would want to port Mesa to have LLVMPIPE or something like that, but I have a hard time imagining the use case.


What QNX hardware are you thinking of? The self-hosted x86_64 PC being used to develop on?


My experience with QNX is with it being used for automotive stuff, so everything was cross-compiled from normal Linux systems. The little bit of work I did with it really did not make me think it was a usable workstation OS (although the speed was much appreciated).


lld is good, but you answered why in your own comment: it's not always compatible, and so when you need ld, you need ld.


Sorry, what are you talking about? I have built ponylang, and built llvm with a couple of patches from source, on my 2018 MacBook Air. Just don't pass -j64. The first build will take a couple of hours, but that is par for the course for a project of llvm's size and complexity.

The subsequent builds take a couple of minutes.


I compile llvm on a machine with 4 GB of RAM.
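For anyone wondering how that fits, a sketch of the knobs that usually matter on low-RAM machines (all documented LLVM CMake options; the exact values that fit in 4 GB will vary):

    # Release build (debug info is what blows up link-time memory),
    # a single target, and one link job at a time
    cmake ../llvm -GNinja \
        -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_TARGETS_TO_BUILD=X86 \
        -DLLVM_PARALLEL_LINK_JOBS=1
    ninja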


Most people don’t have 64 cores available to them to compile LLVM in 5 minutes. Switching to ninja doesn’t really solve that problem.


64 cores, and it takes 5 minutes? That’s… not awesome. Most people do not have 64 (presumably high performance) cores lying around for the purpose of building dependencies.


10 minutes of a c6i.32xlarge costs less than a dollar at on-demand rates in the us-east-2 AWS region. That'll get you 128 vCPUs and 256 GB of memory.

I think people put up with a lot of stuff that there's no reason to any more with on-demand computing.
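Something like this, as a rough sketch (the AMI and instance IDs are placeholders; spot pricing would be cheaper still):

    # launch a big builder, run the build on it, then throw it away
    aws ec2 run-instances --region us-east-2 \
        --instance-type c6i.32xlarge \
        --image-id ami-xxxxxxxx --count 1
    # ...rsync the source up, build, copy the artifacts back...
    aws ec2 terminate-instances --region us-east-2 --instance-ids i-xxxxxxxx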


Q: "Does my data fit in RAM?" https://news.ycombinator.com/item?id=22309883

A: https://yourdatafitsinram.net

Is there a one-liner to build llvm e.g. with k8s and containers, or just a `git push` to a PR? #GitOps

And another to publish a locally-signed tagged build?

`conda install rust` does install `cargo`.

`conda install -c conda-forge -y mamba; mamba install -y rust libllvm14 lit llvm llvm-tools llvmdev`

conda-forge/llvmdev-feedstock; ninja, cmake, python >= 3 https://github.com/conda-forge/llvmdev-feedstock/blob/main/r...

.scripts/run_docker_build.sh is called by build-locally.py: https://github.com/conda-forge/llvmdev-feedstock/blob/main/....


Is there a service that rides on top of AWS that offers distcc (or an equivalent), or just pure compilation with source upload and nothing else? I'd rather pay them than deal with AWS.


I believe the fact that you propose renting cloud resources just for compilation shows that something ain't right.

I'm able to compile the C# compiler way faster and with fewer resources.


I personally work on something written in C and we use bog standard gcc. But when you have 10+ variations to build, tests to run, etc, the parallelism you can get from a bunch of cores makes an enormous difference.


> 10 minutes of a c6i.32xlarge costs less than a dollar at on-demand rates in the us-east-2 AWS region.

That assumes you have an AWS account. There are legitimate reasons to not have an AWS account; for instance, you might not want to risk accidentally racking up huge bills, or you might not have access to a payment method AWS accepts, or you might want to avoid using services under USA jurisdiction (due to GDPR concerns or similar).


AWS was just an easily accessible example. Sub in OVH if you prefer, I'm sure they have similar options for similar prices.


You can use AWS and be GDPR compliant. What are you talking about?


Pretty good April 1 post!


In terms of satisfying the dependencies, or the build time, or something else?


Build time. If you only need one or two components (clang, clang-tools-extra) and trim down the number of target architectures, it's not so bad. But if you want most of the components and architectures enabled, and tested with a Release-with-asserts build...

The build parallelism scales pretty well though so if you want some white noise in the background, by all means. It's kind of nice for keeping your feet warm sometimes.
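Concretely, the trimmed configuration I mean looks something like this (documented CMake options; a sketch, not a recipe):

    # only the components and targets you need, Release with assertions
    cmake ../llvm -GNinja \
        -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_ENABLE_ASSERTIONS=ON \
        -DLLVM_ENABLE_PROJECTS="clang;clang-tools-extra" \
        -DLLVM_TARGETS_TO_BUILD="X86;AArch64"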


Dependencies are few and largely optional, but the build time is tedious.


It's two commands for me. What issues do you face?


Started on Windows, installed a shitton of stuff.

Then after 30 minutes at 99% RAM usage it BSoD'd.

Then I created a Linux VM but couldn't figure out the cross-platform build or something like that.

Then I created a Windows VM with 8 GB of RAM and it worked fine.


Building things on Windows itself is a nightmare in most cases. I use WSL (1, not 2, though either should work) and it builds just fine. I recommend going that route if you want to do Linux-y development but still keep Windows.


But did you use those binaries on Windows? Because that's what I want to achieve.



