Cargo-style dependency management for C, C++ and other languages with Meson (nibblestew.blogspot.com)
86 points by pabs3 on Oct 21, 2020 | 164 comments



The best thing about Cargo is not that it's the best build system (feature-wise it's rather basic), but that there's nothing else for Rust.

Projects never have to argue whether Cargo is better than Mergo or Ronan or Rustfiles or their grandad's bash script. All the tooling knows how to build any project. Every editor knows where to look for files. https://docs.rs can generate documentation for 40000 different projects using the same build command.


What's also good is that one can combine cargo with Guix as an OS/multi-language package manager, for example when other tools or C libraries (say, alsa-lib) are needed. Guix currently provides 15,000 packages, including many Rust packages (https://guix.gnu.org/packages/R/page/21/ ), and of course uses cargo to build the Rust packages. The reason why I think this is the right way to do it is that in big, quickly-moving projects, the only comfortable way out of dependency hell is to use a Linux distribution with checked-to-work dependencies, or one is damned to the task of building a distribution oneself, and a checked distribution of packages is precisely what Guix provides.

Because your own, project-specific package recipes, which contain dependency and build instructions, can simply be added to the public package definitions (https://guix.gnu.org/blog/2018/a-packaging-tutorial-for-guix...), this allows you to deterministically and hermetically build multi-language projects on top of a very solid base. And to test against another version of a C dependency (say, alsa-lib 1.1.9), one just needs a branch with another version number in the build recipe.


That sounds fantastic. Is there a non-root way to build, and a way to install at least some of the packages in my $HOME? Because as a developer I need to work with different branches.

edit: How does it handle python wheels and other non-source language-specific packages?


> Is there a non-root way to build and a way to install at least some of the packages in my $HOME

Yes. You can install Guix as a package manager on top of your POSIX OS. Then you have profiles, and you can install packages into your personal profile in your $HOME, or into temporary project-specific environments. You can also define the environment by writing a manifest file which lists the requested packages and their versions, and put that file under version control.

How it is used is documented here:

https://guix.gnu.org/manual/en/guix.html#Package-Management

> How does it handle python wheels and other non-source language-specific packages?

Of course you can install Python packages:

https://guix.gnu.org/packages/P/page/12/

If there are no pre-built packages for your machine architecture (say, you are developing on ARM), they are built and installed on your machine (or on a build server which you defined); otherwise you automatically get a compiled package from the Guix servers. Also, in contrast to pip and the like, any needed C libraries will be installed as well, with the right versions. So it is a bit like Conda, but it will also work if your project happens to mix, say, Python, C, C++ and Rust, and perhaps runs on mips or armhf. (My next side project will be a game with a strategy engine in Rust and a GUI written in Racket, just as an example.)

I have not yet tried it with Python, but I can confirm it works with Common Lisp libraries. Basically, you can get anything which is FLOSS, and if you need proprietary stuff you might be able to get it via a non-free package channel (similar to Debian).


Not being able to deal with binary libraries and multiple languages is a big pain point for me, and no, build.rs is not a solution, rather a workaround.


If it is a common package, you could use Guix to retrieve binary library packages for your architecture. (It will build from source if there is no pre-built package).


Guix is a Linux only solution, and the whole point is not to build anything from 3rd party dependencies.


> Guix is a Linux only solution, and the whole point is not to build anything from 3rd party dependencies.

First, GNU Guix requires a POSIX system, and it will happily run e.g. on top of Debian in a suitable VM. (It will also run as a stand-alone OS which is nice if you want a bootstrapped, totally deterministic system - not good for gaming or casual development yet but there are serious long-term applications for that.)

Also, as said, it will build from source only if there is no pre-built binary package available for your platform; otherwise it will use cached binaries. That's a great fit, for example, if you need to use cargo with a complex multi-architecture set-up. Just don't forget to add plenty of disk space - here is a good use for terabyte-sized disks. Further, you can also set up your own build server, which caches binaries.

The decision that it depends on the source code being available is rooted in the philosophy of the project; Guix is a GNU project and is intended to support Free software, as in FLOSS. I think this is also technically a good decision, because if the source code is not available, this leads sooner or later to practically unsolvable problems; think of all the expensive lab equipment in science which runs on decades-old, obsolete, unsupported, unsafe platforms just because that expensive hardware requires a binary driver which won't run on more modern platforms. Or think of your scanner that stopped working after the last OS upgrade because the scanner vendor thought you ought to buy a new one; that scanner will probably work fine under GNU/Linux, just as an illustration of the point being made.

The need to make software reproducible is especially important in science, where the scientific method depends on experiments, and their evaluation, being repeatable, but there is just no budget to port everything to the newest Python 3.x, or OpenGL Y, or Qt Z, or Matlab ω. Science is competitive, no less than business, and you don't make a career in science by porting old stuff.

But back to "Linux only" - right, there are platforms that Guix will not work on, like Windows. But complaining about that seems, at best, extremely naïve to me: anyone with the slightest knowledge of the matter knows that these proprietary platforms have spent literally decades of engineering to lock users into their proprietary solutions. (And I think that the situation especially with C++ is in part a result of that; there is no inevitable technical reason that C++ does not have a standard ABI, or standard library discovery, it was just made that way to stop competitors such as Borland's Delphi.)

Complaining that their OS (while claiming POSIX compatibility) does not support GNU Guix is, pardon my French, just as silly as complaining that you can't fix the broken battery of your MacBook yourself by taking a screwdriver and replacing it with a standard part. You won't have a screwdriver that fits, precisely because the MacBook's vendor decided that you are not supposed to fix it yourself. Of course, the marketing department will tell you that their platform is "compatible with open source" - I have a bridge to sell you.

These systems are just not designed for that, and the inability to use Guix on them is a consequence of your decision to use such a system; you are the one who will pay the costs of that, by having an obsolete scanner or, far more expensive, by having obsolete, unmaintainable software years from now which is too expensive to port to a newer, supported API.


So where is Guix for AIX or IBM i?

As for the rest, I have long learned not to bring religion into enterprise.

If it is a market Rust wants to leave for C and C++ developers, fine, just don't pretend adopting something like Guix is a solution those shops care about. What they have now works and delivers business money, which is what counts at the end of the day.


Not exactly true. Google uses Bazel for Rust code. Also there are definitely nice features of Cargo compared to other build systems, especially C build systems.

But I think you're mostly right. Fragmenting the Rust build ecosystem would be quite annoying.


Google uses pure Bazel, rather than just a Cargo wrapper? I was under the impression that Bazel was for plugging build and dependency systems into - that's certainly how I've seen it used.


Bazel is generally much more useful when it can manage the entire build. https://github.com/bazelbuild/rules_rust implements this logic for rust. It calls rustc directly, without cargo for the build, and interacts with cargo through a shim to fetch/vendor dependencies.

Google uses various non-cargo build systems to build rust. I don't think Google is actually using bazel for this internally, e.g. Fuchsia uses https://fuchsia.googlesource.com/fuchsia/+/master/third_part...


I would assume though that any software they release that might be useful to the general public will also have the cargo files. Which will then be well-maintained and in-tree.


What makes you think Google is using bazel for rust? I haven't seen or heard any evidence of this.


Internally at Google they vendor all source code and make it build using bazel.

These rules also show some of the open source versions of this work: https://github.com/bazelbuild/rules_rust

There are also the features that bazel provides you: the caching, packaging, code gen, cross-language deps/interfacing, and other magic that makes bazel extremely amazing to use.


> Internally at Google they vendor all source code and make it build using bazel.

It sounds like that is done for consistency, to make it easier for Googlers to move between projects written in different languages, and to lower the cost of plugging new projects into their existing tooling and infrastructure?

I don't think that's really a comment on whether Cargo is good or bad, so much as a language agnostic approach to managing code that Google finds works well for them.

If you're not Google, you're probably better off using Cargo.


This is not done only for consistency reasons. Cargo does not support building languages other than Rust well. When you have projects that span multiple languages, your build system needs to know about dependencies between pieces that cross the language boundary, and also needs to know how to produce libraries, perform linking, etc. On top of that, lots of infrastructure features like remote workers, build artifact caching, etc. are just not supported by Cargo.

Disclaimer: I don't have insider knowledge of how stuff actually works at Google, but I've been using Bazel in cross-language environments for a couple of years now and I have read the papers on Google's build infrastructure.


Cargo works well for building C dependencies, in my experience. I haven't really needed (from Rust) to depend on anything written in Java or some other random language, so my experience there is mostly non-existent.
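For what it's worth, the usual pattern is a tiny build.rs using the third-party cc crate - the file names below are made up, so treat it as a sketch:

    // build.rs -- minimal sketch: compile a vendored C file and link it in.
    // Assumes `cc = "1"` under [build-dependencies] and that vendor/foo.c exists.
    fn main() {
        cc::Build::new()
            .file("vendor/foo.c")
            .include("vendor/include")
            .compile("foo"); // builds libfoo.a and emits the cargo link directives

        // Rebuild only when the vendored source changes.
        println!("cargo:rerun-if-changed=vendor/foo.c");
    }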

Advanced "infrastructure features" you're describing just aren't important for most companies that aren't Google-scale. Compile time on CI is plenty fast without that, in my experience using Rust professionally.

Your mileage may vary, of course.


I've got a python project that depends on a C binary that depends on a shell script that depends on another binary.

Bazel lets you do this without caring about how your deps work, with hermetic assurances on top.

Obviously this doesn't mean cargo is bad or worse or anything like that. It's just that it targets a different use case. Bazel targets many languages, with many targets, by many teams, in many ways. Cargo targets the Rust ecosystem, isn't hermetic, isn't reproducible by default, etc. BUT it's a great experience for Rust.


> Advanced "infrastructure features" you're describing just aren't important for most companies that aren't Google-scale.

I've seen several C++ code bases with a few hundred developers working on them taking several hours to compile. I'd turn your statement around and say that any non-trivial commercial C++ and Rust code base will need compiler caches and other advanced infrastructure or productivity will go down the drain.


CircleCI offers build caching that works well enough, for example. It's not the super advanced stuff that Bazel can do, but you don't need Bazel to do build caching.

When the Linux kernel (~27 million LoC) can be compiled from scratch in under a minute on modern machines (30 seconds on some of the beefier machines)... there's clearly a lot of low-hanging fruit in that "hours long" compilation that has nothing to do with build caching.

Whether you can get stakeholders to buy into paying down technical debt or not is an unrelated discussion. Sometimes it's easier to sell fancier build systems as the solution instead of better code organization and simplifying overly generic code.

C++ code doesn't have to be super slow to compile: https://www.zverovich.net/2017/12/09/improving-compile-times...


Cargo's C building falls down when you combine it with build caching, distributed builds, cross compilation, etc.


Is this not also a function of being at least a *good* build system? If Cargo were too basic, or too hard to use, there would definitely be people creating a better one. That would then cause the fragmentation you are talking about.


Yes, this matters too.

I've been bumping into some tough limitations of Cargo lately, but it's still good enough to stick with it, and paper over them, rather than throw the whole thing out wholesale.


what kind of limitations in a few words?



It's like Cargo in terms of ease-of-use but not in terms of scale. You only get the packages that Meson (or the community?) has written build scripts for. By my count that's ~144 packages [1]. That's nowhere near as pervasive as Cargo, given that basically any Rust project published will use Cargo, vs. C/C++ which has a mix of CMake & autoconf (largely standard for OSS) & BUCK/Bazel for Facebook/Google-influenced (potentially multi-language) codebases. That's also not including all the IDE-specific or platform-specific workspaces (Visual Studio, Xcode, gradle NDK builds, etc).

It's a nice thought but you have to be confident every 3rd party project you use is one whose parallel build you'll maintain as it updates. That's the strength of Cargo - it's the way to write Rust code & the maintenance is done by upstream for everyone.

Additionally, Cargo's strength isn't just its packaging & publishing system. Having a single Rust compiler that everyone uses on all platforms is powerful. C++ might get there eventually with clang, but the ecosystem of libraries will need to support that as a baseline. Currently there are too many compiler- or platform-specific hacks done to regularize the environment a bit, whereas Rust generally does all of that under the hood for you (e.g. try writing any cross-platform BSD socket code - it's a total shitshow because Winsock & the other, POSIX, OSes are basically different net stacks). Rust also comes with a very deep standard library for I/O, path manipulation etc., whereas the C++ equivalents for all of this are still surprisingly lacking even in C++20 (especially on the network side).
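To make the socket point concrete, a sketch like the one below compiles and runs unchanged on Windows and the POSIX systems - std handles WSAStartup and the other platform differences internally, no #ifdefs needed:

    use std::io::{Read, Write};
    use std::net::TcpStream;

    // Identical code on Linux, macOS and Windows; std hides the
    // Winsock-vs-BSD-sockets differences for you.
    fn main() -> std::io::Result<()> {
        let mut stream = TcpStream::connect("example.com:80")?;
        stream.write_all(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")?;
        let mut response = String::new();
        stream.read_to_string(&mut response)?;
        println!("{}", response.lines().next().unwrap_or(""));
        Ok(())
    }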

[1] Based on manually counting the packages listed here https://wrapdb.mesonbuild.com/


> Rust also comes with a very deep standard library for I/O, path manipulation etc whereas the C++ equivalents for all of this are still surprisingly lacking even in C++20 (especially on the network side).

I agree with this point and cross-platform development is painful in C and C++. However, one of the reasons people use C and C++ is because of platform specific development. It turns out the platforms themselves are written in C.

For example, the "very deep standard library" for Rust can't do DNS TXT queries. On Linux, I run 'man res_query' [1] and get the documentation. Notice the Synopsis: that's C/C++, and the Linux-specific code I write can use that.

Rust can also call these functions, but there is an impedance mismatch between the safe world of standard Rust and the unsafe calls to the system -- I imagine that a crate is available to ease the pain of the 'unsigned char *answer'.
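Something like the rough sketch below is what I mean - purely illustrative, since depending on the libc the exported symbol may actually be __res_query, and the raw answer still needs the ns_* parsing routines to decode:

    use std::ffi::CString;
    use std::os::raw::{c_char, c_int, c_uchar};

    // Hand-written binding to res_query(3) from libresolv; illustrative only,
    // the exact symbol name and linking details vary between libcs.
    #[link(name = "resolv")]
    extern "C" {
        fn res_query(dname: *const c_char, class: c_int, rr_type: c_int,
                     answer: *mut c_uchar, anslen: c_int) -> c_int;
    }

    const C_IN: c_int = 1;   // DNS class IN
    const T_TXT: c_int = 16; // TXT record type

    fn raw_txt_query(name: &str) -> Option<Vec<u8>> {
        let dname = CString::new(name).ok()?;
        let mut answer = vec![0u8; 4096];
        // SAFETY: dname is NUL-terminated and answer outlives the call.
        let len = unsafe {
            res_query(dname.as_ptr(), C_IN, T_TXT,
                      answer.as_mut_ptr(), answer.len() as c_int)
        };
        if len < 0 {
            return None;
        }
        answer.truncate(len as usize);
        Some(answer) // still undecoded; ns_initparse()/ns_parserr() territory
    }

    fn main() {
        match raw_txt_query("example.com") {
            Some(raw) => println!("got {} bytes of raw DNS answer", raw.len()),
            None => println!("res_query failed"),
        }
    }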

I am not making this comment to belittle Rust, I actually think it is a good and safe alternative to C++. However, there are decades of system development behind the C/C++ ecosystem, and coming onto a C/C++ thread (about a package manager) and calling things a "shitshow" is not going to win people over.

[1] https://man7.org/linux/man-pages/man3/res_query.3.html


One of the great things about Rust is you can write platform-specific code for each platform and then package it all up into one interface so anyone can trivially use it. They get the simplicity of one interface and the performance of platform-specific code.

Though I haven't used it personally, it seems like the trust-dns project is one such example:

https://github.com/bluejekyll/trust-dns
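The general shape looks something like this (the internals here are made up for illustration, not taken from trust-dns):

    // One public function; the platform-specific body is chosen at compile time.
    // (Other targets omitted for brevity.)
    #[cfg(target_os = "linux")]
    mod platform {
        pub fn hostname() -> std::io::Result<String> {
            Ok(std::fs::read_to_string("/proc/sys/kernel/hostname")?
                .trim_end()
                .to_string())
        }
    }

    #[cfg(target_os = "windows")]
    mod platform {
        pub fn hostname() -> std::io::Result<String> {
            // A real crate would call GetComputerNameExW; the env var is a stand-in.
            std::env::var("COMPUTERNAME")
                .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))
        }
    }

    /// The single cross-platform interface callers see.
    pub fn hostname() -> std::io::Result<String> {
        platform::hostname()
    }

    fn main() {
        println!("{:?}", hostname());
    }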


You can do that in literally any language.


Yep, but Rust makes it very easy to package and distribute libraries with Cargo, while maintaining performance on the level of C.

I was mostly replying to this:

> However, one of the reasons people use C and C++ is because of platform specific development. It turns out the platforms themselves are written in C.


Most languages that offer AOT toolchains can do the same.


I think I know what you're trying to say: Cargo's existence means that each of the platform-specific variations can be packaged up into a single library, and that library becomes the de facto standard.

It's a higher bar than "just about any language supports abstraction". It fits more into a Pythonic world-view where there's a one-size-fits-all standard library mentality.


> there are decades of system development behind the C/C++ ecosystem, and coming onto a C/C++ thread (about a package manager) and calling things a "shitshow" is not going to win people over.

But it is a shitshow. Pretty much everything in system programming is a shitshow. From the moment you boot up a computer, it's just a continuous stream of bad ideas, lack of foresight, incidental architectures, hacks, bugs and workarounds, all glued together with duct tape. That's what system development really is.

Everything we do in mainstream system computing is a shitshow. The fact that people are afraid to let any computer handle public Internet traffic is a shitshow. It's a shitshow, everybody knows it's a shitshow, and we even have whole mature industries sucking a lot of money out of the economy to deal with that shitshow.

Anyone that can't see it's a shitshow can't really be "won over". They will happily live their lives being quite skilled at navigating this shitshow, repeating stuff like "UB is not a problem", "security bugs are only a problem for bad developers", "modern C++ is mostly-safe", and so on.

The biggest shitshow... the meta-shitshow here, is how many people in systems development can't see it's a shitshow. They never ventured to investigate ideas from outside of the Unix/C/C++ bubble and try to incorporate them into systems programming.

And just to be clear... Rust is not completely shitshow-free, and is just a small pocket of almost-shitshow-free environment in a vast shitshow ocean.


> Having a single Rust compiler that everyone uses on all platforms is powerful.

It's also quite problematic, in that this tends to create a lot of reliance on non-standardized compiler behavior and quirks.

> C++ might get there eventually with clang

I sure hope not! (and I appreciate clang). It's important and useful for there to be several compilers. Plus, if any compiler becomes the single standard one, then it definitely needs to be strongly "copyleft"ed, e.g. with a GNU GPL license.

Otherwise agree with the problems you identified.


> It's also quite problematic, in that this tends to create a lot of reliance on non-standardized compiler behavior and quirks.

While that may be a problem in other communities, with Rust there is a general aversion to undefined behavior. It is my belief that there will be more community pressure to fix quirks and such, and not depend upon them in general.


> with Rust there is a general aversion to undefined behavior.

Ah, but it will be defined behavior - just implicitly defined.


Having multiple compiler implementations is a strength, not a weakness. As long as Rust is not standardized and is documented only by the implementation's code, it will never be a serious language.


I am sure that a GCC implementation will happen, as there are already discussions to allow Rust modules in the Linux kernel.

But wait, are you saying that Go and Python are not serious languages?


Go has a second compiler—gccgo. Python also has other implementations, but because there is no Python standard, the compatibility of those implementations with the broader ecosystem varies enormously and CPython is the de facto standard (the extent to which an implementation wants to be compatible is the extent to which it must be slow, because it means adhering more closely to the design mistakes that CPython has committed itself to).


Python is a bad example. Cython and pypy are very widely used. IronPython, Jython, etc. also exist. Go is a better example, but even Go has a gcc backend, although it is rarely used in production as I understand.


Go has a spec. Rust doesn't even have a spec at the moment.


What is a serious language? Is being used in companies like Amazon to deliver runtimes used by a considerable % of the internet not serious?


> What is a serious language?

I would call C a serious language. It's running your laptop, your phone, your car, the airplane you took on holiday, the fighter planes that protect your airspace, your washing machine, and if you are unfortunate enough to get seriously ill it will probably be running on the medical equipment that keeps you alive.

One of the reasons it is serious is because it is standardized and multiple implementations exist.


Brainfuck has a language specification (it's nice and short) https://esolangs.org/wiki/Brainfuck and a long list of implementations, some even hardware backed. https://esolangs.org/wiki/Brainfuck_implementations

So if that's the criteria, then brainfuck is a serious language.


It seems implausible that you don't know the difference between an international standard and a specification document written (of all places) in a wiki, which means you're being purposefully obtuse and don't really have any counter-arguments.

If one wants to use a tool in a life-critical system, a system that might be in use for decades they're going to reach for one of the said "serious languages": C, C++, Ada, etc. That list's very short and it includes neither Brainfuck nor Rust for that matter.


Ok, I picked brainfuck as an - obviously extreme - example of how a language can tick both those boxes and still not be a serious language. I can pick a number of examples of languages that have no international standard and a single implementation and are used in systems that might be in use for decades. Let's take Go: a single implementation, it does have a spec, but I'm not aware of an ISO standard. Ada, while having an ISO standard, isn't exactly blessed with many implementations either. Java, while having multiple implementations, is not an ISO standard.

The point is: neither of those two data points is a particularly useful metric of how "serious" a programming language is, and much less a useful metric of how well it is suited to any given task. Even how mainstream a programming language is is not a useful signal for that. Javascript is very much mainstream, but I hope no one will ever want to program a car's ECU in a JS dialect. Specialized applications are almost by definition the realm of specialized languages.


They have, in the past, reached for LISP and various other languages. What matters for life-critical systems isn't the presence or absence of a formal specification. It's more important to foster a culture of design verification and validation of tooling. You can use Brainfuck in a medical device provided you can create the right organizational architecture to ensure the quality of your Brainfuck code. It's more about creating a rational engineering organization and having a process of searching for problems and solving them proactively rather than waiting for your customers to test your code.

Of course most rational engineering organizations wouldn't choose Brainfuck because it's harder to write code that other people can evaluate. But it's not automatically out of the question if you have a good reason for its use.


It doesn't include JavaScript, either - and yet it's one of the most popular and successful PLs today. Very few people write code for a "life-critical system".


I do wonder what other languages you consider serious apart from C. You could use similar examples for a lot of other languages, C isn't the only language used to keep our civilization alive.

I did a fair bit of development in C, and developed hard real-time systems in VHDL that did involve life & death, but I wouldn't call any language used in an industry generating value a non-serious language...


> I do wonder what other languages you consider serious apart from C.

Off the top of my head I thought C++ and Ada. However, you mention VHDL, which I agree with, and somebody else pointed out Javascript, which is also a good point. The Javascript example (along with C++) makes the point that serious computer languages are not necessarily great languages.

I would argue that Rust is not a serious language (at the moment [1]) because it's still young, there is no specification, and if anything happened to the one implementation it would be a disaster.

Despite C being a null-pointer minefield, if someone were to sue a developer, the argument could be made that it is an industry standard. In the less likely event that a Rust program hurt somebody, what are you going to say?

[1] https://users.rust-lang.org/t/is-there-a-plan-to-get-a-forma...


So also javascript? :)


SpaceX did launch a rocket with an Electron interface...

In practice, though, V8 is the serious implementation. The interesting thing about C and C++ is that I don't know if any given dev is using Clang, GCC, or MSVC until I ask him. I know a JS dev is using V8.


Critical software for planes is written in Ada. C is not safe enough.


That was the idea back in the 80s. Didn't turn out that way, planes, rockets, NASA, still use C just fine, and Ada was not that widely adopted in the end.


And yet C++ was (in)famously used in the JSF... A less notorious example is Airbus's use of Frama-C.


I’m not sure (but could be wrong) that Lambda and Fargate (the services which use Firecracker) serve “a considerable percent of the Internet”, but CloudFlare’s network does, and it has some routing components in Rust IIRC.


No, "used for pet project by mismanaged megacorp" is not a sign of seriousness. (I'm sure there's some team inside FAANG that's using Brainfuck in production right now.)

"The Rust development team isn't interested in supporting your obscure embedded architecture" or "sorry, this platform was phased out due to technical debt" is what I mean by not serious.

Look at Python and how any attempts to make alternative interpreter implementations ultimately failed. (And they at least have PEPs.)


I'd hardly call for example firecracker a "pet project by mismanaged megacorp". It's running production workloads - and at AWS scale, I'm fairly certain it's running more work than most of us here ever touched in their career total. https://github.com/firecracker-microvm/firecracker

> And they at least have PEPs.

Rust's language development has been guided by RFCs since pretty much the beginning. There's no comprehensive standard as in "somebody went and wrote down the current state", but it's also not the case that there's no written spec of how things should behave. You could base a new compiler on that, and there are actually developments to have different backends for rustc. Cranelift for example: https://github.com/bytecodealliance/wasmtime/blob/main/crane...

> "The Rust development team isn't interested in supporting your obscure embedded architecture" or "sorry, this platform was phased out due to technical debt" is what I mean by not serious.

So any compiler team that stops supporting some sort of architecture because it's either rare or no longer worth the effort is not serious in your opinion? Like GCC, that has a decision making matrix for that? https://gcc.gnu.org/backends.html And dropped various architectures along the way?


Until Rust becomes at least a mainstream language (and if it becomes one), criticisms addressing its lack of mainstream adoption and its novelty are perfectly legitimate, and these arguments are worth considering when deciding what to base a project on.

The fact that a specific corporation has implemented a specific project by using a specific tool is completely meaningless as to the appropriateness of using that tool in another context.


You're moving the goalposts by quite a bit. Mainstream adoption and seriousness - whatever that might be exactly - are certainly not the same metric. For example, in your sibling comment you mention Ada as a serious language. Ada certainly is not mainstream.

> The fact that a specific corporation has implemented a specific project by using a specific tool is completely meaningless as to the appropriateness of using that tool in another context.

You're entirely correct. That's another goalpost move - nobody proclaimed that being used for business critical projects makes rust in any way magically fit for any and all other conceivable projects.


I'm not moving any goalposts because I'm not the original person you were talking to. I was merely saying two simple things:

a) the fact that a corporation uses a language is meaningless and can't be used as an argument for anything - except maybe what to learn if you want to get a job there.

b) that all arguments based on Rust's novelty, call it "serious language" "mainstream language" or whatever - have merit. Rust is obviously not mature enough for many potential users and you can't convince them by talking about RFCs.


> a) the fact that a corporation uses a language is meaningless and can't be used as an argument for anything - except maybe what to learn if you want to get a job there.

I disagree with you here. This is a low-level, production-grade application in a domain that requires secure and stable operation. The fact that a team picked a specific language for this application demonstrates a certain level of stability and suitability for various problem domains. There are more examples of large-scale Rust codebases. That does not imply that Rust is magically suited to all problems or that you should pick Rust for your project at all cost and without further consideration, but it does mean that Rust is used in “serious” projects that have significant monetary value attached to them.

> b) that all arguments based on Rust's novelty, call it "serious language" "mainstream language" or whatever - have merit. Rust is obviously not mature enough for many potential users and you can't convince them by talking about RFCs.

You’re again arguing against points I never made. Rust has no ISO standard - true. But the point I was responding to was “the implementation is the spec”, and that’s plainly false. The RFC is written first and the implementation is tested against the RFC. There is a written document that specifies expected behavior. Sometimes you need to follow the chain when RFCs modify existing behavior, and there are certainly cases where the implementation doesn't follow the RFCs, but it's not like all C compilers are 100% spec-compliant all of the time.


it isn't a serious language because it doesn't have a standard? is python somehow a joke? i agree with the docs part but there's plenty of de facto standards in the industry and in the civilization at large and they're pretty serious things.


Taking the strongest version of the argument, Python is partly standardized across multiple implementations.

My understanding is that mostly CPython does whatever it wants and the others follow, but it is still better than a single implementation where the boundary between bug and spec would be nebulous.


Not quite - CPython docs specifically say that some things are implementation-specific, and that other implementations are free to do them differently. For example, there's no guarantee of refcounting, and portable Python code has to assume that implicit cleanup is non-deterministic (which is observable with __del__).

The problem, rather, is that coders tend to ignore this, and write code that only works on CPython.


This is what I meant as the advantage of multiple implementations; by "CPython does whatever it wants" I meant that the power dynamics are almost exclusively in favour of CPython.


Yeah, but Python is competing with Perl and Ruby, not C.


In practice you're just fighting different compiler bugs.

Except for C, for all intents and purposes every other major language basically has one authoritative compiler implementation.


C++ has GCC, Clang and MSVC, all of them major, and a dozen less popular ones used in specific places, like the Intel C++ Compiler.


And the funny thing is, C++ is so complex it's the last language I'd expect to have multiple optimizing compilers (though rust is probably similar, I imagine.)


I'm lumping them together, as most people do. Even though I guess both groups hate it (C devs vs C++ devs).


When using Rust libs you also have some faith that the lib will not have some sort of memory corruption and introduce instability to your app. This is why C/C++ can never have the Rust crate system, no matter how user friendly the package manager.


If anything this argument would make more sense the other way around: Rust cares about memory safety a whole lot more than C++, so it should be more concerned about having insecure interfaces leaking through dependencies.

C and C++ don't have bespoke build and module systems because it wasn't really a thing back when they were conceived, that's it. And bolting that stuff as an afterthought on top of existing, widely popular languages is extremely difficult.


?? You're telling me C++ devs don't use libraries?


To the degree of Rust no.


It's probably because of the complication of adding a library dependency in C++. If the ecosystem used Cargo-like dependency management, I think C++ would have as many libraries, as eagerly used, as Rust.


One could leverage something like Arch PKGBUILDs or nix recipes to create a C/C++ "package manager". C/C++ libraries aren't really hard; all you need is a "root directory" with the common {/include,/lib} layout. For C you could even have binary libraries, but for C++ binary libraries are going to be quite a problem, because there's no ABI compatibility and even different compiler flags can cause ABI incompatibility.


It's easy at the beginning, until you want to add a library that has a set of custom Perl scripts to do the actual build.

Then you want to add a library that compiles fine on Fedora, but fails on ArchLinux, because they've moved some header file from `/usr/include` to `/usr/include/subdir`.

Then you want to add another library, but you want to statically link it, only to find out that the package manager installs a version of the library that allows only dynamic linkage.

Then you want to add Windows support, and the whole thing detonates because Visual Studio uses completely different compilation switches than what you'd expect from a compiler in a unix world.

Next there are 1000 other problems, including people who insist on using raw Makefiles because they think they know better.

By the way, the fact that C doesn't use rich signatures is a problem, not a feature. The C ABI can also be changed by using different compilation options, but the user won't even know it until the app explodes (crashes). At least C++ will emit a linker error in such a case (sometimes; other times it will explode as well).


>Then you want to add a library that compiles fine on Fedora, but fails on ArchLinux, because they've moved some header file from `/usr/include` to `/usr/include/subdir`.

This is why pkg-config exists.

>Then you want to add another library, but you want to statically link it, only to find out that the package manager installs a version of the library that allows only dynamic linkage.

This sounds like a problem in such "package manager" or problem in the upstream build system.

>Then you want to add Windows support, and the whole thing detonates because Visual Studio uses completely different compilation switches than what you'd expect from a compiler in a unix world

This is a valid point. Visual Studio already has its own solution, however. I personally avoid Visual Studio anyway, because it sucks for C.

>Next there are 1000 other problems, including people who insist on using raw Makefiles because they think they know better.

These aren't really problems unless you rely only on the upstream build system. PKGBUILDs, nix recipes they can do their own thing.


> One could

One could, but no one has. That's the whole discussion, there are multiple partial implementations with limited adoption.


MSYS2 actually has one based on pacman and PKGBUILDs. I remember the unofficial Vita SDK going a similar route as well. There are probably tons of similar solutions, just not marketed well enough.


The marketing part is the hard part, ideas are a dime a dozen, unfortunately.

These solutions need to be almost ubiquitous to be truly useful.


Maybe it's just my experience, or lack of it, with cpp but everything feels hacky. With rust it just feels right. The websites like docs.rs and crates.io are amazing and there's no equal in cpp.


Everything is big and old, so there are multiple ways to do it and making them work with the old ways is sometimes trouble. If you wait a couple of years Rust will be like that too.

cppreference.com is good and has almost everything you need.


> If you wait a couple of years Rust will be like that too.

Debatable. It's also a matter of design choice. Much like Perl, C++ chose to "include everything, be everything to everyone". Almost no modern language does that; if you look around, every modern language is focused and, for almost every choice, has at least a recommendation if not an enforced default.

This makes for much better design.

I don't think there is a living C++ developer that knows all of C++, it's impossible. I'd be shocked if Stroustrup knows all of it.


C++ is 40 years old by now, C is reaching 60, almost the same age as COBOL (about 10 years younger).

Stroustrup himself does not claim to know everything, just like Ken Thompson won't know everything about C17.

Wait until Rust reaches their age, actually enjoys multiple commercial implementations, assuming it does survive that long, and those still alive will say the same words about Rust.


I'm going to agree with your parent poster. Just wait a few years. I'd even claim that having deprecated bits is a sign of acceptability and industry trust: the language is used widely enough to have grown multiple implementations of multiple features.

Sure, it might not be features Rust currently has; it will be technology X or programming trend Y that appears down the line and gets implemented by multiple teams.


cppreference.com only covers the standard library, docs.rs covers (almost) the entire library ecosystem.


Thankfully for many tasks in C++ the standard library is already good enough.


I tried to do a simple web app with C++ recently... nothing against the language, but trying to decide which framework to use and then trying to integrate it into my project and build took me a couple of days; ultimately I gave up and did the same thing in Python in 30 minutes.

I don't understand how such a popular language can have such a weak story in dependency management and integration with third party libraries.

Even something that should be simple, like using gRPC / protocol buffers in C++, took me way longer than I was intending, plus my final CMake file ended up looking extremely hacky with some other dependencies...

IMO C++ needs something akin to the rust book (https://www.rust-lang.org/learn) or the python official docs (https://docs.python.org/3/) and a built in dependency manager.

Every C++ project's Make/CMake setup seems to end up as its own unique dumpster fire that is non-trivial to add dependencies to...


> I don't understand how such a popular language can have such a weak story in dependency management and integration with third party libraries.

For one thing, C++ pre-dates the common notion of package managers, and a widely available internet. So it was just never designed as part of the language.

Language plus tooling is a relatively "new" concept.

C is 48 years old, C++ is 35 years old, the first web browser is 30 years old, and Java merely 25 years old.


Java wasn't designed with it in mind either, yet Maven's dependency resolution scheme has been the de-facto standard for over a decade at this point (despite all of its warts, like being completely binary-oriented).


This is one of the reasons that header-only libraries have become quite popular. They're easy to deal with: just add to your include path and you're ready to go.

I put off starting a C++ project I've wanted to start for months because I couldn't bring myself to deal with the dumpster fire known as CMake, and I now try to limit myself to projects that only require header-only libraries, if possible (and use tup [1] to build, since it's simple). Since I only really do C++ for fun these days, that's worked out ok.

[1] http://gittup.org/tup/


The reason is called laziness and unwillingness to learn how things work.

If I, as an '80s high school student, was able to figure out how to deal with dependencies in the MS-DOS world, with several commercial compilers, in an age where information came only in books and magazines, then anyone in the Internet age can learn how to do it.

I will avoid them as much as possible.


I disagree. CMake is a freaking mess, but it's won, so most non-header-only libraries are set up for CMake. I've tried to learn CMake a number of times and have written non-trivial CMake configs. I spent a lot of time on it that I could have spent on my actual project instead, and in the end it was still not quite right (breaking after moving to a different system and trying to compile there). I made multiple attempts to learn how to do it and failed.

So I gave up, I have better (more fun, more productive) things to spend my time on. Life is too short to waste on such things.


Didn't know tup. Seems to be Makefile done right. (at least at first glance...)


Take a look at DJB Redo, as well. There are many implementations, so here's a well-documented one: https://redo.readthedocs.io/en/latest/


I haven't used it for anything particularly complex and I haven't used it with anything that has a ton of non-header-only third party libraries, so I don't know how it holds up in those cases, but for my smaller projects, its been really pleasant. The tupfiles are easy to write and the tool is super fast. It took me from dreading starting a new project with cmake to enjoying using C++ again.

I used to just use QtCreator and let it setup the build system. I could just go back to that, but for now I'm very happy with tup.


C++ is a terrible language for small, one/few-person projects.

It still sucks for bigger projects that cannot reinvent the ecosystem around them and need to rely on third-party libraries. Frameworks try to alleviate the problem and to provide a saner ecosystem sandbox, with varied success.

Once a project becomes large enough to define its own ecosystem (Android, Chrome, Firefox, KDE/Qt, google3, etc), C++ becomes remotely sustainable: the language ecosystem is not a bottleneck anymore, compilers get fixed and extended when needed, libraries are heavily morphed or rewritten, build systems are usually custom anyway. It takes a gigantic snowflake to fit C++ niche, and even then it requires a lot of internal engineering and in-house experience to make developer experience bearable.

Rust is eating the small-scale niche like a fire, and it is proving successful in the middle-scale niche. It will be interesting to see the growing pains and ecosystem/language adjustments when the new generation that started with Rust grows up and large projects start to get written and maintained in Rust.


As a one-person team, C++ was quite good for avoiding touching C as much as possible, while being able to use the safety and abstractions I enjoyed so much from Turbo Pascal.

Rust isn't eating any small-scale niche of GUIs and shipping binary libraries any time soon.


To be fair Google libraries are famously hard to build and integrate. It doesn't help that they use their own build system, though it seems gRPC can be built with CMake now. It's a good number of years since I did it last, but integrating gRPC was one of the worst experiences I ever had integrating a dependency (on Windows at least).


>I don't understand how such a popular language can have such a weak story in dependency management and integration with third party libraries.

>All of it could literally be replaced by something akin to: cppvenv --depend-on grpc-1.40 zeromq json

C++ existed about ~20 years before the practice of "automatically reach out to the internet and download dependencies".

As a result, C++ doesn't yet have a Schelling Point[1]. Your proposed packager syntax requires a Schelling Point (i.e. canonical http repo source as a Focal Point to download "grpc-1.40,zeromq,json"). My previous comment about this: https://news.ycombinator.com/item?id=22583139

Tldr, no vendor with enough sway to influence the fragmented C++ community (commercial shrinkwrap software vs open source, embedded vs server, HPC/scientific, games, etc) has created such a Schelling Point.

[1] https://en.wikipedia.org/wiki/Focal_point_(game_theory)


This is a very good point.

In fact I would add that no vendor exists with enough sway to influence even a significant part of the C++ community (not even the committee can).


Historically the dependency management tool for C++ is the system's package manager, and/or a shell configure/provision script.

I don't see any value add to a "built in" dependency management tool to C++. Not as in it isn't a useful thing to have, just that it can't solve the big problem which is that most C++ projects would never support it.

There's also the fact that long dependency chains are frowned upon in many C++ code bases.


What about supporting multiple versions of libraries on one computer? They all support it: npm, rvm, virtualenv, cargo, golang, python, maven, etc.

A very common use case is to use some newer version of something than what shipped with the OS but at the same time you don't want to potentially cause instability in the OS...

Or you know, developing multiple projects w/ different versions from a single OS.

Also, in case I really want to do it, I have to go to some obscure website, download some payload, and read some build instructions to see how to best integrate it into my messy CMake/Make system...

All of it could literally be replaced by something akin to

    cppvenv create myproject
    cppvenv --depend-on grpc-1.40 zeromq json
    cppvenv build


You would have to go to some obscure website to download some payload and read build instructions to see how to best integrate it with your new cppvenv program instead. That's the problem to solve: C++ is old, most of it was written before those solutions existed, and most of it doesn't use the same set of tools.

Like I said, it's not that this isn't a problem or that a solution wouldn't be nice. It's just not functionally different from the myriad of existing solutions.


You edit your generated makefile or tweak the autoconf (or whatever you used) config to point at a specific version of the library, I guess.


Nix does what you want.


Yes, I love Nix. The only caveat is that it does not work on Windows...


> dependency management tool for C++ is the system's package manager

The system package manager has a different purpose than a "development" package manager.

The system packages are supposed to give you a consistent set of libraries needed to support the application shipped by the distro.

If a package you depend on for daily work, say Firefox, requires libfoo-1.0 and your project needs libfoo-2.1, you're screwed.

Similarly if you're supporting an old release myapp-1.0 for some customers and it needs libfoo-1.0, but at the same time you're working on myapp-2.0 which needs libfoo-2.1, you're out of luck.

The system package manager just doesn't cut it for development work.


You can resolve libs by full name beside the system default version.


> Historically the dependency management tool for C++ is the system's package manager, and/or a shell configure/provision script.

I agree with you here. The system package managers were invented to solve system-global dependency problems between libraries/executables, as well as to provide headers and static libraries for development on the current machine.

> I don't see any value add to a "built in" dependency management tool to C++. Not as in it isn't a useful thing to have, just that it can't solve the big problem which is that most C++ projects would never support it.

Even if you don't see a need, you have to acknowledge that many other people see the need to have some sort local dependency management, be it compile time like cargo or the different cross-build environments (openembedded, buildroot, co.) or runtime like chroots, containers, virtual machines.

It can be argued that the local dependency management solutions are in their infancy.


> Not as in it isn't a useful thing to have, just that it can't solve the big problem which is that most C++ projects would never support it.

For a living language the solution is never: "throw your hands up in the air and give up".


Unlike Rust and Go, C++ has a long history and baggage; thus there is no standard tooling for C++, such as build systems and package managers. In the past, every operating system had its own set of preferred tooling for C and C++: on Linux and Unix-based systems, lots of projects still use GNU autotools or Makefiles; on Windows, many projects still use the XML-based MSBuild from the Visual Studio IDE. Many IDEs have their own XML-based build systems as well. Nowadays, major IDEs support CMake and allow using CMakeLists.txt as a project file by just opening the directory containing this file. Some of those IDEs are Visual Studio, Visual Studio Code, CLion, Eclipse (via plugin), KDevelop and so on.

Regarding library dependencies, CMake has a FetchContent feature which allows downloading source code directly from http, ftp or git servers and adding this code as a project dependency. This approach can avoid wasting time installing and configuring libraries. However, this technique is only feasible for lightweight libraries where the compile time and library object-code size are not significant. For large libraries and frameworks where the compile time is significant, such as the Boost libraries, Gtk or Qt, Conan or Vcpkg are the best solutions, as they can cache and reuse library object code across many other projects, which reduces compile time and disk space usage.

C++ is not the best solution for web applications; in most cases you will not gain anything by using C++ there, unless the web application is running on an embedded system such as a router or a network printer. C++ is most suitable for cases where you need high performance, access to low-level operating system features, or to implement an operating system, an embedded system, or a triple-A game.


It is not ideal, but for the subset of libraries that use CMake it can be as easy as dumping the library in a subdirectory and you are done. Things only start to go sideways when you have to integrate with Google libraries, since they typically have insane build tooling (Skia).


Aren't things like NuGet supposed to solve this? https://devblogs.microsoft.com/cppblog/nuget-for-c/


> I don't understand how such a popular language can have such a weak story in dependency management and integration with third party libraries.

At some point 40 years from now, someone will complain about Rust not having full integration with whatever IPFS/blockchain distributed computing package managers/sharded source control/vulcan mind meld the internet turns into. There really was a world before the internet was universal. People used tools like Fortran on these things called Crays (or CDC 6600 or IBM doodads before that) - you'd bring your data on a VHS cassette or reel-to-reel tapes, because your university only rented a single T1 line - 1.55 Mbps, FWIW; it was seen as plenty of bandwidth for all the scientists in the university. Nobody but the scientists used the internet in those days.

That universe is still around; matrix math is still ultimately Fortran -matrix math is why people invented computers in the first place, as amazing as this may sound to people with ipotatoes in their pocket. That's 50s and 60s tech; with gotos and line numbers: back then, using multiple files for your software was considered getting fancy. You were usually submitting jobs as a stack of punched cards. I never did, but my older roomies in grad school certainly did.

Similarly, the OS you're using is basically mid-1970s technology, written in mid-1970s programming languages. Back then, package management wasn't so much of a problem; the whole source code for the OS could be published in not-too-big book form (and was, FWIIW). Just a reminder: the runtime for modern Rust is absurdly bloated compared to those days; "Hello World" in Rust is literally larger than the entire hard drive of 1970s computers with the OS written in C; hell there were 80s computers with 4mb drives that were seen as fairly adequate. Anyway, that's why C++ (basically PDP-11 assembler with Bjarne's deranged ideas about objects ... and some later better ideas) has a shitty package management story. It's not so bad really; just a skill that n00bs don't have any more. I personally never needed anything fancier than makefiles, and saw CMake as a regression from autotools.


> Just a reminder: the runtime for modern Rust is absurdly bloated compared to those days;

Just a reminder: the runtime for Rust is effectively the same as in C; the smallest binary rustc has produced is 137 bytes https://github.com/tormol/tiny-rust-executable

The defaults do not focus on size reduction because we aren't in the 1970s anymore, though, it's true.


> basically PDP-11 assembler with Bjarne's deranged ideas about objects

C++ was originally pretty much Simula-67 with C instead of Algol-60 as the foundation. What's "deranged" about the Simula object model?


With Bjarne's goal of never having to do BCPL-style programming ever again.


I think Conan [1] has wider support in the C++ community. It definitely does not match the Cargo toolchain for integrated ease of use, but it does support CMake directly.

1 - https://conan.io/


my only problems with meson wrap today are its lack of packages, and the fact that it's source-only. Actually, I tried to use SDL_image and while the page for the package exists on wrapdb[0], it has no versions listed and so building with SDL_image fails. Source builds on windows for SDL2 fail because it presumes a UNIX-style build and fails looking for 'unistd.h'.

I've found the real 'one true solution' for my needs to be conan + meson. Conan includes a build script that supplies pkg-config files to Meson for dependency handling, and with conan you can use the gigantic repository of packages along with bincrafters for an even gigantic-er repository of packages. I have not yet found a package I needed that wasn't on either conan center or bincrafters. And on Linux I can download the source to build if I really want to, but on Windows I'm happy just downloading the binary .lib files. And it's so easy to use. cd build && conan install .. && conan build .. is all you need to download and compile everything.

Before this setup I used vcpkg with cmake and I'm never going back to that. Yet before vcpkg I used ExternalContent, which I would also never go back to. Ergonomics are extremely important with this kind of thing. I think that's why cargo is a huge hit in this space.

0. https://wrapdb.mesonbuild.com/sdl2_image


Ugh.. Meson and CMake, don't know how they did it, but they manage to be even worse than autotools.


I really don't think Meson and CMake should be mentioned in the same breath here. I would be glad to know what you think makes Meson so bad.


My experience with working on a project that's converted to Meson is that its main flaw is its philosophical commitment to inflexibility. You do things the Meson way or not at all. Want to have file foo/bar/baz.c compile to foo/bar/baz.o in the build tree? Nope, must be foo_bar_baz.c.o. Want to have source files with a .inc.c extension which aren't compiled like plain .c files? Nope, you need to rename them to .c.inc. Want to write a wrapper function around a Meson builtin like 'dependency' that adds extra functionality to it? Nope, Meson doesn't provide functions in its build scripting language. Want to have a build rule be "run this perl one-liner"? Tough, Meson will mangle it by turning all the backslashes into forward slashes so you need to put it into an external script file. And on and on. And we needed to upstream new functionality into Meson before we could even convert to it in the first place!
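
To illustrate the last one: the escape hatch Meson points you at is a standalone script invoked through custom_target(), something like this (the script name is made up):

    # meson.build
    gen_script = find_program('scripts/gen-header.pl')
    generated_h = custom_target('gen-header',
      output : 'generated.h',
      command : [gen_script, '@OUTPUT@'])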

My view of build tools is that they must provide flexibility and escape hatches, because on a sufficiently complicated project eventually you're going to need it. The Meson designers have a strongly opposed philosophy. If you agree with them on that design philosophy you'll probably be happy using Meson, but if not then you're likely to get frustrated with it every time you do anything non-trivial with it, I think.


So I have zero experience with Meson but quite a bit of experience maintaining C++ codebases of varying quality with all sorts of build systems (qmake, autotools, CMake, simple Makefiles, ...), and honestly I think that if you want to make a build system that works reliably, doesn't take 3 months to learn to use correctly, and gives you all the features you want, you have to cut back on flexibility.

Otherwise you end up with a complicated mess that everybody uses differently and you have to relearn on every single codebase.

Rust's cargo is one of the most inflexible build systems I've ever used. It's great. When I open any random Rust crate I know exactly what to expect, where to find what I'm looking for and what I need to tweak to fit my needs.

> Want to have file foo/bar/baz.c compile to foo/bar/baz.o in the build tree? Nope, must be foo_bar_baz.c.o. Want to have source files with a .inc.c extension which aren't compiled like plain .c files? Nope, you need to rename them to .c.inc.

Great. Sold.

Not that I particularly care about these stylistic choices; what matters is that I know any project using Meson will work the same way. I never liked the "mod.rs" convention of Rust, but I definitely think it would be worse if you could override it on a per-project basis.

Again, I have no first hand experience with Meson so maybe it does suck but in my book being opinionated is actually a very good sign and I'll definitely have a look at it if, god forbid, I have to start a new C++ project in the future.


Rust's cargo very much has a first mover advantage. Since (almost) all rust projects use cargo, it doesn't have to fit/work with any other styles. I think this position is close to untenable in the C++ ecosystem and for existing codebases therein, since it's too much effort to port large dependencies and thus, it remains very hard to convince people to adopt it. But I'd love seeing something like this, since cargo is pretty much the best thing ever in this space IMHO.

The only way I can imagine this working is a very large vendor pushing it hard for their platform. This is effectively just Microsoft, all other platforms either use Java/C#/Swift/JS/... or are much more community driven.


There are many problems with Meson and CMake, but as for Meson: it's the kind of build system that makes using tools hard, because it forces a certain workflow on you. It detects the compiler and tries to be too clever about what you can and can't do with it. For example, a recent mishap was that it detected the Emscripten compiler and didn't allow me to use it as a non-cross-compiler, and so on. There's no way to build static libs if the project hardcodes shared library building. The codebase is also pretty nuts; I welcome you to go read it.

(GNU) Makefiles with some standard variables and pkg-config are still king. CMake and Meson can work okay for very simple projects, but otherwise I'd recommend using the native build system for each platform, or going the Makefile + MinGW route (if you need Windows builds).
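
The whole pattern fits in a few lines - a sketch, with 'sdl2' standing in for whatever module you need; GNU make's built-in rules do the compiling and linking:

    # Makefile
    CFLAGS += $(shell pkg-config --cflags sdl2)
    LDLIBS += $(shell pkg-config --libs sdl2)

    all: main    # built-in rules build 'main' from main.c with the flags above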

Using the native build system eventually saves you time and headaches, especially when these "portable build file generators" don't support some feature you absolutely need, or don't let you pass that one command-line flag to one of the tools that gets executed under the hood.

CMake also absolutely sucks at finding system dependencies when everyone uses those "FindFooBar.cmake" files with hardcoded search paths and whatnot, instead of using pkg-config. This forces many packagers to hack the project's build system, because the project would otherwise link to the wrong library, or not find the library at all. With pkg-config the distributions can provide the correct paths and flags when linking against something.
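
The pkg-config route is right there in CMake too, for projects that choose to use it - a sketch ('foobar' and 'myapp' are placeholders; IMPORTED_TARGET needs CMake 3.6+):

    # CMakeLists.txt
    find_package(PkgConfig REQUIRED)
    pkg_check_modules(FOOBAR REQUIRED IMPORTED_TARGET foobar)
    target_link_libraries(myapp PRIVATE PkgConfig::FOOBAR)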

Autotools, while slow and notorious for generating tons of shell script compatible with all sorts of ancient shells and platforms, still manages to offer a standard interface to packagers and lets them fine-tune exactly how things get linked, how things get configured and where stuff gets installed. Autotools is generally less of a headache for packagers than Meson or CMake.


I think being sort of "rigid" is part of the appeal of Meson and definitely intentional (keeps complexity down). I can understand how that might be off-putting or even obstructive to some though. But being too flexible is in a way what made CMake and autotools so bad.

I have only done really small stuff with Meson, so I haven't had problems like the Emscripten one you described and I can't say much about it.

Regarding static libs, the Meson docs explicitly state that "library" is preferred over "shared_library", so that the choice can be made at build-configuration time. If some dependency of yours does not follow this advice without having a good reason for it, the fault is with them.
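
That is, roughly (a sketch):

    # meson.build -- library() leaves the static/shared choice to configure time
    mylib = library('mylib', 'mylib.c')

    # the user then decides:
    #   meson setup build -Ddefault_library=static    (or shared, or both)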

I use CMake over Makefiles because, for libraries, you want people to be able to easily integrate your stuff, so you need to use what everyone uses. And in general it's still the best way to get a (mostly) platform-independent build. Yes, almost all non-trivial CMake files have a bunch of platform-specific extra stuff, but at least you can do it. I also cannot believe how badly the dependency finding works - it fails more often than it works, very often even with libraries that I installed through my system's package manager. I also consider CMake horribly flawed, but it's still better than raw Makefiles.


I'm not sure I agree about the complexity. You can go read CMake's and Meson's source code, and you'll probably agree with me that it's not trivial. Sure, they can give you a "cross-platform" build with ease, but it doesn't come free: you are giving away flexibility, and it only works when it works. When it doesn't work you usually have to employ ugly hacks, or your build files start to creep in way more complexity than would be needed with a native solution, or even with a simple Makefile or, heck, a shell script. In the worst case you'll end up becoming a CMake or Meson developer yourself, adding that missing functionality or fixing bugs just to get your thing to build.

I also want to point out that the build system isn't just for you; it's also for the users and the packagers. Especially for packagers it's important that the flexibility is there.

I used to use CMake a lot myself way back, but eventually got fed up with the problems it always caused (mostly when other people tried to package my projects and there was always some incompatibility, or dependencies not being linked correctly, etc.).

I also used to think Makefiles were bad, but when I sat down and read the GNU Make manual many things clicked for me: (GNU) Makefiles are actually very good at building stuff, and they aren't that complex at all. Sure, there are lots of bad Makefiles around the internet (just as there are bad CMakeLists), but Makefiles are very powerful when written correctly and, most importantly, they are very, very fast.


MinGW still has downsides, like using the ancient version of the C runtime, with missing features and bugs. There are also ABI issues between MinGW and VC++ - more so if you're doing C++, but even in C, stuff like FILE* or locales is not interoperable.


ABI issues are a fundamental problem with C++, not really MinGW's fault. If you have to use a binary library in C++, then you have to use the same compiler it was compiled with, and with the same compiler settings.


ABI issues are a fundamental problem with having many different ABIs - things are much better on Linux, where there's a clear standard, and everybody conforms to it. VC is the de facto ABI standard on Win32, but I'm not even sure to what extent it's documented, beyond the bits that are needed for COM.

In any case, I'm not trying to blame anyone here - just pointing out that MinGW is a second-class citizen when it comes to C/C++ development on Windows, and this is usually more important than having access to MSYS2.


Meson is the "soup nazi" of build systems. It's great if you want to build your code exactly the way the Meson author likes (I don't).

CMake is much more in the spirit of C++. It's clunky and inelegant but you can get it to do anything you want.


Speaking only of CMake: it makes my life much, much easier. If a project has < 5 files, a pure makefile is just fine. However, once it gets any larger, CMake makes linking, recompile times, and other stuff much easier.


I don't think anything can be worse than autotools. Well, maybe a collection of shell scripts to do the actual build.

Wait a minute...


I'm increasingly converting to just having a build script in a real programming language, certainly for the parts of the build that do work rather than manage dependencies. I try to avoid Python for it, purely because, although it's tried and tested, I'm terrified of introducing silent bugs that a compiler would catch - i.e. I have no instinct for it.

The D compiler has a build script in D, and for me at least it works perfectly where the makefiles often break due to whichever random make I have on a given Windows box. I just run "rdmd build.d" and off it goes; it's not only fast, but I can also actually read and instrument the code if I need to.


Honestly, C and C++ libraries being annoying to deal with is a Windows-created problem.

POSIX systems have pkg-config as the de-facto standard way to find installed (or non-installed, if done correctly) libraries, and all common build systems (including autoconf, CMake, Meson, ...) support this.
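
The whole contract is a tiny .pc file per library plus `pkg-config --cflags --libs <name>` on the consumer side - an illustrative example (paths and values are placeholders):

    # /usr/lib/pkgconfig/foo.pc
    prefix=/usr
    libdir=${prefix}/lib
    includedir=${prefix}/include

    Name: foo
    Description: Example library
    Version: 1.2.3
    Libs: -L${libdir} -lfoo
    Cflags: -I${includedir}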

Meanwhile on Windows you're left with a pile of unpacked library directories somewhere on your disk and everyone does their own thing and it's a clusterf*ck of build system twiddling.

And the worst part is that this bleeds over into POSIX systems when Windows-created libraries make a jump over and don't know to stick to the conventions.


When Cargo removed POSIX-only features (running a bash script pre-build), I protested. Who cares about Windows and their mess!?

But in retrospect that has been a great decision. The ecosystem has followed and supports Windows without friction. Cargo solves the "pile of unpacked library directories" problem. Crates that use system dependencies know how to use vcpkg or build from source. They will even read Windows registry for you to configure MSVC, so you don't need to run vcvars.bat.

The effect is that I can just write stuff on Mac, and my Rust projects mostly "just work" on Windows.


I develop on Windows, and my experience has been that unless a crate is specifically written for another platform, they generally work perfectly fine on Windows with two exceptions:

1. Terminal-related functions. This is getting better as cross-platform libraries are made, but Windows has had a generally shite terminal emulator and the more common libraries expected a *nix-style terminal.

2. The crate has a C or C++ dependency.


The terminal story should be a lot better on Win10 these days, what with ConPTY and VT100 support.


This is so true. It is always Windows people trying to bend libraries to make them easy to "install" for them. On Unix installing libraries system-wide was never too much of a hassle, and now with Nix you can have reproducible distinct development environments for each project. I hate that sadly a lot of the projects I work on must also work on Windows, but I'd just hire a Windows person simply to deal with their build system stuff.


Try having the fun of installing something on, e.g., AIX.


Oh how I wish that everyone would support pkg-config! I've been adding pkg-config support to various libraries, sometimes being shut down or ignored (dangling PRs). Meanwhile CMake and friends often write their own proprietary support (find modules and include scripts) instead of just piggybacking on pkg-config.


Yeah, but the thing is, even on POSIX stuff, even without Windows-created libraries, how do you:

package-manager install bleeding-edge-library

and have it just work?

Keep in mind, I don't want to use an entire bleeding edge OS in order to easily get 1 bleeding edge library. Also, how do I do it on another POSIX OS, in a standard way for my development flow?


I think your question/problem is actually too underspecified to be answerable. What exactly do you want? (non-exhaustive list):

1. have your entire system use a newer version of some library?

2. have a specific part of your system use a newer version of some library?

3. build a particular package to use a newer version of some library?

4. just get access to the accompanying executables/tools from a newer version of a library?

5. get a whole bunch of versions so you can test your project against them?

It really depends, and, sure, some of these are very hard to do... but also some of these are really hard problems to begin with, so it's not surprising that the solution is also a bit involved.

(My answers for these cases:)

1. build a package for the newer library from source; either from included packaging or by transplanting packaging from an earlier version. If it doesn't work, this is an indicator that you have a bigger problem and maybe shouldn't be shoehorning this in.

2. If the newer version is a higher SO version, install that and rebuild part of your system to use it. (Because you need to rebuild anyway.) If it's the same SO version you'll need to install it somewhere else and set up LD_LIBRARY_PATH.

3. Just build the library statically and build the package against that.

4. Build the library/tools with static linking.

5. No free lunch, this is a nontrivial problem. Doable with LD_LIBRARY_PATH and PKG_CONFIG_PATH.
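
For case 5, concretely, it comes down to a couple of environment variables per installed copy - the path here is just an example:

    export PKG_CONFIG_PATH=$HOME/opt/foo-2.0/lib/pkgconfig:$PKG_CONFIG_PATH
    export LD_LIBRARY_PATH=$HOME/opt/foo-2.0/lib:$LD_LIBRARY_PATH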


I want to:

cppm install library-2.0.0-alpha

like in every other programming language, and be able to develop my project with it.

Why would I ever even want to do 1 or 2? That's the OS package manager's job. The fact that the OS is written in C/C++ is an implementation detail, it shouldn't leak out. The package manager is also not portable, the commands are not the same, the available libraries are limited and frequently incompatible with each other.

Your 3, 4, 5 are all manual steps, if you notice. I don't see any mention of a package manager in there :-)


It's overstated that this works everywhere, though; it certainly doesn't in Python. That's why we've got pip, conda, virtualenv, pyenv, etc. for version and package management.


pip + virtualenv fully solve this problem - maybe not in the most comfortable way (which is why there are all those extensions and alternatives), but certainly way better than anything C++ has - and they ship with Python.
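
The whole flow is three commands (the package is just an example):

    python -m venv .venv
    . .venv/bin/activate
    pip install requests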


It's not even possible to install every Python package (and its dependencies) using pip, which is why Conda exists.


If some package is available in the Conda repos, but not in PyPI, that's down to its author choosing to support one but not the other. Ever since wheels became a thing, I can't think of anything reasonable that's impossible to do via pip when it comes to packaging a library.


> like in every other programming language, and be able to develop my project with it.

You're confusing application development with system development. C and C++ are very much aimed at the latter. By the time you're in a situation where you need to mess with a specific version of a library that is included in your host system, something has fundamentally gone wrong at an earlier point.

C and C++ don't provide an application-level package manager, and I think this is pretty much intentional. Both languages "want" to build a system, not an application. Case in point: the C standard library is largely specified in POSIX, right next to system tools - not ISO C.


The C standard library is entirely specified in the ISO C standard. POSIX qualifies some things, and adds a lot more, but none of that is the C standard library.


Install Guix as a package manager on a POSIX system. It really solves these problems.


But the inherent problem is still the inability to deal with dependencies in a standard and portable way. As you said - pkg-config is standard on POSIX, but the world is bigger than this.


Nope, Windows is not the only OS without package management, and good luck actually using pkg-config on any POSIX platform that isn't a BSD or a Linux distribution.

Also good luck using it on platforms where using multiple commercial C or C++ compilers, each with its own ABI is current practice.


`Cargo` and `go` are the reasons I don't want to work with other languages that don't have good default tooling for dependency management, formatting code, linting, etc.


Surely nixpkgs is a better choice for c and cross-language dependency management at this point. As long as you're not on windows.


Never again with Meson. The developer is a jerk and has dogmatic views on project structure that do not match reality.


Does it have an equivalent of build.rs?


I know this is a slightly annoying knee-jerk reaction, but it would be remiss not to mention vcpkg [1] whenever C++ dependencies are brought up. In practice vcpkg has by far the most packages of any C++ package manager. For example, it's unusual to find any one of opencv, grpc and QT5 for Windows, Linux and Mac in any other C++ package manager, let alone all three.

Unfortunately I'm not enough of an expert on the alternatives to make a proper comparison, but it seems to me that vcpkg works better with whatever build system the packages natively prefer to be built with (e.g. although vcpkg is CMake-based it can build Visual Studio projects on Windows and make-based libraries on Linux) whereas following links in the OP's article takes me to a zip file full of meson.build files that seem to have been written from scratch for the library. Using the library's own build system directly seems like a much more manageable long-term solution. Also, vcpkg's policy of putting all the ports into its git repository, while it does have disadvantages, does make it very clear everything that vcpkg is doing and needs to know to build something (e.g. [2] for grpc).

Edit: A partial comparison of vcpkg with Meson WrapDB and Conan Centre:

* Meson WrapDB lists just under 145 packages, although some of them appear to be empty (e.g. grpc, libtiff). As mentioned above, it seems that you need to re-implement the whole build system for a library in Meson for it to be included. I found that the package data is on github e.g. for spdlog [3]

* Conan Centre lists 583 packages, so it seems like a much better choice than Meson. The way to specify different configurations looks awkward to me - they mix up decisions that you probably want to share across your dependencies (e.g. platform, static vs dynamic) with library-specific stuff; but maybe I just don't understand them properly.

* vcpkg has 1509 ports, including several quite tricky ones like grpc on Windows and QT as mentioned above. The configuration stuff is split across places in a way that makes sense to me: triplet files contain global options like static vs dynamic and even cross-compile options, the portfile itself contains any package-specific tweaks, and feature packages allow extra features in ports (e.g. I can vcpkg install opencv[eigen3] to mean "include eigen3 support in opencv3", which would also update the list of transitive dependencies).

[1] https://github.com/microsoft/vcpkg

[2] https://github.com/microsoft/vcpkg/tree/master/ports/grpc

[3] https://github.com/mesonbuild/spdlog/tree/1.8.0


Curiously, more and more Rust crates, mainly wrapper ones that depend on C/C++ libs, pull those in via the `vcpkg` crate[1] in their `build.rs`. And I also saw some that then use `pkg-config`[2] as a fallback.

I've already fixed several `vcpkg` ports since I started using this myself in several of my crates.
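
The pattern in those build.rs files is usually along these lines - a sketch with a placeholder library name 'foo', and with `vcpkg` and `pkg-config` listed under [build-dependencies]:

    // build.rs
    fn main() {
        // try vcpkg first (mostly relevant on Windows)...
        if vcpkg::find_package("foo").is_ok() {
            return;
        }
        // ...then fall back to pkg-config on other platforms
        pkg_config::probe_library("foo")
            .expect("could not find 'foo' via vcpkg or pkg-config");
    }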

1. A package manager is only ever as good as its coverage of people's needs, package-wise. Forget usability, CLI, etc. If it covers the packages I need (and can actually build them) I'm happy to bite those bullets.

2. Having a single package manager for a language X certainly helps with 1, while any extra one that someone comes up with - because they do not like the CLI/UX/"principles" of the existing one, or whatever - is a detriment to 1.

[1] https://docs.rs/vcpkg/0.2.10/vcpkg/

[2] https://docs.rs/pkg-config/0.3.19/pkg_config/


I love vcpkg, especially since it is written in C++; a package manager should use the language it serves.



