How to Write Portable C Without Complicating Your Build (nullprogram.com)
215 points by ingve on March 31, 2017 | 105 comments



Just to back that up with a bit of reality: Redis has managed to survive 7 years without autotools so far, using similar techniques; you just type "make" regardless of the Unix system you want to compile Redis on. No major complexity was needed to do so; however, Redis depends on nothing external other than libc. For software projects with many external deps it may be worth using autotools instead.


To add another example: spiped is 6 years old and should build on any POSIX system with the Software Development Utilities option, /dev/urandom, and OpenSSL. Over the years I've collected a handful of workarounds for not-quite-POSIX systems, but the "check if we need any workarounds" step of the spiped build is still much faster and simpler than using autotools.


People can really easily abuse the Makefile ecosystem and write complicated, difficult to read, spaghetti builds. If something fails somewhere in the m4 macro forest while building on your specific machine, you're hosed for potentially hours or days while trying to figure it out so you can just build the damn project. The whole point of this kind of tooling should be to take some of the heavy lifting out of getting source code built, not add extra complexity to a project.

I've been working on and off with Make, C, autoconf, etc for ten years. Maybe if I were a full-time C or C++ person I'd have less trouble with this, but it seems rather excessive. The thing that most discourages me from contributing to certain open source projects is when I clone the repository and the build fails and I can't get the project to build in an hour or less of build-debugging time (and I'm not on a weird setup, I'm on a modern Lenovo with Debian).


Quick, how do you install redis to a different prefix? Everybody knows `./configure --prefix=...` (or, if they forget, there's `./configure --help`); `make PREFIX=...`, not so much.


No way other than "cp redis-server /your/path".


I've seen quite a bit of "./configure"-less software use "PREFIX=/foo make install" (or some variation thereof) for that (something which I have to address as part of maintaining various Slackbuild scripts). This is usually part of the compilation/installation documentation. If not, install targets are usually simple enough that I can glean which environment variable the Makefile is using to control the installation prefix.

Worst case scenario, the SlackBuild I'm writing has to include some call to patch or sed in order to tweak the installation path, or I just have to cp the build output into the package's root directory (something which I'll likely have to do when I get around to updating the CouchDB SlackBuild, but have been putting off since 1.6.x works well enough on Slackware).


D is a tiny bit more involved, with two options: `make -f posix.mak` or `make -f win32.mak`.


Redis does not have a win32 target; otherwise something like that would probably be needed as well, for simplicity's sake.


This was my own thought when reading the article: how exactly do you use pkg-config to find libs+headers without autotools (or at least an m4 processing step)?


You can run pkg-config yourself; the m4 macros are just for convenience. Look at the shell code the macros expand to in order to see how to do it.
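
For example, a hand-rolled check in a small configure script might boil down to something like this (a sketch; libpng is just an illustrative package, and config.mk is a hypothetical generated file):

    #!/bin/sh
    # Fail early if the library isn't available at all.
    if ! pkg-config --exists libpng; then
        echo "error: libpng development files not found" >&2
        exit 1
    fi
    # Capture the flags once and hand them to make via an include file.
    printf 'PNG_CFLAGS = %s\n' "$(pkg-config --cflags libpng)"  > config.mk
    printf 'PNG_LIBS = %s\n'   "$(pkg-config --libs libpng)"   >> config.mk
A Makefile can then `include config.mk` and use $(PNG_CFLAGS) and $(PNG_LIBS) in its rules.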

Though I don't exactly see the point in reinventing your own build system. Some people I trust seem to be getting into Meson these days, but I haven't tried it.


The article downplays the value of the autotools a little bit. It's not just about building for 30-year-old hosts; it's also about providing a lot of convenient tools and providing the user with a known interface. If you write your own configure, people may expect to be able to pass additional CFLAGS (not overriding the existing ones), set the path to various elements, install in a different root, have certain targets in the generated makefiles (like dist, clean, install, uninstall), have cross-compilation work (including cross-compiling to Windows with mingw64, which solves the toolchain problem of releasing binaries for Windows), ... It's a tedious task and easy to get wrong.

On the other hand, using the autotools in a modern way is dead easy. You don't need to add many many tests if you don't intend to support old stuff. You get access to automake which is a fantastic tool on its own.

Don't read old tutorials, don't look at how big established projects are doing, look at the Autotools Mythbuster instead (autotools.io) and start with a minimal configure.ac.


> If you write your own configure, people may expect to be able to [...] It's a tedious task and easy to get wrong.

The author's point is to do *without* a configure step entirely, not write your own.

A standard Makefile is perfectly capable of implementing these things in a manner which is clean, reasonably portable and without the level of indirection that makes things hard to debug.
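
For instance, a minimal sketch of such a Makefile (names are illustrative, and the recipe lines must be indented with tabs):

    .POSIX:
    CC     = cc
    CFLAGS = -O2 -Wall
    PREFIX = /usr/local

    myprog: main.o util.o
    	$(CC) $(LDFLAGS) -o myprog main.o util.o $(LDLIBS)

    install: myprog
    	mkdir -p $(DESTDIR)$(PREFIX)/bin
    	cp myprog $(DESTDIR)$(PREFIX)/bin/

    clean:
    	rm -f myprog main.o util.o
Users and distributions can then override everything from the command line: make CC=clang, make PREFIX=/opt/foo DESTDIR=/tmp/pkgroot install, and so on.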

My own experience is that autotools has evolved to feel less standard than the modern platforms whose differences it purports to smooth over; I frequently find that I can't generate a 'configure' for 3rd-party software or libraries from Git repositories because I have the wrong autotools versions. Efforts to investigate this by unpicking the various macros etc. provided in the build have almost always been unsuccessful, leaving me building from distributed .tar.gz files. With something that feels like such a moving target after 30 years, I'm glad to see people realising the benefit of a simple Makefile that's easy to customise for the edge cases.


There's no way you can do what autotools does in a Makefile without implementing half of autotools yourself. How do you check for headers? How do you check for functions? How do you test if qsort_r expects

    (*)(void*, const void*, const void*) 
or

    (*)(const void*, const void*, void*)
as the function pointer? If people are forced to detect that kind of stuff in Makefiles, they get the urge to match `uname` against some hard-coded platforms, and that's a terrible solution.
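
For concreteness, the sort of probe a configure script (autoconf-generated or hand-rolled) uses for this boils down to a compile test, something like the following sketch (file and macro names are illustrative, and -Werror does the heavy lifting):

    # Does qsort_r take the GNU-style comparator (context pointer last)?
    printf '%s\n' \
        '#define _GNU_SOURCE' \
        '#include <stdlib.h>' \
        'static int cmp(const void *a, const void *b, void *c)' \
        '{ (void)a; (void)b; (void)c; return 0; }' \
        'int main(void) { int x[2]; qsort_r(x, 2, sizeof *x, cmp, 0); return 0; }' \
        > conftest.c
    if ${CC:-cc} -Werror -o conftest conftest.c 2>/dev/null; then
        echo '#define HAVE_GNU_QSORT_R 1' >> config.h
    fi
    rm -f conftest conftest.c
The same check works whether it lives in a hand-written configure script or behind an autoconf macro.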


>There's no way you can do what autotools does in a Makefile without implementing half of autotools yourself.

That's part of the argument: with standards compliant code, you don't need to do "what autotools does".


ENOTSUP is part of the standard and I do have autotools macros checking if some function calls produce that error value on the host system.


> no way you can do what autotools does in a Makefile

Well, you're responding to something that wasn't actually said; "these things" quotes specifically the list of requirements given by the previous poster -- CFLAGS etc.

Your example is indeed valid; it's an ugly case, and I agree that testing for the feature specifically (rather than for the arch or compiler) is always preferred. But let's look at the practicalities -- how many of these cases do I actually need in a codebase to tip the balance and justify the use of autoconf? In your example, a simple #ifdef against the platform (Windows/BSD/Linux) and it's gone. qsort_r is off limits for a massively portable program anyway, so autoconf's ability to help is limited.


> a simple #ifdef against the platform (Windows/BSD/Linux) and it's gone.

That's equivalent to the uname trick the GP was complaining about. That means your software won't compile on a lot of platforms.

On a small scale, I don't think that's a big problem. Somebody on those platforms only has to add the correct conditions to your #ifdef forest and if he sends the update back, other people on the same platform won't even have the same problem. It's not much different from software not working on untested platforms.

It starts becoming a problem when done often, or in high-level code (near the top of the call stack).


> That means your software won't compile on a lot of platforms.

It won't anyway; qsort_r has already limited me to a tiny number of platforms. If I care about further portability, autotools can't do anything to help me; my next step is not autotools, it's "don't use qsort_r".


> How do you test if qsort_r expects

    > (*)(void*, const void*, const void*) 
> or

    > (*)(const void*, const void*, void*)
You can write C++ code that uses templates and SFINAE to do it. I have. I will admit, it looks like garbage. But it is possible.


Who uses the latter form? I don't think I've ever seen that before.

Also, isn't the first supposed to be:

  void*, size_t, size_t, int (*cmp)(const void*, const void*)
Or are you using some wtf version that hides the element size and count behind a void*?


The signatures came from another answer. I have no idea what the signatures are, or should be, or why there's a difference.

My SFINAE code was to handle a similar problem: picking between a GNU-specific strerror_r and a POSIX strerror_r that have different signatures (both are available on Linux, determined by whether certain macros are set: https://linux.die.net/man/3/strerror_r ; I couldn't just rely on those macros because I wanted my code to compile on any POSIX platform).


You're commenting on the prototype for qsort() itself, whereas the original point was about the function pointer that is an argument to qsort_r().

Both Linux and BSD chose to add a non-standard qsort_r(), each choosing different ways to do it.
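
For reference, the two declarations look roughly like this (per the respective man pages):

    /* glibc */
    void qsort_r(void *base, size_t nmemb, size_t size,
                 int (*compar)(const void *, const void *, void *),
                 void *arg);

    /* FreeBSD / macOS */
    void qsort_r(void *base, size_t nmemb, size_t size, void *thunk,
                 int (*compar)(void *, const void *, const void *));
Note that the comparator's context argument moved, so the same callback cannot be used unchanged on both.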


I wish standard (POSIX) Make were sufficient, but it frequently isn't. While some Make features from GNU and BSD Make made it into the POSIX spec (such as "include"), I frequently find myself desiring a more orthogonal facility to integrate the output of external programs into Make builds. That is, by using POSIX Make's "VAR=`cmd`" syntax with backtick-quoted commands (or other shell evaluation syntax), you can define macros programmatically for incorporation into build rules as lazily evaluated text substitution variables, but you can use those only in build rules rather than in prerequisites or targets (where they get interpreted verbatim). A partial solution would be adopting an eagerly evaluated assignment into POSIX (such as GNU Make's VAR := $(shell ...), or the VAR != cmd syntax shared by BSD and GNU Make).
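
To make the limitation concrete, a sketch (the package name is illustrative):

    # Lazy, POSIX-style: the backticks are expanded by the shell each
    # time a recipe runs, so this only works inside command lines.
    CURL_CFLAGS = `pkg-config --cflags libcurl`

    main.o: main.c
    	$(CC) $(CFLAGS) $(CURL_CFLAGS) -c main.c

    # In a target or prerequisite the text is taken verbatim.  The eager
    # forms expand at parse time instead:
    #   CURL_CFLAGS != pkg-config --cflags libcurl            (BSD and GNU make)
    #   CURL_CFLAGS := $(shell pkg-config --cflags libcurl)   (GNU make)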

OTOH, while I'm not a big fan of autotools internals (especially libtool), being able to run "./configure && make install" on tens of thousands of F/OSS packages is something I'd hate to lose. In fact, the extreme consistency in installation procedures (including the use of pkg-config etc.) and discipline in directory layout across so many F/OSS packages is something I very much admire, given it's an unexpected outcome of a "Bazaar" style development model.


>"./configure && make install"

You forgot 'make'!

I'm tempted to turn this into a cheap shot about how the consistency obviously isn't worth much, but I won't.


Explicitly doing 'make' is redundant for makefiles generated by any version of automake I've ever used. Dependencies are set up correctly so that install also builds everything that it installs and isn't already up to date (I'm not sure whether 'install' also explicitly depends on 'all').


The usual reason to separate them is that if you're installing to a prefix that needs root privileges to write to, then you'll be doing something like:

    ./configure && make && sudo make install
If you do this instead:

    ./configure && sudo make install
...then the compiler toolchain gets invoked as root and all the intermediate build products end up owned by root.


There is another reason. `make -j4 && make install' would be faster than just `make install' (multi-process vs. single-process).


You can do `make -j4 install`. Works as intended.


He also says later in the post that you can write your own configure with a bit of Bourne shell. If you expect your software to be widely distributed, a custom build script is a pain for distributions, as most of them rely heavily on ./configure && make with the appropriate arguments. Distributions also rely on the ability to override certain flags, something easy to do with autotools and quite hard with plain Makefiles (compare ./configure CFLAGS=... and make CFLAGS=...; the former doesn't override upstream CFLAGS).

I don't dispute that many projects have convoluted and dated autotools scripts, notably because they don't age well. It's easy to accumulate a lot of crap.

Most projects don't adopt autotools because they don't know how to write a plain Makefile. They adopt them because autotools bring much more than that.


>The article downplays the value of the autotools a little bit. It's not just about building for 30-year-old hosts; it's also about providing a lot of convenient tools and providing the user with a known interface. If you write your own configure, people may expect to be able to pass additional CFLAGS (not overriding the existing ones), set the path to various elements, install in a different root, have certain targets in the generated makefiles (like dist, clean, install, uninstall), have cross-compilation work (including cross-compiling to Windows with mingw64, which solves the toolchain problem of releasing binaries for Windows), ... It's a tedious task and easy to get wrong.

So why not have a tool with 1/10 the complexity and 1/100 the legacy stuff of Autotools, but the same interface otherwise?

People who need the old checks 100% could continue to use Autotools-old; people who don't care for such BS could use the new one.


I don't understand what you mean, if it has the same interface as autotools how do you remove the legacy stuff without breaking anything?

Autotools is a huge pain in the ass for the dev; unfortunately, I haven't found any alternative that didn't end up being an even bigger annoyance.

A decent, simple, easily customizable and portable C/C++ build system is still very much an unsolved problem as far as I'm concerned (and I've tried quite a few of them). At least autotools are supported basically everywhere.


>I don't understand what you mean, if it has the same interface as autotools how do you remove the legacy stuff without breaking anything?

By having autotools for the projects that need "everything" (legacy BS checks) and this leaner version for the projects that don't need them.


I'm confused. Are you suggesting this leaner version should exist and it does not? Or are you advocating for a leaner version that I have not heard of?


I am suggesting a leaner version should exist.

The same way there's vim and neovim without the legacy stuff.


Have you tried cmake?


This whole discussion and article had me thinking about that.

CMake is faster than autotools and works fine with all the compilers talked about so far and all the Windows compilers I know of.

Creating a CMakeLists.txt covering a moderately complex build that links against a few libraries (but doesn't need any custom logic for moving files or other uncommon stuff) is normally just a few lines of code. Usually just one line of code per source file and per library (depending on how you feel about automatically including files, this can be further shortened), then a little bit declaring the language and other settings. There are plenty of 10-line CMakeLists that can build large and seemingly complex projects.


You don't have to use all the legacy stuff. See: https://autotools.io/whosafraid.html. You choose what tests you want to run. No need to check if you have unistd.h if you don't care about that.

As a rewrite using the same user interface but a completely different developer interface, there is mklove (https://github.com/edenhill/mklove). However, this just covers autoconf: you don't get the flexibility of automake. Also, I don't know how complete it is.

A rewrite just to speed things up a little is a bit useless. Autotools are not that slow. And if a rewrite were done, it would be a shame to keep the horrible syntax. What needs to be kept is the user interface.


If you use a simpler configure solution, then the user probably won't have much trouble figuring out how to deal with niche edge cases that they expect to work with autotools.

On the other hand, when something goes wrong with autotools, the first step is to pour yourself a drink.


It is true that Autotools offer some nice standard options to use when building and installing software. However, for whatever reason, the ability to figure out portability differences seems to be the selling point to many programmers ( http://queue.acm.org/detail.cfm?id=2349257 , https://varnish-cache.org/docs/4.0/phk/autocrap.html ).


The m4 macros (autoreconf) and ./configure are too slow when they don't need to be. Of course, './configure -C' may speed up running it a bit. Still not fast enough.

I really feel a need for something like AC_TRUST_HOST or AC_TRUST_DISTRO: something that would not check for each header or each function in some library. Instead it would check the versions of GCC and libc, and compile and test some (hopefully just one) program.

Of course, the common tests would be skipped only on popular distributions: say, Debian, Arch, Fedora (and their derivatives), to name a few.


In the past, I've wrangled with autotools more than I'd care to admit. I spent very many hours indeed perfecting my automake files, configure scripts, libtool and gettext integrations, making sure you can pass in CPPFLAGS and LDFLAGS as one might expect, cross compiling, with all the .in and .in.in and .h.in files, with all the m4 macros, publishing some of my own autoconf macros, all the while figuring out how to make it all work on Linux, mingw, the various bsd derivatives, AIX, Solaris, HP-UX, OS X, Visual C compilers, and what have you.

From the developer's perspective, it is an absolute nightmare – and I consider myself a seasoned professional with the various unix-like systems and their shells. Even with this deliberate care, dedication, and time spent preparing the scripts beforehand, the build usually doesn't work properly out of the box on a new platform.

My mistake was in thinking that the value in autotools system is that you can build a system that will compile for anything. I imagined all you need to do is use (or create) feature tests to sniff out any differences between platforms, and your code will build and work anywhere.

The real value of autotools is for the user. The user can run "./configure && make && make check && make install".

Don't make the mistake I did, thinking that autotools will save you time. It absolutely will not.


I really wish there were an alternative to the autotools that offered that configure/make/make install interface without the nightmarish developer experience. Perhaps by compromising on portability, if that is the only way.


CMake, SCons, maybe. This question was closed as not constructive, but the discussion looks reasonable to me:

http://stackoverflow.com/questions/600274/alternatives-to-au...


CMake was a step in the right direction, but it, too, suffers from thousands of under-documented options and fiddly behavior. In a former life I was tasked with maintaining a CMakeLists.txt for a project with multiple dependent libraries, binary blobs, open source dependencies that needed to get pulled in, and the whole thing had to be built for Windows, Linux, Mac, iOS, Android, and a handful of other lesser-known mobile platforms. I wouldn't wish that on my worst enemy.


It's not so bad. I have experience with two slightly different build systems based on CMake, which had to have customized toolchain definitions and build Cairo and WebKit. The Cairo build had a custom backend, which we integrated into Cairo's autotools. Cairo was driven using its autotools via CMake's "superproject" system.


https://github.com/roman-neuhauser/motoconf aims for that space, sadly it's very incomplete and dormant.


I wrote a simple PoC of a simpler way to achieve something similar:

https://github.com/SirCmpwn/scmake

It's definitely not something you should use, but I hope new things take a similar approach. It's small, give it a read. Here's a somewhat more complicated project that uses it:

https://github.com/SirCmpwn/libccmd

I think from a usability standpoint (both for devs and users) it's really great but the internals could use some work.


https://github.com/edenhill/mklove - mklove is end-user compatible with autoconf, without the nightmares.


  If you’re coding to POSIX, you must define the
  _POSIX_C_SOURCE feature test macro to the standard you
  intend to use prior to any system header includes:
Nooooooo.....

Take it from someone who religiously keeps many of his very complex libraries portable across many systems (AIX, FreeBSD, Linux/glibc, Linux/musl, NetBSD, OpenBSD, Solaris, and others): the _last_ thing you want to do is define the POSIX portability macros.

Once you do that, you're in for a _world_ of hurt trying to use any extension, including routines from newer POSIX standards that are almost universally supported but, because some systems don't claim 100% compliance with the latest POSIX release, become hidden once you start using the user-definable feature macros.

Most systems (particularly the BSDs) make _everything_ available by default. On Linux, however, you should define _GNU_SOURCE; on Solaris, define __EXTENSIONS__ and _POSIX_PTHREAD_SEMANTICS; on AIX define _ALL_SOURCE; on Minix define _MINIX. This way, anything you might possibly want to use is available.
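
For instance, the preamble this approach leads to looks something like the following sketch (placed before any system header include, or passed via CPPFLAGS):

    #if defined(__linux__)
    #  define _GNU_SOURCE 1
    #elif defined(__sun)
    #  define __EXTENSIONS__ 1
    #  define _POSIX_PTHREAD_SEMANTICS 1
    #elif defined(_AIX)
    #  define _ALL_SOURCE 1
    #elif defined(__minix)
    #  define _MINIX 1
    #endif

    #include <stdio.h>      /* system headers come after the feature macros */
The BSDs need nothing here, since they already expose everything by default.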

There are a few cases where a native extension conflicts with a POSIX routine. The classic case is strerror_r on glibc. These cases are easier to deal with than fumbling with feature macros.

Remember, you'll _always_ want to use extensions unless you're writing pure ANSI C (C90) code. Portable is not the same thing as POSIX compliant. And many POSIX routines are effectively extensions on systems not yet certified for the latest POSIX standard.


In my experience, I've found that the configuration management aspect of writing software is the least understood and hardest to learn.

All the books on learning languages I've read just skip it entirely, or provide a default setup with no explanation. Which is somewhat understandable. But then the documentation for various projects does the same thing. I'm left reading the man pages for various tools that some project says I need and asking "Why do I need this?" at a higher level than the man pages answer.

And in my professional life it's much the same. On one project we have one or two people who understand how the system is set up, reject any kind of change or improvement, and just keep throwing baling wire at it to keep the process going.

On my own project, partly as a result of institutional inertia, we have an ancient TFS build controller running a PowerShell script running gulp running webpack running babel, and then we deploy it using robocopy to a network share in IIS (using iisnode). It's amazing it all works. It often breaks, rarely with the same error. It also takes 10 times as long as building on my local machine.


I will write this rant here because it will get a bigger audience than any blog I would have.

Speaking as someone who occasionally ports software to less common environments, I have a few things to say that I was hoping to find in this article.

First the case for and against autotools:

autotools are well supported by every packaging system out there (freebsd ports, pkgsrc, apt, rpm, etc) where it's really just a line or three to do the build in the package definition.

The main things that are not usually thought about by developers but are loved by porters and packagers (and come out of the box with autotools) are: DESTDIR support (installing into an intermediate path, so $(DESTDIR)$(PREFIX)/bin/myapp) and cross-compilation, which is one of the biggest strengths of using C in the first place!

The biggest argument against autotools is that feature discovery at build time is complete bullshit. I hate it. It reduces determinism and ties execution environment to the build environment.

So if you can support DESTDIR and cross compilation without autotools, go for it! It's not that much extra work!
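
For a Makefile that keeps CC overridable and honors DESTDIR in its install target, the packager's side of it is as simple as (toolchain prefix and paths are illustrative):

    make CC=aarch64-linux-gnu-gcc
    make DESTDIR=/tmp/pkgroot PREFIX=/usr install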

---

The second point I was hoping this article would address is project layout. In the last few years a nice, standard-ish layout for portable projects has emerged but package maintainers end up having to teach it to every new (big) project.

Here are examples where it has gone okay: https://github.com/nodejs/node/tree/63243bcb330408d511b3945c...

https://github.com/golang/go/tree/964639cc338db650ccadeafb74...

Where you are writing to non-portable parts of Unix, you can put the non-portable stuff into its own files, named simply after the platforms, and include the correct stuff higher up in a .h.

and an example of it not going so well: https://github.com/dart-lang/sdk/issues/10260

(fwiw you can find https://github.com/mulander all over this "teaching" effort, so points to him)


Totally agreed about autotools. It's fine (if very slow) when it works; when it doesn't work right away, it's a massive pain.

Sometimes I need to build C code for iOS and Android. Libraries with a nice clean setup like the author of this article describes are great. Libraries that use autotools are more trouble than they're worth.


One of the nice (perhaps underappreciated) side effects of Rust has been first-class support for all major platforms. If you're linking against other Windows bits you still need to think about mingw vs. msvc, but beyond that you don't need to customize/hack up a build system to get things compiling sanely on both. I recently switched back to C for a particular project and there's been quite a bit of reverse-culture shock when it comes to setting up builds & projects.


Rust still doesn't have first class support for UWP.


UWP?

EDIT - I should clarify. I google UWP and it could be Universal Windows Platform. If so, how is that different than the windows builds that Rust does support?


Think of UWP as Windows Longhorn reborn.

It is the same programming model as .NET, but built on top of COM and just the minimal set of Win32 APIs to support COM based APIs.

It is also sandboxed, just like on OS X.

For Rust to support UWP, it needs to be able to consume COM with the new UWP semantics and at the same time expose traits as COM interfaces accessible to any programming language able to consume UWP libraries.


That doesn't sound like a good idea at all, any of it. Why is microsoft doing this?


Rust is pretty nice, if a bit rough with the cross-compilation stuff. The big problem is speed. Compiling Rust projects takes a while, and some operations like reading a file line by line are absolutely glacial.


Are you using a BufReader for those file reads? If not, system calls will slow you way down.


Yep. And it's something of a known issue. Using BufReader#lines is glacial in debug mode and merely slow in release mode. My understanding is that the BufReader is allocating a new String object for each line, so you're dealing with strict UTF-8 parsing and memory allocation.

My use case here is reading Vyatta config files. So lots of files that are generally (but not always) short. Mostly it's a fun teach myself Rust project, and it's hit lots of interesting edge cases in Rust, but otherwise I'd be all over just writing it in Python or Ruby.


Interesting, I did not know this.



> The nice alternative is MinGW(-w64) with MSYS or Cygwin supplying the unix utilities, though it has the problem of linking against msvcrt.dll

You can use MinGW's make but compile with cl instead of gcc, and that problem is not present. You will have to check if all compiler flags are supported, though. Or use a wrapper around cl which translates compiler flags; pretty sure that exists already.

> My preferred approach lately is an amalgamation build

That won't scale well, though. For large projects you'll really want to use something like what's mentioned above, or have separate Makefiles and VS projects, or use a build file generator like CMake.

edit: another thing I wanted to add: instead of using

  #if defined(_WIN32)
  void foo() {
    //win implementation
  }
  #else
  void foo() {
    //unix implementation
  }
  #endif
all over the place, another option is to split the implementations into source files like impl/win32.c and impl/unix.c, then have the build system deal with selecting the correct one. Especially when there are more than a couple of platforms, this is much cleaner and more convenient.
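
A sketch of what that selection can look like in the build system (GNU make syntax; names are illustrative):

    # Pick the platform-specific implementation file.
    ifeq ($(OS),Windows_NT)
        PLATFORM_SRC = impl/win32.c
    else
        PLATFORM_SRC = impl/unix.c
    endif

    foo: main.c $(PLATFORM_SRC)
    	$(CC) $(CFLAGS) -o foo main.c $(PLATFORM_SRC)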


> #if defined(PLATFORM)

I would recommend against that as well.

I have a directory structure where there's a module/hello.c (stub or generic, platform-independent code) and then there's a module/platform/hello.c which my Makefile selects over the generic one based on the platform. The primary issue is that there's some code duplication, and it's hard to keep .h files general across platforms (not possible all the time), but in general it does make life a lot easier - especially if the number of platforms grows over time or certain ones are removed.


That's quite similar to how Go does it (Go would have something like hello_platform.c), which in my opinion works quite well. I actually find it to be one of the nicest things about Go, compared even to Haskell, which for some bizarre reason uses the C preprocessor (!!) to pepper code with #ifdefs.


Go took this idea from Plan 9. In fact, Go shares a lot with Plan 9 C compiler structure.

Related: http://doc.cat-v.org/henry_spencer/ifdef_considered_harmful


> for some bizarre reason uses the C preprocessor (!!) to pepper code with #ifdefs.

I always thought that was funny. You are using the most powerful (of the widely used) language available for transforming text, but must fall back to the C preprocessor.


Interesting! I never went in-depth into Go, apart from spending an afternoon with it to have a go at it.


That has always been my approach as well, much easier than #ifdef spaghetti.


Completely agree about CMake. The language may be quirky, but it has solved all the hard problems. Often the build files will be trivially short as well.


Yes, I've been using CMake again after a few years of something else. I came back to CMake because it seems to be the build system that sucks the least.

I use CMake with Ninja, which is faster and nicer than plain old Make.

CMake completely falls apart if you're trying to do anything other than normal user-space applications that link with commonly used libraries. Some years ago I tried building a bare-metal project with CMake, and the job got done, but the experience was atrocious.

But CMake really shines in building "normal" applications and in particular cross compiling them.


Also, at least for those of us that care about C++, Google and Microsoft's adoption of it pretty much settles the question of which build system to use.


I don't mind Make for small projects, but it can be a bit of a nightmare for large ones - there are some very unwieldy Makefiles out there. It gets more complicated when you want to have several libraries and/or executables in a project, each of which link against different external libraries. I've always found CMake makes this much easier IMO, and while the language is a bit janky, it does the job.


Make is kind of write-only. I have this library of mine for image and graphics processing that I wrote and have been rolling along for years. In the beginning I wrote in all the functionality I thought I needed, basically the architecture of the project, and it does what I need - I can build a static or dynamic library; it works on Windows, MacOS, and Linux; it has "modules" which can be overridden based on whether there's a certain subdirectory within a module with an OS/platform name (basically compile-time multiplatform specialisation); and several other things. It works and is rock solid. However, if you asked me now what's in there and why and how... Just by glancing at the spaghetti code in there I know in general what's where, but damned if I'd be brave enough to make significant changes to it. It works though, and it works great. I must also point out that I use gcc (different versions) and clang here and there on all three platforms, and I change the compiler/tools in the environment itself, not within the Makefile.

If I were to do it again (I'm not an active developer anymore though - it's more for personal use now), I would do it with make again. There's a certain straightforwardness to it when writing it, and from my experience there's zero to almost no maintenance, but ymmv of course. I would probably look into CMake if I were using vastly different compiler tools (configuration-wise), but I'm not.


This guy has some really interesting blog posts about system level programming.


autotools sucks for developers, but as a user I highly value "make uninstall".

Every time I see software without a make uninstall, I have to manually create a textfile with the stuff the application installed so I can do a clean upgrade... it sucks.

Hell, most Windows software ships (mostly crappy) uninstallers, and most OS X software can be uninstalled by dragging the app folder into the Trash, but there are loads of *nix programs without uninstall support? What the f..k?

Oh, and *nix software using standard build systems has another advantage: it's usually trivial to package them in .deb/.rpm if you want to distribute e.g. custom ffmpeg/vlc builds on a fleet of servers.


For the thorny Windows platform: how about using Cygwin so you have the shell and regular Linux commands, but use the Visual C++ compiler for a more native exe?


Slightly off-topic or meta but still: the layout, CSS styles, fonts, colors and contrast on this blog (article) are just PERFECT. Rare occurrence.


>The man page also documents secure_getenv(), which is a GNU extension: to be avoided in anything intended to be portable.

GCC itself is portable, so?


That's like arguing that because cows are vegetarian, so is beef. Portable means that any system implementing the standard interface can use it; secure_getenv isn't part of the standard, therefore it's not portable.


No, it's more like arguing that if one is OK with the (many) platforms GCC covers, then it shouldn't matter if some extension is GCC-specific. Your program is still portable for your use cases.


Not everyone uses GCC, or glibc.


       secure_getenv() first appeared in glibc 2.17.
not GCC


I don't like plugging my projects on HN, but in this case I'll make an exception.

Though I've recently warmed to autoconf, I still agree it's overkill for most cases. For many years I have, like many others, maintained an ad hoc library of purely preprocessor-based feature checks that I would copy from project to project. When recently writing a comprehensive Unix system API module for Lua I basically ended up with a ridiculous number of feature checks. I broke those out into a separate project I called autoguess.

  https://github.com/wahern/autoguess
Autoguess is a config.h file that exclusively uses preprocessor-based feature checks. It's very comprehensive. Much more comprehensive than most need, but compared to autoconf, or relative to modern C++ projects, all those preprocessor conditionals are effectively free.

autoguess doesn't provide any compatibility routines--just the detection. (For compatibility routines see my Lua module, lunix. Compatibility interfaces can be tricky, and context matters, so I've learned to stay away from trying to comprehensively "solve" that problem. One needn't look further than Gnulib to understand the pitfalls of trying to maintain such a beast.)

Most of the code in the autoguess repository is a framework for running autoconf checks and comparing feature detection results with the autoguess header. The library is just the single file "config.h.guess".

Autoguess uses the same naming conventions as autoconf, though in the autoconf universe consistency can be poor and HAVE_FOO names can differ from project-to-project. However, autoguess recommends to use "#if HAVE_FOO" arithmetic conditionals, not "#ifdef HAVE_FOO". That makes it possible to override feature detection macros from CPPFLAGS. Autoconf's rule about using #ifdef is an archaic solution to a mostly non-existent problem these days--broken C preprocessors not evaluating an undefined macro as 0 in arithmetic expressions. (There's a 6 line preamble at the top of configure.ac in my repository that will fix how autoconf generates config.h files.)
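
A sketch of the difference (the macro name is illustrative):

    /* Guess a default, but let the builder override it, e.g. with
       CPPFLAGS="-DHAVE_EPOLL_CREATE1=0".  An #ifdef-style check could
       not be switched off that way. */
    #ifndef HAVE_EPOLL_CREATE1
    #  ifdef __linux__
    #    define HAVE_EPOLL_CREATE1 1
    #  else
    #    define HAVE_EPOLL_CREATE1 0
    #  endif
    #endif

    #if HAVE_EPOLL_CREATE1
    /* use epoll_create1() */
    #else
    /* fall back to poll() */
    #endif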

Of course, as new operating system releases are made, the autoguess checks can become outdated. Though many autoguess checks aren't directly reliant on version numbers, autoconf feature checking is, I agree, a more robust method. But autoconf checks aren't immune to regressions, either. There's no substitute for regularly building your software on various platforms.

Autoguess does much more than detect system and compiler versions, but I've benefitted greatly over the years from this project:

  https://sourceforge.net/p/predef/wiki/Home/


Why are people still so obsessed with C? There are better alternatives for almost any use case. Where there are none, portability is a non-issue (e.g. embedded stuff, although C is also not especially suitable for that use, yet still prevails because of the existing tooling).


Obsessed? I don't think so.

It's just that C has been around for 40 years, it'll stay around for at least another 40 years. You can still compile source code from 20-30 years ago with little or no modifications.

C gets the job done fast and efficiently.

That said, I'm waiting for a good excuse to learn Rust. Prior to that there have been very few alternatives to C.


I've seen C used for safety critical software. I've used it for that. I've seen the insane amount of tooling and support and processes and rules to make it suitable for the task it is utterly not suitable for.

It does not get the job done fast and efficiently, if you consider the development costs. The fast code the compiler generates could be generated from other sources just as well; it's not especially the merit of that language.


I write security critical software with C and I know exactly what you're talking about.

It's written in C because the tooling to analyze and certify it for security in embedded/automotive/aerospace is targeting C (or C++). For some industries, Ada might be an alternative.

It may be paradoxical, but you're not going to be able to write safety critical software in Rust because the tooling to certify it for security doesn't exist and it'll still take years to get there.

Do I like this situation? No, I do not. Do I think it's a big problem? No, not big enough that it can't be solved by pouring money and engineering resources on it.


I also write safety critical software, in C++. I disagree that the necessary tooling for certification doesn't exist for Rust. You don't need that much tooling. Measuring test coverage is the hardest part (and Rust doesn't do this very well currently, but it is possible). Most of the things coding standards for C (e.g. MISRA) require are irrelevant for safer languages, so you don't need complex checkers. For static analysis the story is similar.


In safety stuff, not only does the source have to be vetted; the compiler building the target binary must also be blessed and, hopefully, well understood.


I have seen a certified compiler (Green Hills) happily compile invalid code (not code relying on undefined behavior), despite its certifications - code about which even an ancient 3.x gcc complained (rightfully!).

These certifications unfortunately are often on the level of the golden shields CAs provide to customers to put on their sites as a sign of trustworthiness.

So yes, blessed is a nice word for it.


I hate the Green Hills C compiler as much as the next guy [insert huge rant], but the Blessed Compiler of Choice, in its frozen buggy state, at least usually means it's a KNOWN ENTITY. We code around the bugs, we compile in -O0 mode, etc., we analyze the assembler output to death, and so on.

I see these "certified" stamps less as "secure" and more as a huge slowing down of progress and change - which strangely can translate to more secure, since you know what you are dealing with.

If I could dream, I'd take a "certified" GNU Ada, or Rust, or something over C. But that is bound to take many years to happen, if ever. I think we'll sooner see Rust compiled to C which is fed to something like the abominable Green Hills, so security consultants can pore over the intermediate C output and put it through the abominable Green Hills.


For safety stuff I've worked on, the source had to be vetted, and then someone goes through line by line and confirms that the assembly generated for each line of code matches the source code. For C or Ada with conservative optimizations, this is fairly mechanical (and extremely boring), but means that trusting the compiler is not a requirement.


Then you must have worked on tiny software.


Blessing a compiler consists of running it on a trivial test suite and writing down the results, however bad they may be.


Actually, it is the merit of the language. C - and FORTRAN for that matter - are fast because their restrictions force you to program a certain way. A way that just-so-happens to be efficient (if not particularly safe).

For example, generally, people use a lot of static arrays in C. Arrays are efficient. The way you program with fixed-size structures like arrays tends to be different from how you program with dynamically sized structures, and that style of programming tends to be more efficient.

In the vast majority of cases, a C or FORTRAN program will be more efficient than other languages not because they are C or FORTRAN, but because they force you to use an efficient paradigm.

Even other compiled languages like C++ or D are, more often than not, slower than the C counterpart UNLESS they are programmed in a "C style".

e: To be clear, the fact that they are compiled low-level languages obviously does help, but I'm trying to make the point that it's not the only reason.


C is fast because decades of work have been spent on making good compilers and excellent libraries. C's lack of e.g. templates makes it more difficult to write efficient code in certain cases, c.f. sorting. I disagree that a typical desktop program that hasn't been specially optimized for speed will always turn out more responsive in C that, say, in Java.


> Prior to that there have been very few alternatives to C.

There were, but the market chose otherwise.


> There were, but the market chose otherwise.

That depends on what you mean by alternative to C. If you are just looking for a "portable assembly" or systems programming language: there were many alternatives.

If you are looking for a systems programming language that has a chance of building and running the same source code on multiple different systems, then you have to wait until POSIX, which is tied to C.

Of course you can well argue that once you have POSIX, C is not actually required. You just need a language that can call C-style functions (a trivial exercise for many languages) with compilers for the platforms you are interested in (not trivial, but straightforward work that has been done often enough that it is well understood).

Most people mean POSIX+C when they say there were no alternatives to C, and in this form they are correct. The other choices may be better for most definitions of better, but there are few alternatives that let you write code and have a chance of it running on something else.


You don't need POSIX at all if the language has a rich set of libraries.

In a way, I always thought of POSIX as the C batteries that ANSI didn't want to make part of ANSI C, to make it easier to create compliant compilers.

Which was kind of wasted effort, because to make it easier to port code, many C compilers outside UNIX always bundled a subset of POSIX with them.


>You don't need POSIX at all if the language has a rich set of libraries.

Agreed, but that increased the cost of porting a language to a new platform. The larger (and richer) the library, the more expensive it is. Unless your large library is built on a smaller internal library (like POSIX).

Note that I'm arguing that C was the first to really achieve this. I'm not arguing that C is the best choice, nor am I arguing that the other languages couldn't have reached that. There are other languages (some better than C) that could have done just as well, but for some reason didn't.


Out of interest, what were the alternatives? As much as I'd love a world in which we were on Lisp machines or running Dylan, these weren't exactly alternatives to C at the time to my knowledge.


Before C was brought into the world, OSes had been written in Algol and PL/I dialects since 1961.

At Xerox PARC they moved from BCPL into Mesa, used to write Xerox Star and Pilot OSes. Also one of the first IDEs, also known as Xerox Development Environment (XDE). The year was 1976.

Mesa eventually got automatic memory management support (RC with a local tracing GC for collecting cycles) and became known as Mesa/Cedar.

Niklaus Wirth created Modula-2 in 1976 after his first sabbatical at Xerox given his experience with Mesa, used it to create the Lilith workstation at ETHZ, this was followed a few years later by Oberon for the Ceres workstation, inspired by Mesa/Cedar after his second sabbatical at Xerox.

The OOP extensions that Borland added into Turbo Pascal are actually from Apple's Object Pascal, used to create Lisa's OS and the initial versions of Mac OS, before Apple decided to make the development tools appealing to the growing UNIX workstation market and introduced Macintosh Programmer's Workshop.

On MS-DOS compatible systems, which were written in Assembly, there was a plethora of Basic, Pascal, Modula-2, C and C++ compilers to choose from. Plus business languages like Cobol and xBase.

It was only with the success of Watcom C++ adoption among game developers, thanks to its DOS extender, and the move to OS/2 and Windows 3.1 that C and C++ started to grow in adoption.

However most developers on OS/2 and Windows 3.1 were actually adopting C++ frameworks like CSet++, OWL and MFC, or alternative environments like TPW, Delphi or VB. Mac guys had Powerplant.

On Windows 3.1, C++ patterns like RAII were already commonplace, and even though each compiler had its own library, all of them provided support for safe strings, vectors and some form of smart pointers.

Writing pure C on Windows, besides Microsoft themselves, has always been mostly done by those porting UNIX stuff into Windows.

Even Microsoft, by the time they released the Windows 3.1 SDK, introduced a new set of macros to try to make it safer to code in plain C.

https://support.microsoft.com/en-us/help/83456/introduction-...


C is the Lingua Franca, all other languages can easily link to it. It is the most portable language, doesn't introduce hidden runtime overhead, and it is a very simple language. As long as CPUs are around, C will be around too (whether this will also be true for C++ is quite another question).


>doesn't introduce hidden runtime overhead

Or protection.




