Ask HN: Programming without a build system?
63 points by mikewarot on Nov 12, 2022 | 106 comments
I'm an old MS-DOS / Windows programmer used to Turbo Pascal, Delphi, etc. The old days were simple, in that you compiled code, and gave it to your customers. Everything just worked.

Now every time I try to learn something new, there's always a build system in the way: with MSTOICAL it was Autoconf; switching to Linux, I couldn't get WikidPad to build; trying to build a lifeboat for Twitter, Python works, but then modules require builds that break.

Is it possible to have a system without Make or the like?

Alternatively, any good resources for the above?




In your MS-DOS / Turbo Pascal example, there are no third-party dependencies. There is just the platform and the libraries that come with the product when you install it, and your code built on top of that. In this scenario, you end up reinventing a lot of wheels in your own app code, although the advantage is that all the code does exactly what you want and nothing ever breaks due to some complicated dependency-hell upgrade.

You can do this today with any newfangled languages, just avoid third party libraries, or at least use them as a last resort, and choose options that are stable.

Autoconf is just a complicated system for finding and detecting third party libraries. If an application just uses the standard library, the build system can be very simple and reliable.

Most of your pain is actually due to 'package managers', not build systems per se.


> Autoconf is just a complicated system for finding and detecting third party libraries. If an application just uses the standard library, the build system can be very simple and reliable.

Another way to avoid autoconf is vendoring third-party libraries. If they are open source, one could just use Git submodules. You might need to call ‘./configure’ in the vendored library directory, but you don’t need to use autoconf for the application itself.


> just use Git submodules

Easier said than done. I have never gotten submodules to work the way I want them to, and I’ve since abandoned the idea completely.


The UX is admittedly awful, but once you get familiar with its warts submodules are entirely usable.


> The UX is admittedly awful, but once you get familiar with its warts submodules are entirely usable.

The point is not how much effort you need to invest in getting git submodules to eventually or occasionally work.

The point is whether vendoring the subproject as a prebuilt library/package is simpler or not, and whether it requires more work or not.

And it's abundantly clear that submodules are always inferior to vendored packages, which also have the advantage of being audit-friendly.


> Autoconf is just a complicated system for finding and detecting third party libraries. If an application just uses the standard library, the build system can be very simple and reliable.

It's slightly more than that.

Autoconf is a Makefile generator. It was designed to do system introspection, perform sanity checks, and configure projects so that they could be built and installed on multiple platforms.

It's always possible to switch back to plain Makefiles. You'd be foregoing sanity checks and abstractions that come for free, and you'd have to fill in the blanks left by the missing features: which compiler to use, how to configure it, how to put together Release and Debug builds, how to consume dependencies, and where to install the project.

Complaining about build systems is like complaining about a high-level language: you might focus on the bloat, but you completely miss the extra work of reinventing the wheel and maintaining it forever.


Don't avoid third-party libraries. Unless you're some sort of genius you're constantly going to be reinventing existing wheels, whether it's networking, dates, times, leap years, security stuff, and so on. And do you really need your own utility folder for manipulation of strings, lists, and the like? And as soon as there's one thing you feel isn't worth or possible to build yourself, you need a dependency after all, and you're back to square one. So I wouldn't search for a language without a package manager or a build system, but find a good one. And then you may find a language/community with a decent culture around their published packages (NPM sure has had its problems there).


In the case of the examples you cited, the IDE was the build system. In most cases you were using libraries that shipped with the development tools, so it appeared to be easy to manage since everything was done for you. Using third-party libraries wasn't too bad either. It typically involved changing a couple of parameters in the IDE. From my limited experience with Visual Studio, the same can be said of it.

Much the same can be said of simple Makefiles. If you are targeting your own machine, you can create a couple of variables to track libraries and such then tweak them much as you would tweak the configuration of a traditional IDE. The trouble comes with creating software that other people will build or while building software created by other people. Linux (actually, Unix in general) is notorious for its inconsistencies: libraries and headers aren't always in the same place, sometimes libraries are missing, sometimes one library is used in the place of another. Tools like autoconf help developers to work around that. Of course, modern build systems go a step further by pulling in dependencies.

It probably wouldn't be much of an issue, except for one thing: everyone seems to have their pet build system.


Odin is a programming language without any silly complex build system, it is as simple as "odin build .", even for large projects.

Linking is done by describing it as part of the code with its `foreign` system [1][2]. The benefit of this approach is that your code itself describes what needs to be linked against what. It also has the benefit that if you don't use a foreign library, it doesn't get linked against due to its minimal dependency system.

If you want to see huge examples of this, I recommend checking out the vendor library collection[3].

n.b. I am the creator of Odin, and one of the original goals was to create a language that didn't necessitate a complex build system and let you just get to programming. Requiring a complex build system and package manager is an anti-feature in my opinion, and makes programming a worse experience. Odin borrows heavily from (in order of philosophy and impact): Pascal, C, Go, Oberon-2, Newsqueak, GLSL.[4]

Niklaus Wirth and Rob Pike have been the programming language design idols throughout this project.

[1] https://odin-lang.org/docs/overview/#foreign-system

[2] https://odin-lang.org/news/binding-to-c/

[3] https://github.com/odin-lang/Odin/tree/master/vendor

[4] https://odin-lang.org/docs/faq/#what-have-been-the-major-inf...


+1 for Odin. It's a very nice language to use, very rarely has bugs in the compiler, and has never required more than an 'odin build .' to build my projects. It does a lot of things right and removes a lot of the cruft of other languages.


In the rare instances where you do have bugs in the compiler, have you been able to resolve them reasonably? Or is it deeper language issues that you then have to work around? Not trying to poke holes; it seems really cool and I want to try it myself soon.


Most compiler bugs I’ve seen (a rare occurrence) had decent workarounds that didn’t require much effort, and kept me moving while I waited for a fix (the community responds very fast).

Overall, the design of the language is very clean and well thought out. Every decision that was made, and continues to be made, feels like it was put there for a reason; and, for the most part, nothing feels out of place.


Odin has poor support for Windows:

https://github.com/odin-lang/Odin/discussions/2047


This issue is asking Odin to move away from Visual Studio (by which I assume they just mean the MSVC toolset), which is by far a highly preferred environment for C++ programming on Windows[0]. Saying it has poor support for windows because of this is a bit hyperbolic at best and misleading at worst.

Additionally, the issue has a build script using a different compiler and the asker basically said it wasn't good enough for them.

[0]: https://www.jetbrains.com/lp/devecosystem-2021/cpp/

According to this survey, Visual Studio is preferred by 24% of respondents, just behind VS Code and CLion. And MSVC is the 3rd most popular compiler, just behind gcc and clang. And keep in mind these results are across all C++ developers, not just Windows developers.


Visual Studio is a bloated mess, and has been for many years. It's at least 10 times larger than other options, such as MinGW-LLVM:

https://github.com/mstorsjo/llvm-mingw


> This issue is asking Odin to move away from Visual Studio (by which I assume they just mean the MSVC toolset), which is by far a highly preferred environment for C++ programming on Windows[0].

Your source does not support your claim at all.

Even though VS Code and Visual Studio are reported as popular IDEs, you're not talking about IDEs. You're discussing compilers and compiler toolchains, which is not the same thing as an IDE. Microsoft's MSVC compiler is not the same as Microsoft's Visual Studio. Visual Studio is an IDE which, among other things, uses MSBuild and the MSVC compiler.

The poll you quoted states that 78% use GCC regularly, followed by Clang, used regularly by 43%. Microsoft's MSVC compiler is in 3rd place at 30%. The same poll also points out that 55% use CMake and 36% use Makefiles regularly, with Visual Studio projects showing up in 3rd place at 31%.


You left out the last part of my statement, in context I said:

> by far a highly preferred environment for C++ programming on Windows

The survey doesn't split the responses up by OS. That's beside the point; I'm not trying to say MSVC is the most popular toolchain. I'm refuting the claim that only supporting MSVC on Windows constitutes "poor windows support".

> Microsoft's msvc compiler is in 3rd place at 30%.

Yes, and the question for this was "Which compilers do you regularly use?". 78% + 43% + 30% = 151% which is greater than 100%. I'm assuming this means that respondents were allowed to select multiple answers. It seems to make sense to me that people that use MSVC commonly use other compilers when developing for Unix, but the reverse isn't true nearly as much in my experience. Besides that, MSVC is in 3rd place and pretty close to the same amount of users as clang which was the proposed alternative. In that case it really doesn't matter which one you choose to support because you'll be supporting about the same number of people either way.

Anyways, this doesn't matter to the claim that I was trying to refute. OP said that Odin has poor Windows support and pointed to an issue that basically said if you only support MSVC on Windows that's not good enough. As a Windows developer, that seems like an odd stance to take imo. This survey shows that a third of developers use MSVC across all domains, including Unix. This seems to imply that if you support MSVC, you're at least supporting a large percentage of devs.

Also, I didn't mention the build systems because that's irrelevant. You can use CMake and Make with MSVC or whatever compiler you have installed. I say this as someone that regularly uses CMake and/or Make to compile projects on Windows using MSVC.[0][1]

[0]: https://learn.microsoft.com/en-us/cpp/build/reference/creati...

[1]: https://cmake.org/cmake/help/latest/variable/MSVC.html


That issue doesn't justify saying it has poor support. Rust is known to have excellent support for Windows, and has the exact same dependency.


I don't think you know what you're talking about. Rust has "windows-gnu", which doesn't require a Visual Studio installation.


The ad-hominem isn't advancing your cause lol. Rust defaults to the MSVC toolchain. Surely you knew that. You have to go out of your way to install the incompatible GNU toolchain.

You're the only one here complaining about MSVC. Have you stopped to consider why Rust and Odin and so many other languages are happily depending on MSVC instead of the alternatives? Or does none of that matter and your opinion is the only correct opinion?


> The ad-hominem isn't advancing your cause lol.

There was no ad-hominem. OP pointed out your initial claim contrasts with the facts.

> Rust defaults to the MSVC toolchain. Surely you knew that. (...)

This reads like a non-sequitur. The claim you've replied to was that Odin had poor windows support, and OP pointed out that Rust also supported Windows without msvc toolchains.

> You're the only one here complaining about MSVC.

Sorry, you're clearly speaking while oblivious to the discussion. The message you've replied to includes a link to a ticket asking Odin to shed its dependency on MSVC. It's clearly titled "move away from msvc".

https://github.com/odin-lang/Odin/discussions/2047

I believe you owe OP an apology.


> incompatible GNU toolchain.

incompatible? incompatible with what? I've been using it for years with no problem.

> You're the only one here complaining about MSVC.

The ad-hominem isn't advancing your cause lol.


For simple C or C++ programs I add this snippet at the beginning of the file:

  #if 0
  set -e; [ "$0" -nt "$0.bin" ] &&
  gcc -Wall -Wextra -pedantic -std=c99 "$0" -o "$0.bin"
  exec "$0.bin" "$@"
  #endif
Then make the file executable and you can use it like a script. All the flags are just there and visible.

However, it would be preferable to just have language tooling smart enough to do this without such tricks.


tcc also allows you to run C with

    #!/usr/bin/tcc -run
at the beginning of the file. I use that all the time while prototyping C.



Very nice!

One question if I may: how can I erase "$0.bin" after its execution?


Yes. Look into Go. Between the naming conventions and source placement, the built-in "go" tool understands how to find and compile everything. No configuration, project file, Makefile, autoconf, or configure file needed.


Well, there's still a build system (and an opinionated one!) in the mix, it just happens to be vendored into the same executable that provides the language/compiler. And that's great! A lot of folks like that tight integration.


So the solution to this guy's problem with build-systems is to use a different language? (Not just you, several commenters have proposed that solution). It's a bit like saying "Oh, that's because you're using Linux; get a better OS".


Well, he did mention he had tried a few different languages, so maybe they're actually looking for some of these suggestions?


Well, I guess I'm a slow learner; I reckon on it taking me a minimum of 6 months to become competent in a new language. It's not just the syntax and the libraries; every language I've ever used has had heffalump traps, but even if there are no traps, you still have to become sure there are none.

Most programmers I've worked with have become comfortable with a new language much faster than me.


If you already know how to program in a curly braces language then you can be productive in Go in a few weeks. The language is surprisingly minimal.

Like everything, it has footguns, but for the most part there are really no new concepts to learn in Go.


it is somewhat of a solution to his python grievance

having a decent standard library saves you from needing to use a bunch of 3rd party modules that may or may not need to be compiled
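
the same goes for python's own stdlib, by the way -- an illustrative sketch of a dependency-free JSON endpoint using nothing but http.server (handler name and payload made up):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # everything below ships with the interpreter: no pip,
            # no build step, no third-party modules to compile
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()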


Can confirm.

I don't program in Go but when there's Go software that needs fixing, it's so much easier to "go build" it and fix bugs in this unfamiliar language than it is merely to compile software written in many more familiar languages. Build systems that just work is a huge advantage.


Make still helps with Go, if for no other reason than to simply provide shortcuts (e.g. `make test` instead of the `go test` command with all its options one has to type out). The Makefiles are generally quite simple, though.


That’s not really any different than a shell script, right?


They compose differently and non-trivial projects still benefit from build tooling even with go.


If you can get into the beta, the Jai programming language by Jonathan Blow ticks the box of "no build system required." With how the build system is managed (just running arbitrary code at compile-time that modifies the build process), I can do this 99% of the time:

  #run compile_to_binary("name_of_binary");
  
  main :: () {
    // ...
  }

  #import "base"; // my own library that has 'compile_to_binary'
I'll go into depth about what this does, but if you're not interested, skip to the final code snippet.

The above code '#run's any procedure at compile-time (in this case 'compile_to_binary' which is defined in the library I imported). That procedure (really a hygienic macro) configures my 'workspace' to compile down to a 'name_of_binary' executable in the current directory. Any third-party libraries I've imported will be linked against automatically as well.

To do this without a dedicated procedure, just put this in the same file as 'main':

  #run {
    options: Build_Options_During_Compile;
    options.do_output = true;
    options.output_executable_name = "name_of_binary";
    set_build_options_dc(options);
  }

  main :: () {
    print("This is all I need to build a project\n");
  }

  #import "Basic";
  #import "SDL"; // or whatever...
Then compile:

  jai above_file.jai
I've done this for projects with hundreds of files that link in SDL, BearSSL, etc.

The best part is neither of these mechanisms is a requirement to build your project. Running the compiler against a Jai file will do everything I do with my own system (I just like having the ability to configure it in code).

Jai has been a breath of fresh air in terms of "just let me write code," so I highly recommend it if you can get in the beta.


If you can get into the beta. I'm a ten year C++ programmer and focus a lot on games but my dove's song didn't reach Jonathan Blow's ears, or something. I fear I might not be the "right" demographic. Which is disappointing.


I'd say to give it another go. There's a wide range of demographics in the beta, so it could've just been sent at the wrong time! Invites come in waves, usually at the release of a new version, so it could be a month or so before you get a reply.


I think you end up needing automations past a certain project size no matter what (for several reasons; the three largest are discrepancies between test/build and deployment environments, dependency management, and asset transformation if your ecosystem needs that sort of thing; others in these comments have gone into the "why" extensively).

But! I think the "can I get by without needing $big_build_toolchain?" question is excellent, and would recommend that you start by building automations yourself. That means writing small programs (often shell/batch scripts) to smooth over repetitive parts of your build/packaging/deploy process.
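
For example (a minimal sketch, assuming a C project laid out as src/*.c; the file names are made up for illustration), such an automation can be a dozen lines of Python that recompiles only what changed:

    import glob, os, subprocess

    objs = []
    for src in glob.glob("src/*.c"):
        obj = src[:-2] + ".o"
        objs.append(obj)
        # recompile only when the source is newer than its object file,
        # the same mtime comparison make performs for you
        if not os.path.exists(obj) or os.path.getmtime(src) > os.path.getmtime(obj):
            subprocess.run(["cc", "-c", src, "-o", obj], check=True)

    subprocess.run(["cc", *objs, "-o", "myapp"], check=True)

Writing and then growing a script like this teaches you exactly what the big tools are automating on your behalf.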

That approach yields a ton of understanding about how build/transformation/delivery software work with your platform of choice, and that understanding in turn makes you a way better programmer in a ton of different ways.

Now, this doesn't mean that you should stick with home-rolled automations forever; past a point, they become more or less equivalent to others' build systems and you can switch if you like. But switching to something from a position of full understanding of what it provides is a much easier and friendlier process than "I have to use tools A, B, C, and D just to get 'hello world' working, and don't know what any of those do".

I'd recommend this approach to anyone with the time; it really confers a lot of understanding and confidence. And if you don't have the time/you need a prototype deployed yesterday, don't worry about it; copy the Medium article snippets and learn what the tools actually do some other time--just try to make sure "some other time" isn't "never".


I think this is an excellent suggestion. When you do switch to a full build system you'll be able to troubleshoot any issues a lot faster.


It's perfectly cromulent to write software without a build system -- not even Make. Just write a build.sh script that compiles all the source and links it for you.

But the reality is you'll be giving up the features Make or another build system provides -- things like recompiling only the files that need recompiling, or, more fancily, automatic build configuration and dependency discovery.

When you were building on DOS or Windows, your Borland IDE or whatever handled these details for you. In the Unix world, everybody relies on a constellation of small tools to handle different aspects of the build process rather than just entrusting that to their IDEs.

I bet you can get CLion (C, C++) or Lazarus (Pascal) to work the way you remember Turbo Pascal working. But those only work for their respective languages.

If you really want to go whole hog, vendor any dependencies you have. That is, incorporate them into your project and build them all at once. That way you don't have to worry about builds breaking on systems that don't have them.


Rust has cargo, which just works and doesn't require you to set it up (cargo build --release; hand binary to the client, no makefile required).

but...

we have clouds now, so even if you find something, development will require CD/CI pipeline, github actions and so on.


> development will require CD/CI pipeline, github actions and so on

Nope, definitely doesn't. You can just distribute the source code alone, same as has been done for decades. And with languages like Go and Rust, it's quite fast and simple to build from source, even on Windows.


You had me until the cloud thing. Rust really is `cargo r --release`, then distro the binary. On Win, just works. On Linux, generally the user's OS has to be the same or newer than the compiler's. Can usually sort this out by compiling on an old OS via WSL.

Example: I have a molecular simulator I'm working on. Has 3D Vulkan graphics, a GUI, and a few dependencies. My mother got it working by opening the (3Mb compressed) binary after downloading from a dropbox folder.


Just use Visual Studio, Delphi, C++ Builder, Eclipse, IntelliJ, NetBeans, or Xcode and stick with the project files; no need to mess around with build systems.


Some open source projects like OpenCV are IDE friendly... thanks to build automation tools like CMake. Sure you can use good old make, but you can also load the projects files into your favourite IDE to build. Easy.

Other projects are unfortunately not. Say you want to build with Visual Studio. That means you have to manually configure the include/lib paths first, and probably build some libraries before building your main project, etc etc. Annoying.


Not really the thing you’re looking for, but for those looking for a toolless approach static web apps are a possibility. Host a folder on github pages, put an index.html file in there, start coding.

Plugging my own repo: https://github.com/jsebrech/create-react-app-zero

It is a version of create react app that works in that way, no build tools needed, only a static web server for local development.


I feel your pain. I've given up on so many projects because I got bored tinkering with build systems.

Ultimately, the answer to your question is going to depend on what you want to make. For instance, if you want to make an iOS app, it's going to be difficult to do so without using Xcode's build system.

Go is a very good language in terms of having a build system that stays out of your way. On Linux, you'd do something like `sudo apt-get install golang`, and then be on your way.


My preferred "build system":

Start with bash script.

Translate that to a tiny ninja script when it gets to be more than a page or two. (Ninja is technically a build system, but it is tiny and simple).

Translate that to a tiny python script that emits build.ninja when it gets to be more than a page or two. Not much more to it than globs and f-strings.

Overall that scales well, has no large dependencies (you can just dump a Ninja binary in the repo), and is fast.
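
To make the last step concrete, here's a sketch of such a generator (assuming C sources under src/ and a cc toolchain; the file names are illustrative):

    import glob

    with open("build.ninja", "w") as f:
        # one rule plus one build statement per source file covers
        # most of what a small project needs
        f.write("rule cc\n  command = cc -c $in -o $out\n")
        f.write("rule link\n  command = cc $in -o $out\n")
        objs = []
        for src in glob.glob("src/*.c"):
            obj = src[:-2] + ".o"
            objs.append(obj)
            f.write(f"build {obj}: cc {src}\n")
        f.write(f"build myapp: link {' '.join(objs)}\n")

Then running `ninja` gives you correct incremental rebuilds for free.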


Start with a bash script, end up having implemented a minimal HTTP 1.0 server in a bash framework: that's my story.


A lot of people have mentioned Golang, and that's fair, but this is HN so obviously Lisp can be the only correct solution. I'm not just kidding, either: if you want to get away from build systems entirely, and especially if you're focused on building compiler tooling and the like, consider using Racket. It's kind of like a "DSL for making DSLs", based on Chez Scheme and a small amount of C code. It provides a ton of tooling for common things you need/want/didn't-know-you-wanted when making programming languages. https://beautifulracket.com/ is a nice introduction to working with languages once you have a handle on the basics.


I'm not old enough to have used Turbo Pascal professionally. From digging through old code in old computer magazines, my impression is that Pascal projects were typically "fat projects", where the dependencies/libraries are stored inside the project, so you simply run the Turbo Pascal IDE and build it. Done.

Very different case today. We need to use various database/cryptography/imaging/network/etc. libraries fetched from GitHub or some other servers, and so far the most reliable way to make the build process work regardless of your OS is to use a build system.


> you compiled code

> gave it to your customers

You could start with an HTML page and vanilla JS. Use Bootstrap, Tailwind, whatever with a <link> tag. But then instead of a build system, you're wrestling with home-grown infra if it's not cloud.

The need for a build system should become apparent, but only once you're comfortable with the idea.

Without make, you were running `cc` or `gcc`. Even your previous languages may have had a big "Play" button in the IDE. Usually even those invocations are--behind the scenes--basically one super long command.

Classic ASP is another option. It's on Windows, IIS is free, and Azure (Microsoft cloud) supports it.

By the way, it appears Delphi is alive and well via Embarcadero [1].

If you pick Delphi and a web framework, you can put it in a Docker container and run it in the cloud. The "build" mechanism in that case is whatever you need from the container image's OS to bootstrap Delphi runtime, but that's well worth learning.

[1] https://www.embarcadero.com/products/delphi


This might seem pedantic, but there is always a build system. Even a basic C compiler knows how to include header files, find the system libraries to link in, and so forth.

A number of languages have compilers that are smart enough to find and build all your code, if you don't have any dependencies. It's, as others noted, the dependencies that create most of the difficulties.


> there is always a build system

not if you use an interpreted language


Unless your code is all in one file, your interpreter has to essentially build your app at run time by examining your import statements, and running rules for resolving them.
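
You can watch those resolution rules at work from Python itself; a small illustration using the stdlib's importlib:

    import importlib.util

    # each import triggers a search across sys.path and, for source
    # modules, a compile to bytecode; the spec records what was found
    spec = importlib.util.find_spec("json")
    print(spec.origin)  # filesystem path the interpreter resolved
    print(spec.loader)  # the loader that will compile and execute it

That search-and-compile step is the interpreter's own tiny build system, run on demand.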


So? If you keep your code in a single file and use an interpreted language, then there is not always a build system.


True. There's also no build system if you don't write any code, or do something totally unrelated like wood carving.

The point was to get people to view the issue slightly differently.


That's really stretching the definition of “build system”.


I literally began my comment with "This might seem pedantic", to emphasize I understood that.


I appreciate that the Julia base install and standard library includes dependency management and that compilation happens on the fly. The entire experience involves Julia itself modifying a few TOML files. It really helps when almost everyone using the language is using the same package management system.


Build systems transform source code into binaries, or whatever the needed target is for the target environment.

The alternative is live systems where the source code is compiled and interned for immediate use in the running system. Examples include: Lisp machines, Smalltalk, and Forth.

There might be other paradigms as well.


Came here to mention Smalltalk. In things like Smalltalk-80 and Squeak, there was no build system, there are no source code files, there isn't anything but the Smalltalk Development Environment. With something like ENVY/Developer, building involved generating an exported image from the environment.

If OP wants to try it: https://squeak.org/


Even ancient Smalltalk-80 provided —

"Within each project, a set of changes you make to class descriptions is maintained. … Using a browser view of this set of changes, you can find out what you have been doing. Also, you can use the set of changes to create an external file containing descriptions of the modifications you have made to the system so that you can share your work with other users.

The storage of changes in the Smalltalk-80 system takes two forms: an internal form as a set of changes (actually a set of objects describing changes), and an external form as a file on which your actions are logged while you are working (in the form of executable expressions or expressions that can be filed into a system). … All the information stored in the internal change set is also written onto the changes file."

1984 Smalltalk-80 The Interactive Programming Environment page 461

https://rmod-files.lille.inria.fr/FreeBooks/TheInteractivePr...


Someone suggested Go and I do believe it solves many of these problems in a nice way. IMHO, it’s a fun and reliable programming language.

However, you might also want to try some kind of PaaS. That can simplify those things, even for a Python application.

Fly.io is very nice[0]. AWS App Runner could be another interesting option.

Hope you’ll find something to your liking. Godspeed!

[0] https://fly.io/docs/languages-and-frameworks/python/


In the JavaScript ecosystem, check out Deno. Built in formatting, type checking, test runner, benchmarker, and dependency management via web standards. Minimalism is pretty core to their philosophy.


I tried Deno out extensively last year and found it wasn't really ready for production yet. There are a lot of packages missing for addressing common use cases, and the node -> deno converters were really hit or miss.

Have you had a good experience with it?


The question didn’t ask about production readiness. But I would say that it depends on your use case.

For example, I think Deno is an excellent alternative to Node for general purpose scripting, but that doesn’t require production readiness.

On the other hand, platforms like Slack[1] and Netlify[2] are betting on Deno for their next generation developer platforms. These platforms are still considered beta, but I think that’s a decent indicator to track status of production readiness.

- [1]: https://api.slack.com/future

- [2]: https://www.netlify.com/blog/announcing-serverless-compute-w...


Unfortunately, many FOSS projects, especially the successful ones, have a history of devs getting burned out and new ones coming along. Each new team has a set of favorite tools - even within the same language. They want to "clean up the mess" with a meta-build that replaces (or even worse, wraps) the legacy make system with a (supposedly) smarter, better system.

This is how we end up with tools on top of make like configure, autoconf, cmake etc.

If it's any consolation, the front end has gotten even worse. Try understanding a react project if you haven't internalized the architecture... typescript.tsx --> javascript.jsx --> javascript.js --> webpack --> minify --> god knows what.

The irony is, js is an interpreted language! Didn't we (boomer coders) learn that interpreters act directly on code, and only compiled languages needed to be built? Forget about that.

Aaand.. wait for it -- webgl, websockets, webasm! Our woes are just beginning.


> This is how we end up with tools on top of make like configure, autoconf

In the case of configure and autoconf, it isn't.

Those were created to solve a problem that a Makefile by itself couldn't: portability across the plethora of Unix systems and a few others.

Nowadays a lot of software is not written with high portability in mind, and much of it is Linux only, but at the time Autoconf came out (1991) Linux barely existed, and there was no dominant unix-like OS. There were many different ones.

As an end user, I found Autoconf quite refreshing. It made software portable so that people could build and run it on their systems, especially systems the software author didn't know about or couldn't test on. The consistent ability to run "./configure && make" and have it just work was a big improvement over the myriad ad-hoc ways of configuring downloaded software that were common at the time.


There's still nothing that beats imake for the X Window System. From Wikipedia: "imake generates makefiles from a template, a set of C preprocessor macro functions, and a per-directory input file called an Imakefile". And it does all this within the byzantine cathedral that is X.


Just compile one mega C file into your final exe (`cc input.c -o output.exe`) but it seems like none of the new stuff likes making it that easy. Just the other day I was trying to discover how to turn typescript into plain javascript and the answer seemed to be "install node". (Fuck that.) A shame because the old version of FOO was plain javascript and managed to run (in the browser) from the source tree.


No one has asked: what is the pain point with using make? Because the answer will depend on that.

Otherwise vanilla JS on the web is a good option to avoid build systems. There are a lot of bundler services that provide URLs to import dependencies right there in the file. For example if you need d3 or similar you still don’t need to build.

I believe you can make Chrome extensions without a build too. Not sure about Electron; I don't recall, but you probably do need one, although maybe not during the development/iteration process.


I've found make to be fine, if you avoid those fancy newfangled "features" and use rules and variables. It even has default rules for building C and for linking!


I don’t think it is for any reasonably complex project.

What I find frustrating is how every language has its own build system. I don’t see why (in principle) we can’t have one build system with rule sets for each language. Language designers could avoid reinventing the wheel and just write a set of rules.

Cross language stuff, such as generating an API client from a server application, would be much simpler under one system.


This is kind of the point with systems like blaze/bazel: it is a unified system with rules per language.
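
For instance, a hypothetical Bazel BUILD file (written in Starlark, a Python dialect) can declare targets in multiple languages under one system; target names here are made up:

    # cc_binary is built in; the Go rules are loaded like a library
    load("@io_bazel_rules_go//go:def.bzl", "go_binary")

    cc_binary(
        name = "server",
        srcs = ["server.c"],
    )

    go_binary(
        name = "tool",
        srcs = ["tool.go"],
    )

Same syntax and one dependency graph; only the rule sets differ per language.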


Yes.

You can use a simple build script and a C/C++ compiler like Clang or Zig or Tcc.

You can also use the small libc by Justine Tunney to build once and run your code anywhere: https://justine.lol/cosmopolitan/

We have fast computers, if your project is not huge with millions of lines of code you can easily skip the build system.


I think you want some golang. Gets out of your way, can compile it to any target from anywhere. Compiles blazing fast.


That is mostly true, but I've run into issues where you have to actually compile on a mac to build mac-compatible binaries in certain circumstances.


a bit OT, but maybe relevant to the viewpoint on this question.

i've used turbo C in the '80s - afaik the system was very similar to the turbo Pascal environment:

it was an early form of an IDE ... imho, this was THE outstanding feature of all these turbo $LANG environments by borland.

* https://en.wikipedia.org/wiki/Borland_Turbo_C

just as an example: in the late '80s i used it on an atari 1040STF - as the name suggests, it had 1 MB (!) of RAM, w/o a HDD (!) ... i was a teenie back then & HDDs were prohibitively expensive.

i copied the whole turbo C environment during the system-startup from a floppy-disc into a (dynamic) RAM disc - about 700 kB - and worked with the rest of the available RAM :)


Many Lisp(-ish) languages and Smalltalk(-ish) languages get by without any external build system.


There's a big programming language that's discarded nowadays because it's not cool anymore and because it's classified as insecure (just as is happening to C and C++): PHP.

You write PHP, put it behind php-fpm and httpd/nginx, and you're done.


This really speaks to me. Modern software is too hard to assemble from source. If you're shipping sources, every moving part you add increases the odds of something going wrong on other people's computers.

It's worth having some skepticism of tools. By making some operations easy, tools encourage them. Build systems make it easy to bloat software. Package managers make it easy to bloat dependencies. This dynamic explains why Python in particular has such a terrible package management story. It's been around longer than Node or Rust, so if they seem better -- wait 10 years!

For many of my side projects I try to minimize moving parts for anyone (usually the '1' is literally true) who tries them out. I work in Unix, and one thing I built is a portable shell script that acts like a build system while being much more transparent about what it does: https://codeberg.org/akkartik/basic-build

When I use this script my build instructions are more verbose, but I think that's a good thing. They're more explicit for newcomers, and they also impose costs that nudge me to keep my programs minimalist.

You can see this build system evolve to add partial builds and parallel builds in one of my projects:

https://github.com/akkartik/mu1/blob/master/build0

https://github.com/akkartik/mu1/blob/master/build1

https://github.com/akkartik/mu1/blob/master/build2

https://github.com/akkartik/mu1/blob/master/build3

https://github.com/akkartik/mu1/blob/master/build4

Each of these does the same thing for this one repo -- build it -- but adding successively more bells and whistles.

I think providing just the most advanced version, build4, would do my users a disservice. It's also the most likely to break, where build0 is rock solid. If my builds do break for someone, they can poke around and downgrade to a simpler version.


You can build Java straight from a JetBrains IDE. No build system needed.


This and the OP's experience reminds me of the world before build systems and automated build/deploy: But it works on my machine!


It's Java. As long as you make a fat jar it will work on that OS.

Of course, if you use this feature then the only difference it will make is that everybody has to use the JetBrains IDE.


Not unless you also bundle a JVM and a bunch of other things. Crypto libraries, back in the bad old days of export controls, were a definite problem, and certificate management can still trip up Java.



Not really. cc will still compile a single source file. So will rustc.


Definitely possible to not worry about make or autoconf, you just have to either

- not need any dependencies, or

- use almost any more modern systems language like Rust, Zig, or Go


> trying to build a lifeboat for Twitter, Python works, but then modules require builds that break.

> Alternatively, any good resources for the above?

There are many, unbelievably many writeups and tools for Python building and packaging. Some of them are really neat! But paralysis of choice is real. So is the reality that many of the new/all-in-one/cutting edge tools, however superior they may be, just won't get long term support needed to catch on and stay relevant.

When getting started with Python, I very personally like to choose from a few simple options (others are likely to pipe up with their own, and that's great; mine aren't The One Right Way, just some fairly cold/mainstream takes).

1. First pick what stack you'll be using to develop and test software. In Python this is sadly often going to be different from the stack you'll use to deploy/run it in production, but here we are. There are two sub-choices to be made here:

1.a. How will you be running the Python interpreter itself in dev/test (in other words, which Python language version you will use)? "I just want to use the Python that came with my laptop" is fine to a point, but breaks down a lot sooner than folks expect (again, the reasons for this are variously reasonable and stupid, but here we are). Personally, I like pyenv (https://github.com/pyenv/pyenv) here. It's a simple tool that builds interpreters on your system and provides shell aliases to adjust pathing so they can optionally be used. At the opposite extreme from pyenv, some folks choose Python-in-Docker here (pros: reproducible, makes deployment environments very consistent with dev; cons: IDE/quick build-and-run automations get trickier). There are some other tools that wrap/automate the same stuff that pyenv does.

1.b. How will you be isolating your project's dependencies? "I want to install dependencies globally" breaks down (or worse, breaks your laptop!) pretty quickly, yes it's a bummer. There are three options here: if you really eschew automations/wrappers/thick tools in general, you can do this yourself (i.e. via "pip install --user", optionally in a dedicated development workstation user account); you can use venv (https://docs.python.org/3/library/venv.html, the stdlib version of virtualenv, yes the names suck and are confusing, here we are etc. etc.), which is widely standardized upon, and manually run "pip install" while inside your virtualenv, optionally integrating your virtualenv with pyenv so that "inside your virtualenv" is easy to achieve via pyenv-virtualenv (https://github.com/pyenv/pyenv-virtualenv); or you can say "hell with this, I want maximum convenience via a wrapper that manages my whole project" and use Poetry or an equivalent all-in-one project management tool (https://python-poetry.org/). There's no right point on that spectrum; it's up to you to decide where you fall on the "I want an integrated experience and to start prototyping quickly" versus "I want to reduce customizations/wrappers/tooling layers" spectrum.

2. Then, pick how you'll be developing said software: what frameworks or tools you'll be using. A Twitter lifeboat sounds like a webapp, so you'll likely want a web framework. Python has a spectrum of those of varying "thickness"/batteries-included-ness. At the minimum of thickness are tools like Flask (https://flask.palletsprojects.com/en/2.2.x/) and Sanic (like Flask, but with a bias towards performance at the cost of using async and some newer Python programming techniques which tend, in Python, to be harder than the traditional Flask approach: https://sanic.dev). At the maximum of thickness are things like Django/Pyramid (https://www.djangoproject.com/). With the minimally-thick frameworks you'll end up plugging together other libraries for things like e.g. database access or web content serving/templating; with the maximally-thick approach those are included but opinionated. Same as before: no right answers, but be clear on the axis (or axes) along which you're choosing. (For a taste of the minimal end, see the Flask sketch at the end of this comment.)

3. Choose how you'll be deploying/running the software, maybe after prototyping for awhile. This isn't "lock yourself into a cloud provider/hosting platform", but rather a choice about what tools you use with the hosting environment. Docker is pretty uncontentious here, if you want a generic way to run your Python app on many environments. So is "configure Linux instances to run equivalent Python/package versions to your dev/test environment". If you choose the latter, be aware that (and this is very important/often not discussed) many tools that the Python community suggests for local development or testing are very unsuitable for managing production environments (e.g. a tool based around shell state mutation is going to be extremely inconvenient to productionize). I guess nowadays there's an opinionated extreme in the form of serverless environments that eat a zipfile of Python code and ensure it runs somewhere; if you choose to deploy on those, then your decisions about points 1 and 2 above will likely be guided/reassessed based on the opinions of the deployment platform re: what Python versions and packaging techniques are supported.

Yeah, that's a lot of choices, but in general there are some pretty obvious/uncontentious paths there. Pyenv-for-interpreters/Poetry-for-packaging-and-project-management/Flask-for-web-serving/Docker-for-production is not going to surprise anyone or break any assumptions. Docker/raw-venv/Django is going to be just as easy to Google your way through. And if you really don't want to think about things right now and just want to get hello-world in your browser right away, there's always cookiecutters (e.g. https://github.com/cookiecutter/cookiecutter-django).

Again, no one obvious right way (ha!) but plenty of valid options!

Not sure if that's what you were after. If you want a "just show me how to get started"-type writeup rather than an overview on the choices involved, I'm sure folks here or some quick googling will turn up many!
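
And since Flask keeps coming up, here's how small the minimal end of point 2 really is; a hello-world sketch (assuming "pip install flask" inside whatever isolation you picked in 1.b):

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "Hello, world!"

    if __name__ == "__main__":
        # dev server only; production serving is a point-3 decision
        app.run(debug=True)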


What a mess. I wish there was a viable alternative for people working in machine learning that isn't goddamned Python.


Docker container. Minimal Python API server for just the ML parts. Another language for literally everything else.


I use Perl like this. It works very well for me.

Perl has very flexible syntax, but I stick to the readable bits.


Make a .deb/.rpm package that depends on packaged dependencies that are already built.


Javascript

ESM Modules need no build system.

(actually using Next.js)


> actually using Next.js

In the browser too, using: <script type="module">


Borland C++ 3.0 included make.


Details I didn't include but should have (I wasn't sure I'd have any replies at all... I should have had more faith, sorry)

It's a bit of a ramble, sorry about that.

MSTOICAL[0] is a fork of an old C-based Forth variant; it took some help from the HN community[1] to get it to compile in a modern 64-bit environment, for which I am very thankful. However, it uses Autoconf to configure, build, install, etc... and I can't for the life of me figure out how to remove all of that logic. (C isn't my primary language; I'm willing to learn that, but adding Autoconf on top of it was too much.)

I was hoping to strip out Autoconf and pack it all into one HUGE C source file, and be done with everything but a single shell script and/or windows batch file to compile it, so it could be cross-platform.

In order to start work on that, I was willing to switch to Linux (Ubuntu)... I got everything up and running for the most part, but then I couldn't access WikidPad[2], my local wiki with my appointments, etc. I missed a doctor's appointment because of that, so went back to Windows.

WikidPad is written in Python, so you would think cross-platform would be a slam dunk. The issue is that the wxWindows library it uses, for reasons I'm sure they think are sufficient, changed the names and nature of variables in some calls. Because you have to build from source, that means someone has to update it to fix those breaking references. I don't have the skill to do so. The community seems to have moved on from WikidPad despite its once wide popularity.

On Windows, you just download an old EXE installer and you're good to go.

So finally... I'm back in Windows 10, and decided to try to craft together a twitter clone in Python with a bunch of weird ideas that I tossed out at 3:30 am in a twitter thread, and put into a more coherent manifesto.[3]

There are a ton of Python libraries that seemed like they'd be trivial to hook together, but when I tried to use Flask, the build system issues broke things, so again, faced with the same issues... I wrote the above "Ask HN"

[0] https://github.com/mikewarot/mstoical

[1] https://news.ycombinator.com/item?id=30957273

[2] https://github.com/WikidPad/WikidPad

[3] https://github.com/mikewarot/iceberg/blob/main/MANIFESTO.md


Things went south when Java's Maven gained traction. Before that, projects usually contained all their dependencies in binary form (.so, .dll, .jar, etc.) or sometimes in source form. Then Maven showed up and popularized the instant-gratification build system that downloaded 25% of the internet before anything could compile.

Other languages followed suit and monstrosities like npm took over in the land of Javascript. Meanwhile Python went totally off the rails and became the absolute worst developer experience of any popular platform today.

Sadly, we're not likely to see the olden days come back. Some people grab a bunch of random shit and put it in docker thinking they found the solution. Unfortunately that strategy comes with its own issues and it's really just caking more complexity on top of what is already much too complex.


Docker keeps the insanity of irreproducible environments contained (no pun intended), and compared with all the solutions that came before, it works very well for that purpose.


Long before Maven came around there was Ant, and before Ant there was Mozilla Tinderbox.


Those did not come with a centralized repository of crap that every project pulled from. You composed your dependencies carefully and checked them into your source code control.


Ah, I think I get what you're saying now. So it's not just the complexity of the build system, it's the system of dependency resolution and management across a networked multi-repository of components. That does add complexity and risk, as the left-pad incident (and others) have made well-known. Come to think of it, besides javascript with npm, what other build systems consult only a single central repository? Is having a single central repo (or at least, appearing as a single hub) simpler, or is something like Go's modules, which can pull from any number of repos, public or private, added complexity?



