Yes... for running gcc you need to have gcc installed.
You don’t need gcc to run the software. It’s not burdening anyone that gcc was needed to build the software.
It’s very standard practice to have development dependencies. Why should autoconf be treated as an exception?
If they fail despite being available, it’s either a sign of a fragile tool or a badly maintained project. Both can be fixed without shipping a half-pre-compiled, half-source repo.
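For concreteness, treating autoconf like any other development dependency just means regenerating the build system from the checked-in sources instead of trusting a shipped script. A minimal sketch, assuming a typical autotools project with configure.ac and Makefile.am in version control:

    # Rebuild configure and friends from source, overwriting anything shipped.
    autoreconf --force --install
    ./configure
    make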
The configure script is not a compilation artifact.
The more steps you add to get the final product, the more errors are possible. It's much easier for you, as the project developer, to generate the script, so you should do it.
If it's easier for you to generate the binary, you should do that as well (reproducible binaries, of course). That's why Windows binaries are often shipped. With Linux binaries this is much harder (even though there are solutions now). With OSX it depends on whether you have the newest CPU architecture or not.
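To make "reproducible" concrete: build twice under pinned conditions and compare checksums. A rough sketch; myprog is a hypothetical output name, and SOURCE_DATE_EPOCH is the reproducible-builds.org convention for pinning timestamps embedded in the binary:

    export SOURCE_DATE_EPOCH=1700000000   # pin embedded timestamps
    make clean && make
    sha256sum myprog > build1.sha256
    make clean && make
    sha256sum --check build1.sha256       # passes only if the builds are bit-identical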
> If it's easier for you to generate the binary, you should do it as well (reproducible binaries of course).
I think that's the crux of what you're saying. But consider that if Fedora, Debian, etc. accepted pre-built release artifacts from upstreams, it would be even easier to introduce backdoors!
Fedora, Debian, Nix (all the distros) need to build from sources, preferably from sources taken from upstreams' version control repositories. Not that this would prevent backdoors (it wouldn't!), but it would at least make them easier to investigate later, as the sources would all be visible to the distros (assuming non-backdoored build tools).
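It also makes one class of tampering easy to spot: anything present only in a release tarball (such as a pre-generated configure script) stands out when you diff the tarball against the tagged sources. A hypothetical sketch, with placeholder project names and URLs:

    # Compare an upstream release tarball against the corresponding git tag.
    git clone https://example.org/project.git && cd project
    git checkout v1.2.3
    tar -xf ../project-1.2.3.tar.gz -C /tmp
    diff -r --exclude=.git . /tmp/project-1.2.3   # tarball-only files show up here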