
How do you build software in 2021 then? Do you write everything from scratch?



One can use the packages provided by the distribution. That's how it has worked forever for things like C/C++.

For example, distros like Debian even mandate that you build software for them relying only on things that are already packaged. No arbitrary internet downloads are allowed during the build. The build environment contains only what is provided by the declared build-dependencies of your package, which are themselves of course Debian packages, plus a common minimal build base (like a shell and some core utilities).
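
To make that concrete, here's a minimal sketch of how build-dependencies are declared in a package's debian/control (the package name "mytool" is just a placeholder; only the dependencies listed here end up in the build environment):

    Source: mytool
    Section: utils
    Build-Depends: debhelper-compat (= 13),
                   libcurl4-openssl-dev,
                   pkg-config

    Package: mytool
    Architecture: any
    Depends: ${shlibs:Depends}, ${misc:Depends}

The build daemon installs exactly those packages into a clean chroot and builds with the network cut off; anything not listed simply isn't there.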


What if the package you want isn't provided by the distribution?

What if the distribution's version doesn't support the features you need?

What if it can't be built by relying only on things that are already packaged?

How do you distribute your software so it runs on other distributions? Do you maintain a different build for each package manager?

What if you want to run on platforms that don't have standard package managers, like MacOS or Windows?

How do you update the packages without internet downloads while provisioning the build?

The C/C++ model has proven to be fatally flawed. It hasn't worked; that's why modern languages eschew reliance on the distributions' package managers, and why greenfield C/C++ projects have adopted the same per-project model.

I'd go so far as to say this model is a key reason why we need containerization to deploy modern software today - since you can't trust software written on distro X runs on distro Y because it packages dependency Z differently all the way up to the root of the dependency tree (glibc, usually).

The fundamental flaw of this model is that it inverts the dependency tree. Build time dependencies are local to individual projects and need to be handled separately from all other software. With few exceptions, Linux distros make this mistake and it's why we can't rely on distro package management for new projects.


> What if the package you want isn't provided by the distribution?

You package it.

> What if the distribution's version doesn't support the features you need?

You package the version that does.

> What if it can't be built by relying only on things that are already packaged?

You package the dependencies first.

> How do you distribute your software so it runs on other distributions?

Give them the source so they can package it.

> Do you maintain a different build for each package manager?

Yes, that's the whole idea behind distributions.

> What if you want to run on platforms that don't have standard package managers, like MacOS or Windows?

Well, on those platforms you downloaded random things from the internet anyway…

But nowadays even those systems have package management that can be used!
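
For example (the winget package id below is made up, but the commands are the usual ones):

    # macOS
    brew install wget
    # Windows
    winget install --id SomeVendor.SomeTool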

> How do you update the packages without internet downloads while provisioning the build?

It's not about disabling package downloads (which come from a trusted source, btw).

It's about disabling downloads from random places on the internet.

Also you can use a local package mirror. That's what the build systems of distributions do.
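
A sketch of what that looks like, with mirror.example.internal standing in for your local mirror:

    # /etc/apt/sources.list inside the build environment
    deb http://mirror.example.internal/debian bookworm main

The build then only ever talks to a mirror you control and can snapshot.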

> The C/C++ model has proven to be fatally flawed. It hasn't worked; that's why modern languages eschew reliance on the distributions' package managers, and why greenfield C/C++ projects have adopted the same per-project model.

Well, except for all packaged software out there…

> I'd go so far as to say this model is a key reason why we need containerization to deploy modern software today

Nobody needs that. Software from packages works just fine. All the Linux installations around the globe are proof of that fact.

> since you can't trust software written on distro X runs on distro Y because it packages dependency Z differently all the way up to the root of the dependency tree (glibc, usually).

Of course you can. It works perfectly fine. Once you've packaged it.

> The fundamental flaw of this model is that it inverts the dependency tree. Build time dependencies are local to individual projects and need to be handled separately from all other software. With few exceptions, Linux distros make this mistake and it's why we can't rely on distro package management for new projects.

More or less every distro does it wrong? And you know how to do it correctly?

Maybe you should share your knowledge with the clueless people building distributions! Get in touch with, for example, Debian and tell them they need to stop making this mistake.

By the way: How does your software reach the users when it's not packaged?

At least I sometimes hear that users demand proper software packages. But maybe it's just me…


I think you're conflating package management for distribution and package management as a build tool. This is the flaw that Linux distributions make. It's why flatpaks and snaps exist. On MacOS and Windows you distribute software as application bundles similar to flatpak/snap.

>More or less every distro does it wrong? And you know how to do it correctly?

Yes, exactly. For distribution you use snap and flatpak, or application bundles on MacOS and Windows. For building you use a package manager that can install vendored dependencies pinned to specific versions, and does it locally per project so dependencies are not shared across multiple projects. This is the tack taken by modern tools and languages.
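
For C/C++ that can look something like a per-project Conan manifest (conanfile.txt; the versions are only illustrative), which pins dependencies to the project instead of taking whatever the distro ships:

    [requires]
    fmt/10.2.1
    zlib/1.3.1

    [generators]
    CMakeDeps
    CMakeToolchain

Each project carries its own pinned set, so upgrading one project never breaks another.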

It's not my idea! It's what everyone building software in the last decade has migrated to.

Packaging software so it can be used by any distro's package manager is not viable, and reliance on such an outdated model is why using Linux sucks for everything but writing software to run on a single machine.

> At least I sometimes hear that users demand proper software packages. But maybe it's just me

And when they do, you almost never go through the official/trusted mirrors, because those will never be up to date; you host your own repo (maybe put up a signature that no one ever checks) so they just run `sudo add-apt-repository ; sudo apt-get install -y` from your install instructions.
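
Something like this, with a hypothetical example.com repo and package name standing in for the vendor's own:

    curl -fsSL https://example.com/key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/vendor.gpg
    echo "deb [signed-by=/usr/share/keyrings/vendor.gpg] https://example.com/apt stable main" \
        | sudo tee /etc/apt/sources.list.d/vendor.list
    sudo apt-get update && sudo apt-get install -y vendor-tool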


> It's not my idea! It's what everyone building software in the last decade has migrated to.

Well, everybody besides the biggest free software distributors out there, which are Linux distributions…

> Packaging software so it can be used by any distro's package manager is not viable, and reliance on such an outdated model is why using Linux sucks for everything but writing software to run on a single machine.

Yeah, I get it. That's why we need Docker…

Nobody is able to distribute software otherwise.

Well, except all those "distributions"…

> And when they do, you almost never go through the official/trusted mirrors, because those will never be up to date; you host your own repo (maybe put up a signature that no one ever checks) so they just run sudo add-apt-repository ; sudo apt-get install -y from your install instructions.

Wait a moment. Does this mean someone managed to build packages that work across different versions of, or even entirely different, distributions, which is obviously impossible, as we just learned?

This is getting hilarious!

I'm sorry for you that nobody wants to listen to your great ideas. Have you ever considered that you're completely wrong?


> Packaging software so it can be used by any distro's package manager is not viable, and reliance on such an outdated model is why using Linux sucks for everything but writing software to run on a single machine.

Sorry, could you elaborate on why Linux "sucks" for anything but writing software to run on a single machine, please? I haven't heard this argument.


... yes? I've often been ridiculed for wanting to build everything in-house, but events like these just validate that sometimes, to get the most reliable and best software, you'll have to reinvent the wheel a few times.


To be honest that's deserving of ridicule. Reinventing wheels is how you introduce fragility and instability, not prevent it.



