
Debian's dpkg and apt are both GPL2+. What's the reason they are reinventing the wheel here? Is there some kind of licence incompatibility that the GNU project cares about? Is there some kind of major architectural difference? Why is it important enough to fragment Free Software developers over?

I feel that this should be answered in an FAQ, but I can't find the answer anywhere.




From the e-mail:

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection

Maybe that’s enough? (I don’t understand package managers.)
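For the curious, those features look roughly like this in practice (a sketch using today's guix command names; the exact spelling may have differed at announcement time, and "hello" is just an example package):

    # Install into your own per-user profile; no root required.
    guix package --install hello

    # Upgrades are transactional; if the new generation breaks, roll back.
    guix package --upgrade
    guix package --roll-back

    # Unreferenced old generations can be reclaimed by the garbage collector.
    guix gc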


Then, why reinvent the obvious other wheel?

http://nixos.org/nix/


It's not a reinvention, it's a fork: "Guix is based on the Nix package manager." [https://savannah.gnu.org/projects/guix/]

(More info here: https://lists.gnu.org/archive/html/guile-user/2012-07/msg000..., taken from icebraining below)


I'm not sure it's even a fork. It appears to plug a different backend/frontend into the Nix structure, making it more of a "patch", for want of a better word. (At least that's what I gleaned from http://www.fdn.fr/~lcourtes/software/guile/guix-ghm-2012.201...)


According to the site of the project, "Guix is based on the Nix package manager."


The "unprivileged package management" is the important feature here from the GNU point of view. One of the philosophical underpinnings of the GNU project is empowering non-superusers as far as possible - that's why they've stuck with the microkernel design, too. The idea there is that non-superusers would be able to easily run their own filesystem or TCP stack, for example, since it's just a matter of running their own daemon.

Once you understand this, a lot of GNU's technical decisions make more sense.


Those things are never going to be adopted by Fedora, Arch, Gentoo, or any other non-Debian based distro. That is reason enough not to use it.


Why? Because you predict it won't meet your definition of popularity? Once upon a time, Arch and Gentoo were dismissed in exactly this way.


Arch and Gentoo are still tiny fractions of the Linux install base (which is itself a tiny fraction of users), and are mostly irrelevant.


Unless you believe that those who write software have more influence over the direction of the community than those who just apt-get/yum install it.

In that case they are a small fraction of users, but not an irrelevant one.


Irrelevant for what, to whom? Everything is a tiny fraction of something -- are you saying that value is purely a function of popularity?


Perhaps it doesn't matter. If this attracts a large enough software base, then Fedora, Arch, Gentoo, etc. could just write a guix->(dpkg|rpm|...) converter to keep their distributions up to date. Not sure whether that would be possible, but moving work upstream, where more detailed and in-depth knowledge is available, might be a good thing.


With guix/nix you don't even care whether they get adopted or not; you can use them in coexistence with any package manager.

A very common scenario I come across a few times a year: you get to work on an old CentOS/RedHat/Fedora box, but you need to install the latest package X, which is only in the next version of the current distro. That would mean upgrading or doing a lot of manual work. With guix/nix you just install it, no upgrade needed.
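Something like this, assuming Guix is already installed on the box (the package name is purely illustrative):

    # Install the newer package into your own profile; yum/rpm stay untouched.
    guix package -i hello

    # Make the per-user profile visible to your shell.
    export PATH="$HOME/.guix-profile/bin:$PATH"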


"What's the reason they are reinventing the wheel here?"

Maybe they'd eventually like to stop having to say GNU/Linux.


As long as they use the Linux kernel, they don't have many other naming choices.


Until they have their own gkernel.


GNU Hurd?


Ah, I was unnecessarily flippant; stopped paying much attention to it in the 90s.

https://en.wikipedia.org/wiki/GNU_Hurd#GNU_distributions_run...

So what would be a compelling technical or business reason to use this, either now or when they get to wherever they're going?

I suppose it would be great for study.


> I suppose it would be great for study.

Does anyone still care that much about microkernels, which were the standard design for state-of-the-art OSes back in the 1980s?

Research now seems focused on VMs, which do have current business uses, as they have had for decades.

The Hurd is, at least from the outside, a 1980s design that failed to catch on in the 1980s and is now a solution looking for a problem. At least the userspace intended to go with the Hurd proved to be high-quality and very widely usable.


Yes, people still care about microkernels, particularly in research, security, and embedded systems. Development didn't end in the 1980s and is still going on now. What's interesting is that some of those kernels from the 1980s were architecturally superior to the commercial OSes used today. It's really a mistake to discount the technical advantages of these kernels due to lack of popularity.

The main problem with them is simply lack of manpower. Research usually means that older solutions are replaced with newer ones, which leads to a lot of wasted effort where those changes aren't backward compatible. There's also the huge effort of keeping kernels up to date with hardware, and of porting over the thousands of software packages that people typically use in day-to-day activities.

That's perhaps the real advantage of the current popular kernels - they have a strong requirement for stability and introducing breaking changes is out of the question. It's a propagating effect too, due to the many layers of dependencies we have - modifying the lowest layer, the kernel, has the biggest overall effect on the entire operating system.

That's why the current VM (or chroot/jail/namespace) solutions are being pushed and researched - they bring some of the advantages of the microkernel design to modern computers without completely breaking everything. A graphical application in user space, for example, shouldn't care whether it's running in a VM or on bare metal; it only cares about its dependencies.

> The Hurd is, at least from the outside, a 1980s design that failed to catch on in the 1980s and is now a solution looking for a problem.

The Hurd is still a problem, rather than a solution. Its original goal is in part a failure because of design problems in the Mach microkernel. There have been attempts to put the Hurd on other microkernels, which has pushed it further into a research position, but there are certainly things to learn from the project's history about how not to create an OS on a microkernel. It's also not the only project still running with a microkernel design (see HelenOS, Genode, etc.).

And it's not like the research is completely wasted even if these projects don't gain popularity, as some of their features make it into mainstream kernels. A good example is FUSE (Filesystem in Userspace) on Linux, which allows people to experiment with filesystems without hacking on the kernel or requiring additional privileges.
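For instance, with sshfs (a FUSE filesystem, assuming it's installed) an ordinary user can mount a remote directory without touching the kernel or asking an admin; the host and paths below are made up:

    # Mount a remote home directory over SSH as an unprivileged user.
    sshfs alice@example.com:/home/alice ~/remote

    # Unmount when done; still no root needed.
    fusermount -u ~/remote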

But whether a microkernel design will ever become mainstream is a different question. The sheer amount of work needed to port applications over makes it seem unlikely, although good design of open-source software will make the effort significantly easier. It's unfortunate that we seem to be heading in the opposite direction, with key players in the open source world pushing for a monoculture around Linux/systemd et al., even excluding working kernels like the BSDs.


My guess is so they can make it very easy to install only free software.


This system works just as well for free software as for non-free. Downloading a binary blob from somewhere would be easy: just a few lines of code in the package description file.


Usually when there is no clear answer to this in a FOSS project, I think it's safe to assume the answer is ego.


While you're not necessarily wrong, "ego" gives off a pretty negative vibe. I would say it's quite often personal dissatisfaction with the status quo.

I support what they are doing. Wheels sometimes get better when they are reinvented.


I'm not one to declare other people's motivation, but it strikes me as very odd that there isn't a "So, here's what's irreparably wrong with dpkg/apt and rpm/yum".


It's fixing a (real or perceived) problem with Nix, not with dpkg or rpm.


Well, there isn't a "So, here's what's wrong with Nix" either.


There kind of is, if you read between the lines: their USP is the Scheme interface, which provides a "proper" programming language with which to interact with the packaging system. They don't explicitly call that out as something wrong with Nix, but it's clearly what they feel they do better.


What relation would you say ego has to scratching one's own itch?


I think the concepts are intertwined, if you are going to scratch your own itch and then release it for the rest of the world to scratch theirs.

"We will encourage you to develop the three great virtues of a programmer: laziness, impatience, and hubris." — Larry Wall



