Gentoo goes Binary (gentoo.org)
341 points by akhuettel 8 months ago | 273 comments



Disclaimer: I am a die-hard Gentoo fan.

The appeal of Gentoo is not compiling everything from source, it's having the freedom to install anything you want on nearly any hardware, all with stellar documentation and minimal roadblocks. Want to run Enlightenment with OpenRC and NetworkManager on a laptop from 2008? Install Gentoo! Want ZFS as root on a smart refrigerator? Install Gentoo! Want a vanilla Gnome + SystemD install on a brand new laptop? Install Gentoo! The decision to ship binary packages only gives users *more* choices while other distributions have been actively removing one's freedom to choose. Debian, the so-called "universal operating system", just dropped 32-bit x86 support. You can install an alternative init system on Debian if you'd like, but it's a bit of a PITA. Meanwhile, Gentoo allows you to pick between 17+ different stage 3 tarballs and 35 eselect profiles. Personally, I enjoy compiling everything from source and the flexibility that comes with it. On modern hardware it's painless. If you disagree, great! Go install a shiny new binary. The selling point of Gentoo has never been portage. It's always been the flexibility and the community.


> Debian, the so-called "universal operating system", just dropped 32-bit x86 support.

This is imprecise at best. There was a meeting where the release team concluded that most likely, there will be no _installer_ and _kernel_ support for 32-bit x86 at some unspecified point in the future. (At the current point in time, both are still delivered and fully supported.) In particular, you can still run multi-arch 32-bit/64-bit, allowing you to run 32-bit x86 software.


Not really the point I was trying to make. Gentoo still fully supports obscure architectures such as IA64 and PowerPC 32-bit while Debian dropped both back in 2018 and 2020. Hell, not only does Gentoo still support them, but the handbooks are still updated regularly. When that unspecified point in the future comes knocking for 32-bit x86, Gentoo will continue to be a rock for fun devices like my Lenovo x60 with libreboot. :-)


> Gentoo still fully supports obscure architectures such as IA64 and PowerPC 32-bit while Debian dropped both back in 2018 and 2020.

We do that in Debian, too. FWIW, I am the one who eventually greenlit the removal of the ia64 port from the kernel, since I was the one who took care of most of the issues in the ia64 port.

I also regularly debug and report (and sometimes fix) regressions in various upstream projects regarding targets such as 32-bit PowerPC. It's not enough to focus solely on downstream work; upstream work is just as important, if not more so.


The Linux kernel just deleted IA64, so that support won't last for long, unless Gentoo keeps ancient kernel versions? Debian still has unofficial ia64 and powerpc ports too.


It doesn't, but the kernel also isn't package-managed, just its source. You can keep the ebuilds for older kernels in an overlay, or someone might maintain an enthusiast overlay just for IA64 kernel support.
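A local overlay for that is only a few files, for anyone who hasn't set one up - a sketch following the wiki's custom-repository layout, with the ebuild version as a placeholder:

    mkdir -p /var/db/repos/localrepo/{metadata,profiles}
    echo 'localrepo' > /var/db/repos/localrepo/profiles/repo_name
    echo 'masters = gentoo' > /var/db/repos/localrepo/metadata/layout.conf
    cat > /etc/portage/repos.conf/localrepo.conf <<'EOF'
    [localrepo]
    location = /var/db/repos/localrepo
    EOF
    # drop in the last known-good ebuild and regenerate its manifest
    mkdir -p /var/db/repos/localrepo/sys-kernel/gentoo-sources
    cp gentoo-sources-6.6.x.ebuild /var/db/repos/localrepo/sys-kernel/gentoo-sources/
    ebuild /var/db/repos/localrepo/sys-kernel/gentoo-sources/gentoo-sources-6.6.x.ebuild manifest

Portage will then keep offering that version even after it leaves the main tree.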


What do you mean "it doesn't"? I'm maintaining Debian Ports and I build installer images for all old and obscure targets. We even have support for SuperH which Gentoo dropped at some point.

> https://buildd.debian.org/status/package.php?p=base-files&su...

> https://cdimage.debian.org/cdimage/ports/snapshots/

Installer images for LoongArch are in the works as well.


Does Intel or anyone still sell 32-bit x86 chips for non-embedded use? A quick Google was inconclusive: while Intel sells low-power x86 chips under the "Pentium Silver" brand, they all appear to be 64-bit now. So eventually more and more old x86 boxes will be decommissioned, and 32-bit x86 will be dead as a desktop platform, no?

Which might mean people who want to run old proprietary 32-bit games/software are going to be annoyed, but that's not necessarily the distro/OS's problem; it could be argued that the user should install a 32-bit translation/emulation layer to run that stuff, a la DOSBox.


I believe the last x86-compatible chips without 64-bit support were launched around 2006 or so (Intel Core; Core 2 had 64-bit support), a couple of years after Opteron initially came out.

It's not really clear how long the Linux kernel itself will continue supporting such CPUs. The architecture certainly isn't something anyone really cares about (e.g. Meltdown wasn't patched on 32-bit x86 until several months after it went public and 64-bit x86 had been fixed).


There were Atom CPUs without 64-bit support for a while after that.


Yup, I bought a brand new 32-bit Intel Atom tablet from Toshiba in 2014, gave it away, but the person I gave it to is still using it. It has 32-bit UEFI, which is also loads of fun. I once installed Debian on it, but I couldn't get the touchscreen to work, so went back to Windows 8.


Absolutely not... but hear me out. NetBSD plans on supporting 32-bit x86 until "long after 2038" (http://www.netbsd.org/about/). Legacy hardware matters for many reasons. Outside of the fact that I still love my Lenovo x60s and use it as a distraction-free writing machine, much of the world still depends upon 32-bit architecture. We can't be replacing computers for all 8 billion people on Earth every 10-20 years. Hell, think about all the government bureaucrats still running MS DOS! :-)


> Hell, think about all the government bureaucrats still running MS DOS! :-)

That's not a good thing though; perhaps aggressively deprecating 32-bit will light a fire under their asses


One of the oldest arguments for using Linux is that it can be used to revitalize old hardware that other, newer software might not properly utilize.

If one were to rephrase it in today's language, Linux serve(d) to reduce e-waste.

So, yeah, it kind of is the distro's problem if they are now moving away from, and against, that mantra.


There's a crossover point, though, where the effort to support it outweighs the supply of viable hardware still coming through. With the last 32-bit x86 desktop chips coming out in ~2006, I would bet there are very few viable use cases left. That's six 3-year upgrade cycles past.


I have seen many Windows 98 machines in the last ten years. I saw one just last week connected to a giant robot in a factory. So, if you are looking for use cases for 32-bit, there's the fact that some giant machinery relies on computers that will never be updated.

Last I heard, even the last Blockbuster in the US is still using old PCs to run their DOS software. They harvest parts from old computers sold on eBay. So, if you have any old computers lying around, they might be worth something these days.


You wouldn't really want to mess with the OS in cases like that, though. If you did, a lot of the OS-included drivers would probably not work due to removal or bit-rot, or third-party drivers might not work with a much newer OS. The only reason you might want to upgrade is if the machine is Internet-connected, in which case you're probably just going to need a whole new box (if you need the old software, you'd probably resort to a VM); otherwise the new OS will run too slowly (especially with things like new crypto/security). Or you can keep the old box but put it behind nginx or whatever.

Point of sale or industrial control computers are kind of just an offshoot of "embedded"; not something you need (or want) to run a browser or a modern GUI[0] on. You stick it in a cabinet or a kiosk and hopefully you don't have to think about it again.

[0] sadly also a browser


I have a number of old computers lying around the house in closets, some 32-bit only. Linux not running on them anymore is actually pretty annoying since, as I said, that used to be one of its oldest mantras.

Outside of the enterprise crowd, Linux's next biggest audience is the tinkerers and hobbyists with too much free time and ancient hardware well past its use-by date. What a time it is that we can legitimately say it's better to run some version of Windows instead of Linux on them now.


The hobbyist Linux market generally tinkers with things like Raspberry Pis now, not ancient 32-bit x86 machines. (This is a natural consequence of the fact that sources for cheap hardware have changed a lot over the last twenty years.)


>while other distributions have been actively removing one's freedom to choose

I think that's a bit unfair. Everyone wants to support everything, but at the end of the day, if you want to guarantee security and maintenance and do the job of a distribution, you have limited resources for what you can look after. Something always has to give.

On any Linux distribution anyone can install what they want, but what you can genuinely claim to support is always limited by the number of maintainers you have.


Perhaps I was being overly harsh. That being said, there is a certain philosophy of non-intervention which gives Gentoo a sense of freedom that other distributions lack. It’s not only about diverse support, it’s about allowing the user to choose upfront. Arch does this as well (mostly). It’s easy to choose your own bootloader, WM/DE, audio server, etc. I just think that Gentoo takes this philosophy one step further. If you want to install an alternative init system on most distributions you can… but you have to uninstall SystemD and all its weeds first. Gentoo’s solution is simple. Give the user a choice on EVERYTHING upfront.


> there is a certain philosophy of non-intervention which gives Gentoo a sense of freedom that other distributions lack.

While I was untangling update dependencies one day I had it uninstall libc. There was a big warning advising me not to do that, but it was allowed.

Everything broke immediately, of course, but I recovered the system and the dependency problem really had been fixed, too!

This gave me a permanent positive impression of Gentoo. It's better to be allowed to do things than not.


Debian and Ubuntu will let you do this, too. Pop!_OS may have patched in an additional barrier downstream before you can uninstall 'essential' packages, since Linus Sebastian unintentionally blew away X11 (in the face of a very dire warning, which he ignored) on camera.

I don't recall how this is handled on RPM-based distros, but on Debian it is part of the standard procedure for crossgrades.

(More impressive than this sort of flexibility, imo, is how NixOS and GuixSD sidestep this problem altogether.)


Love this example! In general the way Gentoo handles masked packages also highlights this point. The system will kindly warn you if something is wrong, but it will never stop you from doing what you want. --autounmask also makes it dead simple.
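For anyone who hasn't seen it, the flow is roughly this (the package atom is just an example):

    # let Portage work out and write the needed keyword/mask changes
    emerge --ask --autounmask --autounmask-write =app-misc/foo-1.2.3
    # review and merge the proposed /etc/portage changes, then retry
    dispatch-conf
    emerge --ask =app-misc/foo-1.2.3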


> Debian, the so-called "universal operating system", just dropped 32-bit x86 support.

I would argue that Debian has support for at least as many architectures as Gentoo:

> https://buildd.debian.org/status/package.php?p=base-files&su...

I know that because I am maintaining most of the obscure architectures in Debian with regular installation snapshots:

> https://cdimage.debian.org/cdimage/ports/snapshots/

Debian even has a Hurd port for both i386 and amd64.


I feel like the Gentoo userbase has been stolen in part by Arch these days.


The one place Arch is still fairly opinionated compared to Gentoo is being all-in on systemd. That's not to say you _can't_ remove/replace it, and obviously there are Arch-like distros that give other options. But my sense is that GP's comment about Gentoo's draw being completely agnostic to whatever configuration you want makes Arch having even a single strong opinion feel pretty different to users who care about that. I think you're probably right that most people don't need that, but then again, even as an Arch user, I think most people don't really need the amount of customization that Arch lets you have; I just happen to like it.


I wouldn't say systemd is the "one place" where Arch has an opinion. Quite the opposite, Arch is just as opinionated as the average distro. On Void Linux you can choose between musl and glibc, on Arch you're forced to use glibc. On Gentoo you can choose your init system, on Arch you're forced to use systemd. On Debian you can have `/bin/sh` be any POSIX shell, on Arch you're forced to have it be Bash. On NixOS you can choose between rolling and normal release, on Arch there's only a rolling release. I can't really think of anything you can customize on Arch that you can't configure on most distros, but perhaps I'm missing something.


My experience using Arch has very much been "it's just like all the other distros, except now you have at least 4 package managers to go through if you want to install any particular thing, and you will almost certainly need to install utilities that convert packages from other package managers over to PKGBUILD." Other than that, it's felt very similar to just using something like Debian. Which is fine, and Debian is fine, but the praise other people give it as "the modder's distro," or something like it, seems a little overzealous.


Your experience sounds weird. Reading it reminds me of people who come from programming paradigm X, try paradigm Y using only methods from X, and conclude that Y sucks.

Why would you use 4 package managers to do one thing? Most people who use Arch use one, and it does the job. All of them are built around pacman + something for AUR packages. Personally I use pacui. Why would you convert a package from another distro? If another distro has it, then it most certainly exists in the AUR; I never once had to touch another distro's packaging.

I don't know about the "the modder's distro" part, but I personally had far fewer headaches with it than I had with Ubuntu, Fedora, Debian, and CentOS.

The only thing that sucks for me (regardless of distro) is Nvidia updates.


> Why would you convert a package from another distro? If another distro has it, then it most certainly exists in the AUR; I never once had to touch another distro's packaging.

A lot of software packages I've been using lately don't exist in the AUR. My tastes might be more niche than yours, I don't know.

> Why would you use 4 package managers to do 1 thing ?

Because all the software I'm trying to install has documentation like "install with pacman" or "install with pamac" or "install with yay" or "install from source" or "install from AUR" or... I think you get where this is going.

> Your experience sounds weird. Reading it reminds me of people who come from programming paradigm X, try paradigm Y using only methods from X, and conclude that Y sucks.

Reading your rebuttal to my experience reminds me of people who answer questions on Stack Overflow like, "why would you do it this way? Your question isn't valid, your question _should have been_..." That is to say, rather than attempt to understand where I'm coming from, you thought to overwrite my experience with your own, as if yours is the truer experience to be had. My time using Arch and Arch-variant distros led to a seemingly fragmented experience of where to find software. That's got nothing to do with X being better than Y; it has more to do with X being mostly one or two overall places to find software, and Y being fragmented between 4-5, plus conversions between apt/yum packages and PKGBUILD scripts. I'm not saying Y is inherently worse, I'm saying it's a fragmented and confusing user experience, in my humble opinion. My main dev machine still runs Manjaro because I've come to enjoy its opinionated handling of Arch, though if I were to re-image that computer I'd probably go back to either Fedora or Ubuntu.

> but I personally had much less headaches with it than I had with Ubuntu,fedora,debian, and CentOS.

Personally, I'm fine with using Ubuntu, Fedora, Debian, or even Arch. I'm not saying any one of them are bad or worse than the other. I'm just saying that, to me, Arch came with some additional challenges that I don't experience from other distros, mostly revolving around how software packages are found and installed.


My bad if my response came across as a rebuttal; it was more confusion about the circumstances that led you to do things the way you did. Was it just a lack of knowledge about Arch, or some esoteric setup I've never seen?

I understand that a newcomer at the beginning can be confused about the relationship between pacman, pamac, AUR, yay, etc

Similar CLI programs exist in other distros, though they are not as encouraged.


> My bad if my response came as a rebuttal

That is how I read it, but it's all good. Sorry for misreading your tone.

> I understand that a newcomer at the beginning can be confused about the relationship between pacman, pamac, AUR, yay, etc

Perhaps that's the source of my confusion. So:

With Red Hat-like distros, the tools are usually dnf or yum, which are evolutions of rpm.

With Debian-like distros, the tools are usually apt or apt-get (which are more or less the exact same thing), which are evolutions of dpkg.

With Arch, I kind of assumed that pamac and pacman are similar things; no idea if yay is related to anything else, though. And I think pamac actually can install things from the AUR. My confusion definitely stems from loading up documentation from every different piece of software I tend to install, and each one of them listing a different utility for installing their package. If these are all the same thing with different UIs, installing packages from the same repositories, then that's definitely where I got lost.

And then I'd often be further annoyed to find out that the software vendor doesn't have a package for Arch, and nobody has put one in the AUR yet, so I'd take the deb package and run it through debtap to get a PKGBUILD script. Doing so has netted me varied results.


I think you made a point for why being opinionated is good, which is not really the debate here


> On Debian you can have `/bin/sh` be any POSIX shell, on Arch you're forced to have it be Bash.

AFAIK you can use any POSIX shell as `/bin/sh` on Arch Linux. For example, after running `sudo ln -rsf /bin/dash /bin/sh`, dash works well.


Definitely a lot of crossover. However, there are several things that make Arch unusable for me. The #1 thing is that Arch only officially supports AMD64. Additionally, while I find Arch’s documentation better than most, it’s hard to overstate how amazing the Gentoo wiki is. Lastly, I prefer portage overlays (like Guru) over the AUR. Bigger !== better. Again, the beauty of Linux is choice. Nothing but love for my Arch neighbors!


Also a die-hard Gentoo fan (daily driver for 20+ years).

I tried Arch and what astounded me the most is that the docs didn't seem as good as Gentoo's. That surprised me because I end up looking at Arch's docs all the time and it's all of a very high quality.

But the Gentoo Handbook is really a masterpiece. I've never read a clearer explanation of how to go from unformatted disks to a working system.


I used to use Gentoo 20+ years ago; installing it was eye-opening. It helped me enormously to understand how Linux works.

I then moved to debian/ubuntu and recently switched my main server to Arch. And you are right - I got back this feeling of having all my 10 fingers in the system and living on the bleeding edge.

Now: this was probably not the brightest choice, as this server runs my docker containers and basically nothing more, so it should have been Debian (a fire-and-forget OS)


> I used to use Gentoo 20+ years ago; installing it was eye-opening. It helped me enormously to understand how Linux works.

Same!

Starting with a Stage 1 install was what got me (with some guidance from a friend and the perfect Gentoo Wiki) into Linux circa 2004 (IIRC).

Now, some 20 years later, I much prefer not to have to deal with something as... dare I say... fragile... anymore.

Yes, much of the breakage I dealt with was probably self-inflicted, and it was always a good learning experience - though most of the time not at the most convenient time :(


The Gentoo to Arch pipeline is pretty real.

I did Stage 1's as well around the same time but eventually got tired of doing that every so often or fighting Portage and switched to Arch.

Then I got tired of dealing with Arch... and ended up on Fedora.

I always liked Gentoo and Arch, but I don't have the energy to put up maintaining them anymore.


Arch is literally the easiest thing to maintain. I've been running it for over fifteen years, update maybe once a year at most and 99% of the time it's a case of running pacdiff and updating a handful of config files.


In 1998, Debian was a great option for SPARC and Alpha, and that’s really where I cut my teeth with Linux.

Around 2004, I was building Gentoo up from stage1 just like many other commenters.

For 2024 and personal stuff I’m back on Debian and it’s safe enough I pull security updates daily - entirely automated and no review on my part. It essentially never causes a problem. Updating once a year sounds terrifying to me given the rate of vulnerability discoveries of late.

As ever your mileage may vary and your systems aren’t my systems, but Debian truly is a wonderful OSS project and as of August 2023 it celebrated its 30th anniversary!

Gentoo going binary feels weird :) I’ve still got a soft spot for Gentoo and what it enabled me to do 15 years ago when I had more time for personal hacks.


I also got into GNU/Linux in 1998. Dipped my toes with Mandrake, then Debian, and around 2012 switched to Kubuntu.

Now I'm in the process of moving my desktop computers to Arch Linux, to enjoy better control and up-to-date software.

Servers I keep in Debian for the same reasons you mention: stability and safe automated updates.


I did the hardcore path of Linux From Scratch and then discovered Gentoo which was a (very welcome) step down for customizability :) Highly recommend LFS for anyone who has a couple of hours per day to waste (i.e. young with no responsibilities)


> Now: this was probably not the brightest choice, as this server runs my docker containers and basically nothing more, so it should have been Debian (a fire-and-forget OS)

I dunno, there's a fair amount to be said for having an up-to-date kernel, systemd, and Docker. :)


It depends on the kind of "up-to-datism", at least for me. Security - yes. Features - for docker yes, the rest not really.

The main problem is that if I provide a specific repo for docker (to keep a fresh version), it can request some dependencies and bam! I have a Debian-turned-Arch system to maintain :)


I mostly included systemd due to the shiny new feature where you can enable kernel samepage merging for arbitrary applications. Could be very useful for certain use cases.
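For the curious, a minimal sketch of what that looks like, assuming systemd 254+ (which introduced the MemoryKSM= setting) and a kernel built with CONFIG_KSM; "myapp" is a placeholder service:

    # drop-in that opts this one service into kernel samepage merging
    mkdir -p /etc/systemd/system/myapp.service.d
    cat > /etc/systemd/system/myapp.service.d/ksm.conf <<'EOF'
    [Service]
    MemoryKSM=yes
    EOF
    systemctl daemon-reload && systemctl restart myapp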


And NixOS. The fact that Nix has had a binary cache was one of the reasons I chose it over Gentoo.


Exactly my case. Used to use Gentoo, got tired of every other update breaking the system, was told on IRC that if I didn't like "free" updates (because apparently my time and serenity have no value) perpetually breaking the system I could go use something else, and that's exactly what I did. Arch is solid.


I've broken Arch before but I've never broken Gentoo. Probably because I broke Arch first and learnt from it. You broke Gentoo first.


Yes, but it's a shame. Especially now that Gentoo has these binary packages, there's really no reason to choose Arch. Arch is like "choose any system you want, as long as it's x86-64, systemd, etc."


It never recovered from the loss of gentoo-wiki


I use arch btw


I can't install Gentoo - specifically, I fail at the part where I have to install a BIOS or UEFI GRUB2 bootloader. No, I did not use a musl and/or clang profile. With this being my first impression, unless Gentoo or a downstream fork comes with an installer, I won't consider it.

I'm going to install Solus OS / Solus Linux soon; I had a good first impression of its package manager, which claims to be reproducible, though not like Nix or Guix are.


I have a work laptop that's pretty cursed when it comes to booting anything that isn't Windows or Fedora. I only recently got Arch to boot, and then only by using a UKI and skipping the bootloader by directly registering it in UEFI. Maybe Gentoo has instructions for doing the same thing? I think it's actually simpler this way, it's just far less obvious and very new.
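For reference, the direct UEFI registration is one efibootmgr call - a sketch, with the disk, partition, and UKI path as placeholders for my machine:

    efibootmgr --create --disk /dev/nvme0n1 --part 1 \
        --label "Arch UKI" --loader '\EFI\Linux\arch.efi'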


Gentoo did have a GUI installer in the past - what year was it?


I only ever recall Sabayon Linux having a graphical installer and being Gentoo under the hood. Was there an installer for vanilla Gentoo?


What part is failing? Do you see the grub bootloader or is it failing before that?

If you do systemd (which is what I'm using), then you might look at systemd-boot for UEFI. That's currently what I'm using.

I ask, because one issue I ran into was messing up the kernel config. Pulling the livecd config in and using that as the base config worked for me.
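If it helps, the livecd-config trick is roughly this, assuming the live kernel was built with CONFIG_IKCONFIG_PROC (Gentoo's install media usually is):

    # while booted from the live media, with kernel sources at /usr/src/linux
    cd /usr/src/linux
    zcat /proc/config.gz > .config
    make olddefconfig    # accept defaults for options newer than the live kernel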


Grub2 finished with no reported errors (I tried UEFI first, and then I reinstalled GRUB2 from the LiveCD with BIOS). I was using OpenRC, since it's the default for Gentoo. But when I tried to boot from the disk I installed it onto, GRUB didn't appear and Gentoo didn't boot.

Took a day, but now I'm on Solus, and the package manager is reproducible in the sense that I can roll back a generation and roll back a rollback, aka move forward a generation.


"Personally, I enjoy compiling everything from source and the flexibility that comes with it."

Is this referring only to ports (Portage), or does this mean enjoying compiling the kernel and userland?


Depends upon the device. I typically opt for compiling the Linux kernel myself, but certain devices (like my Lenovo x60s) would absolutely melt if I compiled it from scratch. Thankfully, Gentoo offers gentoo-kernel-bin. Meanwhile, my brand new GPD Win 4 Pro has a stupid NVMe that requires kernel 6.6.5+ to boot, making compiling the only option (until a couple of weeks ago, at least).


Compiling a Linux kernel is too much for an x60. Certainly would not be the case with compiling a NetBSD kernel.

Is this even after disabling drivers and other code that is not needed for the x60?


Gentoo's big attraction for me is Portage. It goes beyond just providing a build environment and dependency management. Ebuilds (Gentoo packages) are supported by great tooling and eclasses that handle a lot of corner cases in builds. Developing ebuilds feels like doing a real software project, and is great for anyone who wants to experiment with packages that are not in the official repository. Coincidentally, I just published a tool to manage unprivileged chroots for testing ebuilds.

This development will make Gentoo more accessible for a lot of people. But I guess this isn't for me. My build configuration (like CFLAGS) is never going to match that of the official binaries, so they will never get used.


I agree on customizing the package flags, and features. When using Gentoo in production it became an important part of our security posture to omit the features and integrations with unused software.

That being said, we've always had a build host dedicated to producing binaries, but the actual support for binaries in Gentoo hasn't been great: unsigned serving of compiled artifacts over HTTP or NFS is about all you get. I'm really pumped to see that the new package format adds the cryptographic verification that really should have been there all along, even for internal-only serving.


> omit the features and integrations with unused software.

That's one of the most compelling cases I've heard for running something like Gentoo in prod.

There are so many plugins, connectors, protocols, and often the old neglected ones turn into attack vectors.


> There are so many plugins, connectors, protocols, and often the old neglected ones turn into attack vectors.

Practically speaking this is probably true but theoretically a distribution's job should be to somehow guarantee that a specific package built their way gets the security fixes for the way they built it.

This is anyway tangential to the fact that in security "less is more".


For me this is great news for less powerful devices I use that I don't feel like setting up a binrepo for (especially when it involves cross-compilation), but still want to reuse my portage config and custom ebuilds for.


For less powerful devices, it has always been possible to install Gentoo into a chroot directory on a powerful computer, using a configuration appropriate for the less powerful device, compile and install any packages in the chroot environment, and then just copy all the files from the chroot directory (over Ethernet or using a USB stick) onto the root HDD/SSD of the less powerful device.

I have used Gentoo in this way on many small computers with various kinds of Atom CPUs and with a small DRAM memory and small and slow SSD/HDD.

With multiple computers of a compatible kind, the files from such a chrooted installation compiled on a powerful computer can be copied over all of them and everything will work fine. If the chrooted installation is preserved, it can be updated later with other software packages and all the changes can be propagated with rsync on all other computers.

Linux is not like Windows, which will complain when run on a different computer than that on which it was installed initially.
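For the propagation step, the rsync call is roughly this - a sketch, with the image path and hostname as placeholders; pseudo-filesystems and per-machine files need excluding:

    rsync -aHAX --delete \
        --exclude={/proc,/sys,/dev,/run,/tmp} \
        --exclude={/etc/fstab,/etc/hostname} \
        /srv/atom-chroot/ root@atom-box:/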


I know this, I mentioned this as "setting up a binrepo" in my comment, as I think the binrepo approach makes more sense (esp. with regards to easy updates).

It's just an unwieldy amount of extra overhead, disk space, and time, which I'd rather avoid, especially for devices I'm not fully committed to maintaining over a long period. I've tried what you mention, it's just never convenient enough to be worth the pain.


Next you’ll tell me there are optimization settings other than -mtune=native


Well, duh. There’s also -funroll-loops.


Much better than -boringroll-loops.


Just don't use -O2 and -fe unless you want to end up with Rust.


I am not sure what you mean, but in the chrooted environment where you compile for a distinct machine, you obviously use configuration options, including compiler flags, appropriate for the target computer, not for the host computer, so "-mtune=native" cannot be used.

My point is that you do not need to setup a binrepo or any other complication like this.

You can easily install Gentoo on a very weak computer by performing the installation on a typical desktop computer, which may run a different Linux distribution (not necessarily Gentoo), and then just copying the files.

The Gentoo manual has always included information on how to install Gentoo inside a chrooted environment.
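Concretely, the make.conf inside the chroot just targets the other machine - for example, for an Atom-class box (the -march value is a placeholder):

    # /etc/portage/make.conf in the chroot: flags for the *target* computer
    COMMON_FLAGS="-O2 -pipe -march=bonnell"
    CFLAGS="${COMMON_FLAGS}"
    CXXFLAGS="${COMMON_FLAGS}"
    MAKEOPTS="-j16"    # the build host's parallelism is fine to keep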


> not for the host computer, so "mtune=native" cannot be used.

Right, I am sure they intended it to be absurd in an amusing way.


Indeed I did.

Although, I do somewhat think that working out good optimization flags for cross-platform compiles is a moderately unusual skill, even among people who compile things regularly. Hopefully I caveated that sufficiently; I’m not saying it is a hyper-advanced dark art or anything.

I have in the past set up code on a cluster to just compile on one node for the first run with -mtune=native because I’m lazy!


You always have to remember to fun roll the loops, though. :)

https://forums.gentoo.org/viewtopic-t-245041-start-25.html


Never forget -Ofast.


There are definitely going to be interesting use cases like yours. I'm in no way against this development. The more the merrier!

But to be honest, I haven't looked at binrepos so far. Perhaps your reply is a good reason to.


> Coincidentally, I just published a tool to manage unprivileged chroots for testing ebuilds.

You should check out what ChromeOS is doing. They are using bazel to execute ebuilds inside an ephemeral chroot: https://chromium.googlesource.com/chromiumos/bazel/+/refs/he...

This way it's guaranteed that no undeclared dependencies get used.


This tool [1] does the exact same thing - except that you nuke the chroot when you're done. And the reason is the same - to find all necessary dependencies. I had a small script that eventually became a Rust program. Then I kept adding features to it until it became what it is now. That's the reason why I never really got to explore the alternatives. Anyways, it's usable and nearly done.

Thanks for suggestion though. I'll take a look at it.

[1] https://crates.io/crates/genpac

[2] https://wiki.gentoo.org/wiki/Chroot_for_package_testing


Yeah, my CFLAGS won’t either. But I have to say I’m tempted to script something up so I can override all the plasma-related packages to use the common ones, or something similar, so I build everything where I care about speed myself and let the GUI stuff be binary packages. Would save a whole lot of build time, and I’m not sure it would be much of a loss.
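Per-package FEATURES via package.env might do it without a script - a sketch I haven't verified end to end; see portage(5) for the exact wildcard rules:

    # /etc/portage/env/binpkg.conf
    FEATURES="getbinpkg"

    # /etc/portage/package.env
    kde-plasma/* binpkg.conf
    kde-frameworks/* binpkg.conf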


Can you please share the tool in question? I have been desperately looking for something like this for my sandbox project.


Here you go [1], [2]. It's not completely ready yet - but it's usable. It should be OK if you plan to just modify or reuse parts of it. It currently supports a btrfs backend. The plain-directory backend and packaging of the tool are not done yet - but they shouldn't be too hard. I was keeping that for tomorrow. Meanwhile, you can use asciidoctor to convert the docs if you need to refer to them.

[1] https://git.sr.ht/~gokuldas/genpac

[2] https://crates.io/crates/genpac


Say what you will about Gentoo as a concept, it was a lot of fun when I was 17 learning more about software packaging, distributed compilation, and the ins and outs of compile-time optimization, not to mention Linux kernel optimization. And their community had pretty nice docs from what I recall. I think a few of my patches are still knocking around in some releases.

I finally realized all the tweaking and optimization and bleeding-edge software wasn't worth it once I discovered my Slackware boxes ran as fast as the Gentoo ones. Maybe there's a few very specific applications out there that benefit from all the tweaking and custom compiles; perhaps a render farm or crypto miner? But my games got the same FPS on either distro.


CPU-specific optimization has waxed and waned in importance over the years. It used to be that a lowest common denominator compilation would not even include MMX or SSE, which could make a huge difference for some kinds of CPU-bound algorithms. However, it was always possible to do a runtime CPU feature detection and run the optimal version, so compile time feature selection was not the only option.

Then AMD Opteron came out and got everyone on a new baseline: if you compiled for amd64, that meant a specific CPU again; there were no newer instructions yet. Now AMD64 has several levels[1], which can start to matter for certain kinds of operations, such as SIMD. A more mundane one I often run into is that v2 added popcnt, which is great for things like bitsets in isolation, but in overall program performance I measure almost no difference on any of my projects between v1 and v3.
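As an aside, on a recent glibc you can ask the dynamic loader which of those levels your CPU qualifies for (the loader path varies by distro; output will look something like the comments):

    /lib64/ld-linux-x86-64.so.2 --help | grep x86-64-v
    #   x86-64-v4
    #   x86-64-v3 (supported, searched)
    #   x86-64-v2 (supported, searched)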

When it comes to games, it's more than likely your games were binary-only and already compiled for the lowest common denominator and maybe used runtime feature selection, and even then, they were probably GPU-bound anyway.

[1] https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...


Years ago I somehow wasted hours and days of computer time, and my own, compiling and fine-tuning my Gentoo system - god knows why, when the next day I would format it anyway to install a newly arrived Ubuntu CD.


Everyone please realize that just because you did something and no longer do that thing does not mean it was wrong to have done that thing.

All of us who at one point compiled our own kernels and now no longer do are the killers that we are partly because we did things like that, at least for a bit. It only makes sense not to now, after having done it.

It's not true to suggest (or to read these stories as a new bystander and come away with the idea that) "if I were smarter I never would have wasted time on that"


I hear your point that the act or process of doing the learning is good, even if the end result is that you shouldn't do the thing again. Such as learning assembly but then only doing web development for a profession where you don't need to know assembly.

However, I think that the statement below might be better with a bit of nuance.

> It's not true to suggest (or to read these stories as a new bystander and come away with the idea that) "if I were smarter I never would have wasted time on that"

I would say it's "not always true"; in some cases doing the action really wasn't worth the time.

Related to this, I believe the sentiment people have about regretting wasting time on some endeavour is a misalignment with what their intention was to begin with.

For example, if someone wanted to compile their own kernel because they wanted to learn and understand more about their computer, it's unlikely that they would walk away from that experience with regret. However, if they wanted to compile their own kernel because they believed that in doing so they would make 10x more money in the long run (through learning so much), and that goal failed to materialize, they would likely tell others not to waste their time learning to compile their own kernel.

Not trying to be pedantic or argumentative - I agree with your point deeply - but I wanted to discuss it a bit further. Let me know your thoughts.


Same, but I learned so much while doing it. Eventually I got tired and moved to Arch and got most of the same without always fighting broken packages. But I still use the knowledge I gained dealing with random low-level issues when they crop up.


I had a similar experience with the Gentoos/Arches of the world. I'd never use Gentoo or a Gentoo-like as my primary OS for anything, likely for the rest of my life, but it still ended up being one of the most valuable operating systems for me to spend some time on.


You and me both. But, we learned a lot. Nowadays I feel like Linux is my super power, OS, VM, Container, Nix-shell, WSL2. It’s all Linux. And you can drop me on any command line (even a BSD one, to some extent) and I will feel at home and can solve problems. I’d like to think that’s where my happy time with Gentoo led.


I agree with you, but we're also lucky (or unlucky?) that the variety out there has dropped a ton.

If you can, try to get access to OpenVMS or Cisco IOS, it's an entirely different world in terms of user experience.


And no matter if it's true (and it might well be!), the overarching tendency to look for reasons to explain time spent with Gentoo should probably tell us something.

At best, we are at least a bit confused about it all.


It was confusing because you can't install it without understanding stuff like fstab, grub, user creation, etc. It sets you up to be a sysadmin; it requires you to be a sysadmin. Ubuntu, on the other hand, looks and acts more like an iPad than Windows.


I never used Gentoo, but the time I spend screwing around in Arch was more educational than the video games I would have been playing, anyway.


I had fun.


Gentoo’s biggest attraction for me was always the USE flags - being able to turn off the X integration of mpg123 where CentOS demanded an entire X install to get the command line player.

The compiler flags were just icing on that cake.
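The mechanism, for those who haven't used Gentoo - a sketch with illustrative flag names (what's available per package varies; `equery uses` lists them):

    # /etc/portage/make.conf: drop X support globally
    USE="-X"

    # /etc/portage/package.use: or per package
    media-sound/mpg123 -alsa -sdl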


I too used to obsess over customizing my OS. Now I just install Debian, a handful of programs I use daily and that's it. I can recreate my setup on another machine in 20 minutes.


They talk about the shell as an IDE. My entire desktop is a 14-year experiment in tuning productivity. My ~/bin/ folder has around 100 scripts, and maybe 20 are little scripts I wrote in conjunction with i3. Pretty cool how it stacks up over time.


What handful of programs do you use daily, if you don't mind sharing?


Nothing interesting. I spend like 99% of my computer time in the web browser, ide and terminal.

Chrome, git, ssh, docker, netbeans


Oh wow, I did not expect to see NetBeans here. What do you use it for and why NetBeans?


General web development. PHP, javascript, css. I used to do some java projects as well, but not lately.

I know it's not mainstream to use NetBeans these days, but I don't care, I'm just used to it and it gets the job done. Maybe I'm just getting old.


Thanks for answering. I'm not NetBeans fan myself but there's absolutely nothing wrong with using the tools you like that get the job done.


Same but with macOS.

The only cool thing about it is that it’s declarative: nix-darwin everything and a fully working and customized machine is up in 10 minutes with one command


Do you have some docs or writeups on your setup? I'm planning to move to macOS in a few weeks.




Of course you customize your Debian installation over time.

Over time is the key here. A package here, a small config there, and after some time, that installation becomes so unique that it starts reading your mind.

Non-breaking updates are the icing on the cake.

The biggest point is you install Debian once, or when your processor architecture changes.

--Sent from my 6-year-old Debian installation.


Not sure if you mean that sarcastically, but it's a really boring and stable OS, the setup is just a few clicks and hitting enter a few times. I think I haven't even bothered to change the wallpaper on my newest laptop.

There's nothing really special about Debian either; it could just as well be Ubuntu, Mint, or anything else that's plug-n-play. I'm just used to Debian, and it comes with less junk I don't need installed by default.


Debian is my set and forget OS as well.

After running my home servers on it, one release wasn't too far behind to run as a desktop, and I've been happy here ever since.

Xfce desktop, move the panel to the bottom, install applications. Use it for the next days/months/years until I decide to look around again.


When every ounce of power mattered, fine-tuning your OS made sense.

Nowadays most people are swimming in CPU cores and gigabytes of ram and terabytes of solid-state memory, so fine-tuning is a waste of time (unless you play bleeding-edge games). But it wasn't always such.


> so fine-tuning is a waste of time (unless you play bleeding-edge games).

Unless you run javascript 'applications'. Games are already optimized.


> Games are already optimized.

Yeah, I'm calling bullshit on this one. At least, it doesn't line up with my experience. In my experience, games are optimized _just enough_ for a decent playing experience (and not always then). Game devs, as a whole, are the worst offenders at expecting their users to just throw more money (hardware) at the software to achieve usable/enjoyable experiences. There are, of course, exceptions. But for every Carmack, there are tens of thousands of developers scrambling to make their deadline, doing just enough to ship.


I have heard of people recompiling the kernel to improve gaming performance (mostly to use a different scheduler or what have you), but I don't recall seeing anything beyond single-digit percentage improvements in performance. Which makes sense, since you can only recompile the kernel and a subset of the open source libraries that the game may use. Those are going to be fairly well optimized to start with.

The games themselves though are a different story. Outside of open sources games (which are usually less demanding than commercial ones), you don't have the source code to rebuild it. Even if you did, enabling optimizations beyond what the developer used risks breakage so you will have to do your own testing. Even then, simply rebuilding the software wouldn't address the quality of the code created by those developers who are scrambling to meet a deadline with as little effort as possible.


I'll be the first to admit that I'm not a game developer and my exposure to commercial games' source has been very limited. The most exposure I've had was to Civ4 due to Firaxis releasing the source for the core game DLL for modding. Civ4 also used Python for scripting and Python (undeservedly, here) gets the blame for the game being slow, especially during late-game play.

Back in the day, I spent a fair amount of time working on gutting the DLL because, frankly, it was atrocious. My memory is a little fuzzy as it's been 10+ years since I've looked at it, but things I remember seeing:

* overuse of C/C++ preprocessor macros where an inline function would have been more appropriate to, say, get the array/list of CvPlots (tiles) that needed to be iterated over all the time.

* lack of hoisting of loop invariants out of loops. It was common to see usages of the non-trivial macros above in the bodies of tight loops, often tight nested loops. Optimizing compilers are great, but they're not _that_ great.

* the exposure of the C++ types to Python was done... poorly. It was done using Boost.Python (which, while a great library for its day, had a _huge_ learning curve). For every Cv type in the C++ DLL, there was a corresponding Cy type to expose it to Python. Collection types were _copied_ on every call into Python code, which was quite frequent. The collections should have been done as proxies into the underlying C++ collections, instead of as copies.

Most of the changes I made were quite small, but over the course of a month of part-time hacking on it, I'd gotten late-game turns down to a couple of minutes from 30 minutes, and memory usage was drastically reduced; I never did get around to fixing the Python wrapper, as it would have been too intrusive to fix properly. I could have made more aggressive changes if I had full access to the source, but being constrained by DLL boundaries and the exported C++ types limited what could be done w/o breaking the ABI (I had to be extremely careful about not changing object sizes, affecting vtable layout, etc).

Frankly, I doubt the developers spent much time at all, if any, with a profiler during the course of the game's development.


Yeah, but you picked the one game where some dude patched the Civ 4 binary for a 3-4x increase in rendering performance :)

Civ had been a 2D game until then; it was their first 3D title.

Not to mention that it was turn based strategy, and the main performance problem was AI turn length in the endgame.


> Games are already optimized.

What AAA titles have you played around launch in the last 5 or so years?


I think the biggest case where Gentoo still makes sense is when you have a large fleet of identical machines. In that case, the effort put into tuning your installation will be multiplied across the number of machines it's applied to.

For a single machine home install, the biggest value Gentoo has to offer is the learning experience. I ran it for about a year like 4 years ago, and I definitely learned a lot in that time. Hopped around a bit and I've since landed on GNU Guix, and I'm probably set for life now in the GNU world.


I'm a Gentoo daily driver and I'm also looking real hard at Guix. I already live inside of Emacs, having everything in Lisp seems kind of nice.


> When every ounce of power mattered, fine-tuning your OS made sense

I used to believe that and was a huge Gentoo user for years, back when it was initially released. Then one day I benchmarked Gentoo and a default RedHat install on every workload I cared about, and everything was within the margin of error.


I like Gentoo because I have it set up to compile everything with symbols and the source code in the image, and I can gdb anything I am curious about.


Made an ancient computer with a very limited CPU usable for my siblings with Gentoo. The secret is to do USE="-* minimal" and enable things that are required from there. Compiling a custom kernel was actually necessary because it had a really old NVIDIA card that was no longer supported, and I had to patch something to do with MTRR. Installed XFCE 4, and it idled with 70 MB of RAM used. It could play Youtube videos without the whole thing freezing, whereas Debian could not. Gentoo is great.
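For anyone wanting to replicate that, the starting point is just a make.conf line - every flag after the -* has to be added back by hand as needs appear:

    USE="-* minimal"
    # then sanity-check what each package ends up with:
    # emerge --pretend --verbose @world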


The time I wasted with Gentoo in 2004 was enough to never try it again.

A full day compiling stuff only for the base install, let alone everything else I would eventually need.

In my case, I decided to become another Scientific Linux tester.


The thing is, that was 20 years ago. I basically did the same thing, at almost the same time, as you.

But computing power is much higher now. The same compilation now would probably take 1-2 hours, max. Updates would be super fast.

Gentoo itself is considered generally stable and a pretty solid distribution, or it used to be.

I wonder if these days the flexibility and the engineering behind Gentoo might be worth taking another go at it.


For kicks and giggles, I just set it up on a new system a couple of weeks back.

It's really not much different than working with Arch in terms of complexity. Initial setup takes a bit, but if you've installed Arch you are pretty familiar with everything you need (in fact, the Arch docs are helpful for a Gentoo setup :D).

The docs are VERY good and easy to google.

Compilation time can be nasty depending on what you install, but it's not terribly bad. I just rebuilt the world because a GCC update broke the LTO that I'm running. With about 2k packages, that took about 6 hours to complete on a Ryzen 7950.

General updates take almost no time at all (especially using git for syncing) - usually less than 10 minutes, often less than 1. As I write this, I'm currently rebuilding KDE (if you are using your computer, rebuilding doesn't really get in the way, especially if you are already working with a multicore system).


“But computing power is much higher now. The same compilation now would probably take 1-2 hours, max. Updates would be super fast.”

I’m not so sure. A lot of the power comes from multiple cores. Years ago I had one core, now I have eight. A lot of the compiles don’t use all the cores.

Software has also gotten bigger. rustc is huge, for example. It didn’t even exist when I used Gentoo years ago.

These days I’m on the Mac, and I just switched to Homebrew after using MacPorts for years. It was for one of the same reasons I stopped using Gentoo: compiling takes too long. Whenever I upgraded Mac OS versions, MacPorts required me to recompile everything. This was no problem at all for, say, tree. But something was pulling in gcc (emacs needed it for some reason??) and this took ages to compile.

At least MacPorts worked, though. When I used Gentoo, it took so long to compile things that I would leave it overnight, and of course often in the morning I would see that the compilation had stopped halfway through because something was broken. Hopefully that’s improved. Or of course maybe the binary packages will help with this.

But if I wanted a build-your-own, rolling-release binary system, I don’t see why I wouldn’t just use Arch.


Even 1-2 hours is too much for me.

I'd rather use programming language ecosystems that favour binary libraries, for a reason.


A lot of the reason depends upon what you hope to get from the labour and the overall environment that you are working within.

I was working with a 486 around 1995. Compiling your own software was the norm and compiling your own kernel could have significant performance benefits (even if it was just to conserve the limited memory supported by machines of the day, to head off some of the swapping). By the time I learned of Gentoo, that was not really the case: most of the software one could obtain was provided in binary form and compiler optimizations were much less relevant (unless you had a special workload).

The tooling provided is important too. I was using NetBSD for a while. For the most part you just started the compilation process and walked away until it was done. (I imagine Portage is similar.) You didn't get the instant gratification, but it was not time-intensive in the sense of having to attend to the process. That was very much unlike my earlier experiences of compiling software for Linux - stuff that isn't in the repos - since that did have to be attended to.


It surely wasn't the norm for me. In the summer of 1995 I got my first Linux distribution via Slackware 2.0; everything was already compiled, and when choosing to download tarballs I would rather pick already-compiled ones.

Later on, to take advantage of my Pentium-based computer, I would get Mandrake, with its i586-optimized packages.

Most of my Linux based software would be sourced via Linux magazines CD-ROMs, or Walnut Creek CD-ROM collections.


The time I "wasted" with Gentoo in 2005 taught me enough about how Linux works to land me my first real IT job. I will forever be grateful to that distribution.


Sure, but you didn't need to suffer compiling stuff from scratch to know how UNIX works.

I first used Xenix in 1993-1994, and naturally wasn't compiling it from scratch.


I had a Toshiba Satellite with a whopping 96 MB RAM... as my main computer, even... it happily ran Windows 98... then I got the book "Linux From Scratch" and the rest is history... now I am a happy Mac user.


Have you tried "MacOS from scratch"? It's even harder ;)


But then you have to deal with big upgrades that might break your system and old packages (or start randomly adding PPAs etc.) A rolling distro means you can continually keep up with small changes and only adopt big new pieces (like systemd, pipewire, wayland etc.) if/when you are ready to.

I've installed Gentoo literally two times. Once per PC. Been using it for years. It's not like you have to keep tweaking it. It does help if you run a basic system like me, though (no DE, simple tiling WM, don't use much apart from Emacs and Firefox).


I never used Gentoo. Making sure the graphics and wifi cards in a laptop worked after every update on Ubuntu was hard enough for me.


Yea, same here back in the day. Stage 1 installations of Gentoo really made me interact with the kernel and software in a different way. It did not just work, but while solving the issues I learned a lot about how things worked internally. It's a great way to get really familiar with the workings under the hood.

But yea, nowadays I'm on Ubuntu LTS.


Same - this was around 2004-2006ish, when I maintained a Gentoo build for my Pentium 4 box. There was a certain draw for me in compiling my own binaries highly optimized for my processor, and Portage mostly worked. But my gosh, gcc build times were killing the fun. When Ubuntu arrived and I saw my peers being productive, I switched.


On my first PC, assembled from used parts, I was able to squeeze every bit of compute out of Gentoo. Being able to build smaller binaries by excluding dependencies seemed to help a lot. I used it until the first Ubuntu was released, and it just worked, and worked well. The only problem was that it was an ugly brown.


I ran Gentoo for a long time in the early 2000's. I learned nearly everything I know about Linux machines in general from that experience!

What was interesting about the USE flags was learning that a given package even HAD a particular integration with some other library / package. Realizing that the SQLite3 binary doesn't work the same when you don't have readline support linked into it led me to understand what readline was as a whole. That happened over and over again for a lot of the "invisible" libraries that are always included on every other Linux system.

Absolutely invaluable learning tool at the right time in my life for sure.


Wow...I feel weird.

A lot of the comments are saying that they tried Gentoo or used to use it.

And here I am, using it as my daily driver and server workhorse.

I wonder what makes me different such that Gentoo is the best for me.

And I am not going to enable binary packages.


You’re not alone! I run Gentoo on all my devices. GPD Win 4, Pixelbook with coreboot, my NAS, and a VPS. Why (IMO)?

1. Best documentation I have ever used.

2. Infinite amount of customization. Your system can be as minimalist or maximalist as you’d like.

3. Portage overlays > AUR any day of the week.

4. Choose your own init system. Want SystemD? Fine. Want OpenRC? Go right ahead!

5. It’s hard to overstate the power of a good community with well-described values. I feel safe and at home with Gentoo because I know that it will always value the #1 thing I care about on Linux: freedom. Gentoo is like a choose your own adventure. Want to run SystemD + Gnome + PulseAudio + binary packages? You can do that. Want to run OpenRC + Hyprland + Pipewire + NetworkManager instead of netifrc? Go for it!

Gentoo isn’t about compiling everything, it’s about choosing everything. Today’s announcement (although irrelevant to me) gives users more choices.


I would still be using Gentoo if NixOS hadn't come around. I like Gentoo just fine, but being a sysadmin for NixOS is much easier than being a sysadmin for Gentoo.


> I wonder what makes me different such that Gentoo is the best for me.

Ok, I'll bite: why is Gentoo best for you? (I'm not going to try to refute any of your statements should you reply, I promise. I'm genuinely curious.)

I'll offer my own experience why Gentoo is not for me: I don't run my own server, and at work I use servers in the cloud (Ubuntu based, I guess? Or AWS-flavored). No compiling any kernels or whatnot.

For the desktop, a friend convinced me to try Gentoo 10+ years ago ("it's compiled for your specific hardware, it'll be faster!"), I gave it a try, wasted lots of time getting it running, then saw it wasn't really noticeably faster for any task I did than regular Ubuntu, and took longer to get set up. So I ditched it.

Another common claim I found to be false (in my case): "I learned so much configuring Gentoo!". Well, no. I mostly followed the recipes like almost everybody else, setting flags and touching config files I didn't really understand, so I can safely say I learned nothing -- I just went through the motions.

But that's just my experience.


That's what is weird; I don't know why it's better for me!

I can understand that you didn't learn anything from installing Gentoo. I did, but in my case, I was not like most new Linux users; instead of immediately trying to learn the details when I started, I just wanted to install Ubuntu and get working.

It was only six years later that I installed Arch, then Linux from Scratch, then Gentoo, and in the process I had many lightbulb moments: "Oh, that's why it's like that!"

As much as I love Gentoo, I agree with you: the "faster" argument is a myth.

I hope you can tell I'm not trying to evangelize. :) You said you weren't going to refute that Gentoo is best for me; well, I'll go the other way and not refute that Gentoo is really bad for nearly everybody!

Anyway, I'll take a stab at why Gentoo is best for me: extreme customization. See, there's another way I'm not like most people; others start out with extreme customization and ricing, and they back off of it over time. I started with little and have only grown my customization over time.

Maybe it's because I'm on the spectrum, but computers are annoying to use for me by default. On Gentoo, however, I can make my setup fit me more than any other distro.

Another thing that might contribute is that I like lean setups. Just checked with `ps --ppid 2 -p 2 --deselect | wc -l`, and subtracting the terminal, shell, ps, and wc, there were only 30 processes. And that includes niceties like redshift and picom.

Because I do heavy fuzzing, that matters to me. Also, it is responsive; slow responses bother me more than they do others.

But to be clear, that doesn't refute your experience that Gentoo isn't faster; it took a lot of work to get Gentoo to this point, and it isn't default.

So I am weird.


People obsess over the compiler flags (and there was a short time when everything else seemed to be 32-bit x86 and Gentoo was one of the very few distros where you could compile everything for amd64), but the real advantage of Portage is the USE flags, where you can turn off major things like X support, or disable IPv6 entirely, etc. Nothing else I've ever encountered allows that customizability.

And that's been much more important to me than cflags, which I set once and ignore, ever since trying to install a command-line MP3 player on an ancient RedHat brought in an entire X install.
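The system-wide version of that is just a make.conf edit followed by a rebuild. A sketch, assuming current flag names:

    # /etc/portage/make.conf
    USE="-X -ipv6"

    # rebuild everything affected by the changed flags
    emerge --ask --update --deep --newuse @world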


Yes, I agree with this. I have used USE flags immensely.


Not the same person you're replying to. But I have some ideas to share.

> Another common claim I found to be false (in my case): "I learned so much configuring Gentoo!"

This isn't true for everyone. My general plan with anything is to follow the script exactly at the beginning. If that works out, start making incremental changes until something breaks or until you're satisfied. Gentoo taught me a lot of things that way - especially the kernel compilation (which is important to me anyway, due to my profession). Before this, it was Arch. And a regular Linux distro before that. All of them taught me something with the same strategy during the initial phase. Eventually the struggle gives way to familiarity and the learning rate starts to fade.

> Well, no. I mostly followed the recipes like almost everybody else, setting flags and touching config files I didn't really understand

I don't know about anyone else, but Gentoo config files were the easiest for me to understand. I feel very much in control with them. The Gentoo wiki doesn't just give you recipes - they always tell you the exact reason for a configuration or flag (This is true for Arch as well). I have also created packages for myself - so I have a reasonable understanding of USE flags. I've been thinking about migrating away from Gentoo - but I like USE flags so much. Sometimes they allow you to access application features that are not available on other distros (because they have to choose build flags that suit the majority).

Something that helps me with this level of control over the configuration is that I maintain them as literate org-mode files with explanations and even diagrams at times. This might seem like too much work for a desktop system. And often that is true. But this workflow suits my interests and profession really well. In fact, at least 3 custom programs are part of my desktop - configuring Gentoo to my liking is the least difficult part of it.

> Ok, I'll bite: why is Gentoo best for you?

Finally. I said this in another comment. The most attractive part of Gentoo for me isn't the custom compiled kernel or software (I use flatpaks for most desktop apps). It's the package management system. Portage is by far the best system I've experienced for creating system packages.


> why is Gentoo best for you?

It has zero opinions about how my system should be configured or what options the installed software should have. It lets me build precisely what I want to build without any hassle or compromise.

> I mostly followed the recipes like almost everybody else

The thing I really like about gentoo is that I have a virtual/akira package. It brings up any system precisely the way I want it. If I have a new system, I can just install that virtual from my overlay repo and be in business within a few minutes.

I wrote this virtual 10 years ago. It still works perfectly. Ironically, I guess I could say, I choose gentoo because I want to spend _less_ time managing my system, and it enables that in a way that no other distribution has for me ever.
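For anyone who hasn't seen one: such a virtual is just a tiny meta-ebuild in a personal overlay whose only job is to depend on everything you want. A sketch - the package list here is made up and much shorter than the real one:

    # my-overlay/virtual/akira/akira-1.ebuild
    EAPI=8

    DESCRIPTION="Meta package that pulls in my standard system"
    SLOT="0"
    KEYWORDS="amd64"

    RDEPEND="
        app-editors/emacs
        net-misc/networkmanager
        x11-wm/i3
    "

One `emerge virtual/akira` on a fresh box and Portage drags in the whole list.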


I use Gentoo on my personal servers because it offers the customization I want with the ability to easily install from source some various strange things I have laying around.

It's a rolling release so it never has a "major blow up day" - at worst I have to spend 5-10 minutes on a "breaking" change.

My desktop is just a Mac, but my Gentoo server provides the Linux I need.

(Work servers are Ubuntu because work software wants something well-known.)


> And here I am, using it as my daily driver and server workhorse. I wonder what makes me different such that Gentoo is the best for me.

Persistence like a Saint? Hard to say. I've been using it for 20 years too.


Do you have machines which "just work"? Not a daily driver machine or server that you mess with every week, but something you set up a long time ago and that has been working ever since. Something that you may only log in to a few times per year, if that.

I stopped using Gentoo when I made my first "production" system: I made it, people started using it, and it was "done" - as in, there was no longer any reason to log into it daily, or even weekly. I'd still want my security updates, and very rarely there would be a new feature that required a new package, but otherwise I wanted the system to be steady. And Gentoo is a horrible fit for this use case - even on my personal desktop, which I updated weekly, I had emerge build failures practically every time; and a system which might not be updated for months would be even worse. So I went to Debian, and it was good (especially with unattended upgrades), and I eventually switched all my machines to it.

If you use Gentoo, do you have any machines that you don't tinker with weekly? How often do you update them, and how often do you need manual intervention to finish the update?


Oddly enough, on my family's farm we got a DeLaval voluntary milking system, i.e. a milking robot. Very pricey piece of gear. And several of its embedded computers for some reason run Gentoo.


I use mine daily and update them about weekly, yes.

Leaving a Gentoo system alone for months is, as you say, not something you should do. I'm lucky that I don't have that situation.


Well, here is your answer as to why so few people use gentoo.

I can count at least 6 Linux computers in my household (this includes stuff like work laptop(s), a fileserver, my family members' PCs, and that Raspberry Pi which only acts as a camera server for my 3D printer). And life is just easier if all of them run the same, or at least very similar, systems.


Oh, I know people don't, and shouldn't, use Gentoo.

What's weird to me is that it works for me. And it's weird because I see how Gentoo is just not a good fit for most things, so fitting it feels weird.



Extensive binary packages would have been nice back in the years when Gentoo was a meme OS; CPUs were a lot slower and not multicore. I don't have a use for them now.

Anyway, the best OS is the one you already know how to use.


Too little, 15 years too late.

One of the reasons I moved from Gentoo to FreeBSD 15+ years ago was that Gentoo made it mandatory to compile everything while FreeBSD provided binary packages.

It may not be that important today - but it was a game changer with a single CPU core and 1GB of RAM.


Sad to say, you are completely right. After being on Gentoo for a long time I switched to macOS for a few years and used Gentoo Prefix, which is vastly superior to Homebrew. I added patches to fix upstream llvm to work in Gentoo Prefix on macOS almost a decade ago [1].

I finally met someone at the GSoC reunion who wanted to get me maintainer status, but I never got them to follow through. He had already warned me that it would be a complicated task to accomplish. I kept mentioning that Prefix needed binaries as well. Imagine if Gentoo Prefix had made installing packages as easy as Homebrew does on a Mac.

It's sad, but Gentoo is a good example of why a technically superior open source project cannot survive against inferior solutions without good stewardship, if it disregards basic end-user quality-of-life features. I would argue that is also what killed OpenSolaris/illumos (which is basically on life support): the people in charge could never get past their elitism and decide that, for community engagement, the kernel build needs something simpler than 100 layers of nested incomprehensible makefile/shell spaghetti.

[1] https://github.com/fishman/timebomb-gentoo-osx-overlay/tree/...


Is it too late? Gentoo has always been a niche within the already niche Linux community, but they seem to keep chugging along happily. Are they having some problems?


~Gentoo is probably the most installed "Desktop"-Linux in the world -> ChromeBooks

https://wiki.gentoo.org/wiki/ChromeOS


I guess what I’m trying to say is, in order to define “too late” we need to define an objective and then figure out if we’ve passed the point where it is possible.

From the outside it looks like the Gentoo community is happy and stable being small. And that’s good.

I’m not sure Google copying some of their software and putting it on Chromebooks is a huge win for the community, although I bet some of the Gentoo devs are proud.


> ChromeOS is built using Portage, Gentoo's package manager, and Gentoo-based chroots. ChromeOS uses the upstart init system.

Sounds like they are just using parts of it and not Gentoo itself.


Portage is like 90% of what makes Gentoo Gentoo.


They have had the binary packaging capability for a long time now. It just didn't make sense to use it on a global scale, considering the vast combinations of packages they could generate (due to USE flags, profiles, etc.) and the infrastructure needed to distribute them. They seem to have decided to offer binaries for the most common configurations. This isn't a major change. Perhaps the infrastructure also makes more sense now than it did before.
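For reference, the do-it-yourself version has been a couple of make.conf knobs for ages. A sketch, with a made-up host URL:

    # build box: /etc/portage/make.conf
    FEATURES="buildpkg"          # keep a binary package of everything emerged
    PKGDIR="/var/cache/binpkgs"

    # clients: /etc/portage/make.conf
    FEATURES="getbinpkg"
    PORTAGE_BINHOST="https://buildbox.example.com/binpkgs"

What's new is Gentoo running that build box for everyone, for the common profiles.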


I shudder to think how many sky-high electricity bills were run up and how much greenhouse gas was released to needlessly compile the same software over and over again.


Probably orders of magnitude less than the amount of wasted energy and greenhouse gases emissions of needlessly compiling the same javascript software over and over again on the billions of devices used to access the web.


Not to mention all the batteries we burn out from it, or the billions of devices that are replaced solely because we run so utterly inefficient software.


compared to the average power draw of activities like gaming? probably not a significant amount.


Probably not that much. It's not like gentoo was widely used compared to Red Hat or Ubuntu. Also CPU power usage is a rounding error if the computer was on anyways compared to the spinning HDD, monitor etc.


As far as hobbies go, it's very far from the worst. Some people play games for 12 hours a day on kilowatt gamer PCs, or race their car around a track, or cruise around on a gas guzzling boat, etc.


Probably a fraction of a fraction of 1% of the power used by Microsoft to constantly collect personal data from Windows 10 users 24/7.


Do you think that many people were even using it and compiling regularly? I left it around 2010 for linux mint. Just last summer I tried it again just out of curiosity, and couldn't even get it installed. They had a broken release and of course there was some hack to re-update everything back to the last stable version, which gave me flashbacks of what Gentoo Life was all about and I stopped then and there.


Not to mention the wastefulness of general purpose CPUs. Most computer use is in the browser, why don’t we have a Firefox ASIC yet?


I'm happy to be announcing my Program as a Processor product. Want to run another program? No need, it's an app.


Probably fewer watts than the full-screen videos and rendered play worlds that are drawn behind the main menu of any given AAA console game bought by millions of people.

Given that behavior, one can only think that software people generally don't care unless it bothers a user metric like 'battery life'.


Wanna talk about JS? Java?


Contrary to popular belief, Java is amongst the most efficient languages regarding power consumption.


I was referring directly to the amount of compilation, not necessarily the power consumption. But with regard to ecological impact, I would guess you have Java on a server in mind.

But the biggest Java user in terms of number of devices is Android. And every time you install an app, you compile it. Including every time you update it (which nowadays is... every day?). It will also recompile in the background to use PGO. Nowadays [1] Google could pretty much compile it server-side for most devices, and save a lot of battery wear (batteries wear when getting hot, and compiling, weirdly, heats), in addition to power consumption.


It depends on how you use it. For long running applications like services, yeah, the JIT will get the bytecode down to some very high quality machine code. If you write lots of small applications and compose them using something like Unix, then Java is very inefficient.


That's so funny - Java was also the first thing that I had in mind... also Python and "LLM training".


nothing compared to msmpeng.exe, which spins several cores constantly 24/7

(microsoft's anti-virus)


I heat my home. The difference between heating via compiling my kernel and heating via whatever heating you use is almost certainly negligible. But I get a custom built system out of it too.

I don't cool my home, though.


The difference between a heat pump and resistive heating is certainly not negligible.


Well, even if everyone did have heat pumps, let's think about it a bit. I upgrade my system about once a week. I reckon on average it's about 1 hour of compiling per week but let's say 2 to be safe. That's no more than 1 kWh per week. A fridge-freezer will use that in a day. An electric oven will use that in 15 minutes. An electric car will use that travelling 3-4 miles. Most households use something like 30 kWh per day to heat. Even if a heat pump made that like 10 kWh it's still a drop in the ocean. And don't forget the resulting binaries run slightly more efficiently because they are compiled specifically for my CPU, plus I enjoy it.


What’s your standard build look like at work, and how many times a day does it run? Does it go all the way to deploying a dev cluster?


Probably less than what they pay out to some TX-based miners to shut down when grandma starts freezing.


Crypto mining, ML training, NSA servers holding all your data forever...


The thing that makes Gentoo fantastic is the fact that it's designed from the ground up to make it easy and maintainable to add the one little tweak you want for your system. In all the other distros I've tried (all the major ones, many of the minor ones), they tend to work better out of the box, but straying from the beaten path results in a flogging like no other. A tuned Gentoo system just works™, whatever "just working" means for you personally. That might be a python 2to3 name collision when Arch decides to overwrite upstream, or a system-critical latency issue when SystemD does too much unnecessary garbage in kernel mode, or whatever, but all of its flaws aside I'm a very, very happy Gentoo user.

Upstream binary packages are just another extension of that freedom. You already had binary versions available of a few major projects (or could roll your own build server), but making more of them easily available allows a lot more people to reap those benefits without having to worry about the huge time sink of building every little thing. If you need more flexibility (patches, use flags, ...) for a given package, that's still available and easy to maintain. This is a huge win.
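For instance, per-package patching doesn't even need an overlay - Portage picks up user patches from /etc/portage/patches on modern (EAPI 6+) ebuilds. A sketch, with a hypothetical package and patch name:

    # drop a patch where Portage will find it on the next build
    mkdir -p /etc/portage/patches/app-editors/vim
    cp my-fix.patch /etc/portage/patches/app-editors/vim/

    # rebuild just that package; the patch is applied automatically
    emerge --ask --oneshot app-editors/vim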


There was this sweet spot for a while where Gentoo just worked like a breeze, where there weren't too many or too few USE flags, and when I would recompile OpenOffice when my dorm room was too cold on winter nights.


Only on HN could any state that Gentoo was ever in be reflected upon as "just worked like a breeze" unironically and I mean that in a fond, loving way.


I know it's off topic, but I find it interesting how this sentence was so hard for me to understand. I struggled for many seconds but couldn't go beyond "was ever" because it felt like there was some mistake, like a word was missing here or there.

In the first pass, my mind decided that "state" was a verb, and, therefore, there should be a subject appearing before it. But I only found "any", instead of "anybody" or "anyone". Then there is "was ever in be" which, by itself, is a weird construction. It does make sense in the sentence, because it is "[the] state that Gentoo was ever in" + "be reflected upon". But since I was (unconsciously) dividing the sentence into smaller parts, trying to identify the subject, the predicate, the verb, the object, or whatever would make sense to me, cutting the sentence like that only confused me even more. I kept going back and forth trying to imagine which word was missing, and only after pushing through to the quotation did the whole sentence finally make sense.

Although I can't think of an example right now, I know that it is common to use sentences with a structure similar to this one, and I see them almost daily, probably multiple times a day. However, as a non-native speaker, this one was an actual struggle, and I feel so good for having overcome it that I am willing to comment on it.

For closure, if I was the one writing this sentence, I would probably use the active voice with an indefinite pronoun, which is also probably what my mind was expecting:

    Only on HN would anyone reflect upon any state that Gentoo was ever in as "just worked like a breeze".
And I ask: were there native speakers that also couldn't understand it in a single reading?


Native English speaker here. The sentence was definitely missing a word. I read it multiple times before I got it too. Your rewrite was easier to understand.


a garden path sentence, but it parses.

[only on HN] could [any state [that Gentoo was ever in]] be [reflected upon [as "just worked like a breeze"] unironically]_,_ and [I] mean [that] [in a fond, loving way]


But it does! Even with the weird packages I have installed (ROCm) and a ton of accept_keywords unmasks, all I have to do is run an overnight update twice a month. I haven't touched /etc/portage in months.


You could start from a later stage - I think they had a mostly binary stage 3 - and emerge was generally solid, albeit slower due to the compilations.

So at least once upon a time (10+ years ago) there was this option of just using it as almost another regular distribution.

Slackware on the other hand... (I say this in a bad way, and I think it's changed since: with Slackware, for anything more complex you had to manage the entire dependency tree yourself, and it was a pain in the neck for anything not among the not-that-many regular packages; nota bene: for the "beaten path" Slackware was more or less just another Linux distro, but the "beaten path" was quite narrow.)


I have this botched-up Debian desktop installation that I rarely use but never quite got around to reinstalling cleanly, because I need an installation medium for that and I don't have USB sticks anymore.

Do chroot installs still work? I would probably give it a try again.


Gentoo for me is not about compiling things from source, or "performance", or tweaking your OS for days, which seems to be the common perception among people.

Now that there is an officially supported binhost, you do not have to compile anything if you don't want to. You can use a desktop profile with zero customization and have a system that works "out of the box". You still have the option to customize and compile, but it's not required.
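Concretely, opting in is one small repo config plus a flag. A sketch based on the announcement; check the news post for the exact URI matching your arch and profile:

    # /etc/portage/binrepos.conf/gentoobinhost.conf
    [binhost]
    priority = 9999
    sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/17.1/x86-64/

    # prefer binaries, falling back to source where your flags differ
    emerge --ask --getbinpkg app-office/libreoffice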

Gentoo's benefits for me include the tooling around portage and the PMS (package manager spec). Gentoo's software packaging tooling is (IMO) superior to what exists in Debian and other conventional (not nix) distros. The most similar would be Arch's PKGBUILDs, which are also pretty nice.

Packaging software for Debian - creating a .deb - is a fairly arcane process, and the documentation was difficult to find and digest when I attempted it. Gentoo has a wiki section called "ebuild writing guide" that describes everything in great detail.

There are also benefits like being able to select, on a per-package basis, whether you want "stable" or "unstable" versions of software. Gentoo is a rolling release distro, but requires packages to meet certain criteria before being marked as "stable". You are not forced to use "stable" packages if you want the most recent releases from upstream (basically what Arch does), but you still have access to them if you'd like.
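That mixing is a single config file. A sketch; the package picks are just examples:

    # /etc/portage/package.accept_keywords/testing
    # these two track ~amd64 (testing); everything else stays stable
    app-editors/neovim ~amd64
    dev-lang/rust ~amd64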

Gentoo's community is the most important feature for me though. Gentoo feels like a proper open source project. You don't have to be a Gentoo developer to contribute, and you can interact directly with Gentoo developers when you need guidance or have questions. Gentoo's community seems to have a lot less elitism compared to other distros also which is very important to me.


I think NixOS has kind of taken over anything I would use Gentoo for.


you build "impure" (in nixos speak) packages for all your software? if not this is not a close comparisson. gentoo shines when you need system wide control of build flags (for perf or security)


I don't know what you mean by impure package. You can modify the build of any nix package and it will rebuild everything just like Gentoo.

I don't personally believe in the CFLAGS "performance" micro-optimization though so that's not really something that matters to me. Security is pretty hardened by default on Nix: https://nixos.org/manual/nixpkgs/stable/#sec-hardening-flags...


> I don't know what you mean by impure package.

Then you haven't read their own docs but are providing lessons online for it :/

I used quotes because it is their term for that procedure.


> gentoo shines when you need system wide control of build flags

I'm pretty sure nixos can do that, though?

There's this for CFLAGs on a single package:

    nix-shell -p 'hello.overrideDerivation (old: { NIX_CFLAGS_COMPILE = "-O3"; })' --run "hello --version"
And this page seems to talk about overriding settings for all of nixpkgs, and even something kind of like USE flags (although I agree that it's way less powerful and generic than USE flags; that's one place Gentoo wins): https://nixos.org/guides/nix-pills/nixpkgs-overriding-packag...


It seems like a close comparison to me. The inputs to nix derivations aren't always directly named after the build flags they control, but ultimately they control build flags. If you disable the binary cache then you're building everything yourself with system wide control of build flags.

Purity has to do with whether you let those builds depend on things found lying around on the system versus things that are explicitly in the derivation (i.e. a pure build can tolerate missing files by building them; an impure one may get stuck for lack of a dependency, or because a dependency was not as expected).


Installing Gentoo was always fun for me - getting it working, and the dream of a fully custom optimized machine (I guess Arch offers a similar experience, minus the full optimization) - but getting everything polished afterwards was always just too much for me, and I switched to a packaged distro.

It seems like it would be cool for an SBC, but the compilation (or setting up cross-compilation) was always too much; maybe now it's feasible again? But I'm too old to have the time to try!


> minus the full optimization

Back when I used to mess around with this more, I never noticed much speed advantage from compiling my software vs installing binaries. What did help was understanding what was running and cutting out the things I didn't need. I'd be surprised if Gentoo offered any advantages over Arch, Slackware, etc. for that.


> It seems like it would be cool for an SBC, but the compilation (or setting up cross-compilation) was always too much;

Back in the day I had a few relatively slow machines and used to compile my kernels using distcc to offload the tasks to them. I never used cross compilation but I see it is supported so it may be a possibility for small SBCs.

https://www.distcc.org/
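Portage has first-class support for it too, if memory serves. A sketch, with made-up host IPs and job counts, assuming matching toolchains on the helpers:

    # on the machine doing the emerging
    emerge --ask sys-devel/distcc
    echo "192.168.1.10/8 192.168.1.11/4" > /etc/distcc/hosts

    # /etc/portage/make.conf
    FEATURES="distcc"
    MAKEOPTS="-j12"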


So apparently it would be possible to use a mostly binary installation but compile some libraries, interpreters and CPU-hogging apps with -march=native - and that wouldn't cause any problems with ABI or anything?
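Something like this is what I have in mind - a sketch pieced together from the docs; the package picks are just examples:

    # /etc/portage/env/native.conf
    COMMON_FLAGS="-O2 -march=native -pipe"
    CFLAGS="${COMMON_FLAGS}"
    CXXFLAGS="${COMMON_FLAGS}"

    # /etc/portage/package.env
    # build the hot paths natively, take the rest from the binhost
    dev-lang/python native.conf
    media-video/ffmpeg native.conf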

Would it be hard to set up a build server on another machine? What if that other machine was running Arch or some other system?

I remember that when running Gentoo ~20 years ago, package compilation quite often failed and I had to go fix some package settings or something. I don't even remember it well, except how disappointing it was to see a failed compilation again after waiting a long time - is this still a common occurrence?

edit: at least the build server doesn't look too bad, and running another arch could be solved with a VM / container. Really feeling like giving it a try; I still remember the good times I had with Gentoo way back (even with the compilation problems)


Any background on why (now)? The post is a bit sparse.

(I'm also one of those who left Gentoo for Ubuntu, both because compilation made it needlessly slow to wait for a tool I needed, and because emerge was just so slow compared to apt. Ebuilds were awesome at a time when dpkg build tooling seemed to change completely once a year.)


Interesting that this is still going on.

I haven't done it since Slackware came on floppies and you mostly had to recompile your kernel to get the right drivers in.

Even did a linux from scratch once, later, to see how it goes. Then went along with my business.


The Pi (at least 1-4, haven't looked at 5) has an extremely weird setup where the GPU actually boots the system using a binary blob before handing it over to the CPU. When mainstream Linux was first beginning to support the Pi, this caused a lot of issues, one of which was that GPU acceleration was not available. It was in the Pi Foundation's own distro (Raspbian/Raspberry Pi OS) but not in others. One of the first to get it working was Gentoo. I really, really wanted to get Open Morrowind working so I tried it for a while.

Then I had to re-emerge a software package to change my timezone and I was just done. That's way more complicated than it should be or needs to be.


wow. slackware has been around since '93 and loadable modules since '95... i always remember compiling drivers in to save RAM (i.e. i was more removing drivers than adding). But somewhere around kernel 1.2 we did have to compile driver modules in, indeed.


Linux was very new. I was in the last years of high school, 93-95 exactly. I couldn't even afford a 386 :)

A friend of mine got us access to one at the work place of a relative of his and we spent nights fiddling with Slackware. We even made a serial cable so I could log in via serial from the other computer in the office, which was only a 286.


Gentoo was the first distro that I grokked. The way they walked you through the build system (honestly can't remember if it was official docs, the community forum, or what) struck just the right balance between teetering on the edge of everything crashing and burning and learning the deep secrets of how an OS actually works, and it kept me intrigued for years. I ran it all through college, but then I got jobs at non-nix (un-nix?) companies and it gradually faded. This seems like a great development, but if it had existed when I was exploring Gentoo, I might have taken the easy way and learned much less.


> honestly can't remember if it was official docs, community forum or what

Not sure how long this has been true for, but the Gentoo Wiki (which includes the Gentoo Handbook[0]) is an absolute goldmine of information, and is probably what you were looking at if I had to guess.

[0] https://wiki.gentoo.org/wiki/Handbook:Main_Page


yes, that was it. It really is an example of fantastic documentation. I had tried to run a Linux distro that I had bought when I was 16 or 17 (in a box, from a store!) but failed to see the appeal. Being thrown right into the deep end with plenty of support (good documentation in this case) is still how I learn best, and I credit my experience with Gentoo with helping me find the hacker mindset. Anything is possible if you're willing to dig deep enough.


This mix of a binary cache for common compilation options with transparent fallback to source builds when flags are customized is how the youngest generation of source-based Linux distros (NixOS and GuixSD) work, and it's a really nice combination. Congrats to the Gentoo community! I think a lot of people will enjoy this kind of setup.


Does that mean they're building each package about 2 times, and maybe even more if systemd support needs to be there? As far as I know, Gentoo supports both musl and glibc, and could support either systemd or openrc... (some packages link against systemd for some functionality)


They do mention that they support 3 different profiles - OpenRC, Gnome/systemd and Plasma/systemd. Nothing is mentioned about a glibc/musl split, but they seem to be aiming only for reasonable coverage, so I'd speculate that only glibc is targeted.


I ran Gentoo on my desktop from about 2005 to 2008. The last year of that period I was without Internet access, and when I tried to update the system after being online again, it broke (not surprising, I heard similar stories from users of other rolling release distros since).

But it just so happens I set up a virtual machine running Gentoo but a few days ago, with no clear idea of why or what for. What a remarkable coincidence.

Gentoo requires the user/admin to put in a lot more work than mainstream distros like Debian, but in return you get such a high degree of control and choice that the system, once it's up and running, feels more like a pet than a piece of software.


Always have a soft spot for Gentoo. The first distro I ever used was RH 6.1, but Gentoo helped me actually learn how the system works (e.g., partitioning, the FS layout, how a bootloader works, what an init system is, etc.)


Back in the 2000s I would spend immense amounts of time compiling custom Gentoo on old SGI MIPS systems to obtain the best performance. However, even back then I remember thinking that as computing power evolved, there would be a point where Gentoo would either fade into obscurity, or release binary packages to allow users to quickly set up the distribution. As a result, I'm not surprised by this announcement in the least and half expected it to happen a decade ago.


> Back in the 2000s I would spend immense amounts of time compiling custom Gentoo on old SGI MIPS systems to obtain the best performance.

I'm curious, did you run benchmarks? What kind of performance gains did you get, if any?


I didn't run any formal benchmarks back then, but the performance I would get was on par with the fastest x86 and x86_64 machines I had running Linux in the mid 2000s. For example, I would bring an SGI O2 with an R5000 CPU into my classroom running Gentoo with Fluxbox and let the students play with it. They would often comment about how fast it browsed the web and connected to our campus file and RDP servers, and were shocked when I told them the CPU only ran at 150Mhz!


> If you use useflag combinations deviating from the profile default, then you can’t and won’t use the packages

so.. they are basically never going to be used then? It has been a while since I ran Gentoo, but I remember USE flags being the most useful and fun feature of Gentoo, giving it power other distributions cannot hope to match. I cannot imagine running Gentoo with default USE flags; might as well switch to Debian in that case.


I remember Gentoo taking more than a full day to compile on my computer back then. It did teach me a lot though, will always have a soft spot for it.


Yeah, a day was pretty normal.

I installed gentoo on a work machine at a new job - a dual Xeon something with 64GB RAM, back when 8GB of RAM on a workstation was plenty. I had a blast. It took "only" about 3 hours to get my usual to-go system.

Unfortunately I need Haskell on my workstation (xmonad), and ghc alone can easily take 5-20+ hours of compile time on older computers. I compiled ghc a few times on an old T41 - it took more than a day.

Still, gentoo is my first choice. I run it on a few root servers and every workstation.


Yeah, IME it's always a tiny number of packages; IIRC my system only took a day to build because webkit took most of that time, and I didn't even try to compile firefox. (In fact, some of my setup retains the expectation of running firefox from a tarball from Mozilla specifically because I used to run gentoo)


Gentoo was great fun, when I was young and had a lot of time. It worked surprisingly well. But I often skipped installing security updates, because it could take days to finish on a slow system.

Maybe some day I will get nostalgic and try it out again, but I really don't miss it yet. It's also quite a waste of CPU power and energy to compile everything from scratch without a real need.


It may look like a waste to compile from scratch, but it's great for reproducible builds.

That can be important in cases where you need to be sure you are running the source code that you can see.


I'll never forget my first time building Gentoo, in school in 2003, on the school laptop.

I started the build in the evening, and in the morning I waited for the current package to download, then closed the lid, put the laptop in the bag, took the train to school, connected to the school wifi, and continued the build in school.

It was fun to try but that's all I did, quickly moved on to something more sane.


Similar, I did this in my first year at uni, and I felt invincible. The whole idea of building your own OS distro from parts is the epitome of a so-called IKEA effect. Simply formatting that hard drive and saying goodbye to my “unique” setup (in reality as vanilla as they come) was difficult!


I went from Slackware -> FreeBSD -> Gentoo -> ArchLinux and haven't looked back except that I now run OpenBSD for my router.


Does no one remember the "funroll-loops" parody site? https://www.shlomifish.org/humour/by-others/funroll-loops/Ge...


So, they just hold more binary packages than before? How tame. They should push further and make it easy for people using the same CFLAGS to share their binaries through torrents based on decentralized trust.


I don't use Gentoo, but isn't the whole point that you compile everything? Could anyone explain some pros/cons of Gentoo other than squeezing out performance for your specific machine?


It's got very similar benefits to Arch in terms of system setup and configuration with a few extras:

- for the binary version the difference might not be as significant but bit for bit I'd say it's still more configurable than Arch

- in terms of choices they've made about conventional system defaults, there's more interesting options on offer: especially when it comes to Systemd -vs- OpenRC & networkmanager -vs- netifrc

- I'm not sure how true this last point is, but I get the impression the system of overlays & profiles is a little more expressive & powerful than equivalents in other distros. E.g. Manjaro is often considered to be something of a bastardisation of Arch by virtue of some underlying design decisions made in "forking / extending" it, & compatibility is limited. On the other hand, things like Funtoo & Pentoo are really just Gentoo at heart, using its core features for packaging distro customisations.


It offers more customization than Arch and the like, and it allows you to fix bugs you might find annoying more quickly than maintainers might. This is in addition to some nebulous performance gains from optimizing the builds you compile yourself.

The problem has always been that while you had all this choice, the one choice you didn't have was to just use regular old binary packages for the things you didn't have to customize. This complaint has finally culminated in TFA.


I tried Gentoo Prefix for a while on macOS, so I could use the same package manager across OSes, but always compiling from source got a bit tiresome on a laptop, so I went with MacPorts instead.


> To speed up working with slow hardware and for overall convenience,


Wonderful - that means you can use binaries for most of the system and just rebuild the parts you want to customize. A great saving of compile time!


Most distros are still not providing x86-64-v3/4 level builds yet, so gentoo is still relevant :P


Is the performance gain non-negligible?


It's much less about performance gains and more about improved battery life on laptops.


Isn't this just over three months early?

But this sounds good - making it optional to compile every time.


Side topic: when did Gentoo (which is a penguin) start using a cow for its "mascot"?


IIRC the cow has been there forever (apparently since 2004) - https://wiki.gentoo.org/wiki/Larry_the_cow

I always thought its head kind of looked like the stylized G logo.


I feel old now, since I was using Gentoo before Larry existed, and Larry has existed "forever."


The years between 1937 and 1980 are the same as 1980 and 2023. :(


I never settled with Gentoo, but something about it made me interested. Not quite sure what.


> something about it made me interested. Not quite sure what.

Well, it is unique. It isn't another copycat or cosmetic derivative of Ubuntu.

Actually, for the most part, it's in a league of its own.


Gentoo was also one of the first accessible rolling release distros, which has its own advantages (and disadvantages, to be sure).


4chan meme, or perhaps because it's Linux from scratch for dummies.


I think this is unfair - Arch is also meme'd by linux enthusiasts and 4chan.

Additionally, Gentoo is the largest distro whose USP is a source-based package manager. I suppose you could take issue with that approach for the reasons the systemd maintainer does, where he claims it wastes CPU cycles and time. Personally I disagree with that assessment, since reproducible builds are a vital part of FOSS.


Fun fact: Gentoo is the most normie distro. ChromeOS is based on Gentoo, and Chromebooks all ship with coreboot. I'm not disparaging it so much as seeing it as a similar learning tool.


This is partially true, but I believe the only component ChromeOS takes from Gentoo is the Portage package manager, used during some of its bootstrapping steps.


ChromeOS is a gentoo derivative, but I struggle to describe a system as being gentoo if portage isn't available in the final image.


I was never a gentoo user (I installed once, like I did for basically any distro back then), but I wouldn't knock the sort of "apparently-unproductive tinkering" that Gentoo epitomizes. Like building the umpteenth blog engine, or yet another Todo manager in $language_du_jour, that's how we learn.


I'm not calling it unproductive, it's useful to learn the OS and how things work. It taught me all I need is debian.


I use gentoo on my desktop and debian on an old refurb thinkpad I got for £100. Both of them are amongst the last of the large independent linux distros and I think both approaches are valid.

Never had a bad experience with Arch but the way they use the AUR as a crutch is a bit off-putting. It's a little bit like a giant gentoo overlay but pushes all the complexity onto the user.


Gentoo was a meme before 4chan - I remember it from early 2000s IRC culture.


Didn't they try this 20 years ago?


If someone could wrap the Gentoo build system with the Nix packager, and port Arch docs to Nix, you'd have the perfect Linux distro.


That's more like a reminder that the functionality and binaries were there for the last 20 years...


The volume of amd64 packages has doubled from 10 GB to 20 GB since November, so I guess your assertion is false.

https://www.akhuettel.de/~huettel/plots/mirrors/binpackages-...


The functionality in Portage has been there since forever, and the standard packages (think of it like a standard Ubuntu install) were there forever too. They've only branched out and now offer more binary packages. Which is absolutely great, but not a new feature... The headline is still misleading.


from the gentoo page

> we’re now also offering binary packages for download and direct installation! For most architectures, this is limited to the core system and weekly updates - not so for amd64 and arm64 however. There we’ve got a stunning >20 GByte of packages on our mirrors, from LibreOffice to KDE Plasma and from Gnome to Docker. Gentoo stable, updated daily. Enjoy! And read on for more details!


They've had a binary version of the full fat linux kernel available for a few years now. Other packages like firefox-bin have been available since before I started using gentoo in 2017.

Edit: Actually looking into this more the headline is accurate. This is the first time they've provided official binaries aside from stage3s when doing the initial installation.


>> Gentoo Linux Goes Binary.

they did decades ago.


> But hey, that’s not optimized for my CPU!

> Tough luck. You can still compile packages yourself just as before!


Is this an April fools joke?


According to the Mayan Calendar...


i used gentoo 15 years ago. went to debian, then ubuntu, and now i'm on arch. arch is the best btw.



