I Love Arch, but GNU Guix Is My New Distro (boilingsteam.com)
297 points by ekianjo on Nov 20, 2021 | 315 comments



> Guix System is an advanced distribution of the GNU operating system. It uses the Linux-libre kernel

It's worth pointing out that the linux-libre kernel is developed under the FSF doctrine that "binary blobs are bad unless you can't see them". This has been taken to its logical extreme here, where this Linux fork actively removes security warnings informing users that they need to update their CPU microcode, because microcode in ROM is fine but dynamically loaded microcode updates are not, in this school of thought.

https://lists.gnu.org/archive/html/info-gnu/2018-04/msg00002...

I have no interest in software that uses arbitrary religious dogma (that doesn't help users' freedom, as those users' CPUs have proprietary microcode whether they know about it or not) to justify censoring critical security vulnerability notifications for users. I regard this as actively evil anti-user behavior.


Lobbing a charge like "Arbitrary religious dogma" is pretty much the opposite of a reasoned look at what the goals are here.

I find the approach interesting. The goal of the Free Software folk has never primarily been to "provide the best information to the end user"; it is to "preserve software freedom." Guix looks like a possible technical path to do that. Will it work? Will it cause harm? I don't know yet.

Either way, the law-slash-tech has always come before ensuring that it's publicly understood and ready, as it should.


There is no freedom without information. Freedom requires being informed so you can take decisions that you believe are best for you. Lack of knowledge restricts your ability to make the right choices, and thus your freedom.

"Freedom" through lack of information is the kind of tactic that repressive regimes use to control their populace. It has no place in an organization claiming to further and support true freedom.


I don't fully agree with some of these decisions, but in defense of Guix I must say that, as a user, replacing linux-libre with the mainline Linux kernel is trivial if you are using the Guix tooling.
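For what it's worth, a sketch of that swap (assuming the community-maintained nonguix channel is already set up; the module and variable names below come from its README and may have changed) looks like this in configuration.scm:

```scheme
;; configuration.scm fragment (sketch) -- assumes the nonguix channel
;; has been added to ~/.config/guix/channels.scm and `guix pull` run.
(use-modules (gnu)
             (nongnu packages linux)       ; upstream kernel + firmware
             (nongnu system linux-initrd)) ; initrd with microcode loading

(operating-system
  ;; ...the rest of your existing configuration...
  (kernel linux)              ; mainline Linux instead of linux-libre
  (initrd microcode-initrd)   ; load CPU microcode updates at boot
  (firmware (list linux-firmware)))
```

Then a `sudo guix system reconfigure configuration.scm` applies it like any other change.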


And exactly when did Guix or the FSF censor anything?

I'm not speaking for them, but I'm pretty sure they would say something in the realm of "the information provided by your updates gives a false sense of security, so why WOULD we pass it on?"


Reading this from the guy who's trying to put Linux on a black-box has a certain flavor of irony, to me.


Yes, but how do static proprietary firmware blobs burned into ROM "preserve user freedom" vs. dynamic loading, even if the user has complete control? They don't, and that's why it's arbitrary.


Those two things are WILDLY DIFFERENT in real life. Broadly, "firmware blobs" are usually put there by companies and other entities that you can much better track (and sue if you have to). Dynamic loading stuff can come from any-darn-where. Apologies, but this should be really obvious.


Yes, that is obvious. It is in fact one of the benefits dynamic loading provides for people who care about free software. The blob can often be reverse engineered and replaced with free software.

And literally everyone signs their firmware.


The OP referenced CPUs and for at least AMD and Intel CPUs, firmware blobs need to be signed.


In theory a government could force AMD or Intel to sign a malicious firmware. That could be distributed as the normal firmware or used to target a specific person.

Firmware that ships with the actual CPU is a bit 'safer' because it has a lot more users and eyes looking at it (sort of). Depends on what your attack vector is.

Another aspect is that while this policy is worse for Linux-libre users, it is sort of a protest against needing these binary blobs. The hope is certainly that adoption of Linux-libre would result in AMD/Intel not having these non-free software requirements.


Nobody is looking at Intel's microcode mask ROM. It's in ROM. You can't even look at it.

Microcode is a bad example because the updates are encrypted too, but for the vast majority of the blobs that the FSF hates so much, at least you can look at them and audit them with a disassembler. Meanwhile, the devices with giant firmware ROMs that they openly endorse are not auditable, as you can't see the blob. This policy is making it easier for manufacturers to ship backdoors that will never be detected.


so after so many posts this is the only one that expresses your point clearly. notice how it does not contain any FUD

to the said point, this is definitely a VALID security concern. FSF needs to make these concerns clear. you seem to be very invested in this matter. have you raised these concerns with them?

EDIT: having thought about it some more. doesn't isolating blobs to ROMs restrict the problem to ROMs? i mean non-RYF-certified hardware already has this problem. the strategy might then be to focus efforts on opening up ROMs. note that this is simply a question. i am not an expert in this field but i am trying to form an informed opinion


People have raised these concerns. I have personally raised this concern directly to RMS at one of his conferences.

It gets ignored.

I believe it is for the same reason religions ignore issues.

The FSF totally acts as a religion. The church of his Gnusance St. Ignutiutus. And just like religions it pretends to hold an ethical position while making compromises for practical reasons.

Compare religions claiming:

- Killing is bad, unless it's about opponents in war.

- Slavery is bad, unless it's outsiders who are slaves.

- Blobs are bad, unless they are stored on chip flash.

The FSF gets criticized precisely for this hypocrisy. And just like religions ignore criticism about their inconsistencies so does the FSF.


>The FSF totally acts as a religion. The church of his Gnusance St. Ignutiutus. And just like religions it pretends to hold an ethical position while making compromises for practical reasons.

so is the FSF dogmatic or is it practical? surely they can't be both


It's definitely both -- it's just that the dogma is broad, and they appear to very carefully and intelligently choose battles. The dogma is "long term software freedom." This includes occasionally accepting that there are battles not worth fighting (or better yet, fighting strategically.)

A really simple example is GPL violations. Pure dogma would require that they try to fight a whole lot of them, since they occur all the time and they're clearly in the legal right.

But they don't, and that's the MUCH SMARTER way to go.

What strikes me as smart about this Guix thing is that it's about "reinforcing modularity." They can't free up EVERYTHING, but they can make the software work differently, so that it's harder to pretend that everything is all the same.


then you must disagree with alexvoda, who said that FSF "pretends to hold an ethical position"?

to clarify my previous post, i was using the term dogmatic in the context of some people claiming that FSF is a cult


To go big picture, I think all of this is rooted in the fact that "capitalism as practiced" has addled our brains into thinking that there can only be two kinds of organizations, companies that go for profit at all cost, and pure-of-heart non-profits that must be on some monk-like religious stuff.

What it can't conceive of is an organization with big-picture goals that aren't "making a profit" but that still require strategy and even "real life experimentation." That's what's happening here; y'all are just confused: the FSF feels like "religion" because it's not going for profits, while somewhat acting like a for-profit, in that it's picking and choosing battles.


i don't think FSF is a religion. actually i am arguing against this labeling


Lots of people have tried to raise concerns such as this one with them. They've stopped listening.


Cited from the article:

To respect these principles the Guix project (and others) asks to not discuss non-free software, hardware support, or related matters on official channels. These questions and non-free packages are best left to any number of other venues. Guix does not actively hamper a user’s ability to load non-free software or firmware (see freedom 0), but will not support this in any official capacity. That said, the community is very nice and will not kick you from IRC if it comes up (more likely you’ll get some private messages directing you or helping you out), but better to remember their rules.

I think that says all.

"Hush, hush, and don't look under the rugs!"


I don't think that's what is being "said" at all.

This is diplomacy. This is exactly the correct way to act when you are diametrically opposed to something that is widely used, popular, and prevalent -- when you hate something, but understand that other people don't and you're going to have to win them over.


This is an enormous thread now - in case it hasn’t already been mentioned, Guix neither helps nor hinders users from using any kernel they like. There is a repository called nonguix, made available by community members, which includes the upstream, non-free kernel, as well as microcode updates for Intel processors.

The Guix project states clearly that it does not support these uses of the software, but that users are free to use it - because the software is released under the GPL.

So what exactly is the problem? The Guix maintainers consider their users to be competent and able to make decisions for themselves - the charge of 'evil anti-user behaviour' isn't so much inappropriate as it is laughable and immature.

Also, as other commenters have mentioned, I have never had a warning that I need to install new microcode from any distribution. As a matter of fact, it would not surprise me if other distributions didn't give me a choice about installing new microcode. If I'm running underpowered hardware (not uncommon for hobby Linux users, especially if the hardware is known to be well supported) and I don't have much to lose on that hardware, taking mitigations against things like Spectre/Meltdown is a calculated decision given the performance impact, so why shouldn't I at least be able to choose? With Guix, this is a one-liner in your configuration.scm file.
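To illustrate the one-liner (a sketch using `microcode-initrd` from the community nonguix channel; the name is taken from its documentation and may have changed):

```scheme
;; Inside the operating-system declaration of configuration.scm:
(initrd microcode-initrd)  ; opt in to CPU microcode updates at boot;
                           ; leave it out and no update is ever loaded
```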


The problem is the endorsement of linux-libre, which goes beyond deblobbing into actively censoring messages.

I'm sure Guix allows users to choose (it has to, thanks to Freedom 0, as you say), but by defaulting to a kernel that chooses to withhold information from its users, it is encouraging keeping users uninformed. That is something people should be aware of, and honestly, something that should be considered a bug to fix. We can't allow the proliferation of schools of thought that promote not educating users about their options, justified by ideology.

For what it's worth, disabling microcode updates in other distros is just uninstalling a package (or not installing it in the first place; my distro of choice, Gentoo, does not install it by default).


This is completely obtuse. Why isn’t your anger at the Guix or Linux-libre maintainers’ supposed withholding of information (‘censorship’) directed instead at publishers of closed-source, unauditable binary blobs, which constitute a fundamentally greater security risk than any auditable code repository ever could? How does it even make sense for Linux-libre to publish notifications about available binary blobs of any kind, including microcode? Why doesn’t Intel just release the source so this can be put to bed?

There is an ideological battle here, and despite the way you frame it, it is between people who believe that users should be empowered to know what the software they run does, and people who don't. Linux-libre sits in the camp of the former. I don't know into which camp you fall, but your argument suggests the latter. I personally use the upstream Linux because it works easier for me and the hardware I have, but I make no bones about the fact that I am running non-free software, and that ideally I would have hardware which supported fully free software.

Also, as I said, the Guix maintainers believe that their users are competent. Guix users know what they’re getting into when they run Guix, at least when they get to the point when they would drive it daily or run it on a system they see as critical. We know that microcode isn’t included when we use Linux-libre, because it is non-free - and that’s just the way it is. Saying that the Guix project is withholding information from us is incredibly patronising. It’s not a big conspiracy - the project, along with the rest of GNU, is openly in favour of maintaining repositories of fully and solely free software, to the exclusion of all non-free software. And it goes without saying, it is not the job of people in the GNU Project to educate people about the perks of running non-free software - there already exists a wealth of resources about how people can go about doing this, and it does not align with the clearly stated goals of the GNU Project.


You're ignoring the crux of the issue. This isn't about running proprietary microcode or not. All x86 Guix users are running proprietary microcode. Period. This is a fact, whether they know about it or not. It's not optional. Would it be great if they didn't have to? Sure. But that's not the world we live in.

This is about withholding information about an update to the proprietary microcode they are already running. There is absolutely no reason not to offer users this choice, given they're stuck running the ROM blob to begin with. The only reason the FSF and Linux-libre do this is because they want users to feel better by not knowing about the blob they're already running. There is no freedom gained, only the illusion of freedom. Users don't know what their ROM microcode does just as much as they don't know what the update does. The only thing we know is it fixes a bug.

I believe a world where all software, firmware, and hardware is free would be ideal. I also know that is not the world we live in, and so I believe in empowering users with the information about what options they have, what the security/freedom/privacy/etc tradeoffs are, and letting them make their own choices. The FSF's push for restricting information so people believe they are not running any nonfree code is diametrically opposed to my views for this reason. I believe having the illusion of freedom is harmful to the cause for software freedom, as it obfuscates the reality of the current state of affairs. Every single die-hard FSF fanboy needs to learn about the 27 blobs running in little ROMs inside their computer, so they stop believing they live in a fake freedom utopia and come to terms with the reality we live in. Maybe then we'll have even more people pushing for firmware freedom.

I'm not even saying Guix should ship the update. But actively censoring information about the existence of the update? That's just ridiculous. Your users are already running the blob. Stop trying to pretend they aren't and tell them it's broken already. How is it better when users are stuck running broken proprietary software when a fix is available because you refuse to inform them about it? That's completely irrational.

Incidentally, this is a falsehood:

> closed-source, unauditable binary blobs, which constitute a fundamentally greater security risk than any auditable code repository ever could

This kind of absolutist view is another problem with this school of thought. Blobs can be in a position where, by the nature of their access or lack thereof to the rest of the system, they pose minimal security risk even if presumed completely evil. This is evidently a much lower risk than, say, low quality open source code in a position of extreme privilege with a large attack surface. Auditability doesn't mean things get audited or all bugs fixed. I have no problem trusting that a blob in a harmless I/O microcontroller poses little to no security risk to me over, say, certain open source cryptography/key storage implementations I've seen which raise a million red flags. Context matters, and the FSF's refusal to consider any context or nuance is also hurting users by not empowering them to make the decisions that are the best for them.


>Every single die-hard FSF fanboy needs to learn about the 27 blobs running in little ROMs inside their computer, so they stop believing they live in a fake freedom utopia and come to terms with the reality we live in. Maybe then we'll have even more people pushing for firmware freedom

who believes this? why are you constantly being insulting? a lot of linux fanboys think that if you run linux you can't get hacked. does that make linux dishonest? of course not! there are misinformed people everywhere. having been in this exchange for about two days i have learned that the FSF does a very good job of informing users about what using their products entails


https://ryf.fsf.org/products/TET-BT4

No mention of the half megabyte of proprietary Bluetooth stack firmware (with significant security and privacy implications) that's in that dongle which supposedly "Respects Your Freedom".

https://ryf.fsf.org/products/VikingsX200

No mention of:

* Proprietary CPU microcode ROM (full access to CPU/memory, security critical)

* Proprietary embedded controller firmware (H8S, connected LPC bus, full access to all memory, security critical, might be able to cause physical destruction / a fire with the right GPIO abuse)

* Proprietary TPM firmware (connected to LPC bus, full access to all memory, security critical)

* Proprietary USB camera firmware (connected to USB port, very large attack surface for OSes)

* Proprietary Bluetooth module firmware (connected to USB port, etc.)

* Proprietary USB card reader module firmware (connected to USB port, etc.)

* Proprietary SATA HDD firmware (can access/modify all user data, security critical if you don't use FDE + integrity)

And those are just the ones I could quickly pick out from the schematic; pretty sure there's at least a couple more major ones and even more minor ones. All the other laptops they endorse are in a similar situation.

How, exactly, are the FSF informing users about the devices they endorse?


And if free software existed for any of those chips, the FSF would require it. If there is any room for consistency in the criteria, it would be to deal with the fact that some of the computers have less nonfree firmware than others. E.g. the Raptor computer has far less than the X200.


their certification process as outlined on https://ryf.fsf.org/about/criteria states:

BEGIN

> there is one exception for secondary embedded processors. The exception applies to software delivered inside auxiliary and low-level processors and FPGAs, within which software installation is not intended after the user obtains the product. This can include, for instance, microcode inside a processor, firmware built into an I/O device, or the gate pattern of an FPGA. The software in such secondary processors does not count as product software.

>We want users to be able to upgrade and control the software at as many levels as possible. If and when free software becomes available for use on a certain secondary processor, we will expect certified products to adopt it within a reasonable period of time. This can be done in the next model of the product, if there is a new model within a reasonable period of time. If this is not done, we will eventually withdraw the certification.

END

this explains their decisions in terms of their software freedom ideals, and it's perfectly clear. i keep trying to explain to you that the FSF is not about doing their best in terms of security. the FSF is about doing what they believe is best for software freedom. although it would be nice, i just don't see why they MUST inform their users about security implications

of course, it is perfectly fine to point out security flaws in such an approach, but the FSF is not selling security, and since the original topic is the GNU Guix distro, GNU Guix does not market itself as a security-centric distro. why is this so hard for you to accept?

a lot of your argument rests on failures of their logic and a blurring of the clear distinction between hardware and software. fine, it is a VALID point, but it is a corner case. the FSF has taken an approach they believe deals best with such corner cases and explained their reasoning. they might be right in their approach, they might be wrong. you cannot satisfy everyone, and not accepting that is simply infantile

the FSF is a pretty large organisation. for large organisations some form of bureaucracy becomes necessary to achieve normal functioning. however, bureaucracy will always entail some illogical elements, especially in corner cases. that is just life, i believe


> doing what they believe is best for software freedom

It is quite self-evident that not being open about the existence of proprietary code is not the best for software freedom.

I know about their certification process; my entire argument is that their criteria are terrible and that the lack of transparency about specific devices and what firmware they contain is deceptive. You bringing up their criteria repeatedly isn't helping you counter that point. This silly loop of "The criteria suck. / These are the criteria." doesn't get us anywhere.

If the FSF believe in software freedom, why are they not informing their users about the nonfree software that exists in the devices they sell as "Respects your Freedom"? I'm not saying "why are they certified here"; fine, they have their rules. Why are they not documenting how these devices interact with those rules? Why are they obscuring all of this? What do they have to gain by keeping people in the dark?

To me it is just very clear that they're doing this because they are trying to build a narrative that is different from reality, and the only way that narrative works is if people don't discover the problems with it. Your opinion may differ, but everything I've seen about the FSF's behavior in recent years points towards that. If they wanted to be honest there would be no reason not to document those firmware blobs.


arguing that FSF is some crazy sect or a religion is certainly not constructive to improving what they do

i think that how you framed your concern in this last point is valid from a security point of view. as someone who values knowing security implications, i support the argument that they should improve their work on security matters. that said, to me the FSF is certainly no worse (i would think a lot better) than closed-source vendors, including ms, apple, intel, nvidia etc as far as making their users aware of security problems is concerned. i actually think that every device should come with a cigarette-packet-style label that says "might include backdoor inside", not just FSF products. in fact, maybe if only FSF did this, it might create an impression that only their products have this issue


See, the funny thing is that Apple's M1 devices are less backdoorable than the FSF's "Respects your Freedom" laptops. That's because unlike those obsolete laptops, Apple's designs actually firewall off all the blobs and coprocessors using IOMMUs, and those IOMMU configurations are introspectable by system software so it can confirm they are correct and not granting too much access. That means if you boot Linux on an M1, and you check the IOMMU configs (most of which are not locked, but those which are are readable), you can say "yup, this isn't backdoored". There is also no nested virtualization support on those machines, and the design of ARM virtualization makes it impossible to "hide" a secret backdoor hypervisor. There is also no ME or any other hyper-privileged software. On top of that, the fact that Apple uses code signing for their entire boot stack means that only they can theoretically (though as I said, in practice detectably) backdoor your laptop - that's much better than the RYF laptops, which have no security and therefore anyone in the supply chain can backdoor them. Oh yeah, and all the post-boot blobs on M1s are stored in the filesystem or readable Flash memories, so you can actually audit them, unlike all those microcontrollers with LPC access to all system RAM on the RYF machines. That's just nasty, the perfect design for an invisible backdoor.

So yes, the FSF is actually worse than closed source vendors because they are promoting ancient laptops with poor isolation and no security mitigations, while Apple has spent the past 15 years building a secure platform. You may not like some of their reasons (e.g. locking down iPhones)... but in the end it results in significantly better end-user security than a "Respects your Freedom" device. And you can put Linux on those Macs and run a fully open source kernel and userspace - not very different from those laptops in the end. Think about that.

Of course, I am very happy to talk at length about the design of these machines with everyone, and I want all the users of my software to be aware of these things (yes, there are a ton of blobs here - the amount of damage they can do is less than the blobs in the RYF laptops, but they still exist), as well as the things we don't know (e.g. some of the IOMMU configs grant full access to a few hardware streams; we don't know whether those streams are actually controllable by a coprocessor in such a way that it would make it backdoorable, but we'd like to find out and if it is, that would be a firmware bug to report to Apple). I believe that in order to make an informed decision, users need to have all the information.


>FSF is actually worse than closed source vendors because they are promoting ancient laptops with poor isolation and no security mitigations

but they are promoting such devices on ethical concerns, not on security concerns. they are not deceiving anyone

>Apple has spent the past 15 years building a secure platform. You may not like some of their reasons (e.g. locking down iPhones)... but in the end it results in significantly better end-user security than a "Respects your Freedom" device

you mentioned FSF fanboys before. there are way more Apple fanboys who are convinced that their Apple products respect their privacy and that their devices are impenetrable. what is worse, Apple markets itself based on this grotesque misperception!

if security and privacy are a vital concern for people, Apple devices are NOT products that should handle their security concerns! promoting Apple as secure makes you guilty of the very thing you accuse the FSF of, actually

i would absolutely love it if there were a public non-profit organisation like the FSF that is security/privacy focused - e.g. a Secure Software Foundation - that would employ security experts to analyse and audit the security and privacy of all software, free and non-free, and of course always publish their findings immediately. moreover, i would definitely hope that such an organisation would be as single-minded and unwavering toward open security and privacy as the FSF is toward software freedom :)

the fact of the matter is, while you can support both free and secure software separately, they are separate matters. it is possible to come to a conflict-of-interest type scenario. secure software and free software, although often sharing the same concerns, are simply not the same thing


> if security and privacy is a vital concern for people, Apple devices are NOT products that should handle their security concerns! you promoting Apple as secure makes you guilty of the very thing you accuse FSF of actually

You're mixing up software and hardware. I have no strong opinion on the security of Apple's (macOS) software from a user perspective. It's a proprietary OS. It gets some things right and some things wrong. There have been privacy concerns (e.g. the CSAM mess). I use it for browsing the web sometimes, but I wouldn't make it my main OS.

But Apple deeply cares about platform security, and notoriously, iOS devices are some of the most secure consumer devices available. This isn't marketing bullshit - their designs are actually that good, which is something I can say as a security professional. You may or may not agree with their motivation, which ostensibly includes both customer security and keeping an iron grip on their iOS devices. But the end result is they have built excellent silicon designs with advanced security features and a very security-conscious architecture throughout. The same stuff that makes it hard to jailbreak iPhones. And so now that they stuck them in Macs and unlocked the bootloader, would I buy one? Of course. And put Linux on it. And so should you*, if you care about security. There really isn't anything else done nearly as well as these things, at least not at a performance level we'd consider decent in 2021.

Yes, it might surprise you coming from Apple, but it makes sense because they did this for their own benefit. It just so happens that their motives end up with a result that aligns with what I want. And so I'll take it, thanks.

I still won't use an iPhone, though.

* Okay, maybe wait until we're done porting things and it runs well.


fair enough. i value your opinion on the matters of hardware security and i am definitely not going to pretend i am an expert. i know you know your stuff. my point is that fighting for free software and fighting for security (software or hardware) can diverge

i think the FSF fights against non-free software because it considers it an evil for society. i have no problem with them fighting this fight, and i don't see any other candidates able to fight that fight on their level. i think that people who care about free software should at least respect them

on the other hand, i think security and privacy are a separate fight, and an extremely important one. if you form an organisation that defends security and privacy as much as the FSF defends free software, i will definitely support it and you


> only they can theoretically (though as I said, in practice detectably) backdoor your laptop

No, no no. This is wrong. You don't need a nested hypervisor to make an undetectable backdoor. If you audit all network traffic from a separate device, most backdoors are detectable, because a backdoor wants to have some effect that goes outside your system and that is the obvious route. But there are a hundred ways to create a backdoor which is very hard to detect, and of course the proprietary Mac bootloader is a perfectly good vector for them.

So the M1 firmware is meant to be updated, right? Proprietary updates are the means by which a company exercises unacceptable control. The next update to the network card firmware could check the signature of the OS and stop working.

> some of the IOMMU configs grant full access to a few hardware streams; we don't know whether those streams are actually controllable by a coprocessor in such a way that it would make it backdoorable

Wait, so it's all good because it's protected by IOMMU configs, except where it isn't..., and you just hope it's a bug that the IOMMU config was too open? Seems more likely that this whole theory of yours has a hole in it: that some firmware does have access to change important data.

Think of a keyboard. Now, imagine one that just has a simple chip that is not updatable. Well, it could have a backdoor in it. But it isn't a concern for your software freedom. Now, someone devises a keyboard where you load a proprietary firmware in it every time it gets plugged in, and you are dependent on the vendor for updates, but somehow it has better security properties. Well, you may argue that that is more important than software freedom. Ok, but, then that vendor can then make whatever terms and conditions it wants on those updates, and that includes breaking your security. So, one day, the vendor says: run our proprietary updating software, is every user going to reject it because they realize the security implications? No. And the vendor says: our firmware updates are only distributable through MacOS, so every time you update, you are going to have to install MacOS, then reinstall GNU/Linux. Sounds like a good way to kill GNU/Linux for 99% of users who don't got time for that. Wait, isn't that the situation for M1 laptop users? Riiight.


> No, no no. This is wrong

Look, I'm not a fan of pulling on credentials, but I've literally spent the past year reverse engineering these devices. If you're going to tell me I'm wrong about my security analysis, I hope you've done your own.

> But there are a hundred ways to create a backdoor that is very hard to detect, and of course the proprietary Mac bootloader is a perfectly good vector for them.

Not when you can literally flash these devices from scratch (DFU mode) using a public OS image from Apple. That guarantees any preinstalled backdoors go away, since it's a complete wipe (you can do this from a Linux machine, by the way - I just added support for the latest M1 devices and OS to idevicerestore a few days ago). All the runtime components that remain booted while the OS runs are not encrypted, and thus Apple can't hide a secret backdoor in them.

> The next update to the network card firmware could check the signature of the OS and stop working.

The network card is behind an IOMMU and sees exactly what the OS wants it to see. It has no way to check the signature of the OS.

This whole argument is moot anyway, because of course Apple could release a new OS/firmware version that removes the bootloader unlock tools. I've had this discussion a million times already. Apple spent a significant amount of time developing these tools and the infrastructure to allow these unlocks, and I do not believe they would ever do this, as it would be a massive 180 and incur a huge PR hit, nevermind expose them to legal action. If you believe otherwise, then don't buy these machines, or just never update the firmware once you get one. Also don't buy any Android phones, any x86 PCs with Boot Guard, etc., as they all suffer from the same hypothetical retroactive lockdown threat.

> Wait, so it's all good because it's protected by IOMMU configs, except where it isn't, and you just hope it's a bug that the IOMMU config was too open?

We are still reverse engineering these machines. It's not just the IOMMUs. There are other layers of address filtering. We don't know what those streams do, therefore we can't say whether they're evil or not. Given how carefully Apple has designed these things to prevent this, I have no doubt that if there's a path for one of these coprocessors to access all RAM, that's a bug, and if I can confirm that, that'll be an email to product-security@apple.com with a 90 day disclosure deadline, and they'll fix it. It wouldn't be my first rodeo with Apple product security either. They're good, but they're human. They make mistakes.

But we can't say that right now because we literally don't know what's plugged into that port on the IOMMU. It could be a hardware block with an address filter or otherwise controlled by the main CPU anyway. Or it could be outright unused and a vestige of something they were doing on iOS. I can certainly tell you that the main IOMMU port used by the coprocessor subsystem in question does not have access to all RAM. They've been very careful to design the whole SoC like that. If there's a hole, it's a bug.

> Think of a keyboard. Now, imagine one that just has a simple chip that is not updatable. Well, it could have a backdoor in it. But it isn't a concern for your software freedom. Now, someone devises a keyboard where you load a proprietary firmware in it every time it gets plugged in, and you are dependent on the vendor for updates, but somehow it has better security properties.

You could just not apply the updates, and you'd be no worse off than with the non-updatable chip. The updatability gives you choice. It doesn't take anything away, certainly not any more of your freedom. In fact, most devices with the kind of little ROMs the FSF loves to ignore, like keyboards, would not use signed firmware if they had a RAM design instead. That means you absolutely gain freedom with the RAM version - the freedom to reverse engineer the proprietary firmware and write your own, or install an open version someone else has already made.

> Wait, isn't that the situation for M1 laptop users? Riiight.

I have a perfectly working installer that pulls the firmware updates from Apple's CDN and builds an OS container without installing macOS. You do need macOS for self-hosted system-level firmware updates, but only because we haven't built a process for Linux to invoke that updater yet. You can, however, already use DFU mode with another Linux machine running idevicerestore to apply these updates without wiping the whole system nor requiring a macOS install, if you really want to (though it's not the best method because it wipes some stuff that makes it not completely seamless, but it doesn't wipe your OS).

> so every time you update, you are going to have to install MacOS, then reinstall GNU/Linux.

Or you could just dual boot, which is how I expect 95% of our users to use the system. We recommend keeping a macOS install around at this point for various practical reasons. The machines natively support multi-boot and we take full advantage of that. Please learn more about the system architecture before making up FUD.


> Not when you can literally flash these devices from scratch (DFU mode) using a public OS image from Apple. That guarantees any preinstalled backdoors go away, since it's a complete wipe (you can do this from a Linux machine, by the way - I just added support for the latest M1 devices and OS to idevicerestore a few days ago). All the runtime components that remain booted while the OS runs are not encrypted, and thus Apple can't hide a secret backdoor in them.

You're saying that a backdoor can't be secret if it is in an unencrypted binary. That sounds wrong to me. Are you going to decompile and audit the entire OS to find a backdoor? I don't think so.

> I have a perfectly working installer that pulls the firmware updates from Apple's CDN and builds an OS container without installing macOS. You do need macOS for self-hosted system-level firmware updates, but only because we haven't built a process for Linux to invoke that updater yet.

Well, that is nice.

> You could just not apply the updates, and you'd be no worse off than with the non-updatable chip. The updatability gives you choice. It doesn't take anything away, certainly not any more of your freedom.

You "could". In practice, software vendors rely on the updates to abuse their users. For example, Intel microcode updates have a license that says you agree not to reverse engineer them. Security updates for printers come with functionality to stop working with third-party ink. Oh, you are a sophisticated user and handle it all. Fine. And of course, the FSF certainly encourages reverse engineering: if you want to buy something for the purpose of reverse engineering, the FSF does endorse that. For everyone else, I think it's a perfectly fine position to simply say: we don't endorse opening yourself up to an abusive relationship.


Leaving aside the fact that binary blobs already existing in x86 processors doesn’t mean much for users because they can’t help that - I mean, GNU existed before Linux, and only became a completely free operating system when Linux was released, so it’s not like the position of the GNU project was to completely rule out the possibility of running any software or hardware if there was a sniff of non-free software within a 5km radius …

No, you’re ignoring the crux of the issue, which is user freedom. Why are you so insistent that Guix wave the flag for Intel and allow its users’ freedoms to be even more flagrantly violated by shipping updated microcode? Why does the FSF not openly publishing certain kinds of information amount to censorship? - do you think that, if they were to remove certain blobs from the kernel but leave in these nebulous notifications that microcode updates are available, it wouldn’t be censorship?

Talk about a bad user experience by the way - what you suggest is for the maintainers of Linux-libre to say ‘oh, by the way, there are critical updates to microcode running on your processor, but we’re not going to give them to you.’ That’s antisocial nonsense. Should they tell users about updates to all the other drivers in the kernel which they don’t ship either? Give me a break.

I am currently, at this very moment, running updated Intel microcode on my Guix machine. Every Guix user who has used the distro for more than 5 minutes knows that a) you can track any channel you like containing packaged software, and b) there is a popular channel which contains some useful non-free software. They know this because it takes about 5 minutes to use a search engine for, say, ‘How do I get Nvidia drivers on my Guix System’, which will lead you to nonguix. What exactly is your problem? Do you think Guix users need to be babied?

> Auditability doesn't mean things get audited or all bugs fixed.

Correct! But it means that they can.

Why won’t you respond to the fact that Guix users simply don’t feel hurt by this? You have taken it upon yourself to speak on behalf of a community which is perfectly happy with the existing arrangement - that the default kernel is built on Linux-libre, and they are free to use any other compatible kernel they like, including the upstream.

If you don’t like Linux-libre, don’t use it. If you don’t like Guix, don’t use it. Your pearl-clutching about these projects’ violation of their users’ rights is unwarranted, unwanted, obtuse, and very annoying. If I didn’t know better, I’d think you had it out for FSF.

I’ll leave that as my final word on this because this conversation is fruitless and pointless. You are moralising on the behalf of a community that you are not a member of and that doesn’t care for much of what you have to say for them.


very well said

>If I didn’t know better, I’d think you had it out for FSF

Given the amount of bad faith in these threads, I think it definitely looks like a campaign.


Well, for some people, loading arbitrary binary code without the possibility to check what's inside it is a critical security issue as well.


Those people are already running arbitrary binary code without the possibility to check what's inside, it's just that it was loaded before purchase. If you don't trust Intel's updates, then you also can't trust their CPUs in the first place.


There is a bit more nuance here though. There may be users who trust their old systems but no longer trust the current state of its manufacturer or their binary only updates. Proprietary blobs go against the core freedom as defined by FSF so I can understand why they block by default but IMO they should allow informed users to override. Simply censoring without allowing a user to bypass is not user (or freedom) respecting. The power to choose should be with the user, whom the FSF claims to represent.


Just because some proprietary code exists doesn't mean you should leave the door open for them to add as much extra proprietary code as they wish.

You can regard it as two separate features: one that's needed for the CPU to function, and another that's the door for more code being added. In that perspective it's better to go with preventing additions.


You can't add much to a CPU via microcode. The space of what updates can do is extremely limited, with a limited amount of patch RAM and patch registers. It's designed to fix bugs. You're arguing against fixing bugs in proprietary software you're already running.


So why don't CPU vendors open-source their microcode? Secrets... OK, let's use the ones which have no secrets.


Good luck with that...


This is a double-edged sword.

If there are issues with what was initially released and you do not patch, you do not get those fixes.

So they could add stuff, but they definitely will fix stuff. Not updating could be more dangerous than updating.


Intel hardware definitely cannot be trusted. Probably "good enough" for most people, but it's honestly garbage, security wise.


And yet that terrible security situation has approximately nothing to do with the FSF's "no visible blobs" rule. ME could be just as bad running off of ROM, and then it'd meet the FSF's "Respects your Freedom" requirements.


The issue is that the FSF is fine with binary blobs as long as they're stored in ROM, but if they're in RAM, that's bad. To me that is completely backwards. If a driver loads firmware into RAM, that means I have easy access to the blob and can reverse engineer it, update it, and change it. If it's in ROM, that's going to be a lot more difficult or impossible.

Sure it would be nice to have it all Free Software, but then we shouldn't treat hardware as a magic black box we don't care about.


Your theory makes sense, but practice strikes me as the reverse: Long-term: Stuff in ROM is significantly easier to track because it happens slower, mostly by large trackable entities and processes. Stuff in RAM? Always changing and "under attack" all the time.


Firmware in RAM is loaded from distribution packages, which also goes through a trackable process. There is a continuum here with frequent firmware updates being extremely dodgy (why can't they get it right and release a stable version), and infrequent updates (eg CPU microcode) being similar to shipping one image in "ROM".

Furthermore, I haven't seen any device that actually ships significant (ie non-bootloader) firmware in "ROM". It's usually in flash, meaning its contents are mutable but less legible to the Free system than if they were loaded every time by a Free driver.
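The legibility point is easy to demonstrate: firmware loaded at runtime by a free driver is an ordinary file the OS can hash and diff across updates, which nothing in a mask ROM allows. A sketch (the `/tmp/fw-demo` path and sample blob are purely illustrative; on a real system you would point `sha256sum` at files under `/lib/firmware`):

```shell
# Simulate tracking a runtime-loaded firmware blob across a vendor update.
mkdir -p /tmp/fw-demo
printf 'blob-v1' > /tmp/fw-demo/example.ucode
sha256sum /tmp/fw-demo/example.ucode > /tmp/fw-demo/hashes.txt
sha256sum --check /tmp/fw-demo/hashes.txt          # OK: blob unchanged
printf 'blob-v2' > /tmp/fw-demo/example.ucode      # an update lands
sha256sum --check /tmp/fw-demo/hashes.txt || echo 'firmware changed'
```

The same record-and-compare step is impossible for a blob baked into silicon you cannot read back.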


> Stuff in ROM is significantly easier to track because it happens slower, mostly by large trackable entities and processes.

Assuming you're able to track it at all. If the hardware does not provide any way to read the blobs, then how do you track anything? Decap your CPU and stick it under an electron microscope? Much easier to inspect a file on my filesystem.


You're completely missing the fundamental point that all of this takes a proverbial village.

Free software isn't about each of us individually fixing the problems. Nearly exactly the opposite.


Exactly. And people promoting closed-source security updates should spell this risk out in the clearest fashion. If we want to hold GNU to account for security, then we should definitely do so with closed-source vendors too.


Yeah. Then these geniuses get bitten by Spectre/Meltdown because they were too scared of running the microcode update. For real.

I agree, if that's the position of Guix, I don't want it in my machine.


I doubt that. While the attack was possible on large hosting providers, using the same technique on workstations or bare metal and actually getting important data was basically impossible.

It had to be fixed, but your thesis that anyone was actually owned by these security issues because they didn't want to apply the mitigation rounds to at most 0.0% with an infinite number of zeros before a 1.


Guix will never prevent you from doing what you want with your hardware. Nor will it give you software that is not properly free (as in freedom).

The "nonguix" channel mentioned in the article does have Intel and AMD microcode for users who want it. This is similar to Debian, where you have to opt-in by enabling the "nonfree" repository and "apt install" the microcode package corresponding to your CPU.
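For anyone curious what that opt-in looks like in practice, a minimal `~/.config/guix/channels.scm` along these lines adds the channel (a sketch adapted from the nonguix README; check there for the current URL and the recommended `introduction` field used for channel authentication):

```scheme
;; ~/.config/guix/channels.scm -- minimal sketch; see the nonguix README
;; for the authoritative version with channel authentication.
(cons* (channel
        (name 'nonguix)
        (url "https://gitlab.com/nonguix/nonguix"))
       %default-channels)
```

After a `guix pull`, packages from that channel, including the CPU microcode, become installable like anything else.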


Good to know, but still

> where this Linux fork actively removes security warnings informing users that they need to update their CPU microcode

Is not ok


I believe that argument is based on the same FUD that I addressed here:

https://news.ycombinator.com/item?id=29290087

...at least, I don't see any such code in the actual deblobbing script: https://linux-libre.fsfla.org/pub/linux-libre/releases/5.15....

edit: since you called linux-libre a "fork", I feel compelled to point out that Linux-Libre is just the vanilla Linux kernel with that script applied. No more, no less.


I'm sorry, but this (and a bunch of other similar blocks) seems pretty intentional...

    # Do no recommend non-Free microcode update.
    announce X86_LOCAL_APIC - Undocumented
    clean_blob arch/x86/kernel/apic/apic.c
    clean_kconfig arch/x86/Kconfig X86_LOCAL_APIC
    clean_mk CONFIG_X86_LOCAL_APIC arch/x86/kernel/apic/Makefile


If the kernel can't load it without code changes and recompilation, due to the de-blobbing process, it doesn't make much sense to recommend to users that they load it.


You can often also update your microcode by updating your BIOS/firmware.


That's a good point! Adjusting the message to direct people down that route rather than simply removing it seems like a good idea.


Sorry, but you are wrong. GNU people won't run nonfree JS at all.

LibreJS is a good example in order to kill any potential Spectre/Meltdown attack. There is no attack when no code is being run.


At that point, why not just power off their machines? With only the three websites that have “free JS”, the machine is almost as useless as a brick. Also, free software in itself never protected against security vulnerabilities; “many eyes” is a fallacy.


You are really wrong; a lot of services (especially news) work either without JS or have a libre alternative, such as Twitter/Nitter or Reddit/Teddit.


I use NoScript and I find very few sites that are really broken if I don't enable JS.


Your definition of “very few sites” and mine must be different, then.

Can you buy anything at all on the internet?


Amazon not so long ago worked without JS, or Ebay, I can't remember.


"LibreJS is a good example in order to kill any potential Spectre/Meltdown attack. There is no attack when no code is being run."

If attackers who cannot add a comment to their exploit are in your threat model.

Personally, I've been using a browser extension that blocks JS unless it has a comment reading

> This code is NOT evil or malicious!

at the top. Haven't been hacked yet!


> There is no attack when no code is being run.

https://9to5mac.com/2021/03/11/browser-based-attack-affects-...

Turns out you don't need Turing completeness to perform microarchitectural side channel attacks. This is yet another way in which the "all my software is free, therefore I am safe from attacks" fallacy breaks down.

Nevermind that, as pointed out by other replies, LibreJS provides zero security. It relies on scripts voluntarily declaring that they're freely licensed, and if they do, they're allowed to run. The extension doesn't care whether the script is malicious or not.
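To make the declarative nature concrete, here is a hypothetical sketch of a LibreJS-style check (the real extension's logic is more involved; the file name, grep pattern, and sample script below are all my own illustration of the principle):

```shell
# A declarative check: allow any script that merely *claims* a free
# license via an "@license" magnet-link marker. A malicious script can
# simply include the marker -- the check inspects claims, not behavior.
cat > /tmp/evil.js <<'EOF'
// @license magnet:?xt=urn:btih:XXXXXXXX GPL-3.0-or-later
exfiltrateKeystrokes();
EOF
if grep -q '@license magnet:' /tmp/evil.js; then
  echo 'allowed'   # the malicious script sails through
fi
```

Nothing in this kind of check distinguishes a freely licensed script from an attack that declares a free license.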


Dillo has a nice CSS-less rendering. Also, Links+.

I am still safe.


But that is simply not the logical decision.


> "removes security warnings informing users that they need to update their CPU microcode"

If you update your CPU microcode to something that can't be checked, you're already sacrificing security.

EDIT:

But yes, I understand people complaining that FSF rejects proprietary software but is OK with some forms of ROM which in turn may be very similar to "proprietary software you can't change".

Well, I never asked anyone from inside the FSF for their reasons, so my knowledge about it can improve. Nevertheless, I can see some points:

  - AFAIK, the form of accepted code is only for "secondary processors" and can't take over the system or compromise it,
  - having things in ROM forces manufacturers to maximally simplify it,
  - having things in ROM forces manufacturers to implement more features in software that can be checked and
  - having things in ROM forces manufacturers to be extra careful when implementing it.
I don't think the FSF would consider IME acceptable if it were in ROM.

But indeed, it would be very good if someone from FSF could better explain it.


Nonsense. I trust my CPU provider, or I wouldn't have bought this CPU. I would much rather use an updated microcode than one known to be insecure. I wouldn't check the microcode anyway, and I don't really trust the people who might more than the company providing my CPU.

Sure it might contain code making me vulnerable to a state actor but that’s not a threat profile I care about.


If you have enough trust in proprietary software distributors that you don't mind using software that can't be checked, then you're probably not part of this discussion.

I don't check all the software I use myself. But I use open-source software with the peace of mind of someone who knows that the incentives to abuse me are simply not there and the fact that right now there are hundreds of people/bots checking it.

> Sure it might contain code making me vulnerable to a state actor but that’s not a threat profile I care about.

State actors are not the only threat. Think about arbitrarily disabled features, DRM, programmed obsolescence or simply not allowing anybody to improve/fix it after the device is abandoned by the manufacturer.


> the fact that right now there are hundreds of people/bots checking it.

As has been shown plenty of times, “more eyes looking at the code” is a fallacy as those eyes are very much not looking to find security vulnerabilities. Also, the lack of security in userspace GNU/Linux codebases is something really worrying, much, much more so than a hypothetical hardware attack, when you can just append to the end of .bashrc from a goddamn npm install and basically do anything you want, including streaming every single keypress.


> As has been shown plenty of times, “more eyes looking at the code” is a fallacy as those eyes are very much not looking to find security vulnerabilities.

There are also automated scans of a long list of FLOSS. Google runs OSS-Fuzz; you can check LibreOffice to see lots of commits fixing defects detected by OSS-Fuzz, Coverity, and other static analyzers. Access to the code at least allows determined users to more easily find where or why a bug happens. And yes, people do it.

> hypothetical hardware attack when you can just append to the end of .bashrc from a goddamn npm install and basically do anything you want, including streaming every single keypress?

Yes, this is more serious. But it would need a compromise on the packager's side and an uninformed (or automated) update on my side. Considering an update with such a vulnerability would initially affect a small fraction of users before being discovered, it is very unlikely that this hypothetical would have a big impact.


It was just one example, but running everything under the user's own account is just terrible.

Also, your automatic checker examples are related to the project’s size and importance, not to FOSS alone. It is very welcome but your average one-person C project that is installed on everyone’s system doesn’t benefit from such tools.


> As has been shown plenty of times, “more eyes looking at the code” is a fallacy

This is debatable: https://en.wikipedia.org/wiki/Comparison_of_open-source_and_.... Any links to your argument?


Your link says they are comparable, with another study claiming better quality for open-source software. While I really don’t think that grouping together vastly different software based on this one quality is meaningful (is a proprietary CRUD app made in 5 months with an ad-hoc architecture comparable to an open-source database with proper planning and domain-expert programmers working on it?), I also didn’t claim that they were worse off, just that more eyes won’t find more/all bugs, because in fact they don’t actually look for them in the first place.

The fact remains, every type of software is ripe for bugs and security measures must be taken so those bugs won’t become exploitable.



I do not trust my CPU provider. Where do I find an alternative based exclusively on free software?


If you do not trust your CPU provider, your only option is a design that assumes an adversarial manufacturer. The only credible design I've seen for this scenario is Precursor, which works because generic backdoors for FPGAs (i.e. those that work with arbitrary randomized designs) are arguably impractical, since you'd need a huge amount of compute power to attempt to analyze the design and figure out how to backdoor it. It's likely even an unsolvable problem in the general case.

https://www.crowdsupply.com/sutajio-kosagi/precursor

Of course, then you get a RISC-V running at 100MHz. If you want something faster, you need to trust your CPU provider. There's no way around that; silicon is not end-user introspectable.



Every single laptop in the RYF list includes proprietary unauditable microcode. So do many of the other products there, e.g. that Bluetooth dongle which probably has about half a megabyte of proprietary firmware.



There is no reason to trust IBM not to include a silicon backdoor in POWER9 more than Intel not to include a silicon or microcode backdoor in their x86 chips. TALOS is a lot freer than most designs, but you still need to trust every manufacturer whose silicon went into that motherboard.


>There is no reason to trust IBM not to include a silicon backdoor

Yeah, OK, now we are on the religion side of things, since you cannot check the silicon... well, I stop here, not worth my time.

BTW: the POWER microcode is open source.


You can backdoor silicon just as well as you can backdoor software or microcode. Why do you only care about trusting the latter?


Because the perfect is the enemy of the good: https://news.ycombinator.com/item?id=27897975.


>>since you cannot check the silicon...

READ.

The microcode is open source.


The silicon - the rest of the logic that makes the CPU - isn't. There's more to CPUs than microcode. Besides, how can you audit that the open source microcode is what the manufacturer actually put into the CPU? Microcode is usually ROM-based with only runtime patches.

Open source microcode does not change the fact that you need to trust the CPU manufacturer not to backdoor your CPU. There is literally no way around that that does not involve using FPGAs and restricting yourself to 100MHz CPUs.

By the way, I googled POWER9 microcode and was not able to find any source or reference to it. The source code for the firmware running on various auxiliary cores is open source. However, the CPU cores do contain microcode for executing more complex instructions, and I am not finding any reference to this being open source.


I understand that you have problems reading, so I repeat it for you a third time:

>>>since you cannot check the silicon...

And since you have problems using google too:

https://www.sbir.gov/node/1620879

>>A small business has already shown that a completely open source solution (to include CPU firmware, CPU Microcode, Baseboard Management Controller (BMC), BIOS boot code, power management, etc.) based on a high performance CPU is possible.

Imagine who that "small business" is?


Where's the link to the source?

FYI, IBM loves to use the term "microcode" to include side CPU firmware and such. This is different from Intel "microcode" which strictly refers to the instruction dispatch part. Is the instruction dispatch microcode in POWER9 open source or not?

None of these are it: https://wiki.raptorcs.com/wiki/OpenPOWER_Firmware


https://git.raptorcs.com/git/

>>CPU Internal Firmware

>>CPU External Firmware


SBE, OCC, HCODE. Those are from the list I linked. None of them are the instruction dispatch microcode.

Maybe it's time for you to have the realization that POWER9 has proprietary microcode just like Intel. At least it seems they probably have less of it, and they do vaguely document it in the CPU manual (they have micro-op classes and counts) but it's there, and I don't see any real source code anywhere. They also have patch registers, so you can't even say it's not updatable. The POWER9 manual mentions six Instruction Mask Registers per core, but these registers are documented nowhere (I just spent a good 30 minutes digging through the giant register documentation PDFs and HCODE source, but I couldn't find anything).

Is this better than Intel? Yes. Is it "fully free"? No. Nothing's fully free. Which is why we need nuanced analysis, not the nonsense arbitrary lines the FSF draws in the sand.


Why is it nonsense? I think not trusting vendors that don't let you verify their product is quite sensible. Note this is not paranoia about them doing something malicious; it's as simple as "don't trust strangers".


There is no way to verify silicon. This applies to every single chip. If you are using off the shelf hard cores in silicon, you need to trust the vendor. That's just how it is. It's not practical for end users to take silicon chips into a SEM, delayer them, and verify that the design is what they expect. Verifiability aside, there aren't even any high performance CPUs with fully open RTL/netlists available.


> If you are using off the shelf hard cores in silicon, you need to trust the vendor

If I don't have options for alternatives, I think it's completely rational to use something without trusting it. I would say that this should be the default attitude.


Sandboxing and fencing off untrusted parts is fine. I find the approach of Librem 5 understandable.

However, the distinction between a blob in on-chip flash and a blob in system storage is nonsensical to me. I would much rather have a sandboxed untrusted part I can update than a sandboxed untrusted part that I cannot update.

Unfortunately I have not seen anyone actually give a reason why the line should be drawn there instead of closed source blobs are not ok period. Doesn't matter where they live. None of us are arguing in favour of blobs.


>Unfortunately I have not seen anyone actually give a reason why the line should be drawn there instead of closed source blobs are not ok period

This makes it sound like the FSF supports these blobs. How I understood the situation is that they tolerate them until an alternative presents itself. This is a valid position to take and does not make them hypocrites/a cult/a religion, etc.


They are literally endorsing Bluetooth dongles with half a megabyte of proprietary ROM as "respecting your freedom". That goes a little beyond "tolerating", don't you think?


'Respects Your Freedom' is a (trademarked?) label that comes with clear and readily available certification rules. According to the FSF, these are products that are simply the best options available as far as the FSF's free-software ethics are concerned. In this sense it is similar to the 'fair trade' labels you find on products. Since you guys love extreme examples, I could ask: when you use an Apple computer, do you expect to be able to eat it?

Actually, since you work with Macs, do you think Apple respects your freedom, or do you think they are more ethical than the FSF?


I will answer your last question. Apple absolutely does not respect your freedom. They infringe on your freedom. Depending on the definition of ethical, they are vastly less ethical than the FSF.

But they also never claimed to be ethical. They never claimed to guard a moral ideal. They have claimed many things (to protect the privacy of users, to guarantee the security of users) which they did not act upon, and they have received flak for that. But they didn't claim to be ethical. Not even in an aspirational way, like Google's "Don't be evil" motto.

The FSF, however, claims that ethics is its prime driver. And by endorsing hardware with proprietary blobs (placed in on-chip flash instead of system storage) they display hypocrisy. They choose the very pragmatism they criticise others for choosing.

And that is something religions do as well.

I have asked you why the line is there, and you have not answered.


I think Apple are a corporation with interests that happen to result in them building secure, high performance, quite trustable hardware. Since they have the motive to do so, and since everything I've seen suggests they indeed are, and since their hardware officially allows me to run my own software on it, I would much rather use their hardware (with my own OS/software) than whatever the FSF labels as RYF, which is a label that, in my view, says nothing I care about, not even about my freedom.

Whether Apple is ethical or not is a different question. There is plenty of criticism to be fired at them for various issues. That's a personal call for people to make. I'm not saying you should go buy Apple hardware. I'm saying it's significantly more trustworthy from a security and privacy standpoint than x86 machines. Do they respect my software freedom? About as much as the RYF machines. They both let me run my own OS and they both rely on proprietary firmware for various things. The FSF's certification criteria do nothing for my software freedom (which has nothing to do with whether blobs are in ROM or RAM), they just hurt security, which is something else I care about.

We all have to make our own decisions about what to purchase based on the information available to us. That is why having such information is so important. If you value repairability more than anything, you should probably get a Framework. If you value security above all, you should get a Precursor device. If you want a trustable machine that's still high performance, you should get a Mac. If you want to run Windows games, you should get a gaming PC. If you value your freedom... there isn't anything truly free out there. RYF machines certainly aren't it, nor more free than many others by practical measures, nor transparent about their design.

Hence why I criticize the program. It's not achieving anything positive. It's just a feel good thing; the FSF says it respects my freedom so I can feel good about being Free™ while running more proprietary firmware than many other off the shelf machines.

Just to put things into perspective, I believe Google have done more for computing device freedom than the FSF, because the Chromebook team is notoriously pretty much the only large team which actually pushes for open source everything pretty hard, and they're important enough that some vendors listen, and they have the money to develop things themselves. For example, if you look for an open boot/OS stack for the Tegra X1, the closest you're going to get is the Chromebook Pixel's. Only the RAM training blob is closed source (and there is a reverse engineered replacement these days). Everything from the low level bootloader to the GPU drivers are open. This is no thanks to Nvidia - for pretty much all other customers they offer proprietary bootloaders. Also, I'm pretty sure some Chromebooks even have open source EC firmware, which those ThinkPads the FSF loves so much don't.


Hardware operates differently to software - it's impossible to verify hardware after it's been manufactured+shipped to you, so you need to trust the ENTIRE manufacturing and ownership chain of EVERY SINGLE COMPONENT. This is true of both open- and closed-source hardware.

Look, don't take my word for it - take the word (and more importantly, the reasoning) of the guys behind the Novena open laptop: https://www.bunniestudios.com/blog/?p=5706


Understood. But doesn't restricting blobs to a place where verification is impossible anyway isolate the problem? To me this approach still seems reasonable from a free-software (as per the FSF) point of view.

Also, I am confused why this issue isn't being taken up with the hardware manufacturers. If I purchased their product, and if their user license did not forbid me from using linux-libre, and if this update is absolutely vital for my safety, then it makes much more sense to take issue with the manufacturers. They should open-source the update in this case.


The question is why proprietary microcode that ships with your CPU is fine, but microcode patches released later are non-free? According to the FSF, you should never apply a vendor's microcode patch and instead you should buy a newer processor from them which will ship with the microcode patch already burned in.

You can't even argue that it is a difference of being stored in ROM vs. RAM because the existence of patches means that microcode is upgradeable.


> If you update your CPU microcode to something that can't be checked, you're already sacrificing security.

You're not sacrificing security, you're making a tradeoff.

If I update my CPU microcode to something I can't check myself, I'm explicitly choosing to trust my CPU provider, under the assumption that the risk of a microcode-based attack from my CPU provider is smaller than the risk of a cpu bug-based attack by an unknown attacker.


>You're not sacrificing security, you're making a tradeoff.

If you don't trust your CPU provider then you have zero security from them already. Even if the entire design of the CPU is open-source. https://www.bunniestudios.com/blog/?p=5706


> If you update your CPU microcode to something that can't be checked, you're already sacrificing security.

You are already running CPU microcode that can't be checked when you are running Guix on x86. That ship sailed when you decided to use x86.


It's not productive to call it "arbitrary religious dogma". The FSF chose a criterion for which binary blobs are acceptable. Your opinion and mine are that they set this criterion wrong, but if we hope to change their policy, then we have to discuss it rationally.

I personally think the correct criterion is that of security boundaries and change control. If I've got a graphics card with proprietary code (regardless of whether it's in ROM or loaded into RAM), with a proper IOMMU, the attacks it can perform on me are limited (eg TEMPEST). The video card itself is not Free or secure, but it's unfortunately something I have to use to interface with my Free/secure computer, just like my keyboard with its proprietary firmware. And as long as I can load any version of firmware onto that video card, I retain administrative control where the manufacturer can't revoke functionality after I've purchased it.

Disk drives with proprietary software (which is all of them) are not an attack vector, because a drive should only ever be seeing encrypted data to begin with (eg LUKS).

A network card is a bit more worrisome with its direct network access (ie backhaul), but a Free/secure design shouldn't be trusting the network either (unless you have Free/secure switches), so this does not meaningfully change your security properties.

Obviously the above assumes Free/secure drivers, because drivers are running inside the security boundary of the OS.

CPU masks and microcode run afoul of my strict criteria, but are practically inescapable. Proprietary masks/microcode are required for every amd64 system (correct me if I am wrong), so it makes sense to say you have a Free/secure amd64 modulo those proprietary bits (as say Libreboot already does). And with so few versions of CPU microcode, the question of whether to trust a given microcode update is equivalent to whether to trust a given newly released CPU, and shouldn't be viewed as a software Freedom issue.


The problem is that they have long given up on rationality; they and their devoted followers are an echo chamber of "blobs bad", "my software freedom", etc. They've stopped listening to rational discussion :(


Huh? I've never updated my CPU microcode in other distros, nor did I receive warnings about it.

Do you have a better example?


... yes, because your distro did the right thing and does provide updated packages and loads them for you (it's not persistent, but rather done on each boot), and thus you don't need to do anything and also don't see warnings about your distro failing to do so. Whereas linux-libre doesn't want you to know if your distro isn't loading updated microcode, because it's "better" (according to them) to run vulnerable non-updated microcode than letting you get tempted to use non-free updates to the non-free firmware in your CPU.


Yup, it's all about controlling users' access to hardware and software to fulfill some imaginary "freedom" ideal that doesn't actually have any relevance in reality.

Stallman personally refused to certify bunnie's Novena laptop (a fully open hardware and software laptop) as "Respects your Freedom" because there were no free drivers for the GPU, and although it wasn't going to ship with GPU acceleration (that's optional anyway), Stallman said users might be "tempted" to install the proprietary blob. Instead he suggested it might be possible to get the manufacturer to cripple the GPU (permanently fuse it off of existing chips that already have it), and then that could be RYF-certified.

bunnie gave up on that, but had he actually shipped a crippled FSF-approved version... a few years later, open drivers for that GPU were developed, so that would've made the regular version certifiable, and everyone who bought the "respects your freedom" version would've been left with needlessly crippled hardware. But the FSF insists this is the way to go.

Meanwhile, they're endorsing "Respects Your Freedom" Bluetooth dongles that have about half a megabyte of proprietary firmware in ROM.


> a few years later, open drivers for that GPU were developed

So ... FSF made the right call? What is the point of the certification of a device if there are no drivers for it?


The wrong call, because you can only reasonably develop drivers for uncrippled GPUs. The FSF was so caught up in maintaining a freedom from being _tempted_ to use proprietary software, that their solution would've taken away the freedom to develop your own free software.


I read the certification as some sort of consumer protection/advice. As a consumer, you would want to know the state of the device at purchase, when you pay for the GPU. Not some maybe-driver coming up in a few years.

It would be a kinda good idea to have a "rms would almost use this" sticker they could hand out, though.

Looking at: https://ryf.fsf.org/categories/laptops

There is a sorry collection of refurbished laptops.


> linux-libre doesn't want you to know if your distro isn't loading updated microcode, because it's "better" (according to them) to run vulnerable non-updated microcode

This is FUD. The reason it's not possible to load microcode or other proprietary blobs from linux-libre is because of a limitation of the deblobbing process. From [0]:

> Indeed, I became aware that some users have got the idea that blocking the loading of blobs is a feature. It's not; it's just a bug that's quite difficult to fix. The decision on whether or not to use a piece of software, be it Free or not, should belong to the users, and it's not our intent to make that difficult.

If you can make the deblobbing script leave an escape hatch for users to load their own blobs, at their option, I'm sure the pull request would be well-received.

[0] https://www.fsfla.org/ikiwiki/blogs/lxo/2013-11-08-linux-lib...


Then why is the reasoning in the linked email given as

> Another significant change in this release is that it was pointed out that there were error messages in Linux suggesting users to update x86 CPU microcode. Since such microcode is non-Free Software, such messages don't belong in GNU Linux-libre.

That reads very much as "we don't want to encourage users to consider updating microcode". Your argument also seems unlikely since distros ship the microcode as an extra package that gets picked up by the kernel, so clearly the ability to not load microcode if the user doesn't provide it is there. (It makes sense that it is that way, and it's a different situation than the drivers your link discusses, since the device runs without a microcode update, whereas peripherals that need blobs often won't run at all without them.)


The deblobbing script includes an explicit section that censors the microcode update messages.

http://linux-libre.fsfla.org/pub/linux-libre/releases/5.15.3...

Grep for "Do no recommend non-Free microcode update" [sic].

The actual censoring patterns are here:

http://linux-libre.fsfla.org/pub/linux-libre/releases/5.15.3...

Grep for "arch/x86/kernel/apic/apic.c" to find some of them.


You might have, without realising it. It's supplied as a package in nearly every distro.

Here's the Debian package for the Intel microcode, for example:

https://packages.debian.org/bullseye/intel-microcode

Debian hides it away in their "non-free" repo but it's in the default install in many other distros.
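
If you're unsure whether your machine has ever received a runtime microcode update, the revision the CPU is currently running is easy to check. A small sketch (the revision value shown is just illustrative; works on Intel and AMD):

```shell
# Revision of the microcode each core is currently running
grep -m1 microcode /proc/cpuinfo
# e.g. "microcode : 0xea"
```

You can compare that against what your distro's microcode package ships (e.g. `apt show intel-microcode` on Debian) to see whether the package or the BIOS got there first.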


To add, you can do:

    dmesg | grep "microcode updated early to"
To see when it was last updated.


Yes you have, it's done automatically.


serious question: why is something that is updated without you knowing about it ok?


Same way that it's ok that if you update your distro it fetches newer drivers, a new kernel and patched versions of all the software you installed? Microcode is loaded at runtime, it's not permanently modifying your system.


The question is still: are "magic incantations" in packages ok, considering that they allow the issuer to control your hardware more than if the code was baked into firmware just once?

Also, these packages allow vendors to keep quiet about security issues, because they can silently fix them in the next update.


Is it any more of a "magic incantation" than the linux-image-XYZ package which controls which OS kernel is installed? Or the linux-firmware package which controls what firmware gets loaded on various devices?

If you want to see when Intel issues new microcode updates, it is all available on their GitHub: https://github.com/intel/Intel-Linux-Processor-Microcode-Dat...


But I don't do that automatically, as the parent described. As far as I am aware, I need to manually issue a command to perform the update, which is OK as long as I have that control.


Distros prompt you when upgrading packages, not binaries. You get a prompt to update Firefox package, not for replacing the actual binary on disk.

Typically packages including microcode behave the same way - prompt to update the package, no prompt to implement that update (replace individual files).


"automatically" as in "automatically when you update the rest of your distro", not an extra step as the poster above clearly seemed to expect. That context also was clearly from the other comments already.


I don't seem to be able to reply directly. My question was meant as a clarification, because simply saying something is "automatic" is very ambiguous. "Automatic" to me means giving up control. To me what's being described is not "automatic".


It's updated when you update all your other packages. It's no different from updating your shell or your desktop environment.


But I am aware when those updates happen and what they are. They are not automatic or hidden.


It's just a package. It's not automatic nor hidden. If you've never noticed it being updated, then you probably haven't been paying enough attention to the package list when you did choose to issue a system-wide upgrade command.


Worth noting that BIOS updates frequently ship with ucode updates that are applied at boot before UEFI executes the operating system. So if GP is diligent about keeping the BIOS up to date, it's conceivable that Linux's ucode update has never had any work to do. At the very least this seems to be the case with Thinkpads.


What is that package called? Is it the "linux-firmware" one?



The "updated microcode" package is installed on your system manually (like any other package). The "automatic" part comes in when your system boots up, after this package is installed. On boot, when the package is installed, your system will automatically use the installed package to update the processor's microcode.


If it's not automatic or hidden then it is OK, as long as you have that control. To be clear, I think GNU has every right to forbid non-free software, just as anyone has the right to not use GNU software if they don't want to. What should be known are the consequences and responsibilities for either (any) decision. If GNU says "we don't want to support this security update", it needs to clearly state the risk its users face. If it does that, then it's OK as far as I am concerned.


They do not state anything like that; that's the entire problem with that linux-libre patch. It is removing a message that informs users that their computer is at risk without an update, because they don't want people to know, because if they did most people would choose to install the proprietary update to the proprietary microcode they are already running, and that would expose the existence of that microcode, and the FSF's utopia only exists in the minds of people who aren't aware of all the proprietary firmware they're running anyway.

It's all a big lie. There's proprietary firmware everywhere. The FSF just doesn't want users to know about it, so they can live happily in their blissful ignorance believing they are freer than everyone else.


So is the microcode package and the loader that loads it. What kind of difference are you trying to make here?


> arbitrary religious dogma

"arbitrary political view"

there's a difference, it matters.


It is a religion at this point.

Political views have parties and parties have supporters. FSF is more like a cult that has followers.

How else do you explain their rejection to include Debian as a fully free distro?


Debian is rejected because it has an official, endorsed "non-free" repository. That does not fulfill the GNU FSDG:

https://www.gnu.org/distros/free-system-distribution-guideli...

Specifically, "A free system distribution must not steer users towards obtaining any nonfree information for practical use, or encourage them to do so."

This is a political stance based on rational arguments and has nothing to do with religion.


And by Debian's definition, some of FSF's own packages are non-free (notably, the documentation), so you have to explicitly opt-in to installing that.

I wish both of them directed their efforts towards more pragmatic problems, like making their software more accessible. In the rest of the world, freedom is usually a function of accessibility.


Consistent application of principles doesn't make a "cult". It's just so rare you may be unfamiliar with the difference.

Make a fork of their code with your preferred changes. The world will be improved and people who agree with you will be happy.


> Political views have parties

[citation needed] Political views MAY have parties. I agree to some extent with the cult argument though.


Guix the package manager can work on top of a different Linux installation, and can even manage other things than distro packages. This is one of the things I'm planning to do.

BTW same thing is with Nix: you may not like the choices of NixOS but still enjoy most of the advantages Nix has to offer.
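
As a rough sketch, that "package manager only" workflow on the Nix side looks something like this (assumes a single-user Nix install already exists; the package name is just an example):

```shell
# Install a package into your per-user profile; the host distro's
# packages in /usr are never touched
nix-env -iA nixpkgs.ripgrep

# List what's in the profile, and roll back the last change if needed
nix-env --query --installed
nix-env --rollback
```

Each change creates a new profile generation, so rollback is cheap and atomic even on a foreign distro.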


Guix is dedicated to ensuring user freedom. It doesn't provide non-free software so that users can make informed choices: they might choose to use third-party channels that provide non-free packages, or install non-free software via flatpak or some other tool, but they can trust Guix itself will never provide non-free software.

Part of allowing for informed choices is about raising awareness, for instance by explaining why some WiFi devices won't work out of the box (which is really the only practical issue one might stumble upon): https://guix.gnu.org/manual/en/html_node/Hardware-Considerat...


>FSF doctrine that "binary blobs are bad unless you can't see them"

Your statement is misleading.

Why did you add "unless you can't see them"?

Did the FSF say binary blobs you can't see are not bad? That seems the opposite of what the FSF would do.


The FSF openly endorses devices with hundreds of kilobytes of nonfree blobs burned into ROM. They do this because they claim they can just "say it's equivalent to hardware". They only care about blobs when they're loaded from software, because at that point their existence becomes apparent, even though naturally a blob loaded into RAM gives the user more freedom (e.g. to inspect it or change it) than a blob loaded from ROM.


It has a lot of features that could make it useful in certain contexts, like VM installs where you don't need to lean heavily on the GPU. I'm going to try it at some point this weekend on a qemu VMM.


Hello, author here, happy to answer questions and discuss! This is the first in more Guix articles coming from me, as I started when building my newest desktop [0]. Next will be more details on exactly how my system is set up, but you can find (somewhat out of date as we work on big updates in Guix) my dot files here [1].

[0] https://news.ycombinator.com/item?id=28628344

[1] https://github.com/podiki/dot.me/tree/master/guix/.config


What's your experience with Guix and Nix on laptops? Which among them are sufficiently supported to be daily drivers?

Have you tried running Proton on Guix?

For someone who is interested in the Guix/Nix ethos but wants to keep Arch, is there anything aside from sentimental reasons that I would be missing by just running the Nix/Guix package managers on top of Arch?


I haven't used Guix on my laptop much other than as a package manager or to work on patches when I'm not at my desktop. So I can't say much, but Guix does tend to be heavier on storage IO needs. You can always use substitutes to avoid much building, unless you are doing things you need to build locally, like custom patches or other changes.

Proton I've only run through Flatpak Steam. As I mentioned, Nonguix's Steam is limited to older Proton, but I think we'll get that fixed pretty soon once some bigger updates on Guix settle down (today was actually a "sprint" day to fix up changes coming to the main Guix branch soon). But yes, worked great in Flatpak.

The main thing you'll miss from Guix in just using it as a package manager is the system configuration stuff. That will still be your host OS. You can play around with installing different things with Guix, package transformations, and even now trying out the (still in progress) Guix Home [0] to configure your dot files. Guix as a package manager will still give you a good feel of the advanced features it has for managing packages, without touching your host OS.

[0] https://guix.gnu.org/manual/devel/en/html_node/Home-Configur...
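
As a concrete sketch of that package-manager-only usage (the package names are just examples, and exact transformation options depend on your Guix version):

```shell
# Install into ~/.guix-profile without touching the host distro's packages
guix install htop

# Package transformations: e.g. build against a different version of a dependency
guix build guix --with-input=guile=guile@2.0

# Every profile change is a new generation; undo the last one
guix package --roll-back
```

The transformation options compose with ordinary installs too, which is a big part of what makes Guix interesting even as a secondary package manager.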


I'm using NixOS on my last two laptops without issue. One was a Surface Pro which required a custom kernel and right now I have an XPS 13. Proton runs fine as well.


I'm assuming the kernel you're referring to is the one from the linux-surface project? If so, which model and configuration of Surface Pro were you using? Were you ever able to get LTE, touchscreen, and pen to work? How is the battery life?

Is your XPS 13 the latest revision (i.e. 9310)? If it is, how does Nix handle sleep since Intel had removed S3 support?


> I'm assuming the kernel you're referring to is the one from the linux-surface project? If so, which model and configuration of Surface Pro were you using?

Yes, I used the linux-surface project. I used a Surface Pro 2017 with an Intel Core i7-7660U CPU. It doesn't have LTE. But touchscreen and pen worked without issues after some fiddling with configs. Battery life was a little worse than on Windows, but not dramatically so.

> Is your XPS 13 the latest revision (i.e. 9310)? If it is, how does Nix handle sleep since Intel had removed S3 support?

Yes, it's the 9310 model. I use suspend (systemctl suspend) and it works fine. I'm not sure how I can check whether S3 is actually supported. Battery life is about the same story as the Surface. A little bit worse compared to Windows.


If you're curious about XPS support in general, I use Arch and force S3 on my 9380 [1] and it works fine, also slightly worse battery when sleeping than ideal (loses about 5-10% in 24 hours, or so), but fine.

[1]: https://wiki.archlinux.org/title/Dell_XPS_13_(9370)#Sleep
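
For anyone wanting to check or replicate this, the relevant knobs look roughly like the following (assumes GRUB; `mem_sleep_default` is a standard kernel parameter, but whether "deep" is offered at all depends on the firmware):

```shell
# Which suspend modes the platform offers; the bracketed one is active
cat /sys/power/mem_sleep        # e.g. "s2idle [deep]"

# To force S3 at boot, add to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
#   mem_sleep_default=deep
# then regenerate the config, e.g. with update-grub (Debian/Ubuntu)
# or grub-mkconfig -o /boot/grub/grub.cfg (Arch)
```

This is also how to answer the "is S3 actually supported?" question above: if "deep" doesn't appear in /sys/power/mem_sleep, the firmware only offers s2idle.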


Guix System works great on a ThinkPad T440p besides the default WLAN card. I removed mine for now. If you flash coreboot (to get around the hardware whitelist), you should be able to put in one compatible with linux-libre. There's also the USB dongle solution if you're into that.


Is there a reason neither Guix nor Nix have made an LTS-type repo?

A rolling release seems at odds with the stability granted by reproducible builds.

Maybe I have a myopic view, but what's nice with Ubuntu LTS is you know everyone and their mothers have built and tested their packages/libraries/executables against the lib versions provided by Ubuntu 18.04 or 20.04 or whatever. You also know those libs, at their given versions, will get patched as issues come up. If all your libraries are constantly churning then you have no idea of the quality of the libs you're linking to (and that lib's dependencies are also churning in turn). You can pin versions, but there is no guarantee what you're pinning to will be patched and fixed as issues come up.


Good points made by others. But another thing is you don't have to? Nix(OS)/Guix installs packages and dependencies separately instead of replacing packages like other OS/package managers do.

This means you can have multiple versions of a package installed in your system, and you can use them simultaneously for different applications, if I understand it right. You can move back and forth between versions if one breaks without affecting the rest of the system. Also, running nixos-rebuild switch (usually done after installing new packages or when a major change is made to the Nix configuration file) creates a new boot entry, and hence a snapshot for you to go back to if something in NixOS breaks. So there is no necessity for an LTS version to be present for stability's sake.

Garbage collection is also left to the end user to deal with. There is a garbage-collector command in the Nix package manager which will clean up when you run it, so that old packages are not flooding your storage space.

NixOS's stable channel can be very loosely compared to Manjaro, in the sense that, just like Manjaro, NixOS's stable repo makes opinionated changes/interventions/fixes. And NixOS unstable is like Arch Linux, with the latest and greatest upstream stable versions of the packages.
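
The generation/rollback workflow above, sketched in commands (NixOS-specific; the profile path shown is the standard system profile location):

```shell
# Apply configuration.nix; each successful switch records a new boot generation
sudo nixos-rebuild switch

# Inspect the generations (they're also selectable from the boot menu)
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system

# Something broke? Flip back to the previous generation
sudo nixos-rebuild switch --rollback

# The user-invoked garbage collection mentioned above
nix-collect-garbage --delete-older-than 30d
```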


Right, you can fix your versions if you care, but you don't have a million keyboards "tuning" a fixed version set to play well with each other, so you don't get the same robustness.

I know I can get any software to run on Ubuntu 18.04. Everyone supports those packages as their specific versions

Having multiple versions of the same package sounds like a disaster. That's the diamond problem. LibA depends on LibB and LibC. Those both depend on LibD at different versions... What then?

These rolling distros generally target one arch and one kernel because the whole thing is fragile. Things like OpenSSL have bugs even between minor version numbers once you start looking closely.


A rolling release cycle is indeed enabled by this. A major benefit that other distributions can't have without containers.

You just need to know which versions introduce major API changes etc. and you are fine.


Nix has a stable channel (six-month release cadence, with one month of support overlap), which isn't LTS, and even just the stable channel is "expensive" enough to be difficult for the project to justify. Supporting LTS has a genuine non-trivial cost in engineering man-hours. Nix has the technological underpinnings to make it easier to support LTS, in theory, but in all likelihood there won't be an LTS channel without corporate sponsorship providing funding to employ LTS maintainers, akin to Red Hat / IBM, Canonical, SUSE, etc.

Consider threads like https://discourse.nixos.org/t/what-should-stable-nixos-prior...


I guess what I don't get is: wouldn't an LTS be way, way easier to maintain than a rolling, constantly breaking release? Heck, I'd probably even go through the effort of maintaining my own software as a package if it's something I could do once every few years over a stable base. They could even make life easy and match version numbers with whatever the latest Ubuntu LTS is using. And people who need newer libs/bins could statically link whatever they need, or provide whatever they want separately on top of that stable base (sorta like what PPAs do).


No. LTS is much more prone to breakage. LTS means you're porting yesterday's security patches onto code that was abandoned five years ago. And usually you have package maintainers doing this, not actual software developers.

There's no reason to run LTS unless for corporate insanity purposes.


> And usually you have package maintainers doing this, not actual software developers.

This is a weird thing to say. The package maintainers doing substantive backporting work for any distribution absolutely are actual developers.

> There's no reason to run LTS unless for corporate insanity purposes.

LTS releases also give you stability of behavior, which can be valuable even outside of corporate environments.

Plus six months is really short. There's plenty of space between that and the full lifecycle length of a major RHEL release, or an Ubuntu LTS. NixOS releases that lasted two years would be awesome.

I'd love to try to use a more long-term NixOS release for a downstream project, if it ever got the kind of corporate backing necessary to sustain that kind of release.


> The package maintainers doing substantive backporting work for any distribution absolutely are actual developers.

The maintainers doing the backporting are affiliated with the distro rather than being developers of the software they're doing backports for. That is a big distinction because it determines who has to incur the costs of compensating them/recruiting them to volunteer


Oh! On this interpretation, the GP comment is basically missing an instance of the definite article there:

> And usually you only have package maintainers doing this, not the actual developers.

It's not super unusual for the maintainers of some program's packages in several distros to also be core developers of the project, but yeah, that's a good point.


There are pros and cons of the LTS approach.

A big CON is that even on Ubuntu LTS, lots of software is incredibly out of date and full of unpatched security vulnerabilities.

Consider Roundcube, probably the most popular PHP email web client.

On Ubuntu 18.04 LTS the last update is from, well, April 2018: https://packages.ubuntu.com/bionic/roundcube

Now consider the amount of CVEs (remote code execution and XSS) published for Roundcube since then, which are all unpatched in that Ubuntu: https://www.cvedetails.com/vulnerability-list/vendor_id-8905...

If you are running Ubuntu LTS on your server, that's a big problem.

The `roundcube` package is in the `universe` repository, meaning "community maintained". In this case for this LTS that meant "no security updates at all for 3 years". The newer LTS, 20.04, doesn't seem to have those CVEs fixed either.


Nix may not have an LTS release, but it certainly does have stable releases. The biannual releases are there precisely for this reason [1].

What's more, the issues you have with package pinning are more of a problem with traditional package managers. The situation is better on Nix and Guix because it provides you with more control and flexibility over packages.

With traditional system-level package managers, you can't really pin a subset of installed packages. Since all packages are installed into a shared location and depend on each other both explicitly and implicitly, packages in a distro release are tightly coupled together. It's just not possible to swap out or pin a subset of packages without the risk of breakage. As a result, a dedicated distro release consisting of old packages is needed to keep using older versions of packages.

This is not the case for Nix and Guix, which install packages in their own isolated locations. Packages are more loosely coupled, and you can mix packages from stable channels, unstable channels, and even specific git commits of those channels. Using pinned versions of critical system packages is also less of a risk because different versions of the same package can coexist on a single system. Even if something does break, you can always roll back.
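
A toy sketch of why that coexistence works: a store path is derived from a hash over the package and its whole dependency set, so two versions can never collide on disk, and each dependent package refers to exactly the version it was built against. (This is just the idea; Nix's real hashing scheme covers the full build recipe and uses a longer hash.)

```shell
# Hypothetical content-addressed store paths: the hash covers name+version
hash() { printf '%s' "$1" | sha256sum | cut -c1-8; }

libd1="/nix/store/$(hash 'libd-1.0')-libd-1.0"
libd2="/nix/store/$(hash 'libd-2.0')-libd-2.0"

# Different inputs give different paths, so both versions sit side by side;
# LibB can link against libd1 while LibC links against libd2.
echo "$libd1"
echo "$libd2"
```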

Finally, Nix and Guix provide ways to fix issues in pinned packages. You're not stuck with whatever the distro provides you with. They're flexible and allow you to create your own custom packages out of existing ones. For example, here's how you can backport a patch to a pinned package on Nix:

    existingPackage.overrideAttrs (old: {
      # `or [ ]` guards against packages that define no patches of their own
      patches = (old.patches or [ ]) ++ [
        (fetchpatch { url = "..."; sha256 = "..."; })
      ];
    })
So while the lack of an LTS may be a bit disappointing, I wouldn't consider it a complete dealbreaker, because the features and tooling make up for it.

[1]: https://nixos.org/blog/announcements.html


I'd guess a lack of hands. Maintenance is the expensive and boring part of a distro. And seriously, why should an unpaid volunteer help to maintain a stable foundation for all the companies that rely on, e.g., Ubuntu 18.04?

I wonder if one could build a business out of supporting such distributions for a comparatively small fee.


I doubt it, folks interested in long-term support can already use Debian, Ubuntu, Red Hat, etc.


> Is there a reason neither Guix nor Nix have made an LTS-type repo?

Nix and Guix are already esoteric enough to scare people away; I can't imagine how these projects would be able to reliably manage LTS releases unless they got serious financial support or enough manpower willing to deal with backporting security and bug fixes.

You might wanna look at Fedora Silverblue and the OSTree technology making its way into RHEL/CentOS/Rocky/ALMA etc.


To add to what others have said, a benefit of this type of distro is you can always go to a specific point in time of your configuration and packages. So if it has worked, it will continue to work, while you can upgrade or change other parts in a separable manner.

On a more interesting note, I wrote an aside about grafts, which are a way of delivering changes to packages without rebuilding their dependents (often to graft a security fix into a library the package uses). So you can keep a configuration of packages you like and graft on security fixes while staying on the same versions. There are some caveats about when you can do this, of course (ABI compatibility, mainly), but it's some cool technology, to me.
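For the curious, a graft is declared with the `replacement` field of a Guix package definition. A minimal sketch — the package name, version, and patch file are all made up for illustration, not real Guix packages:

```scheme
;; Sketch only: `libfoo` and its patch are illustrative names.
(define-public libfoo
  (package
    (name "libfoo")
    (version "1.0.0")
    ;; ... source, build-system, inputs elided ...
    ;; With grafts enabled, packages depending on libfoo get
    ;; libfoo/fixed spliced in without being rebuilt themselves.
    (replacement libfoo/fixed)))

(define libfoo/fixed
  (package
    (inherit libfoo)
    (source (origin
              ;; same origin as above, plus the security fix
              (inherit (package-source libfoo))
              (patches (search-patches "libfoo-cve-fix.patch"))))))
```

Grafting works by rewriting store references in dependents' outputs, which is exactly why the ABI-compatibility caveat applies: the replacement must be a drop-in binary substitute.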


> A rolling release seems at odds with the stability granted by reproducible builds.

GuixSD and NixOS aren't subject to some of the difficulties of traditional rolling-release Linux distributions.

You are exposed to upstream behavior changes and upstream bugs more or less as soon as they arrive, just like on Arch Linux or something like that. But most of the other risks and maintenance burdens of ‘running a rolling release’ aren't borne by GuixSD and NixOS users.

This is not so much due to the reproducibility of Nix builds as to the hermeticity and statelessness of Nix and Guix builds.

The hermeticity means that you don't have to worry about ABI breakages in the same way, since packages that need incompatible versions of the same library will each find their respective version of that dependency sitting safely isolated from one another in the package store. This spares you from a few maintenance burdens:

• installation of new packages causing/requiring upgrades of ‘unrelated’ packages because they share some dependency with the new one (this one affects standalone package managers like Homebrew as well as rolling releases of entire operating systems like Arch)

• having to upgrade before installing new things, every time you update your package sources

• having to rebuild some packages that are not part of your main OS configuration because you upgraded your main OS (e.g., Arch updates breaking AUR packages, or system upgrades breaking software you compiled manually or your Python virtualenvs, etc.). (You do have to inform the package manager that you don't want these externally-depended-on packages garbage collected, though. For Nix, for example, you can do that automagically by running fc-userscan against your project directories: https://github.com/flyingcircusio/userscan )

The statelessness is related to reproducibility in that it's a result of the functional package management approach to reproducibility. Since every version of your whole system is generated without reference to previous versions (indeed, from scratch!), the OS never has to navigate state transitions for its packages. It doesn't have to worry about converting configuration files from one format to another, or replacing defaults, or the implicit dependencies of your system undergoing name changes or being replaced. Nix and Guix don't need Debian-style transitional packages and similar tricks. That means you aren't punished, on a per-package basis, for not updating your system constantly.

For example, I recently took some neglected, non-internet-facing NixOS servers and updated them from an early 2018 release of NixOS to the latest NixOS unstable rolling release. While I did first have to work around a forward-incompatibility issue in Nix itself, the rest of the upgrade was a single step, and I didn't have to worry about finding a valid ‘upgrade path’.

It's worth noting that in a strict sense, the reproducibility is all still there, even for NixOS releases that no longer receive updates. If you need to use an old version of some piece of software for compatibility reasons, in a safe environment, you can use the latest and greatest Nix to install packages from NixOS releases that are 2 or 4 or 6 or 8 years old— including on top of a bleeding edge system running NixOS Unstable.

But you have a point: it would be awesome if there were long-term releases, because you would get a different kind of reproducibility, one which is less strict but more useful in some ways. For example, you could take a Nix expression that someone posted in a Gist on GitHub 4 years ago, for what was back then the latest NixOS stable release. If that release were also an LTS, you could not just reproduce what they actually had, but apply it against the latest version of the same LTS to get a system that should be totally compatible in terms of behavior, but suitable for running in production without modification, thanks to up-to-date security patches.


> In short, GNU Guix is both a package manager you can use in any distro and a full-fledged GNU/Linux distribution, that is modern and advanced

Why not just use Nix, which is more battle tested?


As a NixOS user who has (recently!) played with Guix, I don't think ‘battle testing’ is a great reason to prefer Nix.

Guix has an excellent CLI and awesome docs. It's stable and plenty usable. All being in one, high-level language like Scheme makes it seem really easy to hack on, and I think that's part of why its CLI is so good already.

Nix is faster, it supports macOS, and its package collection is much bigger because it's older. But Guix seems great to me, too! If you think you might like it, try it.


What about number of packages available? Does guix have something comparable to Nix Flakes?


I think Guix has like 1/3 the total number of packages available, probably less. This may not be a huge deal— when I started using NixOS, Nixpkgs was much smaller than it is now, too, but it still felt worth it for me. As another user pointed out, packaging in Nix and Guix is pretty easy for anything that doesn't have a bespoke or ill-behaved (requiring network access, trying to write to the directories of other packages, etc.) build system. But it can make a big difference for usability if packaging work is cumbersome for you, or something very large or complex that you want is missing. (I think KDE is still missing, for example.)

Nix flakes are kind of a lot of things: a version pinning system, the switch to pure evaluation mode by default (which impacts caching in a good way but makes configuration slightly more annoying), a distribution mechanism for code written in Nixlang, and a collection of Nixlang schemas which enable a richer command line experience (that, e.g., power the new `nix run` command). It's not clear how many of those functions Nix flakes will retain in its final form.

There's no singular feature that attempts all that in Guix as far as I know, and I don't know the general technical story for evaluation caching with Guix. But Guix does have an integrated, first-party form of package pinning. Where Nix flakes replaces the old 'channel' system, Guix has a richer notion of channels which support pinning in normal Guix expressions.

(In Nix, channels are an artifact with a certain structure, managed via the nix-channel command; they're like another type of Nix profile. In Guix, channels are defined by the user directly in Scheme code, just like the rest of their configuration, which is closer to how Nix flake inputs are defined than to Nix channels.)
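Concretely, a pinned channel file looks something like this (the commit hash below is illustrative, not a real revision):

```scheme
;; ~/.config/guix/channels.scm -- pins Guix itself, and therefore the
;; entire package set, to a single revision; `guix pull` honors it.
(list (channel
       (name 'guix)
       (url "https://git.savannah.gnu.org/git/guix.git")
       (commit "0123456789abcdef0123456789abcdef01234567")))
```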

I've never used Guix channels in anger, so I can't tell you how nice that notion of version pinning is to use.


"guix describe" and "guix time-machine" make it rather easy to pin a Guix revision (and thus the whole package set) and to restore it anytime:

https://guix.gnu.org/manual/en/html_node/Replicating-Guix.ht...


I really like the design of `guix time-machine`. It's pretty cool how it just reuses the existing syntax for other subcommands, instead of modifying them.


Guix has lots of packages in the default channel, and you can add any odd git repo as another channel providing extra software.

There are quite a few popular channels, such as non-guix (with things like vanilla Linux), guix-science, guix-past, etc.

As a maintainer of R packages in Guix I'd also like to point out that many R packages in Nix actually need more work to build them, so the number of packages in Nix is rather inflated. Guix also goes to great lengths to actually build things completely from source, such as Java packages, or to minify JavaScript from source files.


Currently in nixpkgs ~3% of rPackages fail to build.


To be fair, the appeal of Nix (and Guix) to me is that it's very easy to create binary packages of applications I use. Coming from Fedora, it takes minutes to create new packages compared to hours with RPM.


Author is a lisper, is a good reason.

And while nix may be battle tested that doesn’t translate to a good experience, the learning curve is high and the documentation while plentiful is not really good or helpful to beginners. Plus the entire thing is in flux right now between flakes, home-manager, and a desire to kill nix-env.


> Plus the entire thing is in flux right now between flakes, home-manager, and a desire to kill nix-env.

This is my big gripe with Nix. There are so many things that are almost ready, or almost integrated, and advanced users are typically already using them. It makes it feel like next year will always be a better time to recommend Nix to newbies.

And a lot of the more ambitious contributions to Nix and Nixpkgs that are really, really exciting as a user tend to sit in pull request limbo for a very long time, sometimes dying on the vine. Guix doesn't seem to have that problem yet, but I don't follow it as closely.

It's painful to feel sort of totally married to it but also like I can't whole-heartedly recommend picking it up to most people I know who might enjoy it once they got going.


Not sure what level exactly a beginner is, but I have my problems with Guix docs too. Try making a package, for example. It took me a long time and multiple questions on the mailing list and IRC to get it done. People also recommend more than one strategy. Then try upgrading a package when there are some files you want to exclude: how do you do that? I can't seem to find answers to my questions in the docs and always find myself asking on the mailing list or IRC. People there are helpful and you mostly get an answer. I like Guix as package manager, but their docs can definitely be improved with loads of examples and tutorials.


> Not sure what level exactly a beginner is, but I have my problems with Guix docs too.

Oh don't get me wrong, I'm not saying guix is better (I have absolutely no idea), just that the experience with nix is extremely rough so nix being a bit more popular is not necessarily that much of an edge (or one at all).

> I like Guix as package manager, but their docs can definitely be improved with loads of examples and tutorials.

In fairness I'll say that especially if you're a long term user it is very easy to be blind to the early user experience. Sadly most projects don't push new users towards really reporting their experience or even contributing to the docs, but if you have the time and inclination to do so I'm quite convinced your experience would be extremely valuable to those who'll come after you, even if the project doesn't necessarily value them that much (but even then it can be useful as evidence of issues with the early experience / uptake, and possibly efforts to rectify them later on).

It's also useful on a personal level, because memory is a fickle thing and a year from now you may not even remember your struggles.


I wrote down all I learned here: https://notabug.org/ZelphirKaltstahl/gnu-guile-gnu-guix-pack... (or use org-mode file in same repo)

I've not yet had the energy or patience to learn the Texinfo format, which is the standard for GNU projects. But if anyone wants to put what I have into the Guix docs, even as merely an example or tutorial, I won't mind.


Out of curiosity, did you try exporting from org mode? https://orgmode.org/manual/Texinfo-export-commands.html


I know that is possible, but I did not try, because I do not know whether such an export would drop straight into the actual documentation's structure and nesting depth (if such things exist), or would have to be modified a lot before it could go in the docs.


I found the easiest way to answer those questions is to look at how other packages do it.

The issue here isn't just that the documentation is lacking, but that Guix wants to have everything done in Scheme, so you often run into the issue of having to translate code you already have running in Bash into whatever Scheme equivalent Guix wants.


> Author is a lisper, is a good reason.

If I read this on any other website I’d assume it’s sarcasm. Only on HN, folks.


I don't understand why it would be sarcasm on any site. That is the original divergence: Guix was a version of Nix built on Guile instead of the Nix language.

If you're a long-time lisper interested in the ideas embodied by nix/guix, that alone is a lot of points for guix.


It's curious how the world rejects lisp so fast, I guess they have a lot of energy to waste.


... why would other sites not count "Guix uses a language the author might already be more comfortable with" as an argument for why the author might prefer Guix?


Short answer: I prefer Lisp (Guile Scheme in this case) to anything else really. Maybe the Nix language is fine, but I can already read Guix code and prefer working in the land of parens.


Nix is another Linux package manager.

https://nixos.org/


For my part, I think Guix has a much nicer CLI experience, and I find the documentation easier to comprehend. I also prefer the Guix DSL in Scheme to the Nix expression language.

I haven’t used NixOS though, which could make things a bit different.


I moved to NixOS and came back to Arch because setting up development environments for general-purpose computing felt tricky. I was on a schedule, so I had to drop it. Is it the same for Guix? I don't know why it should be any different, but still asking.

Also, I'm guessing people will likely have hardware compatibility issues with Guix that aren't a problem on NixOS, since it bundles non-free drivers etc.


It's tricky both because there is more than one way to do it, and because many of those ways aren't well documented. Here's what I use:

- steam-run lets most binaries targeted at Ubuntu "just work" with no setup.

- buildFHSUserEnv lets you work with things that expect a typical /usr /bin /etc tree

- For cases where I just need an LD_LIBRARY_PATH and PATH setup, see[1]. This runs an emacs with my development environment. Since it uses buildenv, if I find I'm missing a library, I can add it, and rebuild the environment without starting emacs and it works fine.
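As a sketch of the second option, a minimal FHS environment might look like this — the library list is only a guess at what a typical foreign binary needs, not a canonical recipe:

```nix
# shell.nix -- sketch of an FHS-style environment for binaries that
# expect a conventional /usr, /lib, /etc layout.
{ pkgs ? import <nixpkgs> {} }:
(pkgs.buildFHSUserEnv {
  name = "fhs-dev";
  # Swap in whatever shared libraries the foreign binary actually links.
  targetPkgs = pkgs: [ pkgs.zlib pkgs.openssl ];
  runScript = "bash";
}).env
```

Running `nix-shell` in the same directory then drops you into a shell where the usual FHS paths exist, so unpatched binaries can find their libraries.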


For “tmp-like” code, I had a folder with Debian installed. I could systemd-nspawn into it for a basically zero-overhead “standard” unix system.

But for slightly larger projects, creating a package description was always worthwhile. I especially liked having a shell config where entering a directory automatically put the necessary packages in scope.


Not sure what kind of development environments you mean, but one of the great features of Guix (and Nix I think) is to easily provide isolated environments. This can help you control dependencies and other things separately from the main OS. In guix this is as easy as `guix shell some packages` or add a `-D` for development libraries needed for those packages instead. You can then easily reproduce the same environment on another machine.
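To reproduce such an environment on another machine, a manifest file can be checked into the project; the package names below are just examples:

```scheme
;; manifest.scm -- `guix shell -m manifest.scm` recreates this
;; environment on any machine running Guix.
(specifications->manifest
 (list "gcc-toolchain" "make" "python"))
```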


Setting up web development environments and mobile dev environments, or anything in which different languages are involved. NixOS has some documentation on it. Like I said, this wasn't because they (NixOS and Guix) implement it badly, but because of my time constraints, as mentioned before.

But it also felt like, apart from learning the new way NixOS/Guix wants me to use my system, I now have to learn new ways of dealing with each different language. To me this felt like a big overhead. As a n00b at NixOS (not a n00b on Linux, BTW), I have questions like: doesn't this mean I will have less support when setting up a development environment for something new that comes along? This has honestly made me take my time going back to NixOS, because I cannot just compile packages directly like I could in normal Linux distros. Right? Do correct me if I am wrong. Like I said, the time constraints (+ pandemic) have made it hard for me to find the time. Maybe you could correct me where I am wrong and help me understand it better. :)


A question that nags me every time Guix comes up, or nix, is about the benefits relative to a normal distro on something like ZFS. Are reproducible builds ever going to be all that important to a user? Rollbacks seem like the key feature here and that seems much better left up to the filesystem, not the package tools. This way your storage is also aware of what you're doing.


Reproducible builds are an important part of efforts to secure the software supply chain. Ideally you want multiple independent parties vouching that a given package (whether a compiled binary, or a source tarball) corresponds to a globally immutably published revision in a source code repository.

That gives you Binary Transparency, which is already being attempted in the Arch Linux package ecosystem[0], and it protects the user from compromised build environments and software updates that are targeted at a specific user or that occur without upstream's knowledge.

Once updates can be tied securely to version control tags, it is possible to add something like Crev[1] to allow distributed auditing of source code changes. That still leaves open the questions of who to trust for audits, and how to fund that auditing work, but it greatly mitigates other classes of attack.

[0] https://github.com/kpcyrd/pacman-bintrans

[1] https://github.com/crev-dev/cargo-crev


The main selling point for me, on NixOS is the ability to switch machines easily, and know that I'm basically using an identical build. Throw your config files onto GitHub or a flash drive, then do whatever you want on your desktop. Want to switch to your laptop? Pull any changes down, do a quick rebuild, and you have an identical system. If you buy a new machine, just install nix, pull down your config files and you're good to go. It used to take me hours to set up a new device, often relying on carefully crafted bash scripts, to try and replicate builds, that needed to be constantly maintained. Not any more.

BTW, I rarely roll back, and if I do it's because I've monumentally fucked something up, which makes it a great feature.


Rollbacks are a minor feature, I never found any use for in either Guix or Nix.

The main advantage of both of them is that installation and 'making available' are decoupled. Meaning you can install five different versions of the same app and it'll be totally fine, as they don't sit around in '/usr/bin/foo' clashing with each other. They sit conflict-free in '/nix/store/${HASH}/bin/'. To make them available for use you have to add them to your profile/environment (i.e. $PATH and symlinks that point into '/nix/'), or just run them from their '/nix' directory if you prefer. This makes development and testing new versions much easier, as nothing ever really breaks to begin with. You can just spawn a new environment, fill it with whatever you need, and use it. And it all happens at the packaging level, so it's pretty quick, and you never end up with snapshots that capture far more of your filesystem than you planned.
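The store/profile mechanics can be sketched with nothing but coreutils. This toy script (all paths made up, no Nix required) shows why two versions never clash and why rollback is instant:

```shell
# Toy sketch (not real Nix): hash-addressed directories plus a profile
# symlink give conflict-free coexistence and cheap rollback.
set -eu
store=$(mktemp -d)
for v in 1.0 2.0; do
  # Real Nix prefixes each directory with a cryptographic hash of all
  # build inputs; "fakehash" stands in for that here.
  mkdir -p "$store/fakehash-hello-$v/bin"
  printf '#!/bin/sh\necho "hello %s"\n' "$v" > "$store/fakehash-hello-$v/bin/hello"
  chmod +x "$store/fakehash-hello-$v/bin/hello"
done
# "Installing" version 2.0 just repoints the profile symlink...
ln -sfn "$store/fakehash-hello-2.0" "$store/profile"
"$store/profile/bin/hello"    # prints: hello 2.0
# ...and rolling back repoints it again; 2.0 is still in the store.
ln -sfn "$store/fakehash-hello-1.0" "$store/profile"
"$store/profile/bin/hello"    # prints: hello 1.0
```

Garbage collection in the real systems is then just deleting store directories that no profile generation points at anymore.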


rollbacks are pretty useful when you are experimenting with the boot process

everything else is unlikely to break your system enough to prevent you from switching profiles


A couple of things: if the system can handle rollbacks it will be much more reliable than using the fs, as the fs knows nothing about actual state. It knows about blocks committed to disk. Usually they look the same, but not always.

Then there’s the question of how exactly you reached this state. Having nixos generations is like having an event stream of all changes. Apply your backup to a new machine, what happens? Who knows.

In nixos/guix it’s not about only ”package tools”, it’s about treating the complete system and its state as a coherent whole.

Once I got the taste of it I see no way back to opaque packages, managed by config tools with no concept of state.

And if you’re a dev - shell.nix and declarative containers ftw.


> if the system can handle rollbacks it will be much more reliable than using the fs, as the fs knows nothing about actual state. It knows about blocks committed to disk. Usually they look the same, but not always.

Could you elaborate on this? If I have state that works on v1, and after an upgrade to v2 the state is no longer backward compatible with v1, how can Nix (or Guix) help? As far as I can tell, in such a case a fs rollback is the better solution.


Depends on how the state is stored. If it's in configuration, Nix generated it and it lives immutably in the Nix store, so Nix will just point it at the old version on rollback.

If it's something like the content of a SQL database, which lives outside the Nix store and which Nix did not generate, you need some other tool (like a filesystem snapshot, maybe) to perform the rollback. I think CoW filesystems sometimes have performance issues with DBs, though, so I'm not sure that's always the approach you'd take.

The Nix ecosystem does have a fairly mature tool for managing stateful components that live outside the Nix store, though: https://github.com/svanderburg/dysnomia

It's been around for a long time. Idk who all is using it


What state do you mean? Let’s say you have a backup of your system partition but your home folder is separate. In that case a rollback with the filesystem is the same as one through nix — the latter will place symlinks to all the previous versions and the finished result is identical.

Of course if you have in the meantime used the new system and your home folder contains some backwards-incompatible changes, both solutions will fail. Rolling back your home folder may not be a good thing, as it may also contain changes you'd prefer to keep.

Also, nix can also manage installed application’s configs, so those could also be rolled back, on either a per-app basis or however you prefer.


Sure you could, like Solaris did IIRC, zfs snapshot before updates was applied.

What I mean is that a fs snapshot is ”dumb” in and of it self.

If you couple zfs with the nixos rebuild command, then… sure, I guess - but the previous generations are already more or less directly available, unless GC’d.

This is what the fine manual has to say:

Since Nix is good at being Nix, most users will want their server's data backed up, and don't mind reinstalling NixOS and then restoring data.


How does Nix work with less sophisticated package managers that run on top of it, e.g. Python's "pip"?


If you just use pip & co, nix will be unaware of it and won't care one way or another. It will basically treat your pip-installed stuff like it'd treat your sourcefiles or pdfs.

Alternatively you can create derivations instead in which case the resulting artefacts will be fully understood and manageable by nix (I think there are integrations to do that e.g. tools like carnix which can automatically create derivations from existing language-specific packages, not sure if there's one for pip/pypi).


> If you just use pip & co, nix will be unaware of it and won't care one way or another. It will basically treat your pip-installed stuff like it'd treat your sourcefiles or pdfs.

Also if you run pip as root (to install for all users at once)?


It fails, because NixOS mounts /nix/store (the Nix store, where all managed packages are stored) as read-only (with some trickery used by Nix itself to bypass this for its own builds).

And if you bypass that, `nix-store --verify --check-contents` can detect the issue.


This is very reassuring, thanks.


Guix, and I think Nix also, package the things from these other package managers so you can manage them all with the same package manager. This also means features like rollbacks can apply to your emacs packages.

You can run pip on Guix System, but I don't think you'd have to, ideally. Same for rust's cargo and so on.


Ok, makes sense. Does that mean that you don't get the latest updates that are available in pip?

Also, what happens if you incidentally run pip on a Nix system? Will it mess up your installation?


You can update packages to newer commits ahead of guix itself updating the package with a simple command[0] as long as dependencies and such haven't changed. So, if guix were behind, you should be able to easily remedy it.

[0] guix install mpv --with-commit=mpv=cc4ada655aae06218b900bb434e3521566394cde


I can't speak for pip, but packages installed with cargo, Rust's package manager, disappear on reboot. It doesn't matter for development, since libraries are stored with your project, but any tools you install via cargo are gone.


Rollbacks are a convenient benefit that you get from using Nix.

Nix (and guix) allow for a declarative description of a package, (or, e.g. a collection of packages). In addition to rollbacks, I think some of the other benefits are neat.

e.g. Usually for running some project I see on GitHub, I have to copy-paste the "apt install <whatever>" command; or maybe even run a "curl https://example.com/install.sh | sh". Some projects provide a Docker image, allowing the program to be run without changing what's installed in the system. -- Nix allows for the advantages of each of these (e.g. being able to run the program without installing it into your system, or allowing for a simple command to install it).

e.g. VSCode has a Remote Containers plugin which allows for quickly getting started with a project by using a Docker container as an execution environment. Or things like GitHub Code Spaces or ReplIt aim to provide quick-start environments for developing code. -- I think nix-shell offers similar benefits. (Nix can even be used to describe a Docker image format, instead of using a Dockerfile).

e.g. something like "install this package, but with this different set of build flags enabled" is relatively straightforward in Nix.
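As a sketch of that last case — the flag here is illustrative (a standard autoconf option); real packages expose different options:

```nix
# default.nix -- rebuild GNU hello with an extra configure flag.
# `or [ ]` covers packages that define no configureFlags of their own.
{ pkgs ? import <nixpkgs> {} }:
pkgs.hello.overrideAttrs (old: {
  configureFlags = (old.configureFlags or [ ]) ++ [ "--disable-nls" ];
})
```

The override produces a new derivation alongside the stock one, so nothing else on the system is affected by the changed flags.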


> Rollbacks seem like the key feature here and that seems much better left up to the filesystem, not the package tools.

It's not, at least not necessarily.

Let's say you try to change your system configuration and you fuck it up, you revert, with zfs your attempt is gone, or you have to go and hunt it down in the snapshot if you remembered to store that.

With nix/guix it's still there to be updated.

Another component is the intentionality of decisions: in nix/guix you can reach a point where the setup of the entire system is fully described and reproducible; in fact there are people who rebuild the machine "from scratch" (really, from the nix store) on every boot, to ensure the system does not accumulate transient cruft.


> Are reproducible builds ever going to be all that important to a user?

Being able to take your setup with you to a new computer is pretty cool, right?


Yes, but I'm not sure that is enough value for non-developers. For developers, being able to easily and reliably use different versions of tools in different projects is a game changer.


Just an anecdote, I accidentally broke something in my Guix system config but I couldn't figure out how to debug the scheme errors, and I had to reinstall


That's my experience as well. Guix error messages can quickly become completely unreadable, as you are five layers deep into some Scheme macro expansion, and wherever the code that caused the problem originally came from is long forgotten by the time the error message is created.

Having a whole system, including the package database, built out of raw Scheme really doesn't feel like a good idea. It sure is flexible, but it is also really brittle, and the performance is quite horrible.

I am currently in the process of switching over to Nix, which handles that all a bit more sanely.


We can definitely improve the error reporting (e.g., just the other day there was a patch to improve error reporting for system configurations, pointing directly to the offending configuration line rather than the code that uses it). Submitting bugs about unhelpful error messages to the Guix and/or Guile developers is much appreciated, I hear.

However, it should be hard to break a Guix system, since you can always roll back. Of course you can find a way (I changed an ext4 flag that Grub turns out not to like, but that has nothing to do with Guix), but as long as you let Guix do its job and try not to manually work around it, you should be able to undo everything. Nothing is perfect, of course, but I hope you reported it. In my experience the Guix devs and community are keen to improve.


Yeah, that always seems to be the problem with embedded DSLs. Eventually you need to know your way around the host language.


You couldn't boot an older system generation and then fetch its version of the config to get back to a working state?


I'm not sure how to get an old version of the config from the store. I tried searching for *configuration.scm but nothing showed up


Guix is excellent, however, I use CUDA heavily in my work, and it is hard to plug nVidia drivers into Guix.


Can you find out if these support your graphics card? If they do, you can just add a channel pointing to this git repo.

https://gitlab.inria.fr/guix-hpc/guix-hpc-non-free/-/tree/ma...

How to add a channel https://guix.gnu.org/manual/en/html_node/Using-a-Custom-Guix...


I tried installing Guix System maybe about a year ago, but had problems with LVM on LUKS with full-disk encryption and missing and outdated documentation. Has this situation changed?


I haven't done that setup myself, but I know various people who use Guix (on the IRC channel) have encrypted setups. I'm not sure of their exact file structure, but in general full disk encryption is supported; any issues should be bugs. You can always ask on the IRC channel to get a feel for what people have done and anything that might be tricky.


As much as I love lisp and hate the Nix expression language, I'm interested in actually getting things done more than I am in preserving software freedom, so I'm on NixOS.


Guix makes it very easy to add whatever non-free software you want via channels. You just won't get it with the default installation.


I suppose I (perhaps unfairly) pre-judged the Guix community based on past experiences with libre-only distros. I have memories of listserv threads and IRC conversations where it felt like people were trying to catch others daring to run proprietary software on their distro, just so they could shut down the conversation with "This distro only supports running Free Software."


I just wanted to say: Great article! It made me wanna try out Guix!


Thank you! It is fun; try it as a package manager, in a VM, or as a distro. I found there's lots to learn, which is fun for me.



To save you a click: Global Command and Control System (GCCS)


If it wasn't for AMD GPU closed firmware I could use GNU Guix with linux-libre kernel.


> being on the bleeding edge as 64-bit went mainstream, compiling kernels (and everything else) on Gentoo, to more recently VFIO and then Proton

That’s a unique definition of “fun”.


This is why many of us are in the profession.


I guess that in their mind, they're producing code for some utopian x86 system that does not have non-open microcode baked in, in which case their approach would be right.

This is a complete denial of reality, of course, and at the cost of the user. Religious dogma, as you put it, seems like an apt description.


why is it a religious dogma? i dont see the point of hostility. they have every right to hold to those principles as long as they are not deceiving anyone and state their values clearly. who are they harming?


They are deceiving people into believing they are not running proprietary software, while they are, and that software is just not evident because it doesn't live on their filesystem. Then they actively withhold information from users so they will neither find out nor be tempted to find out for some other reason.

If they were being honest, they would tell people that this dongle has a good half a megabyte or so of proprietary Bluetooth stack built-in (with runtime patching/update ability; they all do), and they wouldn't deceptively have a "TET-BT4 source code" link that makes it sound like the firmware is open, while it's actually just a tarball of the Linux kernel (which contains a generic Bluetooth controller driver, nothing specific to this device).

https://ryf.fsf.org/products/TET-BT4


i just looked through this website and found a link to their certification process at https://ryf.fsf.org/about/criteria

it states:

> However, there is one exception for secondary embedded processors. The exception applies to software delivered inside auxiliary and low-level processors and FPGAs, within which software installation is not intended after the user obtains the product. This can include, for instance, microcode inside a processor, firmware built into an I/O device, or the gate pattern of an FPGA. The software in such secondary processors does not count as product software.

>We want users to be able to upgrade and control the software at as many levels as possible. If and when free software becomes available for use on a certain secondary processor, we will expect certified products to adopt it within a reasonable period of time. This can be done in the next model of the product, if there is a new model within a reasonable period of time. If this is not done, we will eventually withdraw the certification.

END QUOTE

According to you, what is deceptive about this?


The existence of that exception, the way it is implemented, the way they work with vendors to help them fit into it, and the way they do not require informing users of such secondary processors are all deceptive.

Just look at the Librem 5. That CPU needs a blob to even boot (to train the RAM). Normally that would just be embedded into the bootloader. But that would make it evident in the build process for their boot stack that there is a blob involved, and they can't certify that as "Respects your Freedom". So instead they worked with the manufacturer, and came up with this contrived interpretation of the "secondary processor" rule where, as long as the firmware in the "secondary processor" is at least two steps removed from the main CPU and never handled "directly" by it, it's okay. Then they had the manufacturer put the blob in a Flash ROM (Flash, so updatable, remember? just not that easily), and then they had them write a little loader code that runs on another secondary CPU. So the main CPU (running free software) boots a secondary CPU (running free software) that loads a blob from Flash and then boots a third CPU, which now runs proprietary software. According to the FSF, all this pointless obfuscation and extra levels of indirection makes the device magically compliant with their criteria. And so it got certified.

It is completely evident that absolutely none of this helps end-users' freedom in any way, shape, or form vs. just having the blob in the normal bootloader where it can be more easily inspected and analyzed (and also lets users ensure that it hasn't been tampered with). It's just adding obfuscation so users won't find the blob, and therefore will feel better believing they aren't running any blobs.

By this interpretation of the rule, I could ship an x86 PC with an Nvidia GPU that runs its proprietary driver on one of the CPU cores (isolated from the main OS), loaded by the UEFI firmware through ME or something, which communicates with the rest of the cores via VirtualGL or some other RPC, and that would make this PC eligible for RYF certification. Tell me that's not a farce.


not knowing about https://ryf.fsf.org/ previously, i managed to find and understand their certification process within a matter of ten minutes. if i was a user of these products i don't think i would feel decieved


Is the poster you’re replying to saying anything about the ease of parsing their policy? He doesn’t seem to be calling it confusing. Rather, he seems to be attacking the supposed (il)logic of its contentions and the resulting consequences.

At the moment this back and forth feels like you’re talking past his actual point(s).


no. he is calling it deceptive, which is even stronger than confusing. his statement was:

> They are deceiving people into believing they are not running proprietary software

edit: as regards FSF's logic i am not informed enough to comment so i didnt. but the conversation (this thread) was definitely about deceptiveness


The FSF’s principles have always permitted the use of non-free software when it advances the goal of software freedom. GNU was initially built using non-free software.

Given the pejorative yet inaccurate references to “religion.” I can’t help but think some people are deeply disturbed by the very concept of moral principles and and cognitive dissonance is forcing them to hallucinate that the FSF doesn’t actually have principles but is instead a cult. Very odd.


No, the argument is about the pragmatic criteria used to implement agreed upon principles. Bringing this back to concrete discussion, here is a quote from the Libreboot KGPE page:

> AMD Opteron 6200 series (Fam15h, with full IOMMU support in libreboot - highly recommended - fast, and works well without microcode updates, including virtualization)

> AMD Opteron 6300 series (Fam15h, with full IOMMU support in libreboot. AVOID LIKE THE PLAGUE - virtualization is broken without microcode updates.

"Avoid like the plague", yet there is little philosophical difference between compromising to trust AMD's 6200 masked microcode, and compromising to trust AMD's microcode update that fixed Spectre on 6300. The main possible distinction is if you want to argue that AMD became less trustworthy in the time between those two releases.

Obviously if AMD releases new microcode for the 6300 going forward, it's a software freedom/security question of whether that microcode should be installed (automatically or even after review). But as it stands, slow changing microcode updates are in the same security/freedom realm as new CPU releases.


One of the nice things about the FSF's free software principles is that if you disagree with how they think you should use their software, they're not going to stop you. Nonguix[1] provides solid non-free support if that's what you want. In fact it has a helpful section on microcode updates.

The FSF even condones non-free software (in a rather dorky way) for people whose machines require it[2]. I understand the FSF's principles and am glad they hold to them so strongly, but I would use non-free graphics drivers if I were to install Guix. I do fundamentally agree with the principles of software freedom and I am honest with myself that I am in fact making a moral compromise. Similarly I'd probably compromise over CPU microcode patches, even though I believe I have the moral right to view, understand, and change those microcode updates if I wish to and am displeased that my rights are being violated.

I believe in this day and age where the right to repair your own equipment is under serious threat, the principle that we should be free to modify the machines we own as we see fit is more important than ever.

[1] https://gitlab.com/nonguix/nonguix

[2] https://www.gnu.org/philosophy/install-fest-devil.en.html
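To make that compromise concrete: with the nonguix channel added, swapping in the full kernel and firmware is a small change to the system declaration. This sketch uses the module and package names the nonguix README documents ((nongnu packages linux), linux, linux-firmware); treat the specifics as assumptions and check the README:

```scheme
;; Fragment of /etc/config.scm -- a sketch, not a complete declaration.
(use-modules (gnu)
             (nongnu packages linux))  ; from the nonguix channel

(operating-system
  ;; Replace the default linux-libre kernel with the full upstream one,
  ;; and pull in the non-free firmware blobs many devices need:
  (kernel linux)
  (firmware (list linux-firmware))
  ;; ...the rest of the usual operating-system fields go here...
  )
```

The rest of the configuration stays exactly as it would be on a libre-only system; the freedom compromise is confined to these two fields.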


I fully believe in software freedom (including favoring the GPL), and am trying to push it forward with this argument. I just see using a "6300 with microcode 2019-12-18" as the exact same compromise as using a "6200 with microcode 2011-11-14", regardless that the first blob was loaded at runtime while the second blob was loaded at the factory. Neither one lets me audit or modify my processor. There aren't many performant processors that do let you do such things, so the FSF is willing to compromise on systems with the second type of processor. I argue that they should extend that same pragmatism to the first type of processor as well.

Ultimately, the goal of a GNU/Linux distribution is to create a fully Free GNU/Linux environment. A fully free system would be a worthy goal, but Guix is not attempting such a thing (say by refusing to run if it detects hardware that has non-free firmware in flash). Rather they preemptively compromise by ignoring blobs stored in flash, but refuse the same compromise when those blobs would be loaded at runtime. This is completely backwards given that blobs loaded into auxiliary processors' RAM by Free software running on the main processor are actually more under the control of Free software.

And sure, nonguix exists. But I've gotten the impression that when you interact with the Guix community (eg irc), they will give you a bit of a cold shoulder for using nonguix because it is "not free software", even though you're making the exact same compromise as anyone else with a non-Free microprocessor or non-free auxiliary processor firmware. So ultimately I'm arguing that community norms, as led by the FSF, need to change here. They're stuck with an outdated model that simply ignores embedded firmware, rather than engaging with the nuance of labeling each part of a system as "free" or "non-free"


It sounds like we have very similar beliefs. I think the FSF should acknowledge that microcode updates and such are odious but tolerable moral compromises and that we should continue to work for a future where we have complete freedom to modify, repair, and otherwise use our machines as we see fit.

However, for fun, I'm going to do my best to steelman the FSF position: The use of non-free software when no free alternative exists is tolerable. The material difference between firmware that comes with the hardware or microcode that comes with the CPU versus a downloadable update is that the update is voluntary, and thus involves a willful violation of the principle of freedom. By doing so one becomes actively complicit in the erosion of freedom.

I also agree that they shouldn't be jerks on mailing lists and IRC, but have some empathy for persons that aren't so fortunate that they can eschew all non-free software.


I think the FSF position comes to down to 1. It's what they've always done and 2. They don't want to be distributing any nonfree software, even when it wouldn't become part of the Free environment

Whereas having a background in embedded design, I can't ignore that I have many more devices running nonfree software than Free software, despite trying to use Free software wherever I can. In particular, my fully Free desktop relies on non-free { monitor, monitor remote, keyboard, mouse, USB hubs, USB hub PS (power supply), USB switch, UPS, network power switch, circuit breaker, ethernet switch, ethernet switch PS, GPON terminal, GPON PS, nonfree BIOS on router }, in addition to the contentious non-free { video card, network card, CPU }. Any and all of those things could be replaced with a Free equivalent, but at the cost of attention that would be better spent elsewhere. In a world being eaten by software, the best we can do is hope for well-defined interfaces with nonfree systems, for our Free systems to interoperate with.

Perhaps a good way forward would be to split (Nonguix, Debian non-free, etc) into two separate categories depending on whether a package runs in the main security domain (drivers, system software, utilities/applications), or is to be loaded into an auxiliary processor (firmware blobs). Then after this distinction became widely accepted, the Free-first distros would hopefully become more comfortable including the firmware blobs, making a better user experience for their Free environments without impinging upon the freedom within.

(FWIW your steelman isn't it - it would seem to indicate that buying a computer with MS Windows preloaded is good from a software freedom perspective)


And here is the error of your logic: "is voluntary, and thus involves a willful violation of the principle of freedom".

Principle of freedom, in the context of the FSF, has always referred to user/recipient freedom.

A user voluntarily (of their own unconstrained freedom) making an (informed) choice for themselves, by definition, does not violate their own freedom; they are exercising their freedom. You are contradicting yourself.

Complicity can be debated with regards to purchasing decisions, not with regards to updating firmware.


i am not affiliated with FSF in any way. yet it seems to me that there are plenty of people arguing against them in very bad faith. here is the full excerpt in question:

BEGIN

>CPUs supported:

>AMD Opteron 6100 series (Fam10h. No IOMMU support. Not recommended - old. View errata datasheet here: http://support.amd.com/TechDocs/41322_10h_Rev_Gd.pdf)

>AMD Opteron 6200 series (Fam15h, with full IOMMU support in libreboot - highly recommended - fast, and works well without microcode updates, including virtualization)

>AMD Opteron 6300 series (Fam15h, with full IOMMU support in libreboot. AVOID LIKE THE PLAGUE - virtualization is broken without microcode updates.

>NOTE: 6300 series CPUs have buggy microcode built-in, and libreboot recommends avoiding the updates. The 6200 series CPUs have more reliable microcode. Look at this errata datasheet: http://support.amd.com/TechDocs/48063_15h_Mod_00h-0Fh_Rev_Gu... (see Errata 734 - this is what kills the 6300 series)

END

source: https://libreboot.org/docs/hardware/kgpe-d16.html

the Errata 734 is quoted here as reference:

BEGIN

>734 Processor May Incorrectly Store VMCB Data

>Description: Under a highly specific and detailed set of internal timing conditions during a #VMEXIT for a virtual guest that has multiple virtual CPUs, the processor may store incorrect data to the virtual machine control (VMCB) reserved and guest save areas and may also store outside of the VMCB.

END


If you are referring to me, I don't see how my excerpt is incomplete or could be seen as bad faith. Libreboot is steering people away from 6300 processors because using them requires explicitly loading a proprietary blob, while encouraging the use of 6200 processors that have an analogous blob baked in at the factory. The real difference is that the former makes you more aware of the compromise.


reading the full excerpt seems to put the reason for rejecting 6300 onto Errata 734 and it was strange to me that this wasnt addressed in your post

however i wasnt referring to you specifically as arguing in bad faith but that seems to be the attitude of some very vocal people here. i included the full excerpt in case the point is relevant. i am not an expert in this field

does the issue in Errata 734 apply to 6200?


> reading the full excerpt seems to put the reason for rejecting 6300 onto Errata 734

Well there are two reasons. The first is Errata 734, and the second is that the fix for Errata 734 requires loading different microcode than what was baked into the processor at manufacturing time ("6300 series CPUs have buggy microcode built-in, and libreboot recommends avoiding the updates"). I didn't mention Errata 734, because I'm focused on the second reason.

Working back from their reasoning, Errata 734 seemingly does not apply to the 6200 series.


mindslight, i cant reply to you directly so i will do it like this. thank you for clarifying about Errata 734. if Errata 734 applied to 6200 then libreboots logic would make no sense

i take no issue with critically discussing someones logic. fud and attacks are annoying and dont contribute to a healthy discussion


I'm not the FSF, but one argument could be things like the Intel Management Engine that I know the FSF have strong opinions on.

If that's isolated to a separate CPU, it's easier to track the signals going in and out, and the bad things it can do are limited.


The FSF don't actually care about such details - sure, they'll deride ME, but they make no attempt to inform users about how it compares with alternatives and which options are better for users. That's because their criteria are not based on technical analysis, like determining what the access surface of the blobs is, but instead on the mere existence of the blobs. To them, all visible blobs are equally bad, regardless of whether one can completely compromise your system and another one is completely harmless and requires no trust.


>To them, all visible blobs are equally bad, regardless of whether one can completely compromise your system and another one is completely harmless and requires no trust.

For an organization that values software freedom above all else this is completely fine. If they were called the Secure Software Foundation then your arguments would hold more weight. For example, I really doubt that the FSF would claim that GNU Guix is more secure than OpenBSD


The FSF have repeatedly associated software freedom (by their definition) with security and privacy. This is just one example, there are many others:

https://www.fsf.org/bulletin/2020/spring/privacy-encryption


but security is associated with free and open source software. i think this is a common position of a vast majority of security experts. to make your claim that FSF deceives or misleads people you need to do a LOT more. for example, can you provide an example where someone claims that GNU Guix is secure by design[0]

i think that taking a position that free software supports security and also that free software principles come before security considerations is not contradictory let alone deceptive

[0]EDIT: i just searched the RYF site and did not obtain a single result for the term 'security'

https://ryf.fsf.org/search/node?keys=security


The factors that actually impact the upper bound of achievable security are availability of source code (open or not) and reproducible builds. The four freedoms do not actually affect any aspect of security; they are orthogonal.

Also, just because the 2 factors above impact the upper boundary of achievable security does not mean an open source software is automatically more secure.

It is conceivable for 2 comparable pieces of software to exist one open source and the other closed source and for the closed source one to be more secure.

There are many reasons why open source software is in practice considered more secure, among others being faster availability of updates and the aforementioned higher upper ceiling of security.


>does not mean an open source software is automatically more secure

well my point is that FSF never anywhere claimed otherwise. if they did THAT would be wrong and irresponsible

>It is conceivable for 2 comparable pieces of software to exist one open source and the other closed source and for the closed source one to be more secure.

sure. well, a simple example is that security by obscurity is a valid concept in the right environment


Security is part of protecting your freedom from being compromised. I read this entire thread and wholeheartedly agree with marcan_42. FSF's position to draw a line where none exists is foolish wishful thinking and potentially dangerous.

I prefer knowing that I live in a world where COMPLETE software freedom is close to unachievable and it (COMPLETE software freedom) is a worthy goal to strive for compared to deceiving myself into believing it has been achieved by ignoring anything below a certain level.

Just because I choose to amputate my ability to update firmware does not mean a malicious party might not be able to do so. Anyone with physical access to hardware will still have that ability by using extra hardware. Handwaving the firmware away does not work against an evil maid attack.


>I read this entire thread and wholeheartedly agree with marcan_42

and you are free to do that and i would not say that you are a part of marcan-worshipping-cult or following some dogma

>deceiving myself into believing it has been achieved by ignoring anything below a certain level

if you are stating that this is what FSF believes then you are in fact spreading a falsehood and fud. this is what marcan has been doing regarding FSF the whole time during this engagement

>Just because I choose to amputate my ability to update firmware does not mean a malicious party might not be able to do so. Anyone with physical access to hardware will still have that ability by using extra hardware. Handwaving the firmware away does not work against an evil maid attack.

Unless FSF is claiming that GNU Guix is secure by design, or is free from such attacks, this is just a strawman argument


The FSF is deceiving themselves and others by believing that just because a user no longer has the ability to update firmware on a device, that device is actually no longer running non-free code.

I really do not understand what is so hard to grasp: from a free software POV there is no distinction between a chip loading a blob from system storage and a chip loading a blob from its own tiny updatable flash. Both load a non-free blob. Neither fully respects your freedom. Drawing the line of Respects Your Freedom TM between those 2 is stupid and deceptive.

The user's ability to update firmware is also the ability to revert firmware changes made by a malicious party (to an old, trusted, even if closed-source version). Users do not gain any freedom by giving up that ability. They lose freedom.

Being able to choose between MS Office, Lotus, StarOffice, and WPS Office (1) gives the user more freedom compared to being stuck with just MS Office (2), even if none of those respect your freedom. Being able to also choose LibreOffice (3) is obviously better. But 1 is still obviously better than 2. The existence or absence of 3 does not change that.

With regards to firmware, the FSF believes that 2 is better than 1. That is stupid. How do you not see that?

It is a valid form of protest but Respects Your Freedom TM certified hardware does not truuuuly respect your freedom.

This is harmful because the goal should be hardware with FLOSS firmware with reproducible builds and with the option for the user to add their own signing keys, NOT unupdatable (by the user) closed source proprietary firmware.


>With regards to firmware, the FSF believes that 2 is better than 1. That is stupid. How do you not see that?

your comparison with MS is ridiculous. FSF software is open. do with their software what you like. FSF believes it should not help you with 1 or 2 due to its principles, and that is OK. why wouldnt it be? the source is there so help yourself if you really want something that they are not willing to help you with (even if they often do, apparently)

anyway this whole exchange is becoming tiring to me. a lot of these comments seem to be more about waging a crusade against the FSF than about discussing issues in good faith. its somewhat disappointing, especially since i only just realised who marcan is. as far as i am concerned, i am completely unconvinced by marcan and co that FSF is a deceptive organisation and that their work is somehow bad for free software. quite the opposite, i am happy that they exist. i say this simply as a spectator. to me the following comment on their website just clearly shows that they are aware that products they certify run nonfree code:

"If and when free software becomes available for use on a certain secondary processor, we will expect certified products to adopt it within a reasonable period of time. This can be done in the next model of the product, if there is a new model within a reasonable period of time. If this is not done, we will eventually withdraw the certification"

(source given elsewhere in the exchanges)

i do not feel deceived in the slightest. if deception is happening it seems to be regarding FSF's position. take care


Or have a proprietary RTOS running on a GPU, working hypervisor-like for the tacked-on ARM multicore, like some very popular fruity berry SBCs.

(giggle)


Librem 5 is not RYF-certified.


You're right, not yet (I wonder why? Maybe they cut too many other corners, or the FSF gave up on the program?), but that entire nonsense was squarely aimed at gaining RYF certification and done with the FSF's blessing.


Look, it's all very simple: "firmware that is not normally changed is ethically equivalent to circuits" https://www.gnu.org/philosophy/applying-free-sw-criteria.htm...

Yeah, monitors and hard drives all have very complicated firmware too. It is very simple: such firmware is not denying you a freedom which is unethical to deny. The FSF is not saying it's totally great and fine; I'm sure the FSF would be happy to promote any of those devices if they had free firmware in them, celebrating them as more free. It's the same reason they focus on software and not on hardware designs.


Their approach is firmly stuck in the computing paradigm of the 70s and 80s, much like Stallman is personally stuck in the social narrative of the same era. The FSF and him refuse to change and adapt to the times.

But since reality doesn't care about their refusal to adapt, and they can't just throw their hands up in the air and say nothing is free any more and you should just live off the grid and reject all technology, they instead have built a deliberately obtuse set of rules to declare certain things out of scope, so they can maintain the illusion of freedom for their followers while making concessions behind the scenes (and even actively working with manufacturers to devise silly workarounds that fit into that framework, see e.g. the Librem 5's ridiculous secondary CPU core and external flash dance so they can claim the RAM training blob doesn't make their device non-free).

Then they're very careful to never talk about this unless prompted; the fewer people know about all these secret blobs they're running anyway, the better. It's basically cult-like behavior - this kind of control of the information and narrative that followers get is a defining characteristic.

None of this helps users, of course; what would help users would be being informed about not just exactly what blobs exist, but what the risks are, how they might affect their privacy and security, what update options exist, and whether they can be audited or replaced with free versions in the future. But the FSF doesn't care about any of that. They just want to pretend they live in a blob-free utopia.

Edit: Ah, the downvotes have started. I guess the FSF fans have showed up. I hope you're not using an off the shelf mouse to click on the downvote button; those all run proprietary USB HID firmware.


Please don't take HN threads further into flamewar. It lowers the quality of discussion noticeably.

Also, please omit inflammatory swipes like your "Edit", which also badly broke more than one of the site guidelines. Would you mind reviewing them? We're really trying to avoid this sort of hell to the extent possible.

https://news.ycombinator.com/newsguidelines.html


God doesn't exist; that doesn't prevent him from being.

The quest for freedom is, of course, an idealistic one. The important thing is that, in their fight to promote freedom, they meet obstacles. Those friction points reveal the lack of freedom. And so, although they don't reach freedom, they actually show that freedom is limited.

IOW, refusing the status quo is one of the ways to change it.

You should look at history and at how much freedom you have, how much protection you have, etc., and then ask yourself: where does it come from?


> IOW, refusing the status quo is one of the ways to change it.

And yet they aren't changing it. The FSF has had exactly zero success in changing the direction the world is moving in with regards to firmware and deep proprietary integration.

In fact, they've done very little for freedom in the past 10-20 years; most of the real breakthroughs have come from much more pragmatic people, such as those developing reverse engineered open source drivers for complex hardware like GPUs.

The FSF shows you how much freedom you lack according to their own bizarre definition of freedom... and then does amazingly little to actually improve your freedom.


>In fact, they've done very little for freedom in the past 10-20 years; most of the real breakthroughs have come from much more pragmatic people, such as those developing reverse engineered open source drivers for complex hardware like GPUs.

Whose efforts get completely circumvented through the employment of cryptographic firmware signing, which gatekeeps necessary functionality out of said pragmatists' reach.


> The important thing is that, in their fight to promote freedom, they meet obstacles. Those friction points reveal the lack of freedom. And so, although they don't reach freedom, they actually show that freedom is limited.

Well said. If problems are not openly demonstrated and complained about, things will not improve.


> how much freedom you have, how much protection you have, etc. and then ask yourself : where does it come from

Game theoretic behaviour in society that has advanced beyond zero sum.

Same goes for reasonable approach to FOSS, like Marcan is doing himself vs the cult and the arbitrary zero tolerance rules.


i strongly disagree. i think cult, dogma, and religion accusations, as well as misrepresentation of FSFs stance (i have zero association with FSF yet i could see that their position is being crudely misrepresented), do much to discredit any valid points marcan holds


what does adapting to the times mean? FSF and GNU seem to be about fundamentals of computing and software. have these fundamentals changed?


The line between hardware and software has been heavily blurred in the past 30 years. The FSF continue to draw an arbitrary line where none can be drawn, and then say only one side needs to be Free. Since there is no longer any clear line, this gives them the freedom (ha) to deceptively do so in a way that is convenient to them and makes their followers believe they are getting some kind of special Freedom, when in reality the FSF is just tweaking the definitions to make it work.

Then they spin narratives about how this is important for not just freedom, but also security/privacy/etc, while their policies have absolutely nothing to do with improving users' security or privacy, as is made evident by the linux-libre issue, among many others. Actual assessment of the privacy/security impact of proprietary firmware on users is a much more nuanced topic, but the FSF are not interested in nuance, they just say "blobs (that you can see) bad".


There is no nuance there. Without free software there is no software freedom. If blobs are allowed at all, that is already a concession made to have a system that works in practice, but it in no way makes the position against closed software and blobs wrong.

The people criticizing the FSF here act as if Stallman were wrong about these issues simply because he raised them long ago, when in reality he has been proven right again and again about how user freedoms are limited when the principles he outlined are not followed.

Giving Intel a way to distribute closed-source software updates to your processor is definitely a security risk. And we know for certain which actors in the USA try to use such security risks for their surveillance programs. Don't act like this world does not exist.


But you already run a processor with the very real possibility of backdoors from that very same entity. And let's be honest: a zero-day due to a CPU bug is orders of magnitude more likely to cause real harm than the fantasy of state actors deploying some blobs to your computer to have a look at my meme collection. 100% is never achievable, but I much prefer having 85% to nothing.


I too settle for less bad dictatorship as means of governance.


They aren't taking away Intel's ability to push updates to people's computers, because they never had that ability. What the FSF and that Linux fork are doing is taking away users' right to be informed about security vulnerabilities in their system so they can make the choice whether to trust Intel's update or not. By withholding that information they are effectively eliminating the choice, and restricting users' freedom.


That's not true. If distros package closed source firmware updates the manufacturer has the ability to provide updates to people's computer via that system. I mean, that's the whole point and in a trustworthy environment that's a good thing. Maybe have a look at https://wiki.debian.org/Microcode#CPU_microcode_non-freeness (I'm not sure whether I'm just misinterpreting your comment or whether there is a knowledge gap, just to make sure we talk about the same thing :) ).

> By withholding that information they are effectively eliminating the choice, and restricting users' freedom.

You are mixing up agency and software freedom, it's clear that you won't see eye to eye with the FSF as long as you do so.

I'm not even vigorously defending not showing the note about existing firmware updates, if Guix really does so. I'd prefer a note. Just what it would mean and how problematic closed firmware would be seemed like it needed a clarification here.


> If distros package closed source firmware updates the manufacturer has the ability to provide updates to people's computer via that system.

That is not what linux-libre is doing/refusing to do. What linux-libre is doing is censoring a message to their users that their microcode is out of date and their CPU has security vulnerabilities. They could've just left that in and let users make the choice whether to manually install the microcode updates or not.
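For context, on x86 Linux the currently loaded microcode revision is exposed per CPU in /proc/cpuinfo, so a user who does see the warning can check their revision themselves. Here is a minimal sketch of parsing that field (the helper name and sample values are my own, not from any kernel tool):

```python
import re
from pathlib import Path

def microcode_revisions(cpuinfo_text: str) -> set[str]:
    """Collect the distinct 'microcode' field values from /proc/cpuinfo-style text."""
    return set(re.findall(r"^microcode\s*:\s*(\S+)", cpuinfo_text, re.MULTILINE))

# A sample in the format the kernel emits on x86 (revision value is made up):
sample = (
    "processor\t: 0\n"
    "model name\t: ExampleCPU\n"
    "microcode\t: 0xde\n"
    "processor\t: 1\n"
    "microcode\t: 0xde\n"
)
print(microcode_revisions(sample))  # {'0xde'}

# On a real x86 Linux system you could read the live file instead:
path = Path("/proc/cpuinfo")
if path.exists():
    print(microcode_revisions(path.read_text()))
```

The revision alone doesn't tell you whether it is vulnerable, which is exactly why the kernel's warning message matters: it is the kernel, not the user, that knows which revisions are too old.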

> You are mixing up agency and software freedom

Agency is more important than software freedom. The FSF's problem is precisely their blind focus on "software freedom" when the definition of "software" can't even be precisely defined any more, to the detriment of everything else that affects users.


>Agency is more important than software freedom. The FSF's problem is precisely their blind focus on "software freedom" when the definition of "software" can't even be precisely defined any more, to the detriment of everything else that affects users

Your refusal to accept the FSF's right to adhere to its principles is bordering on the extreme, yet you constantly lob insults toward them as an organization. It does not help your points at all. They obviously hold different values to you. I think the FSF is OK as long as they are clear about what they are doing and are not trying to trick anyone. You have done absolutely nothing to demonstrate otherwise in this whole exchange, and instead you keep slinging FUD. I will repeat the question I have asked so many times: has GNU or the FSF anywhere claimed to take a security-centric approach? As far as I know, they have always taken a free-software approach. That they refuse to bend their principles to infantile screams is a big plus in my books.


>Then they spin narratives about how this is important for not just freedom, but also security/privacy/etc, while their policies have absolutely nothing to do with improving users' security or privacy

I think I have always held the opinion that for the FSF and GNU, their concept of security was "security through free software"; that is, free software (as they understand it) comes first.


> while their policies have absolutely nothing to do with improving users' security or privacy

IIRC Stallman criticized Ubuntu for collecting users' search info by default, something they used to do [1].

It's unfortunate to see a call to nuance paired with an exaggerated claim that is so easily disproven.

1: https://www.gnu.org/philosophy/ubuntu-spyware.en.html


This comment was strong until the last paragraph about the downvotes.


You broke the site guidelines with this comment and it started a flamewar. Please don't do that; we're trying to avoid it here.

https://news.ycombinator.com/newsguidelines.html

We detached this subthread from https://news.ycombinator.com/item?id=29286715.


Who maintains this, and why should I trust them?


You can inspect it freely, the entire distribution is maintained as a single git repository: https://git.savannah.gnu.org/git/guix.git


How is this case different than for any other distro?


It is a GNU project. The maintainers are listed on the website and of course all code, commits, etc. are publicly available.



