Debian GNU/Hurd 2023 (lists.gnu.org)
197 points by jrepinc on June 12, 2023 | 127 comments



Having been quite interested in GNU/Hurd a long time ago (the years have become decades, I fear) and having lost track of the project, I wonder whether somebody can comment on where GNU/Hurd is being used nowadays.


> I wonder whether somebody can comment on where GNU/Hurd is being used nowadays.

Simply put: nowhere. It's an experimental kernel with limited hardware support (x86 only, very few device drivers) and a bunch of critical features missing (e.g. no multiprocessor support, no power management).


> x86 only

And limited x86_64 :)


I'm not sure how far that support even goes. There's one page on the wiki which claims that "A 64-bit GNU/Hurd is also coming soon", but all of the downloadable images are 32-bit only, and Git history suggests that it's still a work in progress.


I don't think this release technically includes the 64 bit support, but there is an image here: https://lists.gnu.org/archive/html/bug-hurd/2023-05/msg00484... that can supposedly get you a minimal Debian 64 bit system running on Hurd after some configuration.

Also I just found that they have a directory for packages here: http://ftp.ports.debian.org/debian-ports/pool-hurd-amd64/ but none have been added yet.


Obligatory xkcd: https://xkcd.com/1508/


The pre-iOS Android was a BlackBerry clone


Did you just reply with random OS names? Android isn't iOS, and the first release version neither looked nor worked anything like BlackBerry devices.

(If you're making sense, clarity might help for me/others?)


OP is just saying that before iPhone/iOS was announced in 2007, there was an effort to build Android, but it was something that resembled Blackberries. After the iPhone was announced, there was a big project reset, which shifted Android to what eventually got launched in 2008.

https://www.theatlantic.com/technology/archive/2013/12/the-d...


And in the xkcd household Android was used starting in 2009, and some years later iOS. Seems consistent with history.


Google initially was working on Android as more of a Blackberry clone, and then when Apple announced iOS they realized that a full screen touch system was the future. So they scrapped the Blackberry clone OS and scrambled to get something more like iOS, which is what you’re now familiar with as the Android OS.

This story has been floating around in various tech areas. Here’s a podcast about it: https://corecursive.com/android-with-chet-haase/


Exactly. I remember the alpha screenshots, although not many people seem to, somehow.


I just read in the Guix info manual that they have a setting to use GNU/Hurd as the kernel; maybe that could be fun virtualized.


This microkernel vs. monolithic kernel debate still puzzles me. From a computer science perspective microkernels make so much sense.

But when I compiled the Linux kernel, I was always baking everything into one file. It seemed much more practical to do so.


The debate is sort of obsolete really.

* Linux has microkernel-like functionalities like FUSE. You can do filesystems in userspace now (a toy sketch is at the end of this comment).

* The modern approach to High Availability is to have redundant hardware. A single machine being fault tolerant isn't that important anymore.

* Hardware became far more complex. A modern video card or anything else is its own computer, with a very uncomfortable amount of state and access to the host. If your video card driver does something wrong and crashes it's by no means a given that the situation is recoverable by rebooting the driver -- the video card itself may be left in some weird state.

* Software is also far more complex. Great, your system theoretically can survive a video card driver dying. Too bad the compositor can't survive that, and the applications can't survive the compositor crashing, and at that point you might as well reboot anyway.

* Modern testing and debugging is excellent and having a system kernel panic is something that happens very, very rarely. It's not really worthwhile to change the system architecture for the sake of something that happens less often than I accidentally unplug my desktop's power cable.

* For HURD specifically, if you're in need of extreme reliability today, C is probably not something you want to use. Rather than dealing with stuff crashing you probably want to write code that doesn't suffer from such issues to start with.
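
To make the FUSE bullet above concrete, here is a toy read-only filesystem served entirely from userspace. It's a minimal sketch assuming the third-party fusepy bindings (pip install fusepy); the HelloFS name and the mountpoint argument are just illustrative:

    import errno, stat, sys, time
    from fuse import FUSE, FuseOSError, Operations   # fusepy

    CONTENT = b"hello from userspace\n"

    class HelloFS(Operations):
        """Exposes a single read-only file, /hello."""
        def getattr(self, path, fh=None):
            now = time.time()
            base = dict(st_ctime=now, st_mtime=now, st_atime=now)
            if path == '/':
                return dict(base, st_mode=(stat.S_IFDIR | 0o755), st_nlink=2)
            if path == '/hello':
                return dict(base, st_mode=(stat.S_IFREG | 0o444),
                            st_nlink=1, st_size=len(CONTENT))
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return ['.', '..', 'hello']

        def read(self, path, size, offset, fh):
            return CONTENT[offset:offset + size]

    if __name__ == '__main__':
        # usage: python hellofs.py /some/mountpoint
        FUSE(HelloFS(), sys.argv[1], foreground=True, ro=True)

If this process crashes, the kernel keeps running and the mount simply goes away, which is exactly the property the microkernel camp wants for every driver.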


> The modern approach to High Availability is to have redundant hardware. A single machine being fault tolerant isn't that important anymore.

You're making a lot of assumptions with regards to the operating environment. What about a probe being sent to Mars, the Moon or wherever? In fact let's just generalize to space-faring craft. That environment demands incredible resiliency and the ability to continue running even if the hardware is having problems - at least enough so ground control can assess the situation and send updates.

Weapons systems are another environment where you'd prefer to have resiliency. You'd hate to lose control of a missile while in mid-flight.

Automotive systems are another environment that comes to mind.

There are a lot of embedded/remote environments where the "redundant hardware" solution isn't applicable.

Edit: I'm fully aware these systems have redundant hardware in modern implementations. I'm also fully aware that monolithic kernels are unable to cope with a failure of any of the hardware comprising the system. Hence my calling out this point.


Which is why in aerospace and safety critical systems, RTOSes are used and not fully fledged OSes.

Also, redundant hardware is quite common (and mandatory by most standards), look up TMR, triple modular redundancy
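
As a toy illustration of the voting idea (my own sketch, not anything from an actual avionics stack): three independent channels compute the same answer and a majority vote masks a single faulty one.

    from collections import Counter

    def tmr_vote(chan_a, chan_b, chan_c):
        """Majority vote over three redundant channels; masks one fault."""
        results = [chan_a(), chan_b(), chan_c()]
        value, count = Counter(results).most_common(1)[0]
        if count < 2:
            raise RuntimeError('no majority: more than one channel disagrees')
        return value

    # one channel returns garbage, the vote still yields the right answer
    print(tmr_vote(lambda: 42, lambda: 42, lambda: -1))   # -> 42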


>RTOSes are used and not fully fledged OSes.

The idea that "RTOS" and "fully fledged OS" are incompatible with each other is outdated.

seL4 advanced the state of the art with its formally-verified support for mixed criticality, years ago already.


There’s actually a real-time microkernel that millions of children use daily. It’s called the Nintendo Switch.

(But seriously, it’s a homegrown microkernel RTOS. Look up “Nintendo Switch System Software” if that sounds bizarre.)


The idea might be outdated, but as I have yet to see a system running seL4, in practice it is very much true still.


And yet at some complexity level you have things like ARINC 653 where the usual implementation is to have a classic RTOS running on each of the partitions provided by the "application executive", and today's serverland is hypervisors running VMs running containers. I believe we are too centered on the "Linux vs Minix" debate while in actual practice both tendencies are reflected in most systems.


>Weapons systems are another environment you'd prefer to have resiliency. You'd hate to lose control of a missile while in mid-flight.

https://devblogs.microsoft.com/oldnewthing/20180228-00/?p=98...


You are describing environments where it is very common to have hardware redundancy. I mean, sure they may have hardening for each component to reduce the likelihood of losing one of them. But hardware redundancy is a standard tool in safety critical systems.


These applications indeed have special requirements which general-purpose components almost certainly won't be able to fulfill. Nor should they - the tradeoffs in complexity and cost are rarely worth it for the hyperscaling or embedded world where Linux is most commonly used these days.


On a plane or missile I'd assume several computers, with the majority winning the decision


One of those computers just went out. Now how do you form your majority? Besides which, in such systems there's a governing system coordinating the election. What happens if your governing system goes out?


Your governing system isn't a single system, it's multiple using a suitable algorithm to establish quorum. This is a well studied problem in distributed systems.


The question is increased resilience not perfection.

At some point, a military device / weapons system recognises that it's not operating to spec and either disables itself or diverts and destroys itself in a non-target region.

Keep in mind that for most weapons, not activating when not specified is far more useful than failing to activate where and when specified. See the case of several "broken arrow" nuclear weapons incidents in which failsafes prevented unintended detonation. In one case only a single failsafe prevented a detonation that would have been in a civilian-populated region of the United States.


So one computer is out and the other 2 are giving different answers to the same question?


No, there's plenty of single purpose systems.


> any of the hardware comprising the system

You probably mean the other way around: the system comprising hardware.


Funnily enough, Windows actually manages your two points about video cards pretty well. I've experienced full graphics driver crashes which were recovered from cleanly with only the process which triggered the crash as a casualty. But this needs more than just "slap it in a microkernel".


I’m impressed, as my RTX cards and drivers are the only things crashing my windows 11 machines recently.


YMMV... video drivers (esp. 3rd party) are one of the things that still manage to take the whole machine out for me.


> * Hardware became far more complex. A modern video card or anything else is its own computer, with a very uncomfortable amount of state and access to the host. If your video card driver does something wrong and crashes it's by no means a given that the situation is recoverable by rebooting the driver -- the video card itself may be left in some weird state.

The rest of your points make sense but I don't understand fully how this is an argument for monoliths? Isn't recovery from an unknown state an equally hard problem in both cases?


>The rest of your points make sense but I don't understand fully how this is an argument for monoliths? Isn't recovery from an unknown state an equally hard problem in both cases?

He's not saying that monolithic kernels make recovery easier in this case. He's saying that microkernels don't make these failures easier to recover from in practice, so you may as well take the performance gains that you will get from the monolithic kernel instead of giving up those gains for theoretical recoverability advantages that you won't actually usually see in practice.


Ah ok, that's fair


> The debate is sort of obsolete really.

Microkernels are not obsolete (see seL4), but HURD unfortunately is.

> Linux has microkernel-like functionalities like FUSE. You can do filesystems in userspace now.

FUSE is a far cry (capability, security and performance-wise) from real microkernels. Linux is a monolithic kernel and unashamedly so.

> The modern approach to High Availability is to have redundant hardware. A single machine being fault tolerant isn't that important anymore.

This would imply operating system developers (in any OS) are not that much interested in the reliability of the system they're working on, which I don't believe is true. Also, to give just one counter-example, I don't think many people have redundant mobile phones in case one of them crashes.

> A modern video card or anything else is its own computer, with a very uncomfortable amount of state and access to the host. If your video card driver does something wrong and crashes it's by no means a given that the situation is recoverable by rebooting the driver -- the video card itself may be left in some weird state.

This was more-or-less true from the moment discrete GPUs started showing up. Modern PCs have dozens of independent processors (some of them running their own operating systems)! If anything, that's more reason for microkernels.

> Software is also far more complex. Great, your system theoretically can survive a video card driver dying. Too bad the compositor can't survive that, and the applications can't survive the compositor crashing, and at that point you might as well reboot anyway.

Microkernels aren't written like that. In a MK architecture, your system survives video card driver dying by restarting it and taking over serving its clients (in this case, the compositor). Of course, Linux doesn't work like that, but we've already established Linux is not a microkernel.

> Modern testing and debugging is excellent and having a system kernel panic is something that happens very, very rarely. It's not really worthwhile to change the system architecture for the sake of something that happens less often than I accidentally unplug my desktop's power cable.

Yeah, I agree Linux will never change its system architecture, but that doesn't mean thinking, researching and building other operating systems using different architectures is obsolete.

> For HURD specifically, if you're in need of extreme reliability today, C is probably not something you want to use. Rather than dealing with stuff crashing you probably want to write code that doesn't suffer from such issues to start with.

Ah yes, ye olde "just don't write bugs!" argument :-)

More seriously though, I would agree with you that replacing GNU/Linux with GNU/Hurd will never happen. I just want to emphasize that Hurd is not the be-all and end-all of microkernels (even open source ones). In fact, it never was.


> Microkernels aren't written like that. In a MK architecture, your system survives video card driver dying by restarting it and taking over serving its clients (in this case, the compositor). Of course, Linux doesn't work like that, but we've already established Linux is not a microkernel.

That does require the compositor (and its applications) to help the new driver instance to re-establish its full state (which will typically consist of GBs of VRAM, among other things).


> In a MK architecture, your system survives video card driver dying by restarting it and taking over serving its clients (in this case, the compositor).

If your video card driver died, something is deeply wrong with your system, hardware or software, and blindly (literally) continuing is dangerous to the integrity of the data on that system. Things don't crash for no reason, this isn't a canned test where we're sure that only the one component is being fuzzed, this is reality.


>continuing is dangerous to the integrity of the data on that system

Not really. IOMMUs have been a thing for a while. As for the GPU's hardware, it can be reset w/o affecting the rest of the system.

Even Linux and Windows can do that, it isn't a microkernel-exclusive thing. The latter just handle it better architecturally.


IOMMUs that can individually handle every device seem fairly rare still.

You can check with this script: https://pastebin.com/4t3WD5R1

On quite a lot of hardware you can't really isolate devices from each other properly.
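
I haven't checked what that pastebin does exactly, but such scripts usually just walk /sys/kernel/iommu_groups; here's a rough Python equivalent of the idea (Linux-only, needs the IOMMU enabled):

    from pathlib import Path

    root = Path('/sys/kernel/iommu_groups')
    if not root.is_dir():
        raise SystemExit('no IOMMU groups found (IOMMU off or unsupported)')

    for group in sorted(root.iterdir(), key=lambda p: int(p.name)):
        for dev in sorted((group / 'devices').iterdir()):
            # each entry is a symlink to the PCI device; read its vendor/device IDs
            vid = (dev / 'vendor').read_text().strip()
            did = (dev / 'device').read_text().strip()
            print(f'group {group.name:>3}  {dev.name}  [{vid}:{did}]')

Devices that land in the same group can only be isolated (or passed through to a VM) together, which is where the "can't really isolate devices from each other" problem shows up.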


> Microkernels are not obsolete (see seL4), but HURD unfortunately is.

Why HURD specifically?

But I didn't mean microkernels, but the debate. Basically my view is that nothing stays pure. If microkernels have significant upsides a monolithic kernel like Linux doesn't have, then the logical outcome is that Linux copies the required feature, e.g. FUSE.

Yeah, it's not a true microkernel, but who cares? It can do the cool thing. Nobody cares about purity. Computing is about getting stuff done.

> Also, to give just one counter-example, I don't think many people have redundant mobile phones in case one of them crashes.

A modern cell phone has a tendency to forget stuff anyway. I mean Android will just randomly stop background programs when it pleases, so every app has to deal with that possibility. So the phone rebooting has little effect, other than taking time.

> This was more-or-less true from the moment discrete GPUs started showing up. Modern PCs have dozens of independent processors (some of them running their own operating systems)! If anything, that's more reason for microkernels.

How so? I have two overall points here:

1. With hardware being complex and stateful, rebooting the driver may just not help. If something inside the video card is locked up, no amount of rebooting the driver is going to fix that.

2. There are complex webs of dependency, which often aren't easy to resolve in a fault tolerant manner. Most stuff will probably just crash anyway even if recovery is theoretically possible. Recovery paths are very rarely exercised.

> Yeah, I agree Linux will never change its system architecture, but that doesn't mean thinking, researching and building other operating systems using different architectures is obsolete.

No, I mean it's not worth it to the average person to switch to a different OS for the sake of a benefit that might materialize extremely rarely. I'm not itching for a better OS which can survive a driver crash because the time when that happens for me is essentially never.

I think in the last 3 years I might have seen one kernel panic. Switching to a very different OS for the sake of having a system that can survive such a freak occurrence would be wasting my time. I'd spend far more time learning HURD than I'd gain in productivity.

> Ah yes, ye olde "just don't write bugs!" argument :-)

This, but unironically. If you're concerned about reliability, then the better approach is to avoid failures to start with. Rather than having a system that can tolerate something overflowing a buffer and crashing, how about a system that doesn't allow you to even compile something that does that?

Huge advancements have been made in creating languages that lack many of the stupid pitfalls of C.


> Why HURD specifically?

Because Hurd is (in my understanding) exactly the solution in search of a problem you're talking about. It's a microkernel for the sake of being a microkernel, but tied to the past (the POSIX/UNIX model).

Actually innovative microkernels can do things Linux or other mainstream monolithic OSes can't just copy (and that's why they're interesting).

Again using L4 as an example, have a look at discussion here: https://news.ycombinator.com/item?id=35842240 (but there are also other interesting designs, eg QNX)

Nobody forces you to switch to a different OS, but that doesn't mean it's pointless to research and build different systems.


Do you have any actual (and professional) experience with sel4 and qnx?

Because we have shipped products with those and boy I wish we had used Linux instead!

Limited, expensive, developer-unfriendly and, surprisingly, quite unstable under certain loads.


Not professional experience (I wasn't paid to build stuff on it), no, just hobby use (of an earlier version of L4, L4Ka::Pistachio, and of QNX once they started opening up, before selling to RIM).

I believe you that the developer experience was years behind Linux. That doesn't invalidate my claim that microkernels are interesting (and that these two are innovative), unless your pain was explicitly due to the microkernel nature of things.


Interesting, yes. Not questioning that.

But just last month we have had some _serious_ issues with QNX (which is supposed to be the top dog ukernel in 2023). For example IPC performance takes a nosedive in certain (not very uncommon) situations, which is not good for an RTOS. Talked to other people working with QNX and they have had similar problems.

Linux in a similar product has worked far far better. I understand the hype around microkernels and all that, but there is a reason they are not used in anything but the most basic products.


Hurd is never going to be more than at best a research project, more likely a hobby project. All semi-modern hardware requires proprietary blobs to function properly.

It would be great if all firmware and drivers were open source, but that's not going to happen anytime soon.

FSF is largely a joke in the way they certify RYF hardware. Proprietary blobs are all good as long as they're loaded from a separate flash that can't be updated by the main CPU, but if you load exactly the same firmware from the filesystem it's not okay.


What I don't understand is why people are so willing to install and run opaque binary blobs, even if it's from/supporting free software-hostile Evil Corp. I've been running GNU Guix, which uses Linux-libre (i.e., no blobs) on my Dell XPS-13 and Lenovo x270. The only change I needed to make to buy my freedom was to install an Atheros wifi card (yeah, proprietary BIOS sucks, buy puri.sm).

And while Guix/Hurd isn't there yet as a daily driver (Debian/Hurd is much more polished), progress has been pretty exciting lately and it now runs on my IBM x60 -- https://todon.nl/@janneke/110451493405777898


I thought proprietary blobs were only "okay" if they were burned into ROMs and thus presented as hardware and not software?


> The debate is sort of obsolete really.

Says who?

> * Linux has microkernel-like functionalities like FUSE. You can do filesystems in userspace now.

That has very little to do with real microkernels.

> * The modern approach to High Availability is to have redundant hardware. A single machine being fault tolerant isn't that important anymore.

Not at all. The "modern" approach you are describing is many decades old and single systems are still extremely common.


> Says who?

I'm writing my comment, so me, based on observations over time.

> That has very little to do with real microkernels.

It doesn't matter if it's "real" or not. My point was that any selling point a microkernel can come up with can be grafted into a monolithic one.

So one of the usual selling points of a microkernel is that you can run a filesystem in userspace and, e.g., allow a user to mount a network drive. Well, no need to switch kernels just for that since they added FUSE to Linux.

As a user I don't care if it's pure, I care that the job gets done.

> Not at all. The "modern" approach you are describing is many decades old and single systems are still extremely common.

Modern compared to the very old idea of the microkernel, I mean. I can see that the idea had a lot of appeal back in the era of rare, room sized computers. Keeping a single machine going is far less important in modern times, where most serious uses try their best to treat any single machine as disposable.


Your grasp of history is terrible. The era of rare room-sized computers pre-dates (say) Minix by decades. Basing an argument upon that premise is highly fallacious.


Linux is not a microkernel, whether you compile it with modules or as one file.

In a microkernel, the various modules run as separate processes rather than as part of the kernel. For example, if your network driver crashes, your kernel keeps running. Whereas in a monolithic kernel, a driver crashing crashes the whole kernel.


It's not quite that black/white, because drivers in Linux can and do crash without bringing down the whole system. I don't really know the technical details on how this works exactly, but I've seen drivers crash while the system remained running.


Yeah, but the fact that a part can crash without bringing the rest down doesn't make it non-monolithic.

The fact that a driver bug could in theory overwrite any other part of the kernel is what makes it monolithic.


Sure, I'm just saying that there is some nuance to "a driver crashing crashes the whole kernel".


Yes. But a Linux driver can crash the whole OS. In microkernel land, most drivers will not be able to do that.

As I see it a micro kernel has simply a narrower definition of what should be in "kernel land" and what should be in "user land".


You're almost there. Yes, there's a narrower definition. Yes, that means that crashes are outwith the kernel. But no that only means that the circle around what bit is named "the kernel" is smaller. Importantly, it does not mean that it is impossible to "crash the whole operating system".

Microkernels aren't a magic bullet against crashing an entire operating system, because a crash in a non-kernel component that the entire operating system relies upon, such as a central shared rendezvous server, or a server that handles "magic" values, or a central local security subsystem server, or a shared fundamental "personality subsystem" server, still crashes the entire operating system, no matter that the crash isn't in the part of the operating system that's labelled "the kernel".

* https://jdebp.uk/FGA/microkernel-conceptual-problems.html


I think now we need to define what's the "whole OS". From a user's perspective: the whole OS crashed (became unresponsive). But from a developer's perspective the OS worked, as it was still logging, responding to network, etc.


With Linux it makes sense to go with the one file approach because it's a monolith.

The difference would be if there was a separate "kernel" that handled disk devices. Imagine being able to upgrade it without swapping out anything to do with process control.

I mean, we already do this with userland programs of course... but I think we're not used to this in kernel land conceptually.

The closest we have is kernel modules, which are simply not in the same arena... like comparing function calls to RESTful RPC; they just operate too differently to compare mentally.


We have something like this already on mobile phones. NFC chips and the radio are pretty much independent systems and the host system has to use message passing to talk to them. The OS as a micro kernel doesn't really yield benefits in this case.

Graphics cards and other accelerators are trickier since they can also access main memory. Therefore, restarting its driver might not resolve the problem, and the whole system has to be rebooted anyways.

Anyways, the system might not be that usable anymore if a critical driver keeps crashing and restarting...


The L4 folks showed that it could be done with acceptable overhead. Since then I have been a microkernel fan.


Do you run some microkernels on a regular basis?

Then there's the Minix 3 approach with process respawning; lots of good ideas.


Can I run Firefox or Redis on L4? What's the performance of that compared to Linux?


Check out Genode.


As I understand it Genode "just" runs Linux for many real-world applications, but with extra steps (via a VM). Either way, that doesn't really answer the question what real-world performance looks like.


>"just" runs Linux for many real-world applications, but with extra steps (via a VM).

You can run a modern web browser, with 3d hardware acceleration, on Genode itself.

It has Virtualbox support, which is super convenient for covering any functionality gaps, but you don't need that for the above.

Virtualbox simply enabled the developers to dogfood Genode years ago already, instead of today.


Alright; neat. Last time I tried Genode it didn't run in QEMU for whatever reason so I have to go on the documentation.

So how does the real-world performance compare? That's what I'm interested in, because I keep hearing "performance is not an issue" and every time I ask for details, measurements, or something I never really get an answer.


I believe it can run chrome natively now and has a fairly robust POSIX layer, no idea on performance though


> From a computer science perspective micro kernels make so much sense.

Reality beats theory every day of the week. Communication and isolation are never free.

> But when I compiled the Linux kernel, I was always baking everything into one file. It seemed much more practical to do so.

Whether it has modules or not has no bearing on whether the kernel is monolithic or not.


A microkernel has more processes, and those processes have their own state and their own MMU configuration.

They have to pass data between themselves to communicate by exposing data and mapping memory regions/structures across, which otherwise could be a function call with a pointer.

A monolithic kernel is either running or panicked, with either valid or invalid memory. The microkernel adds more failure modes - such as previously always present parts of the system going away, resetting their internal state, hanging, being upgraded on a live system, and so on.

The costs of the protections and of IPC recovery are what push towards monolithic designs. Modern microkernels have had a lot of focus on those two areas - often being little more than memory management and IPC at the core.
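
A toy, Unix-only illustration of that cost gap (my own stand-in for microkernel message passing, not a serious benchmark): the same request served by an in-process function call versus a round trip to a separate "driver" process over a pipe.

    import os, time

    def handle(req: bytes) -> bytes:
        return req.upper()          # stand-in for a "driver" servicing a request

    def timed(label, fn, n=20000):
        t0 = time.perf_counter()
        for _ in range(n):
            fn(b'read block 42')
        print(f'{label}: {(time.perf_counter() - t0) / n * 1e6:.1f} us/request')

    # monolithic style: the "driver" is a function call in the same address space
    timed('direct call', handle)

    # microkernel style: the "driver" lives in another process; every request
    # crosses two pipe boundaries (real systems also pay for context switches
    # and for copying or remapping the data)
    req_r, req_w = os.pipe()
    rep_r, rep_w = os.pipe()
    pid = os.fork()
    if pid == 0:                            # child: the "driver" server loop
        os.close(req_w); os.close(rep_r)
        while True:
            data = os.read(req_r, 4096)
            if not data:
                os._exit(0)
            os.write(rep_w, handle(data))
    os.close(req_r); os.close(rep_w)

    def via_ipc(req):
        os.write(req_w, req)
        return os.read(rep_r, 4096)

    timed('pipe round trip', via_ipc)
    os.close(req_w)                         # EOF lets the child exit cleanly
    os.waitpid(pid, 0)

Real microkernel IPC paths are far more optimized than a pipe, but the structural overhead of crossing address spaces is the same story.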

I'd say even though new microkernel systems haven't displaced the current ones, research on these newer "nano kernels" has borne fruit in various hypervisors.


Our current hardware doesn't support microkernels because there is no demand, and nobody is creating general purpose microkernels because all the important hardware doesn't support it. So, I'm not sure we will ever see that change.

A proper microkernel isn't like Linux with a bunch of modules. It's something that can give your game direct access to the GPU, while restricting its disk access to a pseudo-root, and giving it an edited view of raw RAM that pretends the spyware from the game's DRM is running on the main system while it's actually sandboxed.

You can do something like that in a microkernel because you can replace it piecewise for a single application. You can technically do something like that on Linux, but you will never manage to.


>Our current hardware doesn't support microkernels because there is no demand, and nobody is creating general purpose microkernels because all the important hardware doesn't support it. So, I'm not sure we will ever see that change.

RISC-V and seL4 cooperate closely, ensuring this does not apply anymore.


It's just classic "worse is better".


@dang this is maybe a better url, since it provides more info and download links: https://lists.gnu.org/archive/html/bug-hurd/2023-06/msg00038...


Please email suggestions to hn@ycombinator.com as mods don't monitor comments for such actions.

I've done so in this case.


Thanks to you both! Changed to that from https://www.gnu.org/software/hurd/news/2023-06-11-debian_gnu....


The Hurd website is a mess. Where is the code?


Contributing -> Source Repositories lands you at https://www.gnu.org/software/hurd/source_repositories.html



Hurd should be able to load linux-libre drivers in the userland for a compatibility boost.


If you only knew how bad things really are. That's, probably, not possible.


I assume that would require such an insane amount of "framework" around it as to be practically impossible, not literally impossible?


From my understanding - yes; the kernel APIs were never meant to be portable.


No arm or x86_64 love?


I guess 64bit needs some more work still:

> * APIC, SMP, and 64bit support was improved a lot: they now do boot a complete Debian system, but some bugs remain to be fixed.

https://lists.gnu.org/archive/html/bug-hurd/2023-06/msg00038...


I'd be much more interested in RISC-V support.


Most people are generally more interested in functioning software running on existing hardware. Focusing on that might actually make Hurd relevant at some point in the future.


RISC-V SBCs are on the market in abundance and they are affordable.


Which come with well documented peripherals?

The CPU is boring -- x86 instruction encoding is a bit ugly, but it works. It's all the stuff hanging off them that makes the difference, and as far as I'm aware, RISC-V hasn't done anything interesting here. It hasn't even enhanced discoverability (e.g., by hanging internal peripherals off a virtual PCI bus).


Risc-v is interesting due to legal/licensing reasons rather than technological reasons. It certainly seems a good fit with ideological software


> It certainly seems a good fit with ideological software

I don't see how it could be more open than ARM long-term, since there are no incentives to release high-end core designs for free. I mean, I fully see the potential for more competition between different companies designing proprietary cores, which seems like an improvement over x86 from the consumer perspective, but that's it.


And the peripherals are far more interesting from a license perspective.


>Which come with well documented peripherals?

Refer to OS-A Platform.

RISC-V is going beyond anybody else, standardizing peripherals that are common and very much solved problems, but aren't standardized elsewhere.

Things such as GPIOs, watchdogs, UARTs, timers and whatnot.


Quite a few RISC-V boards on the market now, no?


Maybe? I don't think there are many practical use cases for it though.

I'll admit I'm not really sure what market segment Hurd is targeting. Presumably the goal is to replace Linux in the GNU stack at some point? If so, only supporting niche/hobbyist CPUs (unless we're talking about really low-power embedded applications) doesn't seem like the best approach.


There is an x86_64 port being started now:

https://wiki.debian.org/Ports/hurd-amd64


It's the anti-Alpine, in terms of GNU-ness.


[flagged]


HURD does not have a stable release. It's unlikely to even boot on real hardware, which is why they have QEMU images. I don't know what screenshots of a kernel would look like.

Generalising this to GNU and the FSF is daft. Claiming this somehow shows their irrelevance is even more so.


> Generalising this to GNU and the FSF is daft

I disagree - this isn't some isolated example; this pattern is seen across many GNU, FSF and other free as in freedom projects.

A lot of free as in freedom projects are completely out of touch with the modern world and waste their (already thin) resources on irrelevant, half-baked projects furthering some ideology rather than practical projects that would bring freedom (even if incomplete) to more people.

Software (or hardware) should be considered a means to an end, with freedom being a bonus. But for a lot of free as in freedom projects, the "freedom" is the key with the actual utility of the artifact being an afterthought (as a result it doesn't catch on, because nobody cares how free the thing is if it's useless or unusable in practice).


You're using "free as in freedom" very deliberately here...

Probably to pretend you're not also talking about open source. But you are, because there's essentially no distinction in most cases. But hopefully you don't mean to say that most open source software is practically useless, because that would be pretty silly, right?

So what exactly are you saying? That there are very niche open source projects around? Sure, but that's fine. There's also open source code that runs on almost literally everything though, some of it even from the GNU project. There's also proprietary software that is incredibly niche and only a small handful of people would know what to do with it or have even heard of. So what?

Name some examples, I genuinely don't understand what you're trying to say.


I am using that expression to separate them from the broader Open Source movement. I do think there is a major difference between general open-source and GNU/FSF "free as in freedom" ideology.

I see the broader open source movement as "hey I made this thing to get stuff done, oh btw it's open source under a permissive license so feel free to take it/improve it".

GNU/FSF is more like "here's a list of reasons why the way you do your computing is wrong, the right way is to go back to the dark ages of computing, oh and we have some half-baked software to help with that - and don't you dare use that other open-source thing over there, it's not free enough".

> Name some examples

The hundreds of different Linux distributions, none of which can match the stability, consistency and user-experience of proprietary OSes even from a decade ago? Imagine the collective effort that has been wasted reinventing the wheel and bikeshedding.

The PinePhone, which despite there being a permissively-licensed, battle-tested mobile userspace (Android), would rather try to adapt a desktop-focused userspace to it (and the effort is obviously spread across multiple distros, because why wouldn't it?), and as a result you have a device for sale today that's less usable than a PocketPC from 2003.

The whole "Respects Your Freedom" list from the FSF (https://ryf.fsf.org/index.php/), which looks like a low-budget computer parts catalog from a decade ago, or their absurd position against CPU microcode updates (https://ariadne.space/2022/01/22/the-fsfs-relationship-with-...)? The RYF thing in practice just seems like a major blow to whatever credibility they have left (are they really not understanding that nobody in 2023 cares about mostly decades-old generic computer parts?), and they'd be better off scrubbing every mention of it and pretend it never happened.

I could go on and on. To recap, I think GNU/FSF is pushing some impractical and unrealistic ideology that is ultimately harmful as it wastes effort that could instead be directed to provide practical and pragmatic solutions that offer software freedom, even if partial (which is still better than no freedom at all).


The bulk of the work done for such free as in freedom projects is by unpaid volunteers. The only resources being expended are their own time and effort. Even if the work doesn't cater to the tastes of the general populace, there is nothing wasted. Unless you think someone making art for themselves or baking cookies for their friends is also wasting resources.


QEMU images? I don’t know, maybe I am oldschool myself but I immediately found my way around.

It’s a big, hairy project that is not easily reduced to a simple feature list. Also consider the audience. This project is not meant to attract us hipsters.

I found your conclusion a bit unfounded.


Confusingly if you follow QEMU images there's a link to the LiveCDs:

https://www.gnu.org/software/hurd/hurd/running/live_cd.html

I love that they're referred to as CDs, I built my computer a couple of weeks ago and am running one with an optical drive for the first time in more than a decade. Might burn one for old times' sake.


It's the new debian stable, with a different kernel. What screenshot do you expect to see? And the linked official announcement has the url for downloads. I'm not sure what site this is, but it seems to focus on information, not marketing.


If you're actually interested in running Debian/Hurd then go to the Debian website.

https://www.debian.org/ports/hurd/


> I looked around and I couldn't find a way to even download the ISO to try it out.

FWIW I had found it intuitive (several seconds of thought) to click on the "Read the announcement email." link, which contains the link to the ISO images.


If you don't know what GNU/Hurd is then it's kind of useless for you anyway. I think it's more of a research project?

Anyway it's just a snapshot of Debian GNU/Hurd so you can see info here

https://www.debian.org/ports/hurd/


> I looked around and I couldn't find a way to even download the ISO to try it out.

On the right side, under "Hurd", the "Running" link can get you there.


> really shows how irrelevant the GNU foundation (and even the FSF) have become

I've seen several posts here in the last month about proprietary software being used to basically exploit the users: steeply increasing fees, holding files ransom (upgrade for money or lose access to your files), spying on users.

My conclusion is very different from yours: GNU Foundation and FSF are very relevant!

They seem to lack in the PR dept, which makes them look (by simply looking at the website) outdated and hence "possibly irrelevant". Looks can be deceiving though.


We have Linux though, and mountains of non-GNU projects.

It's not really about "proprietary vs. free software", it's whether the FSF and GNU are still a driving force for Free Software. And to me this seems ... not really? They're still trying to create their anno-1985 Unix replacement. That's great and all, but also not really driving anything forward.


In the meantime FLOSS now also encompasses the BSD licenses and we have a much larger percentage of software that is not copyleft.

I see copyleft as a form of hacker activism, and this is getting less common. I'm quite sure it's not a good thing. FLOSS has been commercialized.

I prefer AGPL for the simple reason big corps avoid it like the plague.


The cause they fight for is very relevant.

Now whether the organizations are actually doing a good job on that front is more questionable.


Open source and software freedom are relevant, the FSF is not.

In an era of insane surveillance, AI technology, mass corporate insanity, the FSF is still arguing about firmware blobs on hardware vs loaded in by the OS, or advocating for switching from .docx to .odt when the world has moved on from even having document files.


Good luck stating that in any modern office. I dare you.

Most gen-zers have no clue about how the real world works outside their smartphones. Hint: not as you think. At all.


I work in a modern office in a multi national company. I haven't seen actual document files sent around for years. It's all links now which enable collaboration and always show the latest version.


Not having offline backups with discrete data is a recipe for a future disaster.


If you can do nothing without network connectivity, then it seems to be an acceptable disaster.

Or do companies generally have better backup policies than Google Drive?


> switching from .docx to .odt

Also, isn't the whole .docx / OOXML thing just a published standard? The process may not be as "open" as one might like (i.e. not everyone can contribute) but that seems like a detail. IIRC there was a lot of FUD simply because it came from Microsoft.

That said: I don't think it's entirely irrelevant; e.g. my country's parliament publishes a lot of their files in .docx. But I can read them on my Linux machine and you don't need a byte of non-free software for it, and I had a lot more trouble with the .doc files they published 20 years ago, so...


> IIRC there was a lot of FUD simply because it came from Microsoft.

It was annoying because we already had odt as open format and OOXML was reported to be more of a text dump of what Microsoft did in binary than a standard meant for interoperability.

> But I can read them on my Linux machine and you don't need a byte of non-free software for it, and I had a lot more trouble with the .doc files they published 20 years ago, so.

OpenOffice was already better at reading old doc files than Microsoft Word was, well before OOXML was a thing. Meanwhile my last few attempts to deal with PowerPoint or Excel sheets were still a gigantic mess of missing content.


> text dump of what Microsoft did in binary than a standard meant for interoperability.

Isn't it all XML based?
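
It is - roughly speaking a .docx is just a zip archive of XML parts. A quick peek, as a sketch of my own (the file path argument is only an example):

    import sys, zipfile
    import xml.etree.ElementTree as ET

    with zipfile.ZipFile(sys.argv[1]) as z:        # e.g. some_report.docx
        print(z.namelist()[:10])                   # [Content_Types].xml, word/document.xml, ...
        xml_data = z.read('word/document.xml')

    # extract the plain text of every <w:t> run (WordprocessingML namespace)
    ns = '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}'
    root = ET.fromstring(xml_data)
    print(''.join(t.text or '' for t in root.iter(ns + 't')))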

> OpenOffice was already better at reading old doc files than Microsoft Word was well before OOXML was a thing.

It's been a long time, but that was certainly not my experience from what I recall; I had a lot of trouble with this. And writing files and then expecting Word users to be able to read them well was another source of problems.

And look, maybe ODT would have been better; but that ship has sailed, and OOXML seems free for all intents and purposes, even though perhaps it's not 100% ideal. This is just not something that's all that important today.


> Isn't it all XML based?

That doesn't help much when the XML format consists of elements like "footnoteLayoutLikeWW8" which "specifies that applications shall emulate the behavior of a previously existing word processing application (Microsoft Word 6.x/95/97)" [1]. I think this kind of thing is what the parent comment was referring to.

[1] https://www.consortiuminfo.org/opendocument-and-ooxml/the-co...


Ah, I didn't know that! It seems like they're only there for compatibility and not in new documents(?)

Either way, it seems like fighting yesterday's battle.


The way I see it:

* Proprietary software allows me to get things done, sometimes with some user-hostility baked in, but in general, no software explicitly forces me to use it and I can weigh the pros of doing the thing vs the cons of user-hostility and decide whether to use it.

* Free software is more interested to promote some impractical ideology, at the expense of getting things done. More effort is spent on ideology rather than useful (and usable, in terms of UX) functionality, which is why proprietary (and even extremely user-hostile) software thrives.

Keep in mind that recent deviances of proprietary software with regards to privacy are more due to the effective legalization of spyware and fraud (when committed by a business as opposed to an individual) as well as an adtech/marketing and VC bubble rather than proprietary software itself.

We've had commercial, paid, proprietary software back in the day that was respectful of the user and where the incentives were actually aligned - the software was as good as it could be in order to convince the user to pay for it and recommend it to others. You get the occasional annoying DRM but I'm happy to take that in exchange for a healthy ecosystem of usable software.

Proprietary software doesn't have to be malicious, and its maliciousness can be fixed with legislation and regulation (enforcing the existing ones would be a good start) without getting rid of the concept of proprietary software itself.

Free software doesn't just have a PR problem - there are many verticals where a free software implementation either doesn't exist, can't get off the ground (because more resources are spent on bikeshedding and furthering ideology rather than making software) or is perpetually half-baked and its backers are deluded in thinking it's a viable alternative to the proprietary competition. I'd argue they wouldn't actually need PR if they had good software/hardware, it would sell itself (if anything, merely by being free as in money compared to paid, proprietary competition).


I mean the Debian site isn't so much different?



