
FTDI also makes a lot of USB chips with software controllable GPIO pins:

https://www.adafruit.com/product/2264

https://ftdichip.com/products/ft2232h-mini-module/

Bit banging on a modern OS subjects you to a lot of jitter though. It's not like using a parallel port in DOS where you just have to worry about interrupts. The preemptive scheduler can really mess up your timing.

That said, the FT232H, FT2232H, and FT4232H have FTDI's Multi-Protocol Synchronous Serial Engine (MPSSE) cores, which you can program to speak protocols like SPI and I2C, so the high-speed part doesn't require any smart logic on your end. It's a bit of a special skill, though: you send MPSSE-specific command bytes over the USB interface into the chip's command buffer and tell it to execute them.
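
A minimal sketch of what that looks like through pyftdi, which builds and sends the MPSSE command bytes for you (the ftdi:// URL and the JEDEC ID read of an SPI flash chip are just illustrative assumptions):

    from pyftdi.spi import SpiController

    # Open the FT232H's MPSSE engine in SPI mode; pyftdi generates
    # and sends the MPSSE command bytes under the hood.
    spi = SpiController()
    spi.configure('ftdi://ftdi:232h/1')

    # Chip select 0, 1 MHz clock, SPI mode 0
    slave = spi.get_port(cs=0, freq=1e6, mode=0)

    # Send the JEDEC ID command (0x9F) and read back 3 bytes
    jedec_id = slave.exchange([0x9F], 3)
    print(jedec_id.hex())

    spi.terminate()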

If you need more high-speed smarts, it's also convenient to use a Raspberry Pi Pico running MicroPython or CircuitPython, using Programmable I/O (PIO) from an interactive session:

https://www.raspberrypi.com/news/what-is-pio/

https://docs.micropython.org/en/latest/rp2/quickref.html#pro...
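
As a taste of PIO from MicroPython, here's the classic blink state machine adapted from the rp2 quickref above (pin 25 is the onboard LED on the original Pico):

    import rp2
    from machine import Pin

    # A tiny PIO program: drive the pin high, then low, with long
    # delays, so the state machine blinks with no CPU involvement.
    @rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
    def blink():
        set(pins, 1) [31]
        nop()        [31]
        set(pins, 0) [31]
        nop()        [31]

    # Run it on state machine 0 at 2 kHz, targeting the LED pin
    sm = rp2.StateMachine(0, blink, freq=2000, set_base=Pin(25))
    sm.active(1)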

But yeah, beyond that, you're better off using an Arduino or something and doing it all on the microcontroller.

On the plus side, all of these things are relatively cheap and easy to obtain.


I wanted this for GPIO on my PC to interface with some hardware, but all the prebuilt USB FT232 adapters had the GPIO pins closed off, and AFAICT the FT232 requires flashing using some proprietary Windows binary to get into a mode where the GPIO pins can be used as GPIO (since it has multiple modes of operation).

I have to be missing something... Is there an off-the-shelf USB GPIO device somewhere? Plug it in and start using the Linux GPIO driver?

The solution my friends gave me was "buy an Arduino", flash the Arduino, and use the Arduino's GPIO... which yeah, I could do, but is that really what it takes for a $2000 desktop to flip a bit these days?


Here are a few options for you:

https://www.adafruit.com/product/2264 - USB to GPIO (and other stuff)

"Bus Pirate"

Or get a RasPi - it's not your desktop PC, but they're running Linux with direct GPIO access available in userspace.


In its default UART configuration, the FT232H you linked has no GPIO pins, according to the datasheet's pin description table. You need to change to MPSSE mode or similar using the flashing tool.


The Adafruit FT232H works out of the box with pyftdi and libftdi. You don't have to use any special tools or flash it in any way. The USB commands are handled by the underlying libraries.

See page 9 of the datasheet here:

https://ftdichip.com/wp-content/uploads/2020/07/DS_FT232H.pd...

The async and sync bitbang columns denote the GPIO pins as D0-D7 and show them assigned to the 8 ADBUS pins that are provided on the breakout board.

Assuming the udev rules are set up in Linux, you can simply install pyftdi, open the device, and start using the ADBUS pins as GPIO pins:

https://eblot.github.io/pyftdi/gpio.html#setting-gpio-pin-st...
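
For reference, a minimal pyftdi sketch along those lines (the ftdi:// URL and the direction mask are just example values):

    from pyftdi.gpio import GpioAsyncController

    gpio = GpioAsyncController()
    # Direction bitmask: 1 = output; here D0-D3 are outputs, D4-D7 inputs
    gpio.configure('ftdi://ftdi:232h/1', direction=0x0F)

    gpio.write(0x05)    # drive D0 and D2 high
    pins = gpio.read()  # sample all 8 ADBUS pins
    print(f'{pins:08b}')

    gpio.close()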

If you're using libftdi, you want to call ftdi_set_bitmode with the BITMODE_BITBANG enum value for the mode:

https://www.intra2net.com/en/developer/libftdi/documentation...

Then the ftdi_read_data and ftdi_write_data functions can be used to read or write to the ADBUS pins:

https://www.intra2net.com/en/developer/libftdi/documentation...

https://www.intra2net.com/en/developer/libftdi/documentation...

You can then build a nice, simple high level GPIO interface over that if you want.


That (or an ESP) is a really effective, easy, and cheap solution, which makes it hard for a more limited and more expensive solution to take hold. Most everyone who wants a digital output is capable of following the Arduino route to the end.


There's little market for the product you crave. Most people who know what GPIO is know how to buy a $5 microcontroller with a USB port and upload some firmware to convert serial commands to the pin states/transitions they need.
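
The host side of that setup can be a few lines of Python with pyserial; the one-byte command protocol here is purely hypothetical, and the firmware on the microcontroller would decode whatever encoding you define:

    import serial  # pyserial

    # Hypothetical encoding: bit 7 is the desired level, low bits the pin number.
    with serial.Serial('/dev/ttyACM0', 115200, timeout=1) as port:
        port.write(bytes([0x80 | 5]))  # set pin 5 high
        port.write(bytes([0x00 | 5]))  # set pin 5 low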


Parallel port adapters maybe?


Also, while libftdi isn't hard to work with:

http://developer.intra2net.com/git/?p=libftdi;a=blob;f=examp...

It's dead simple to also use these FTDI devices with Python:

https://eblot.github.io/pyftdi/api/index.html


Arduino is nice, but it's still built on the shoulders of tools like GCC, isn't it?

I never owned a computer in the 90s, but my impression was that Cygnus [1] did a lot of work porting GCC and improving it for use on a variety of platforms, including embedded targets. They arguably helped pull GCC into mainstream usage, to the point of almost permanently forking it [2].

Those first 20 years of GCC development were prerequisites for something like Arduino to emerge in the mid-2000s. I'm just not sure we have the equivalent of GCC yet when it comes to synthesis tools.

When it appears, it might be a good opportunity to create a consulting company and drive it forward while also making some money. The key pitch would be improving your ability to maintain and share designs across targets while not having to pay recurring licenses on your tools.

[1]: https://en.wikipedia.org/wiki/Cygnus_Solutions

[2]: https://gcc.gnu.org/wiki/History


PBS Kids has a lot of games and activities on their website:

https://pbskids.org/games

If your kids watch any PBS shows, they'll recognize the characters.

The activities were fun enough for our twins to learn how to use computer mice at age 3.

Tux Paint is also really fun for young kids and a good way to learn mouse usage:

https://tuxpaint.org/


Thanks for the Tux Paint suggestion, I played a lot with Kid Pix as a kid and it seems that Tux Paint is similar :)

Haven't looked at the PBS shows yet; right now our son has been rather obsessed with a BBC show called Maddie Do You Know? because it explains how things work, and he's excited to see train tracks, helicopters, etc. He also really liked Mickey Mouse Clubhouse, which has the advantage of being translated into Cantonese (my wife's language).


We had 3 under 3. When our son was 5 and the twins were 3, we were able to switch to forward facing car seats in the back of my Dodge Challenger, which is technically just a coupe. They're all Diono Radian car seats, and the kids fit comfortably 3 across.

Just thought I'd share in case you ever want to reconsider the dream car thing. You never know until you try and you can always bring car seats and test fit.

To your original point though, most of the time we use my wife's Honda Pilot because of the extra space for errands, sports gear, and luggage (and also because my wife doesn't want to drive stick in Northern Virginia traffic).


Cross-checking work doesn't scale. That'll only become more of a problem as your business grows and you have more important concerns that pull you farther and farther away from the actual hands-on work being done.


In art school, we spent a lot of time learning how to give and receive critiques, because the fastest way to improve was to try frequently and critique often.

You learn very early to divorce your ego and sense of self from your artworks and embrace every attempt as an opportunity to improve towards an ideal you can never reach.

You also learn how to give meaningful criticism without being an asshole.

Writing code is very much the same.

Unfortunately, most software engineers haven't been to art school and have no formal training in how to give and receive useful feedback.

I recommend reading Art & Fear: Observations On the Perils (and Rewards) of Artmaking. It's a good book that helps you build a healthy mindset towards growing as a creative:

https://www.amazon.com/Art-Fear-Observations-Rewards-Artmaki...


I also got my first critiques when getting a degree in Art. One of the most memorable lessons and discussions was when I was first taught that once you put your work out there, it is no longer about you. Not at all.

It is about the work, and how other people react to it. The impact it has on others is all that matters. Your intent is interesting, but not relevant to their reaction. So if your work does not get the reaction you hoped for, that's not a personal statement about you; it is simply something for you to work on.


> Unfortunately, most software engineers haven't been to art school and have no formal training in how to give and receive useful feedback.

I'd imagine the lack of such training in normal education is the problem here, not the lack of art school specifically...


Most normal education tends to skew towards the didactic all the way through your undergraduate studies, especially with the math and hard sciences (which includes most CS students).

That is to say, an instructor disseminates organized knowledge to the student. The student may be asked to communicate that knowledge back to prove mastery, but there's not as much emphasis on students giving each other critical feedback. Even when there is, it usually has a minor impact on your academic progress or grades.

As such, there's no incentive to learn how to give good feedback or make use of peer feedback.

Art school tends to be unique in that it cannot be taught that way. You spend years giving and receiving daily critiques and incorporating them into your growth.

The commonly accepted peer review processes we use in today's software engineering field involve giving and receiving feedback on your peers' work at a frequency that the vast majority of people simply have never experienced before.

Few people are naturally good at it, and few companies invest time into training anyone on how to do it well.


Came here to say this.

It still catches me off guard when I feel someone clenching up while we talk about ways to improve something.

I sometimes take those people through a tour of my own changes and talk through all the ways those could also be improved.


What are some example exercises you used in art school to get comfortable delivering and receiving critiques?


You simply have to do it every day with self-awareness of what you're doing and why it's important until you get comfortable with it.

Day one of my first studio class, we did sketches of the person next to us and the instructor helped critique us. We learned to critique each other over time and with careful guidance.


Exist.

That’s what I did for a good chunk of my twenties. My life has changed dramatically since then.

It’s hard to set a direction for yourself with that mode of living, so I pivoted from art school and joined the Air Force to focus on serving.

I’m sure the work you do can positively impact the world. Find a mission and focus on staying busy with that. Finish your PhD towards that end and continue take care of your physical health. Help people.

As a society we tend to focus on joy and pleasure as the keys to happiness (do things you like), but there’s something deeper I think you can find once you’re okay with simply existing as you feel today.


Interesting, thanks!


Mutual TLS is pretty cool. You can install client certificates into your browser, issued by a CA that a server accepts, and the server can then use the details in your client certificate to map you to an application-specific user.

When you visit a server using mutual TLS, you'll get a prompt showing all your client certificates that match CAs that the server accepts. Once you select one, all your future requests will use that client certificate and be associated with that identity.

The client certificates can even be placed into smart card hardware devices (which can be USB or actual smart cards) that require a PIN or some other factor to use.

Because it's all built on public/private key cryptography, the server has no credentials to lose in a data breach. Nobody can steal your credentials and reuse them to attack your other accounts.

And this is supported by all browsers today.
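
The server side of this fits in a few lines; here's a sketch using Python's standard ssl module, with placeholder certificate file names, where requiring a verified client certificate is what makes the TLS mutual:

    import http.server
    import ssl

    # Placeholder file names; issue these from your own CA.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain('server.crt', 'server.key')
    context.load_verify_locations('client-ca.crt')  # CAs whose client certs you accept
    context.verify_mode = ssl.CERT_REQUIRED         # reject connections without a client cert

    class Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # The verified client certificate identifies the user
            cert = self.connection.getpeercert()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(repr(cert['subject']).encode())

    httpd = http.server.HTTPServer(('', 8443), Handler)
    httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()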


That would put a lot of responsibility on the user. It is analogous to keeping medical records yourself and taking them with you to the hospital every time you visit. In the event of a fire or other catastrophe, your credentials would be lost forever, correct?

Either way, I usually do not want to login or be prompted to login to sites I visit. When I do want to login, either via Mutual TLS or by entering my credentials, I would like to have a hotkey I can push that brings me to the login page, pushes the login button, or inserts my TLS cert.


> In the event of a fire or other catastrophe, your credentials would be lost forever, correct?

If you intentionally took the hard path of owning all your keys and then didn't back them up - yes.

But most users would be subscribing to some auth service (Google, their bank, etc) and that organization would have recovery issues.

> I would like to have a hotkey I can push that

This is already super simple with no new tech - just give your login button a class of "LoginButton" or something we pick, and the plugin will just click it via code.

The UI really is slick. When the site requests auth the browser pops up a window with the help of the system's secret storage and shows you your identities. This could be automated to avoid even that single popup if desired.


Also, the prompt is uniform and immediate anywhere you need to authenticate. With a single client certificate that meets the server's criteria, most browsers can be configured to apply your certificate automatically, so you never even see the prompt.

Furthermore, browsers support different kinds of storage for client certificates. You could, in theory, make a cloud hosted client certificate store that you unlock once per session to use with your browser.

Ultimately, it addresses the whole "finding the login button" concern by eliminating login flows from the application completely. If you have no certificate, then you can't access the service at a protocol level, period.


It's more like keeping your ID card with you. The hospital would still have your records but wouldn't give you access to them until you present your ID card.

If you lose your ID card, you have to go back to the issuer of ID cards and request a new one. Similarly, with client certificates, you go back to the Certificate Authority to get a new client certificate.


I think NixOS is what the future should be. Searching and resolving packages still seems slow, and things like flakes are still evolving to address rough edges, but having completely declarative definitions of the entire OS installation is amazing.

Spinning up native development environments as an extension of production environments is nice too.

I worry that tools like pyinfra and Ansible combined with Docker containers and traditional distributions might be good enough to prevent critical mass in terms of user adoption.

Even so, it’s worth daily driving for a while just to get a sense of what the world could be like. Kind of like Erlang OTP versus Kubernetes and containerized microservices.


I live in a region with a lot of government contracting businesses, so Red Hat Enterprise Linux is something I have to maintain a working familiarity with.

However, I use Debian for all of my personal projects and infrastructure.

The reason? There's no for-profit corporate interest directly controlling the project. The project's organizational structure resembles a constitutional democracy:

https://www.debian.org/intro/organization

There is an incorporated entity in the United States to handle a number of intellectual property and financial concerns:

https://www.spi-inc.org/projects/debian/

However, it exists as a non-profit with a very narrowly defined, specific set of purposes:

https://www.spi-inc.org/corporate/certificate-of-incorporati...

Because of this, I feel like the Debian project has a good combination of people and resources, making it easy to rely on long-term, but without the for-profit corporate interests that may conflict with my own in the future.


I've used exclusively Debian on all my servers and laptops for the last 20ish years.

Can confirm: it will keep working. However, there might be some things that piss one off, in a similar manner to Ubuntu's snaps. I hate systemd, for example, but grudgingly accept it.


Fortunately, we can still use Debian without systemd. I'm forced to use systemd in many places, but, for example, this laptop where I'm writing runs Debian without systemd.

For me there's more to worry about: the influence that paid Ubuntu developers gain, year after year, inside Debian, or the presence of Ubuntu/Canonical changes inside Debian packages.

Regarding Ubuntu... I'm forced to use it at work, and we usually "debianize" Ubuntu servers, that is: remove snaps and snapd, remove netplan (in favor of ifupdown), remove lots of dependencies that come with the minimal install (used only for their paid services), remove many services/packages (cloud-init, multipathd, polkit, motd-news, lxd, apport, etc.)...

And the recommends of the recommends of the recommends of the recommends of all that.

In the last LTS, we were forced to build our own installer in the end: a minimal live system + debootstrap + a couple of scripts (parted/mkfs/grub-install).

With the previous debian-based installer, we were able to fix most of Canonical's decisions via preseed... with the last version it was a pain until I built our own installer.

Still, for many people, being driven by Ubuntu is perfectly fine. They tend to hate forced UI changes (e.g. my mum), but everything else is fine for them as long as "it works".

Maybe it's just more technical people who are bothered by the underlying changes.


> Regarding Ubuntu... I'm forced to use it at work, and we usually "debianize" Ubuntu servers, that is: remove snaps and snapd, remove netplan (in favor of ifupdown), remove lots of dependencies that come with the minimal install (used only for their paid services), remove many services/packages (cloud-init, multipathd, polkit, motd-news, lxd, apport, etc.)...

> In the last LTS, we were forced to build our own installer in the end: a minimal live system + debootstrap + a couple of scripts (parted/mkfs/grub-install).

That is a lot of work. Why not use Debian itself? If you want that LTS support, you can use AlmaLinux in the RHEL clone camp. I'm not aware of any upsells in that system from my usage of it as a kiosk. The desktop is rough, with some common packages missing from EL9, but the server would be excellent. Even CentOS Stream has longer free support than free Ubuntu LTS.


It sounds like they have a top-down directive to use Ubuntu.


Correct.


Do you mean your laptop runs Devuan? I get that it's nearly the same as Debian, but does it include the Debian social contract and community? Isn't it a separate project?

https://www.debian.org/social_contract


If you're going that way Devuan is probably a better choice, but Debian did support running without systemd and still has sysvinit on a best-effort basis - https://wiki.debian.org/Init


Sorry, it's not Devuan, but Debian 11.6; previously it was 10.x, and it was on testing until Bullseye was released.

Works for me without systemd, no problem.

I already use systemd in hundreds of systems at work and personal servers/vps.

But on my personal laptop, I have too much custom stuff after 20 years of tinkering, glued across all the layers (kernel, boot loader, init, background services, tty, graphical environment), and I chose to keep my laptop on the traditional equivalents of all the things that systemd does, just because of that.

Debian is really flexible: it allows you to choose components in the system, or to have more than one equivalent component, and to configure how they interact, and still be considered an official Debian system.

It even allows people like me to make their own Debian if they want. That is the "official Debian" that I like: the one that allows me, and doesn't force me.


> I hate systemd for example

Everyone's entitled to their opinion, but every time I hear this, I have difficulty listening to the rest.

Systemd is 10x easier and more manageable than every alternative.


No.

(As a sysadmin of 15 years, I can clearly say all of them have their advantages and disadvantages, but being easy and manageable is not an advantage unique to systemd; it's a property of all of them.)


Can you provide an example of what makes it easier than the alternatives?


Like...everything?

Create a service that starts on boot:

    cat > /etc/systemd/system/example.service << EOF
    [Unit]
    After=network.target

    [Service]
    User=example
    ExecStart=/usr/local/bin/example

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl enable --now example
And the harder things get, the bigger the advantage. Like if you want to mount a disk, create a socket, and then afterward start an unprivileged service... all very simple.


systemd is "easier" for people used to cisco/microsoft/etc.

The huge list of services with not-for-human names so your job can look complicated and essential.


It may be easier for them, but as someone without Cisco experience, systemd was a lot easier for configuring service dependencies once I understood it.

It's been 10 years now? I wouldn't go back; most of the suggested alternatives are so basic that troubleshooting becomes a chore. Almost all the hate is because they don't like Lennart.

We should fork it and get someone else to head it, under a different name, to get them on board.


I also accepted it. And made a point to forget the Cisco experience.

Yeah, the dependency handling is a fine feature, but before it was even *easier*... just put a number on the rc.d symlink.

C'mon, if you think that is harder than editing a dozen files, remembering weirdly named camelCase attributes, and then pointing to randomly named services, then I don't know what to say. But again, I also accepted it, for better or worse.

Now, I don't care about Lennart, but I hated systemd for most of that time because they pushed incomplete crap over something that was working, just to get contributors. If Red Hat wanted their Cisco/Windows service management clone, they could have worked on it. Doing what they did (I call it "pulling a GNOME") was just shitty behaviour, and they should always be remembered for that.


Eh? systemd unit files are tiny compared to the huge, massive scripts that came before. Often those scripts were only understood by a few people as well.

I think this is why a lot of people really hate systemd. Suddenly, a bunch of arcane knowledge used to maintain specific scripts was made redundant.


I have a thousand gripes about systemd, but “it made my knowledge obsolete” doesn’t make that list.

Also, I have only seen two or so massive scripts about services. Most of them were slight variations of a standard boilerplate code.


Debian now works fine without systemd. The alternatives are sysvinit, openrc and runit. I've removed systemd from many working installations. The problem is rather the many random dependencies.

(Block systemd in a preference file with priority -1. apt install openrc (for example). Read the warnings. Reboot. apt purge systemd. Freedom.)


If you don't like systemd why don't you run Devuan? It's just Debian without systemd. I'm running it in a VM now to try it out and it's been great, so far. I like having everything back to being plain text that I can monitor and manipulate as I like. Rather than logs being weird binaries that need their own special tools to access.


why the aversion to systemd?


I don't like the design at a fundamental level. It feels brittle, bloated, and not very unixy. I'd rather have my init system be a handful of microscopic executables, using text files or symlinks for configuration and text files for logging.

I don't like everything depending on systemd; it feels like too much complexity at the wrong part of the stack. It doesn't jibe with my sense of architecture; it's not well-designed software.


Systemd does use "text files or symlinks for configuration" — those being the unit files in /lib/systemd and /etc/systemd. `systemctl enable` just makes a symlink from /lib/systemd into /etc/systemd, even. What would you point to to claim that it does otherwise?

> text files for logging

...just sucks, on both embedded systems and production servers. (I.e. anywhere where you aren't debugging the machine on the machine, but rather from another machine.)

Either the program just writes to a plain text file forever — and so fills up your disk the first time it goes haywire (so now you have two critical runtime problems!); or it implements its own log rotation and compression (as must every other daemon — not very unixy!); or it must be specifically wired to work with syslog APIs in order to use rsyslog (which, by the way, uses binary wire protocols as well; logging at scale hasn't been text-based in a long time.)

Journald, meanwhile, just sits on the other side of the pipe from any systemd service-unit's stdout + stderr; manages log rotation + compression in a centralized way (which also means you get cross-unit log compression for free); and offers CLI tooling to pipe the multiplexed log stream back into anything that wants to read from it, in whatever format those things want to read from it (i.e. tools that want JSON Lines, get JSON Lines; tools that want plaintext, get plaintext; tools that want a binary record stream, get a binary record stream.)
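
For instance, with the python-systemd bindings installed (an assumption; availability varies by distro), reading one unit's slice of the journal programmatically is a few lines; the unit name is just an example:

    from systemd import journal  # python-systemd bindings

    # Read this boot's structured records for a single unit
    reader = journal.Reader()
    reader.this_boot()
    reader.add_match(_SYSTEMD_UNIT='nginx.service')

    for entry in reader:
        print(entry['__REALTIME_TIMESTAMP'], entry['MESSAGE'])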

Is this a Unixy approach? Well, it's pretty much the same one taken by the extremely venerable Unix/Linux line-printer (lp) subsystem — CLI commands, with textual config files, for interacting with a system daemon (lpd) that manages and manipulates binary state files, within daemon-owned directories. Would you complain that the contents of /var/spool/lpd aren't human-readable?


> (Text files for logging) ...just sucks, on both embedded systems and production servers.

Ehrm, no. I've been managing a sizeable fleet with a central logging server for 1.5 decades, and we've never had the problems you mentioned:

> and so fills up your disk the first time it goes haywire.

This is a bug of the program or configuration mistake or your monitoring is not working as intended.

Funnily, we're seeing more disk pressure from systemd journals. Go figure.

Just remembered: syslog daemons have rate-suppression mechanisms to prevent big lines repeating too fast and filling up your disk. So even if your program enters an infinite loop, a well-configured syslog daemon (rsyslog, syslog-ng, whatnot) should note "X similar errors have been suppressed.", where X can be anything from 2 to 1000 (or even more).

> or it implements its own log rotation and compression

Which you can disable 99% of the time and just delegate the stuff to logrotate.

> it must be specifically wired to work with syslog APIs in order to use rsyslog

rsyslog is just a syslog daemon. syslog is kernel plumbing at this level. You can terminate this pipe with anything.

> Journald, meanwhile, just sits on the other side of the pipe from any systemd service-unit's stdout + stderr; manages log rotation...

And it provides nothing new when compared to syslog plumbing. A binary log, some tooling around that, and that's it. It even makes per-daemon log monitoring harder by blinding syslog-aware monitoring and automation tools, hence we need to enable rsyslog on the system too. Now we have two journals. Neat.

> venerable Unix/Linux line-printer (lp) subsystem

Which only handles the "line-printer" subsystem, and yes, it's more UNIXy. It doesn't take text output and bash it into a binary data structure, and it doesn't try to replace anything and everything from boot to logs to time sync to user login to tap water temperature.

It just stores its state in a binary file, which almost every UNIX daemon does, including but not limited to X11 and CUPS.


> Funnily, we're seeing more disk pressure from systemd journals.

So configure journald correctly? It has multiple options to control disk usage from logs -- `man journald.conf` and search for "MaxUse" for the relevant options.

> Just remembered: syslog daemons have rate-suppression mechanisms

So does journald. Relevant options are RateLimitIntervalSec and RateLimitBurst, and individual services can set their own limits as well.


> So configure journald correctly? It has multiple options to control disk usage from logs -- `man journald.conf` and search for "MaxUse" for the relevant options.

So configure logrotate correctly; you hardly need journald for that.


Systemd had no shortage of issues, but:

> text files or symlinks for configuration and text files for logging.

Systemd gets this right and arguably pushed the whole ecosystem in this direction. The old rc scripts could barely be considered a text file and symlink configuration system — they were a pile of text files containing a miserable combination of code and configuration mixed together, along with a very simple configuration (this service is enabled in these runlevels, more or less) that got translated, hopefully correctly, into symlinks. Of course, nothing really kept the symlink farm consistent with itself or anything else except a pile of additional scripts associated with packages that tried and usually succeeded.


I agree with (basically) all of this, the larger point. I'll note that I said "or"... ;-) (CYA)

I'm not sure what the "right" solution looks like; perhaps a directory of TOML or JSON files. Perhaps the aforementioned plus executable shell scripts with predictable naming? Handwave handwave, as long as it's "UNIXy" (consists of easy-to-edit text, doesn't invent a new anything, and is composed of pieces which do one thing well).


Do you think we'll get lucky with a systemd replacement, similar to how PulseAudio has been superseded by a much more reasonable implementation, PipeWire?


In my copious free time, I have a vague idea of designing and implementing a mechanism called kpid1.

Basically, a task running as pid 1 (including in a container) could call a new kpid1() syscall, which would cause the kernel to completely take it over. The kernel would take care of all the usual init work, and it would expose the minimal API (presumably using a new kind of fd) to allow a different task to give it instructions and manage zombies as needed. And that’s it.

It’s worth noting that the entire concept of pid 1 is very unixy, but not in a good way. Reasonable modern designs (e.g. all the Windows variants) don’t have any real equivalent.


What benefit are you seeing in putting it in the kernel?


Several:

Zombie reaping could have a reasonable API. (Signals are miserable.)

PID 1 is magic in problematic ways. In particular, if PID 1 crashes, the whole system goes down with it. And having PID 1 be a normal program running from a normal ELF file means that that ELF file is pinned for the life of the system or at least until it execs something else. So handoff from initramfs to a real fs either involves PID 1 calling execve() or involves leaving the init process around. Upgrading the package containing PID 1 requires execve(). Running PID 1 from a network filesystem or an unreliable device risks a kernel panic for no good reason.

With PID 1 moved to the kernel, the actual service management job is no longer coupled to PID 1’s legacy. A service manager could hand off to another one by saving its state to disk and exiting, by running the new one and moving its state after the new one starts, or by any other ordinary means. And if it crashes, you can ssh in, read logs, save work, and then restart it or the whole system as appropriate.

As a minor additional benefit, having PID 1 in the kernel could enable some optimizations. Right now, a process must enter the zombie state when it exits, and it must stay in that state until its parent wakes up and reaps it. So a service exiting fundamentally involves some complex bookkeeping and a context switch to a single, unrelated process. If the kernel knew that kpid1 was in use and that nothing in the system actually needs to be notified of exiting children of pid 1, then a child of pid 1 that exits could simply go away, as it would on a sensible system like Windows.

(Yes, it's okay to admit that, in some respects, Windows is substantially better than Linux/Unix.)


Not OP, but the whole business of PID 1 having to reap orphan PIDs seems like something the kernel should just do. Is there a good reason why, when a process exits that no other process is waiting on, a user-mode PID 1 process has to observe that exit?


My understanding is that reaping children is a normal thing for most processes to do, and it's only orphans that fall through to PID 1, at which point it's easier to deal with it there rather than need to do anything special in ring zero.
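
In Python terms, that normal reaping looks like a SIGCHLD handler calling waitpid in a loop; orphans whose parents never do this are what fall through to PID 1 (a minimal sketch):

    import os
    import signal

    def reap_children(signum, frame):
        # Collect exit statuses of finished children so they don't
        # linger as zombies; WNOHANG keeps the handler non-blocking.
        while True:
            try:
                pid, status = os.waitpid(-1, os.WNOHANG)
            except ChildProcessError:
                break  # no children left at all
            if pid == 0:
                break  # children exist but none have exited yet

    signal.signal(signal.SIGCHLD, reap_children)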


Reaping children is "normal" in a universe where processes have numeric ids that can't be reused for unrelated processes until some handshake occurs that frees the id for reuse.

If you take anything resembling a fresh look at this concept, it's absurd. Imagine if every open file had a systemwide unique id, and one specific process owned that id and would continue to own it until it released it.

Reasonable designs use weak references that don't have values that can be compared across processes. These are usually called "handles" or "file descriptors", and they don't have this problem at all. Nothing reaps sockets, for example, and nothing needs to.


I have to think ... you don't actually know what you're talking about

I fought against systemd for a while, too

I was wrong

The "if it ain't broke, why fix it" approach with 'classic' init scripts led to a far far messier place than systemd

My only complaint about systemd is that I haven't found a way to simply push the journal to text... and that's most likely my poor google-fu or not having enough time to fully dive into it, rather than "systemd's journal sucks"

Systemd does use "text files or symlinks for configuration"

It's very well designed - though you may happen to have a preference for a different architecture (...but the one you described is pretty much systemd, my friend :))


I hear your words.

How is that different from "I don't like relativity theory at a fundamental level. It feels abundant, bloated, and not very physical. I'd rather have Newton's laws with a handful of augmentations"?


Did you consider, in forming your opinion, that booting and service management is an insanely hard problem?

Would you write an optimizing compiler in multiple small tools as well? Essential complexity can’t be reduced, IPC will blow up your accidental complexity budget. Many times a monolith is indeed the best design choice.


> text files or symlinks for configuration

Like...Systemd?


In short, the developers' attitude towards critics, and the initial design decisions.

Now it's better, but not perfect, and a lead dev working for Microsoft doesn't inspire confidence about its long-term agenda.

I can post links to comments of mine if you're interested further.


Please post links, I’ve never truly understood the systemd/init conflict, unlike the emacs/vim conflict.

I’d like to learn more!


I want to like systemd, especially since it is the default and has all the mindshare in Linux. I'm also not particularly in love with the historic shell scripts approach, as some are. Some systemd elements, such as its journal, are convenient.

My issues with it are:

- If you set up an atypical configuration, particularly involving LUKS volumes, it is not hard to break systemd and dracut's assumptions, and then you will have a hell of a time trying to boot and survive systemd updates.

- When it breaks, figuring out what the hell is going on involves having to learn a lot of systemd, which has lots of its own unique vocabulary and logic. There are many pieces and moving parts. It feels like someone went "microservice-crazy" with the init system and like there has to be a simpler way. The surface area of systemd is enormous.

- The whole anti-split /usr crusade is excessive. You might want to have /usr as a separate volume so you can mount it with the nodev option, for example. Why should that be forbidden?

If you conform to systemd's expectations about system configuration, I'm sure it works fine regardless of its elegance/inelegance and excessive complexity. If you would like to do things differently in ways that Linux's building blocks otherwise permit, you could be in serious trouble.


I wonder if the surface area of bash is even quantifiable; I never managed to read its man page to the end.


it's only 3400 lines!

:P


Bugs and complexity are the usual reasons. It seems to be a competence thing - experienced admins find it harder to debug and fix systemd issues, while regular users care less about being able to pop the hood and fix things themselves.


Which issues? The last time I ran into a systemd issue was in 2016.


There's been gigabytes of text spilled in flames back and forth about systemd over the last... decade? The first Google completion suggestion for "is systemd" is "bad", and that'll lead you to plenty of criticism about it, lol.

For myself, eh. I find it a little annoying but basically tolerable, sort of like a reinvention of SMF from Solaris. Linux system config/init has gotten a hell of a lot more complex since I first touched it in the mid-90s; sometimes we get more functionality for that, and other times the grognard in me wants to bin it all and retreat to Slackware or something. What was the old joke? Microsoft admins have solitaire.exe and Linux admins have "fiddling with text files"? :)


It breaks the philosophy of doing one thing well, has an awkward and esoteric config language and keeps growing in scope.


You might find this interesting: https://nosystemd.org/


Ubuntu exists because Debian stable releases were inconveniently outdated, and Debian unstable was occasionally broken. Ubuntu also explicitly compromised "software freedom" in favor of utility, particularly by making non-free video and wifi drivers easy to install; in contradiction to Debian's pro-copyleft design. This situation has not changed.

I suspect OP is considering a change from Ubuntu because Ubuntu itself has diverged so far from its original identity as a reasonably stable Debian-unstable fork.

It was several years ago that I abandoned Ubuntu in favor of Archlinux for that very reason. These days, I'm almost exclusively using NixOS, but can't recommend it to impatient or non-technical users. NixOS is incredibly stable, very fresh/up-to-date, and incredibly chaotic to use. Someday, I expect, there will exist something of a "distribution" of NixOS that - much like Ubuntu did circa 2008 - caters to the average user. I hope that day comes soon.


Slightly wrong. Ubuntu succeeded in part because of what you mention, plus lots of marketing money. Debian has caught up now, and most people seriously wanting up-to-date packages are over at Gentoo or Arch.

But back to the "Ubuntu exists" premise you started with: it exists because a rich guy wanted to take over Debian and sell an enterprise solution based on it.

Remember, at the time enterprise was all the rage. Google was pushing their enterprise suite, and there were tons of startups (Zoho, etc.). It was a crowded space then, and Red Hat and SUSE were completely fumbling with their Linuxes.


Marketing money? For about the first decade I think the only marketing was shipping CDs for free to anyone who wanted them. A billboard or two might have been rented over the decades in highly targeted circumstances. There is approximately zero marketing.


You think free CDs worldwide are cheap?!

They paid for "CD vending machines" at several locations where you could get free CDs... that is more expensive than a billboard and practically buys you a spot in specialized magazines. They were running those marketing campaigns all over the place.


Oh yes. Free CDs worldwide was incredibly cheap as a form of marketing, especially when you have the ability to source the cheapest bulk pressing and shipping deals world wide. As are vending machines, compared with the ludicrous prices charged for a billboard in a relevant position. I can't even recall if the vending machines were Canonical or local Ubuntu user groups doing it for shits and giggles/course credit. A tiny drop of money compared to competitors marketing budgets.


Yeah, I just set up a new personal server, and after experimenting with Linux Mint a bit, decided that Mint wanted to be a desktop with a GUI more than a server (which is totally fine!) and ended up at Debian, after years of defaulting to Ubuntu. Debian feels more or less the same once I gave myself the option of installing unstable packages (which I probably wouldn't do for a professional server, but seems fine for a personal one).


I don't know about claiming Debian is free from corporate interest. As a packager, sure. And for its own stuff like APT, yes. But that's just a fraction of the overall Debian codebase.

Debian ultimately follows the OSS community for most stuff, and the 'community' is often corporation backed. Just look at systemd. A lot of distros didn't want it, but ultimately were stuck between maintaining masses of stuff themselves, or accepting Red Hat to maintain it for them. There are many other examples.


Comes full circle as Ubuntu is derived from Debian


As somebody said, "Ubuntu is Debian-based the same way milk is grass-based".


What a smarmy thing for that somebody to have said! They use the same package manager and many of the Ubuntu devs were originally Debian devs.


The same package manager, but different enough packages, PPAs, etc, and a different release process.

Many different technical decisions.

A very different project governance.

I'm not implying that these things are bad! But different enough they undeniably are.

The fact that I can install .deb packages on my box using a port of apt does not make it Debian. (Saying this as a dedicated Debian user since 1998 and until the switch to systemd.)


I realize the above comment took more effort than parroting someone else's 9-word statement. Thanks for putting the effort to expounding on the differences between milk and grass.


Ubuntu started out as "Debian, but with more up-to-date packages and some light polish."

As a long-time Debian user since the 90s, I found Ubuntu an exciting new step for Debian-based distros.


The Gray Lady of distros...

