
Author here. This is very much the case for a computer system as a whole as well: it is basically a network of cooperating microprocessors, including those in I/O peripherals and the like.

PCIe in particular is literally a packet-switched computer network - it has a physical layer, a data link layer, and a transaction layer that is essentially packet switched. There are even proprietary solutions for tunnelling PCIe over Ethernet.
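To make the "packet switched" point concrete, here is a minimal sketch decoding a few fields of the first DWord of a PCIe Transaction Layer Packet (TLP) header. The bit positions (Fmt, Type, TC, Length) follow the PCIe specification; the sample value itself is fabricated for illustration.

```python
# Minimal sketch: decode the first DWord of a PCIe TLP header.
# Field positions per the PCIe spec: Fmt [31:29], Type [28:24],
# TC [22:20], Length [9:0]. The example value below is made up.

def decode_tlp_dw0(dw0: int) -> dict:
    return {
        "fmt":    (dw0 >> 29) & 0x7,   # header size / has-payload flag
        "type":   (dw0 >> 24) & 0x1F,  # e.g. memory read/write, config
        "tc":     (dw0 >> 20) & 0x7,   # traffic class
        "length": dw0 & 0x3FF,         # payload length in DWords
    }

# Fabricated example: a 3-DW memory write with data
# (Fmt=0b010, Type=0b00000), TC 0, length 1 DWord.
dw0 = (0b010 << 29) | (0b00000 << 24) | (0 << 20) | 1
print(decode_tlp_dw0(dw0))  # {'fmt': 2, 'type': 0, 'tc': 0, 'length': 1}
```

The point is just that a TLP really is a routed packet with a typed header, much like an Ethernet frame or IP datagram, with switches (PCIe switches) forwarding it between endpoints.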

To make it even funnier - Digital's last Alpha CPU, the EV7, which was essentially the ancestor of the AMD K8 (which finally brought "mesh" networking to mainstream PCs), actually had an IP-based internal management network!

Each EV7 computer had, instead of a normal BMC, a bigger management node connected to a 10 Mbit Ethernet hub (twisted-pair Ethernet, fortunately :P), and this network then connected to things like I/O boards, power control, system boards... including each individual EV7 CPU. Each component connected this way had a small CPU with Ethernet that was responsible for interfacing its specific component to the network, and part of booting the system involved prodding the CPUs over Ethernet to put them into the appropriate halt state from which they could start booting.


This kind of thing, with functional domains accessible over Ethernet, existed in at least one laptop as well, where you could connect to the "nodes" once you busted into it (my article): https://oldvcr.blogspot.com/2023/04/of-sun-ray-laptops-mips-...


And you have a smaller one that basically PXE-boots the bigger one and manages the power, cooling, etc. It is datacenters all the way down.

As someone who used to do embedded work, there is a reason I felt most at home in Erlang and Elixir.

Their processes, which share nothing and communicate by message passing, are really close to how building and coding for an embedded platform feels.
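The share-nothing, message-passing style can be sketched roughly in Python's multiprocessing module (an analogy only, not Erlang itself): each worker owns its own state and communicates solely through queues, much like firmware on separate microcontrollers exchanging messages over a bus.

```python
# Rough analogy to Erlang-style share-nothing processes using Python's
# multiprocessing: the worker's state never leaves its process; only
# messages cross the boundary, via queues.
from multiprocessing import Process, Queue

def doubler(inbox: Queue, outbox: Queue) -> None:
    # Receive messages, reply with results; None is a shutdown signal.
    while True:
        msg = inbox.get()
        if msg is None:
            break
        outbox.put(msg * 2)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    worker = Process(target=doubler, args=(inbox, outbox))
    worker.start()
    for n in (1, 2, 3):
        inbox.put(n)
    print([outbox.get() for _ in range(3)])  # [2, 4, 6]
    inbox.put(None)   # tell the worker to exit
    worker.join()
```

Like an embedded node on a shared bus, the worker can crash or restart without corrupting anyone else's memory, because nothing is shared in the first place.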



