Nice to see this series continued! I wish there were more series or books, ideally ones that don't end up discontinued, on how to build actual projects with Rust from scratch.
Originally, our first push was to get Cortex M bare metal microcontrollers as a "Tier 1" stable target for the 2018 edition of Rust.
Over the last couple of months, we've been expanding, and have subteams for a bunch of different topics, including chip support, drivers, documentation, tooling, etc. We're mostly focused on helping other people who are getting started with embedded Rust, as well as giving feedback to the compiler teams, etc.
It would be interesting to hear more about your porting attempt. We also have a blog - https://rust-embedded.github.io/blog/ - if you'd like to share it as a blog post.
As Steve mentioned, https://github.com/rust-embedded/wg is our main coordination repo, and has links to most of the stuff we're actively working on.
Just wanted to say that I wish this was how kernels were made - out of modular, reusable blocks. Would this scale? In other words, could Redox and the OS described in this series share IRQ code from the same crate and ultimately grow into two different systems that are built of mostly the same crates for low-level stuff, just wired differently?
We’ll see; a lot of hobby OSes are already doing this to some extent, like using the x86 crate to manage the definitions of various data structures needed by the platform. We can’t really tell until more OSes get further along (my own is taking forever due to lack of time...)
From the historical point of view, this heavy decoupling is associated with microkernels which fell out of favor with the rise of the Linux kernel. I think in this day and age, modern programming languages could make building a microkernel that doesn't fall prey to the shortcomings of MINIX a possibility.
Microkernels are IPC-heavy, right? What I rather imagine is having modules at source level, not kernel level. So the thing still compiles down to a monolith/hybrid, but modules are abstracted away and reusable.
The IPC overhead is very much manageable; microkernels tend to be a lot more responsive than monoliths, and with paging used for message passing the overhead is reduced even further.
Microkernels are much easier to optimize for multi-core CPUs than monoliths. The "kernel modules" from a monolith run as user processes in a microkernel environment, so they automatically benefit from more cores.
All of the effort expended to enable the legacy PIC so that the legacy PIT doesn't fire timer interrupts into an invalid configuration seems weird, especially when justified with "the APIC is too complicated and we'll show that later".
Um... the Local APIC interface needed to catch an interrupt and wire up the timer is, if anything, simpler than what is presented here.
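For a sense of scale, here's roughly what "wiring up the timer" on the Local APIC looks like: a handful of memory-mapped register writes. This is only a sketch; the register offsets are from the Intel SDM, and it assumes the LAPIC sits at its default physical base (0xFEE0_0000) and that the region is already mapped as uncacheable.

    // Sketch only: software-enable the Local APIC and start its timer in
    // periodic mode. Offsets are from the Intel SDM; the base address is
    // the architectural default and is assumed to be identity-mapped.
    const LAPIC_BASE: usize = 0xFEE0_0000;
    const SVR: usize = 0xF0;            // spurious interrupt vector register
    const LVT_TIMER: usize = 0x320;     // local vector table: timer entry
    const DIVIDE_CONF: usize = 0x3E0;   // timer divide configuration
    const INITIAL_COUNT: usize = 0x380; // timer initial count

    unsafe fn lapic_write(reg: usize, value: u32) {
        core::ptr::write_volatile((LAPIC_BASE + reg) as *mut u32, value);
    }

    unsafe fn enable_lapic_timer(vector: u8) {
        lapic_write(SVR, 0x100 | 0xFF);                       // enable bit + spurious vector 0xFF
        lapic_write(DIVIDE_CONF, 0b0011);                     // divide the bus clock by 16
        lapic_write(LVT_TIMER, (1u32 << 17) | vector as u32); // periodic mode, deliver on `vector`
        lapic_write(INITIAL_COUNT, 10_000_000);               // arbitrary reload value
    }

    // The handler acknowledges each tick by writing 0 to the EOI register
    // (offset 0xB0) before returning.

The legacy PIC still has to be masked first so its vectors don't land on top of this, but that's about it.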
> masking all interrupts and remapping the IRQs. Masking all interrupts disables them in the PIC. Remapping is what you probably already did when you used the PIC: you want interrupt requests to start at 32 instead of 0 to avoid conflicts with the exceptions. You should then avoid using these interrupt vectors for other purposes. This is necessary because even though you masked all interrupts on the PIC, it could still give out spurious interrupts which will then be misinterpreted by your kernel as exceptions.
Two different remappings for two different devices. It doesn't matter what you did with the 8259 once you disable it (which is like two IO instructions). Might as well do it all once is my point.
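To put numbers on the "two IO instructions": masking every line on both 8259s is just a write of 0xFF to each data port. A minimal sketch, assuming the x86_64 crate's port type for the I/O:

    // Mask all IRQ lines on both legacy 8259 PICs by setting every bit of
    // their interrupt mask registers (data ports 0x21 and 0xA1).
    use x86_64::instructions::port::Port;

    unsafe fn mask_legacy_pics() {
        let mut primary_data: Port<u8> = Port::new(0x21);   // primary PIC data port
        let mut secondary_data: Port<u8> = Port::new(0xA1); // secondary PIC data port
        primary_data.write(0xFF);
        secondary_data.write(0xFF);
    }

The remap-the-spurious-vectors dance from the quote above is a few more writes to those same ports, if you decide you want it.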
The point of this series is to learn, so I think going through the legacy system first and then showing the newer one can make sense; it depends entirely on your objectives.
This path is what I've seen out of 99% of hobby OS tutorials I've read.
I'm with you on this. It's similar to how every x86_64 OS guide goes through the ancient ritual of working your way up to long mode. Why not just use UEFI?
Well... the hardware gets in your way there. Long mode properly requires paging to be enabled, which means that you have the choice between a complicated hardware bootstrapping procedure to enable it, or a complicated bootloader environment which has already grabbed and used chunks of memory for page tables you need to not step on.
Also, use of real mode (still required for BIOS calls like e820) requires the ability to get back out of real mode, so you need the bootstrapping stuff in a real OS even if it's not in the tutorial.
But the register poking in the PIC/PIT is just silly. Turn that stuff off and use the correct hardware, even in a tutorial. Unless it's a tutorial on PC architecture history, I guess.
> ...use of real mode (still required for BIOS calls like e820) requires the ability to get back out of real mode, so you need the bootstrapping stuff in a real OS even if it's not in the tutorial.
I'm not particularly well versed in UEFI/BIOS features, but shouldn't BIOS calls like e820 be avoided in favor of equivalent UEFI functions?
> the hardware gets in your way there. Long mode properly requires paging to be enabled, which means that you have the choice between a complicated hardware bootstrapping procedure to enable it, or a complicated bootloader environment which has already grabbed and used chunks of memory for page tables you need to not step on.
Just figuring out UEFI's page-table structure seems much less burdensome to me. You'd have to set the tables up yourself regardless. Is the documentation/environment really so poor as to make just doing it yourself easier?
In theory you should just be able to include a header or use a crate (a cousin comment linked one) and not have to write any assembly.
> which has already grabbed and used chunks of memory for page tables you need to not step on.
UEFI supplies you with a memory map that lets you see what memory is untouchable, and large chunks of it can be remapped away into regions you don't touch.
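For what it's worth, walking that map is not much code either. A hedged sketch: the descriptor layout below follows the UEFI spec, the buffer and descriptor_size are whatever GetMemoryMap() handed back, and the function name is just illustrative.

    // Sum up usable RAM from the raw buffer returned by GetMemoryMap().
    // `descriptor_size` must be the stride reported by that call; firmware
    // may use a larger descriptor than the struct defined here.
    #[repr(C)]
    struct MemoryDescriptor {
        typ: u32,             // EfiConventionalMemory (7) means "usable RAM"
        physical_start: u64,
        virtual_start: u64,
        number_of_pages: u64, // 4 KiB pages
        attribute: u64,
    }

    fn usable_bytes(map_buffer: &[u8], descriptor_size: usize) -> u64 {
        const EFI_CONVENTIONAL_MEMORY: u32 = 7;
        let mut total = 0u64;
        for chunk in map_buffer.chunks_exact(descriptor_size) {
            // read_unaligned because the buffer is only byte-aligned here.
            let desc: MemoryDescriptor =
                unsafe { core::ptr::read_unaligned(chunk.as_ptr() as *const MemoryDescriptor) };
            if desc.typ == EFI_CONVENTIONAL_MEMORY {
                total += desc.number_of_pages * 4096;
            }
        }
        total
    }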
UEFI just isn’t as well documented in a way that hobbyists can find, it seems. I know I’ve stuck to multiboot because there are so many examples and resources, and it’s one of the least interesting parts of the process, so I go with what’s simple. (Which is no longer multiboot and is now phil’s bootloader used in this tutorial.)
Here was me hoping that in such a modern language, perhaps someone would finally have a good APIC example. Alas :D back to the 80s it is. :D It's ok, I'll go back to my ACPI document and try not to cry :D !
> As already mentioned, the 8259 PIC has been superseded by the APIC, a controller with more capabilities and multicore support. In the next post we will explore this controller and learn how to use its integrated timer and how to set interrupt priorities.
So are adders, but nobody is trying to replace them... When you can tolerate the overhead you can obviously poll, but why bother when you can set up an interrupt? They're more of a thing on embedded systems because you can get some seriously low latency numbers with them when used properly.
They've been replaced a few times under the hood. Message-signalled interrupts exist, and even regular interrupts on PCIe and HyperTransport look more like network packets or RDMA reads/writes than a true electrical signal.
DPDK turns off interrupts and manually polls the card in a loop AFAIK.
But for most cases, the underlying semantics are useful enough that the abstraction isn't going anywhere anytime soon.
I'm not a hardware guy, so I don't know if this is feasible or has ever been implemented, but:
I could imagine "polite" interrupts—where instead of the processor immediately jumping into the ISR's code, it simply places the address of the ISR that "wants to" run into an in-memory ring-buffer via a system register, and then the OS can handle things from there (by e.g. dedicating a core to interrupt-handling by reading the ring-buffer, or just having all cores poll the ring-buffer and atomically update its pointer, etc.)
The major difference with this approach is that pushing the interrupt onto the ring-buffer wouldn't steal cycles from any of the cores; it would be handled by its own dedicated DMA-like logic that either has its own L1 cache lines, or is associated to a particular core's L1 cache (making that core into a conventional interrupt-handling core.) Therefore, you could run hard-real-time code on any cores you like, without needing to disable/mask interrupts; delivering interrupts would become the job of the OS, which could do so any way it liked (e.g. as a POSIX signal, a Mach message, a UDP datagram over an OS-provided domain socket, etc.) Most such mechanisms would come down to "shared memory that the process's runtime is expected to read from eventually."
There would still be one "impolite" hardware interrupt, of course: a pre-emption interrupt, so that the OS can de-schedule a process, or cause a process to jump to something like a POSIX signal handler. However, these "interrupts" would be entirely internal to the CPU—it'd always be one core [running in kernel code] interrupting another [running in userland code.] So this mechanism could be completely divorced from the PIC, which would only deliver "polite" interrupts. (And even this single "impolite" interrupt you could get away from, if the OS's userland processes aren't running on the metal, but rather running off an abstract machine with a reduction-based scheduler, like that of Erlang.)
Schemes like that are in fact implemented by some devices on top of the existing PCIe interrupt mechanism. For example, GPUs have many different interrupt sources, so a common technique is to have an interrupt ring buffer that the GPU writes to, which contains all the information about the interrupt source and additional payload data.
An actual PCIe interrupt is sent to the CPU only when that interrupt ring buffer goes from empty to non-empty, and the driver's interrupt handler simply reads the whole ring buffer contents.
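Loosely, the driver side looks something like the sketch below (all the names are made up, not any real GPU driver's API): the device appends records into a ring in system memory, and the single interrupt handler drains whatever has piled up.

    use core::sync::atomic::{AtomicU32, Ordering};

    #[repr(C)]
    struct IrqRecord {
        source: u32,  // which engine/unit raised this
        payload: u32, // extra data; meaning depends on the source
    }

    struct IrqRing<'a> {
        records: &'a [IrqRecord], // ring storage the device DMA-writes into
        write_ptr: &'a AtomicU32, // advanced by the device
        read_ptr: u32,            // advanced by the driver
    }

    impl<'a> IrqRing<'a> {
        // Called from the single PCIe interrupt handler: drain everything
        // that accumulated since the ring last went empty.
        fn drain(&mut self, mut handle: impl FnMut(&IrqRecord)) {
            let write = self.write_ptr.load(Ordering::Acquire);
            while self.read_ptr != write {
                let idx = self.read_ptr as usize % self.records.len();
                handle(&self.records[idx]);
                self.read_ptr = self.read_ptr.wrapping_add(1);
            }
            // A real driver would now tell the device how far it has read,
            // so the next empty -> non-empty transition raises a fresh
            // interrupt.
        }
    }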
It seems like your scheme would require dedicating an entire core to kernel interrupt handling, all the time (because if you let every core run userspace, and then a network packet arrived, it wouldn't be handled until some core went back into the kernel for another reason).
That seems strictly worse than the current design.
Right now, nothing. I used to work with HP-UX with "Real-time Extensions". One of the things we could do is shut off interrupts for certain processes.
There was a subset of system calls we could use while in this realtime mode (a lot of Unix system calls really rely on interrupts, and the whole OS is built on them...).
I think interrupts started from hardware signals, but were expanded to include software.
Consider the problem you are trying to solve. I/O devices, from the CPU's standpoint, are things that are very slow and require infrequent service, but when they do require service, have extremely strict hard-real-time latency requirements. Interrupts are one approach to being able to get useful work done while waiting for I/O, without burning a lot of CPU resources on polling.
Another approach is to have a processing hierarchy, like old mainframes did. Off-load the CPU with some kind of I/O processor or channel controller that can do the real-time data transfers, and "coalesce" low level interrupts into a single larger interrupt that captures more work -- think a single DMA-COMPLETE interrupt instead of a bunch of GET-SINGLE-BYTE interrupts.
You can of course push the processing into hardware but that is much harder to change than an I/O driver, so the interrupt-driven-driver design pattern wins on software maintainability.
Well, more like hyperthreads than cores, if I understand the Propeller correctly.
The earliest implementation of hyperthreaded hardware for doing I/O that I am aware of is the CDC 6000 series, announced in 1959, if I recall correctly. The CDC 6X00 Peripheral Processor Units (PPU) were actually a single processor logic cluster, with 10 copies of the PPU state (which old-timers called "the PPU barrel"), yielding effectively 10 I/O processors that ran at 1/10 the master clock frequency of the CPU. I/O drivers were written as PPU code that actually polled the I/O device. The PPU could scribble anywhere it wanted in main memory, so the PPU did all the work of moving data from the peripheral into main, or out from mem to device. Interrupts were very simple -- the PPU computed an interrupt vector address and more-or-less just jammed it into the CPU program counter. But the net effect was that on a Cyber 6000 (later Cyber 170-series) machine, much of the I/O was delegated to the PPU's, and thus a single interrupt represented the completion of a large amount of work.
It is a really great device. The two most common complaints are price and language support.
In many designs, the chip can replace several others.
Early on, yes. It was SPIN and PASM (assembly language, but nowhere near as hard as one would imagine). Today, C and other languages are well supported.
It is a true multi-processor. The developer can choose freely between concurrency and parallelism as needed. Combining code objects is crazy easy too.
Concurrent can be something like a video display on one core (or COG, as they are known), with keyboard, SD card, and mouse on another. Once done, those two cores would appear much like hardware to a program running on another one.
Parallel could be several cores all computing something at the same time. Doing a Mandelbrot set is an example of both.
The main program directs the set computation, one core outputs video, and the remaining ones work a little like shaders do, all computing pixels.
Interrupts could have made a few niche things a tiny bit better. Mostly they really are not needed.
I had a ton of fun programming and doing some automation with this chip.
Its second gen will tape out, and early-revision chips already have: real chips, back from the fab for a final round of polish and testing.
On those, every single pin has a DAC and ADC, a smart-pin processor, and a variety of modes, pull-ups, and pull-downs, all configurable in software. It is a little like having a mini scope with good triggers on each pin.
Interrupts are present, but there are no global ones, nor any with the ability to interfere with other cores.
This will keep the Lego-like feature of grabbing drivers and other code and having it act like built-in hardware, while at the same time making for event-driven code that is easily shared and/or combined with other code.
Interrupts are called events; there are 16 of them with three priority levels, and that is per core, with 8 cores total.
People can build crazy complex things able to input signals or data, process them with the high-speed, accurate CORDIC unit, and stream data or signals out.
Freaking playground. I have been running an FPGA dev system for a while now. That runs at 80 MHz.
Real chips will clock at 250 MHz and up through about 350 MHz.
You could get rid of interrupts if you take a completely different approach to CPU architecture. The high-level view of how a CPU works is that you load in a program, which involves setting an instruction pointer at the head of a list of instructions. This is the very first thing you do when you write a kernel; you set the instruction pointer to the entry function on your kernel and the CPU goes from there. Without interrupts, it would do that forever. You need the interrupt in order to make the CPU look at any instruction not prescribed by the program you loaded in the first place, which is how you get any sort of I/O going.
An alternative architecture that would not need interrupts would be something that is driven by data. Instead of loading in an initial program, you would load in some initial data, and the CPU's execution would be driven entirely by that. On every cycle, the CPU would look at any new data that has arrived and process it accordingly. In this view, key-presses or timer ticks would just be like any other data flowing through the system.
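As a toy model of that idea (purely illustrative, nothing like real silicon): the "program" is just a consumer of arriving data, and key presses or timer ticks are ordinary items in the stream.

    use std::collections::VecDeque;

    // Each datum carries everything needed to react to it; there is no
    // interrupt, just data showing up in the machine's input stream.
    enum Datum {
        KeyPress(char),
        TimerTick(u64),
        Packet(Vec<u8>),
    }

    fn run(mut incoming: VecDeque<Datum>) {
        // "On every cycle, look at any new data that has arrived."
        while let Some(item) = incoming.pop_front() {
            match item {
                Datum::KeyPress(c) => println!("key: {}", c),
                Datum::TimerTick(t) => println!("tick: {}", t),
                Datum::Packet(bytes) => println!("packet: {} bytes", bytes.len()),
            }
        }
    }

    fn main() {
        run(VecDeque::from(vec![
            Datum::KeyPress('a'),
            Datum::TimerTick(1),
            Datum::Packet(vec![0u8; 64]),
        ]));
    }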
It is fundamentally how processors work. That is how context switching happens, I/O happens, etc. I mean, if you are at work and your kid needs to go to the doctor, the school will call to interrupt you. If someone didn't, what would happen to your kid?
I/O can happen via a message queue that the OS looks at from time to time, context switching can be initiated by a lot of algorithms that don't need to invalidate your pipeline state, and OS calls can be entirely synchronous.
Interrupts are mostly a bad legacy from the time when our computers were slow, kept around for compatibility reasons.
A core that is otherwise sleeping, an instruction countdown, a synchronous microinstruction that does some "finish this and jump if the list is non-empty" triggered by a timer...
There is a lot of stuff implied by an interrupt. Computers need some of it for some functions, but never the entire lot.
To me an interrupt is a signal. They can be as complex or as simple as one desires. I'm unsure that concept can ever be replaced. Interrupt controllers, on the other hand, are very complex, which may warrant different abstractions someday.
Interrupts at their base are signals for crossing from an asynchronous, parallel world into a synchronous, sequential world. Some sort of signal needs to exist for that purpose.
There is likely a way to cross domains that can be formally reasoned about more easily. Although, like functional programming, implementing the abstraction directly on silicon probably wouldn't make much sense. Process calculus is the place to start if one is interested in this line.
Interrupts are still the main method, but modern OSes use the LAPIC/APIC, which is a more modern form of interrupt controller. It has a few more abilities, and I think there's one local APIC per CPU or core or whatever. Basically interrupts are good, whereas "polling" is the legacy method (very legacy :D) of querying hardware until it's ready. Via interrupts, hardware can let the OS know, so it's not wasting cycles waiting for hardware.