Every year I’m reminded how diverse the recipients of this award are. You have a provably correct microkernel, DNS, Wireshark, and Jupyter Notebooks all being recognized as significant software systems.
The only thing I know of that provides a desktop is Genode, which can run on top of seL4 (but also quite a few other kernels). Though it'll be more like dinking around with Genode than with seL4 proper, precisely because of this multi-kernel abstraction layer.
Maybe an ARM desktop once multicore support is implemented and verified. Verifying the whole thing for x86 seems scary. It's the kind of undertaking that gives normal people a glimpse of what ADHD feels like: it's a huge task, and I would not know how to structure it, where to start, or how to predict and plan all the intermediate steps.
Write down all the modules, then write down their dependencies on each other. Make sure the dependency graph is as close to a line as possible. This is called “layering” in software engineering.
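To make "layering" concrete, here's a minimal sketch in C (the module names and dependency matrix are invented purely for illustration): it assigns each module a layer number derived from its dependencies, and a dependency graph that's close to a line yields exactly one module per layer.

    #include <stdio.h>

    /* Hypothetical modules; dep[i][j] != 0 means module i depends on j. */
    enum { N = 4 };
    static const char *name[N] = { "boot", "mm", "ipc", "sched" };
    static const int dep[N][N] = {
        { 0, 0, 0, 0 }, /* boot  depends on nothing     */
        { 1, 0, 0, 0 }, /* mm    depends on boot        */
        { 1, 1, 0, 0 }, /* ipc   depends on boot and mm */
        { 0, 0, 1, 0 }, /* sched depends on ipc         */
    };

    int main(void) {
        /* layer[i] converges to 1 + the highest layer i depends on.
         * N passes suffice for a DAG; a cycle means no layering exists. */
        int layer[N] = { 0 };
        for (int pass = 0; pass < N; pass++)
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    if (dep[i][j] && layer[i] <= layer[j])
                        layer[i] = layer[j] + 1;
        for (int i = 0; i < N; i++)
            printf("%-6s layer %d\n", name[i], layer[i]);
        return 0;
    }

The closer the output is to one module per layer, the more line-like the graph, and the easier it is to verify (or just review) the modules bottom-up.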
It looks like there’s a good bit of industry energy behind this stuff (defense contractors and intelligence agencies don’t want their side’s stuff broken into). So I don’t think this is one of those “Apple takes from the little guy and doesn’t contribute back” stories.
Does it matter? A microkernel isn't a huge scope, and once it is proven mathematically correct, there's not a ton left to do other than maybe adding very niche features or doing touch-ups here and there.
Apple is not using seL4, which is the proven one and which required sizable modifications to the original L4 variant it descended from. Apple is using a modified version of one of the many other L4 variants.
Except that making changes can actually invalidate the proofs that were made. At least it matters in the sense that Apple should have arranged things so that its additions didn't invalidate the correctness proofs.
That's not really how any of this works. The formally verified kernel of the operating system gives you some assurance that your primitives are reliable (unlike on, say, Linux, where you're always a kernel reference tracking bug away from an LPE). But the "application" code you build on top of L4 doesn't "inherit" that formal verification; it's just code, with code bugs. If you formally verify your own code, it's verified, but otherwise you still own your own bugs.
For what it's worth, I'm not sure Apple publishes which variant of L4 it runs on, and there are many of them. The whole formal verification conversation here might be moot.
> I'm not sure Apple publishes which variant of L4 it runs on
The SEPOS kernel is apparently derived from a fork of L4-embedded that they used in Darbat (a fork of XNU to run on top of L4). It's not formally verified unless Apple has done so internally.
Absolutely. The problem I'm referring to arises if Apple changed the kernel itself, not just built an application on top of it. I don't know, but maybe Apple did just build an application on top, which seems sensible.
The last thing you want to do is add niche features; those really should be handled in separate processes. Memory management, IPC, interrupts, process creation/destruction/scheduling: that's about it.
True, but as with every other OS project out there, nobody is gonna use it without a userland. The kernel itself is already pretty mature, but now the effort should be put into making a full-fledged OS out of it.
Provably correct microkernels like seL4 are popular for "cross-domain solutions" that simultaneously handle unclassified and classified information while ensuring separation. NSA requirements for such systems are strict, and right now most of the options are commercial, like BAE's XTS-400. More defense contractors are looking towards seL4, and the NSA seems to be encouraging it for the benefit of a shared, open-source platform across these specialized projects.
I believe seL4 is also used for the firmware on Qualcomm modems now, but it's a little hard to tell whether they've switched from OKL4, which was a related project.
I thought commingling classification/user clearance levels on the same hardware fell out of favor a while ago, because of side channels (with speculative execution being the final nail in the coffin).
Isn't seL4 more used for things like data diodes, cryptographic equipment and military hardware? Stuff that you need to keep from getting hacked or malfunctioning, but not time-shared with untrusted users.
Nah, multilevel security fell out of favor because no large commercial vendor could do it. They retracted the requirements over a decade ago because it prevented Microsoft and IBM from bidding on government contracts to “secure” their infrastructure.
Well, Microsoft made some half-assed runs at Orange Book compliance (e.g. Windows NT without floppies/CDs... or networking), but Trusted AIX was a real thing with real users. Having been involved in a couple of MLS Unix implementations, IMO it didn't fall out of favour because no vendor could do it; it fell out of favour because no one could effectively run it in production. The admin overhead of a 'real' TCSEC-validated system is formidable, and at some point someone decided 'usable by admins with 6 months of training (e.g. privates and corporals)' was more important than 'checks all the boxes', and the boxes that needed to be checked changed.
As far as I am aware, seL4 is not used in any DO-178 Level A systems. Airbus is using the INTEGRITY-178 microkernel, which is the industry standard in commercial aerospace and is used in almost every commercial airliner and military aircraft.
Pure capability-based access control, where capabilities are communicable but unforgeable "references with rights" to objects. While this is interesting, it's not really unique: other L4 kernels also have it, and Capsicum (but not POSIX capabilities) implements something like it on Unix. But even here, seL4 has some unique twists, such as the way IPC replies are handled, which allows policies that prevent "unsolicited" replies.
A unique approach to memory management, where after boot time the kernel does no memory management, and even kernel memory is managed completely by user-level code. In fact, the kernel has no heap. Instead, when the user requests an operation that requires kernel memory (e.g. creating an address space, which requires memory for page tables), the user provides the memory explicitly to the kernel (a sketch follows this list). This sets seL4 apart not just from monolithic kernels, but from other L4 kernels as well.
Support for passive servers: one of the most exciting recent features, these are server processes that run on scheduler time "donated" by the client. Among other things, this can be used to make sure that non-critical clients will not monopolize services needed by critical clients.
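To make the "user provides the memory" point concrete: in seL4, all free memory is handed to userland as "untyped" capabilities, and kernel objects are created by retyping pieces of them. A minimal sketch, assuming libsel4 and that `untyped_cap` and `free_slot` (hypothetical names here) were obtained from the bootinfo structure or your own CSpace bookkeeping:

    #include <sel4/sel4.h>

    /* Create a TCB out of memory the user task owns; the kernel itself
     * allocates nothing (it has no heap). */
    void make_tcb(seL4_Untyped untyped_cap, seL4_Word free_slot)
    {
        seL4_Error err = seL4_Untyped_Retype(
            untyped_cap,             /* untyped memory to carve the object from */
            seL4_TCBObject,          /* kernel object type to create            */
            0,                       /* size_bits (ignored for fixed-size TCBs) */
            seL4_CapInitThreadCNode, /* root CNode to receive the new cap       */
            0, 0,                    /* node_index/node_depth: address the root */
            free_slot,               /* destination slot for the new TCB cap    */
            1);                      /* number of objects to create             */
        if (err != seL4_NoError) {
            /* e.g. untyped region exhausted, or slot already occupied */
        }
    }

If the retype succeeds, the new object's memory is accounted to a region you explicitly gave up, which is what turns kernel memory exhaustion into a user-space policy question rather than a kernel one.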
IIRC seL4 is also the fastest L4 kernel for most use cases - and its worst-case execution time is bounded, at least on older CPUs which have published timing data.
It's at the same time a pure microkernel, performant, and usable in the real world (not just a research project).
The memory management, scheduling, and IPC are implemented in an interesting and novel way (compared to your typical mainstream OS) and allow for a quite different OS design.
In particular, capability-based IPC and memory management could provide a robust base for a secure OS from the ground up, instead of playing whack-a-mole in an aging POSIXish design (see the IPC sketch below).
Sadly (though I understand the rationale), most approaches to building a full OS on top of L4 kernels in general are "run userspace Linux in an isolated container", which kind of ignores all the power underneath.
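To illustrate the capability-based IPC in question: a client can only talk to a server by invoking an endpoint capability it was explicitly granted (it cannot forge one), and the kernel transfers the message synchronously. A minimal sketch, assuming a non-MCS libsel4 build and a hypothetical endpoint capability `ep_cap` already installed in both tasks' CSpaces:

    #include <sel4/sel4.h>

    /* Client: send one word through the endpoint and block for the reply. */
    seL4_Word client_request(seL4_CPtr ep_cap, seL4_Word arg)
    {
        seL4_MessageInfo_t info = seL4_MessageInfo_new(0, 0, 0, 1); /* label 0, 1 word */
        seL4_SetMR(0, arg);       /* message register 0 carries the payload  */
        seL4_Call(ep_cap, info);  /* send and wait for the reply in one step */
        return seL4_GetMR(0);     /* the server's answer                     */
    }

    /* Server: receive requests forever; the badge tells us which (badged)
     * capability the client invoked, i.e. who is calling. */
    void server_loop(seL4_CPtr ep_cap)
    {
        for (;;) {
            seL4_Word badge;
            seL4_Recv(ep_cap, &badge);        /* block until a client calls */
            seL4_SetMR(0, seL4_GetMR(0) + 1); /* trivial service: increment */
            seL4_Reply(seL4_MessageInfo_new(0, 0, 0, 1));
        }
    }

Because the client's only authority is that one capability, confining or revoking it is just CSpace manipulation, which is what makes this a plausible base for a secure OS.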
A lot of the seL4 work was actually funded by several nations' taxpayers, and developed by the late Trustworthy Systems group at Data61 (formerly NICTA). Sadly, they were disbanded a few years back and had to rush to set up a standalone seL4 Foundation: https://microkerneldude.org/2020/04/07/the-sel4-foundation-w...
> seL4 is the result of big investments. Firstly by the Australian tax payers, who (through NICTA) funded its creation, and (through NICTA and then CSIRO’s Data61) continued supporting it. Over the past 6 years, US taxpayers (mostly through DARPA, but also other parts of the DoD as well as DHS) invested a lot in completing and extending the verification story, as well as deploying on real-world systems. And most recently, HENSOLDT Cyber funded verification of the RISC-V port of the kernel.
Australia seems to be quite good at software research (it also produced Mercury, which seems like a very underrated language), but somehow has zero products to show for it except Jira.
It could be nice, although I'd expect a government to (justifiably) require some "supply chain" type audits that lots of smaller open source projects wouldn't want to go for.
That's an interesting idea: there should be a technology-incubating grant program that supports FOSS, which the government is fundamentally dependent on, and which fosters domestic and global growth at least as much as NSF grant funding does.
Also in memoriam Jochen Liedtke [0], who started the L4 kernel project and rekindled research in microkernels by showing that they don't have to be slow ('slow' as in Mach's long IPC latencies).
[0] https://en.m.wikipedia.org/wiki/Jochen_Liedtke