The seL4 microkernel (github.com/sel4)
102 points by fgeorgy on Sept 18, 2015 | 12 comments



Their process was extremely elaborate. They wrote a Haskell prototype, which was then automatically translated into their specification/verification language (Isabelle/HOL). They then proved all the theorems they wanted (which, I can only guess, was very hard) and hand-translated the spec into C. Finally, they proved that the C code was a refinement of the spec (i.e., its behavior never strays outside what the spec allows) using a theorem prover and an SMT solver.
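To make "refinement" concrete, here's a toy sketch of my own (nothing to do with the actual seL4 proofs): an obviously-correct "spec" function, a hand-optimized "implementation", and an exhaustive check that the implementation never disagrees with the spec. seL4 establishes the analogous property over the whole kernel with machine-checked Isabelle/HOL proofs rather than testing, since real kernel state spaces can't be exhausted:

    /* Toy refinement check: spec vs. hand-written implementation.
       (Assumes arithmetic right shift of negative ints, as on common
       compilers; the exhaustive check would catch it if not.) */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Abstract spec: the obviously-correct definition. */
    static uint8_t spec_min(uint8_t a, uint8_t b) {
        return a < b ? a : b;
    }

    /* "Refined" implementation: a branchless version someone might
       hand-write for a kernel. Refinement asks: can this ever behave
       differently from the spec? */
    static uint8_t impl_min(uint8_t a, uint8_t b) {
        int32_t d = (int32_t)a - (int32_t)b;
        return (uint8_t)(b + (d & (d >> 31)));
    }

    int main(void) {
        /* The domain here is tiny, so "all inputs" is checkable by
           brute force; for a kernel it isn't, hence the proofs. */
        for (int a = 0; a <= 255; a++)
            for (int b = 0; b <= 255; b++)
                assert(impl_min((uint8_t)a, (uint8_t)b) ==
                       spec_min((uint8_t)a, (uint8_t)b));
        puts("impl_min matches spec_min on all 65536 inputs");
        return 0;
    }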

That last part (translation validation[1]) is what I found most interesting, or, at least, the most novel. Here's the paper: http://www.cl.cam.ac.uk/~mom22/pldi13.pdf

[1]: http://download.springer.com/static/pdf/209/chp%253A10.1007%...


The pinnacle of engineering for separation kernels. Well, in development and compilation assurance, at least. The work on the other EAL6-7 requirements is ongoing at NICTA; they should handle them easily. The best part is that, unlike most verified kernels, this one was open-sourced and can be used freely. We had hoped for that with NRL's TCX kernel, but it never showed up. NICTA and OK Labs delivered.

See the papers (below) for how to apply a separation kernel to various problems while minimizing the TCB of trusted components. Secure comms can be built on top of a Mikro-SINA-style VPN. Nizza via seL4 can also protect the Tor protocol or GPG in purpose-built appliances. The tech is also being integrated into GenodeOS. It helps to Google what commercial vendors did with similar tech, previously and currently, for inspiration: XTS-400 STOP OS, GEMSOS, LOCK, Boeing SNS Server, INTEGRITY-178B, LynxSecure, VxWorks MILS. Lots of schemes were composed on top of these to leverage their security in a variety of applications.

https://os.inf.tu-dresden.de/papers_ps/nizza.pdf

http://genode-labs.com/publications/mikro-sina-2005.pdf


> NICTA and OK Labs delivered.

But the process seems very difficult. A quick look at the overview paper shows that they rejected model checkers in favor of a theorem prover (which is probably orders of magnitude harder to use) because they feared state explosion. But did they actually try model checking, or simply reject the approach as infeasible? Model checkers can now use multi-core machines and clusters (which are now quite cheap) to verify real-world programs, or at least sub-programs. Amazon verifies its AWS services with a model checker. If model checkers could work for a task like this, they would lower the effort considerably.


It is a difficult process to prove things at this level. Model checkers, design-level proofs, design-by-contract, static analysis... these are all techniques with good tooling available that will get you 80-90% of the way there. The other 10-20% can be difficult or impossible. That's the part projects like seL4 aim at.

The advice I give on this stuff is to use tech such as this (or the above list) on the most critical portions: the underlying kernel, the key middleware (e.g., TCP, Protocol Buffers), the type system of the language, the compiler, long-lived software like SSL, and so on. The effort is made once and continues to pay off across numerous uses while being incrementally extended. Modularity, careful API design, and layering make modifications easier, as proven in Orange Book B3/A1 systems.

So, you build it and then keep reusing it. Older security kernels were used in VPNs, firewalls, databases, messaging systems (esp. mail guards), primitive desktops (esp. consoles), thin clients, and so on. New ones can be used similarly... and are, by Green Hills and Sirrix, IIRC. The middleware part is extremely important, though, as even MILS kernel vendors admit (and develop for). The best route is to combine: kernels like this; components written in safe subsets amenable to static analysis; middleware synthesized with security checks from specs and constraints; and CPU or board support packages whose implementation details are hidden behind interfaces with their own verification done. Pieces of that are already deployed or verified. We just need an integrated version that's also OSS.

Then the rest of the developers can use Rust, Ada, Eiffel, whatever on top of these. If there are runtimes, they should be similarly verified.


> The other 10-20% can be difficult or impossible.

Can you explain why a model checker would fall short of the last 10-20%?


Sorry for the late reply. Don't take the number literally: it was a play on something Dr. Schell used to say (example in [1]). He pointed out that the reason GEMSOS and the A1 class included formal verification was to get the last bit of assurance done. Tests could only demonstrate the presence of specific issues; the model checkers and provers were meant to prove their absence. Model checkers did this by searching through the state space and checking invariants, but were often limited by state explosion. So, provers had to be used to show that a given design implemented a given policy or invariants in all situations, and additionally that each refinement to a more concrete form preserved equivalence.
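To illustrate the difference in approach, here's a miniature explicit-state "model checker" I sketched (a toy, not a real tool): it exhaustively explores every reachable state of a two-process lock protocol and checks a mutual-exclusion invariant in each one. Every variable you add multiplies the state space, which is exactly the explosion problem; a prover instead shows the invariant holds in all states by deduction, without enumerating them:

    /* Miniature explicit-state model checking: enumerate every
       reachable state of a tiny two-process lock protocol and check
       an invariant in each one. */
    #include <stdbool.h>
    #include <stdio.h>

    enum pc { IDLE, WAITING, CRITICAL };           /* per-process program counter */
    struct state { enum pc pc[2]; bool locked; };  /* the whole system state */

    #define MAX_STATES 32
    static struct state seen[MAX_STATES];
    static int n_seen;

    static bool same(struct state a, struct state b) {
        return a.pc[0] == b.pc[0] && a.pc[1] == b.pc[1] && a.locked == b.locked;
    }

    static bool visited(struct state s) {
        for (int i = 0; i < n_seen; i++)
            if (same(seen[i], s)) return true;
        return false;
    }

    /* Depth-first search over the transition relation. */
    static void explore(struct state s) {
        if (visited(s)) return;
        seen[n_seen++] = s;

        /* The invariant: never both processes in the critical section. */
        if (s.pc[0] == CRITICAL && s.pc[1] == CRITICAL) {
            puts("INVARIANT VIOLATED: both processes critical");
            return;
        }
        /* Take every enabled transition of every process. */
        for (int p = 0; p < 2; p++) {
            struct state t = s;
            switch (s.pc[p]) {
            case IDLE:      /* start competing for the lock */
                t.pc[p] = WAITING; explore(t); break;
            case WAITING:   /* acquire the lock if free */
                if (!s.locked) { t.locked = true; t.pc[p] = CRITICAL; explore(t); }
                break;
            case CRITICAL:  /* release the lock */
                t.locked = false; t.pc[p] = IDLE; explore(t); break;
            }
        }
    }

    int main(void) {
        struct state init = { { IDLE, IDLE }, false };
        explore(init);
        printf("explored %d reachable states; invariant held in all\n", n_seen);
        return 0;
    }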

This is hard work. The lessons learned in the '70s and '80s showed you could only do it once you had good mathematical formulations of certain concepts. That explains why effort vs. defects found varied considerably among projects. The other thing all of them discovered was that the design or code had to be built with verification in mind: certain implementation choices were easy to modify, translate, etc., while others were difficult or impossible to verify. You could use model checkers, traditional methods, etc. to get pretty far in your assurance argument, maybe even 90-99%. The last bit, falling outside the reach of your tools due to their limitations or your implementation choices, might be an impossible gap to close.

So, the highest assurance argument took a combination of strong specs/models, proofs of invariants across all use cases, minimization of state/complexity in general, and easy-to-analyse implementations. You can actually see this in action if you look at the layering/composition of GEMSOS [1], PSOS [2], KSOS [3], and the VAX Security Kernel [4]. CompCert's compiler passes [5] applied the same rules of thumb.

I still say use lighter methods if you have to or can get away with it. A number of projects have used model checking with success at finding flaws and building a decent assurance argument. The problem is that they often require you to extrapolate from a small number of states, data values, or time steps to potentially huge amounts. Experience shows many things break down as load intensifies. So, I'd rather know it works in all states. And that's the last piece of the assurance case, which might be impossible to reach if you didn't design for it from the start. Hope this helps.

[1] http://www.iwia.org/2005/Schell2005.PDF

[2] http://www.csl.sri.com/users/neumann/psos.pdf

[3] https://c59951.ssl.cf2.rackcdn.com/5085-1255-perrine.pdf

[4] http://www.cse.psu.edu/~trj1/cse543-f06/papers/vax_vmm.pdf

[5] http://compcert.inria.fr/


seL4 is really cool; I was just looking at it and the other L4 family members/derivatives for µkernel ideas and trends.

More links:

- https://en.wikipedia.org/wiki/L4_microkernel_family#High_ass...

- seL4: Formal Verification of an OS Kernel (2009) [pdf paper] http://www.sigops.org/sosp/sosp09/papers/klein-sosp09.pdf

- l4v (seL4 proofs git repo) https://github.com/seL4/l4v


Has anyone seen open source user space for this? I find the kernel really interesting.

I'm a fan of the way they put device drivers in user space, for example. But I'm curious if anyone has actually written a NIC driver for it. I'd like to take it for a spin and write some sample code, but I'm not sure if I can boot this thing with a network stack for my experiments to use.


I was one of the early users of seL4 and am still a named author on the seL4 manual.

It's quite possible that things have changed/improved over the last few years, but when I was working on it, I found the programming environment very, very difficult. The capability system suffers from several circular dependencies (e.g., splitting up memory capabilities requires CNode capabilities, while making CNode capabilities requires memory capabilities). And the userspace environment is very bare: you have to manage your own page tables and your own capabilities. That said, it is also very powerful if you want to write fast, secure, embedded programs, which is what it was really designed for.
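For those who haven't seen the API, the circularity looks roughly like this (a sketch, not a bootable example; the slot arguments are hypothetical, and check the manual for the current signature). To retype untyped memory into any object you already need a CNode slot to receive the new capability, but a CNode is itself created by retyping untyped memory. The root task only escapes the cycle because the kernel hands it an initial CNode and untyped caps in its boot info:

    #include <sel4/sel4.h>

    /* Sketch: create a new 16-slot CNode from untyped memory.
       'untyped' would come from boot info; 'dest_slot' is a free
       slot we picked, i.e. both arguments are hypothetical here. */
    void make_cnode(seL4_CPtr untyped, seL4_Word dest_slot)
    {
        seL4_Error err = seL4_Untyped_Retype(
            untyped,                 /* untyped memory to carve up       */
            seL4_CapTableObject,     /* object type: a CNode             */
            4,                       /* size_bits: 2^4 = 16 slots        */
            seL4_CapInitThreadCNode, /* the CNode receiving the new cap:
                                        we already need one to make one  */
            0, 0,                    /* node_index/node_depth: use root  */
            dest_slot,               /* node_offset: destination slot    */
            1);                      /* number of objects to create      */
        (void)err;                   /* real code must check for errors  */
    }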

I'm pretty sure they wrote a device driver for a network card as part of one of the projects, but I don't know if it ever made it out the door. I'm sure you could ask the current team to see if they did and if they would make it open. You never know, you might get lucky.


I'm told you should look here: http://genode.org/


Yes, but it's not general purpose. If you look you will find examples, but they are all rather specific to particular hardware and usually also to the application running on top of the microkernel. I don't believe there is a Linux/Minix-style "interactive userland" available at this time, and Genode, while useful for beginning to build one, doesn't provide one "out of the box" on seL4.


"SEL4 is free. What does this mean for you?"

http://m.youtube.com/watch?v=lRndE7rSXiI



