The Muen Separation Kernel (muen.sk)
72 points by englishm on Aug 31, 2015 | 21 comments



"The Muen Separation Kernel is the world’s first Open Source microkernel that has been formally proven to contain no runtime errors at the source code level."

This doesn't really jibe with seL4's claim: "The world's first operating-system kernel with an end-to-end proof of implementation correctness and security enforcement is now open source." http://sel4.com/


The issue here is the spec: "no runtime errors at the source code level" is, as I understand it, a completely different specification from end-to-end implementation correctness.

In particular, the Muen Kernel report itself [1] explains:

By implementing the kernel in SPARK and proving the absence of runtime errors, we have shown that the kernel is free from exceptions. While these proofs provide some evidence to the correctness claim of the implementation, the application of these particular formal methods do not provide any assurances beyond the error free execution of the kernel. Proving functional properties such as the correspondence of the scheduler to a given formal specification is necessary to further raise the confidence in systems based on the Muen kernel.

In other words, we don't yet have formal confirmation that this thing actually does what we might expect it to; just that its execution cannot raise runtime errors.

[1] http://muen.codelabs.ch/muen-report.pdf
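
To make the distinction concrete, here is a minimal SPARK-style sketch (names and bounds are made up, nothing from Muen): the first function is only shown free of runtime errors, while the second also carries a postcondition saying what "next" is supposed to mean.

    --  Hypothetical sketch: "no runtime errors" vs. functional correctness
    --  for a toy round-robin "next slot" function.
    package Sched with SPARK_Mode is

       Max_Slot : constant := 100;
       subtype Slot is Natural range 0 .. Max_Slot;

       --  Absence-of-runtime-errors proof: the tool only has to show the
       --  arithmetic cannot overflow and the result stays inside Slot.
       function Next_Slot (S : Slot) return Slot is
         (if S = Max_Slot then 0 else S + 1);

       --  Functional correctness needs an explicit specification. Without
       --  the postcondition below, a wrong but exception-free body (say,
       --  one that always returns 0) would still pass the runtime-error proof.
       function Next_Slot_Checked (S : Slot) return Slot is
         (if S = Max_Slot then 0 else S + 1)
       with
         Post => (if S = Max_Slot then Next_Slot_Checked'Result = 0
                  else Next_Slot_Checked'Result = S + 1);

    end Sched;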


But in the other direction, it seems that end-to-end implementation correctness implies absence of run-time errors. The seL4 authors write[1]:

> IMPLICATIONS [...] a functional correctness proof already has interesting implications for security. If the assumptions listed in Sect. 5.5 are true, then in seL4 there will be: No code injection attacks [...] No buffer overflows [...] No NULL pointer access [...] No ill-typed pointer access [...] No memory leaks [...] No non-termination [...] No arithmetic or other exceptions [...] No un-checked user arguments [...] Aligned objects [...] Wellformed data structures [...]

And this was already done in 2009. So I don't think Muen is the first microkernel to prove absence of run-time errors. (Maybe they claim that Muen is the first open source run-time-error-free microkernel in the sense that they did the verification after seL4, but before seL4 was open-sourced?)

[1] http://ssrg.nicta.com.au/publications/nictaabstracts/3783.pd...


I would guess so: seL4 was open-sourced in August 2014 and the first public preview of Muen was December 2013, per a quick check with Google.

On the other hand, I've read that if you want to do serious SPARK/Ada work, you've got to buy AdaCore's tools or benefit from their academic program. seL4's verification down to the binary was done using generic GCC (a clever translation-validation approach connects the higher-level proofs to what the compiler actually generated in the binary; they first tried CompCert, a verified C compiler that is free for non-commercial use, but I gather that suffered from an impedance mismatch).


Misleading. AdaCore provides Ada and SPARK IDEs in full under the GPL if you release your software under the GPL:

http://libre.adacore.com/comparisonchart/

Lots of free guides, tips, libraries, etc. You get way better tools in terms of testing, inspection, analysis, etc. if you buy them, no doubt. But the base platform, good enough for Muen or IRONSIDES DNS, is free. You can replicate their work easily, plus create safer variants of established software such as Nginx.

Whereas seL4 relies on Haskell, Isabelle, several proof frameworks/tools, C, and GCC. They achieved a lot more but with a lot more tools, time, and expertise. Those of us that were interested couldn't even check the verification stuff, especially the C-related tech, until late 2014. Using them would be... non-trivial to say the least. :)

Good that they open-sourced as that will allow (is allowing) others to build on the work. I know it's already getting integrated into Genode OS.


As I was told in an HN discussion, the non-educational "for GPL" version doesn't supply the same runtime, and the runtime it does supply is not very good. Maybe it's OK, but with all those restrictions and limitations it just didn't sound like an ecosystem I want to mess with. Plus I prefer to be able to develop and release software with less restrictive open source licenses.

WRT seL4 vs. Muen, I wouldn't be surprised if seL4 does less for you: it's only 9,000 lines of C code, and some of that is artifacts of the manual Haskell -> C translation.


Ahhhhhh. Wasn't aware of that. Will have to check with them. Muen uses the zero-runtime profile. Not sure about IRONSIDES. As for projects that do use a runtime, if that's the case, I'd recommend some FOSS people create a good alternative runtime. Then they get the language, toolset, etc. benefits without that issue; recreating those would be much more difficult.
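
For anyone wondering what "zero-runtime" tends to mean in practice: you forbid the language features that would pull in runtime support. A rough sketch with standard and GNAT-defined configuration pragmas (my guess at a typical set, not Muen's actual restrictions):

    --  Hypothetical configuration pragmas in the spirit of a runtime-less
    --  profile; not taken from Muen's sources.
    pragma Restrictions (No_Allocators);             --  no allocators / heap use
    pragma Restrictions (No_Recursion);
    pragma Restrictions (No_Exception_Propagation);  --  GNAT-defined
    pragma Restrictions (No_Secondary_Stack);        --  GNAT-defined
    pragma Restrictions (No_Tasking);                --  GNAT-defined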

"WRT seL4 vs. Muen, I wouldn't be surprised if seL4 does less for you, it's only 9,000 lines of C code, and some of it is artifacts from the manual Haskell -> C translation."

That's not what I meant. seL4 is a separation kernel: they're supposed to do almost nothing. The thing is that once a modification occurs, it can invalidate the whole security claim. So, the person modifying it needs to be able to correctly use whatever its assurance depended on. That will be much easier for Muen given a domain expert only needs to learn Ada/SPARK vs all that went into seL4.

Now, Muen certainly has less assurance in that it doesn't have formal models of design, security policy, etc. Most FOSS types won't do all that, though. So, just requiring a language that knocks out errors and maybe learning Design-by-Contract is a nice alternative that might get more people involved. That it's Ada... might get fewer people involved. Who knows. (shrugs)
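
By Design-by-Contract I mean the Ada 2012 Pre/Post aspects. A made-up toy example of the style a contributor would have to pick up (nothing Muen-specific):

    --  Hypothetical bounded stack with contracts; GNATprove (or runtime
    --  checks) enforces Pre on callers and Post on the implementation.
    package Bounded_Stack with SPARK_Mode is

       Capacity : constant := 16;

       type Item_Array is array (1 .. Capacity) of Integer;

       type Stack is record
          Items : Item_Array := (others => 0);
          Top   : Natural range 0 .. Capacity := 0;
       end record;

       function Count (S : Stack) return Natural is (S.Top);

       procedure Push (S : in out Stack; Item : Integer) with
         Pre  => Count (S) < Capacity,           --  caller's obligation
         Post => Count (S) = Count (S)'Old + 1   --  implementation's obligation
                   and then S.Items (Count (S)) = Item;

    end Bounded_Stack;

    package body Bounded_Stack with SPARK_Mode is

       procedure Push (S : in out Stack; Item : Integer) is
       begin
          S.Top           := S.Top + 1;
          S.Items (S.Top) := Item;
       end Push;

    end Bounded_Stack;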


Exactly. It's equivalent to saying that if a strict FP program compiles it is "bug free", which of course says little about whether the code is 'correct'.



It's interesting work. It's what my niche would call medium assurance software: a regular development approach that selectively uses higher-assurance tech or practices to produce stuff with fewer defects or risks. Aside from issues in x86 or the security policy, the other big risks it will face are how it handles error states, how much apps can affect its internals with function calls (static helps), covert channels (esp. timing), and what effect the compiler has on it. I recall IRONSIDES was "provably immune" to single-packet DoS and correct w.r.t. its specification thanks to SPARK... until a compiler bug decided differently. ;) Ada really needs a compiler like CompCert C or FLINT ML for the safety/security use case.

Good news for people wanting to try it out is that it's been integrated into Genode OS, per a recent newsbite:

http://genode.org/documentation/release-notes/15.08#Genode_o...

They claim it's also ready for day-to-day use, albeit still early, so problems will happen. Patient early adopters and contributors are the target audience. Have fun! :)


I agree that the toolchain is large, unwieldy and insufficiently proven itself.

However, given that it competes with other software compiled with random (usually GCC) compiler versions (what, do you think all seL4 drivers that eventually set up DMA will be verified?), the compiler issue is not a problem unique to Muen.

Xen has the same problem (and by extension Qubes), and even seL4 has it for all purposes that exceed running the kernel and sigma0.

The big innovation here is probably the easy general availability of a practical Separation Kernel system. And the other (not quite as big, but still significant) innovation is having a Separation Kernel that you can fix yourself, if anything looks off to you: Many of the commercial offers in that space are held together by lots of duct tape, it's just hidden by means of binary-only releases, certification procedures and sales reps.


"However, given that it competes with other software compiled with random (usually GCC) compiler versions (what, do you think all seL4 drivers that eventually set up DMA will be verified?), the compiler issue is not a problem unique to Muen."

There's how good it is relative to what it competes with, and how well it meets its goals. I'm talking about potential issues with the latter. And drivers have nothing to do with what I said about the kernel; there are other methods/tech for handling them.

"the compiler issue is not a problem unique to Muen"

I have a certified compiler for C; most separation kernels and microkernels use that language. Some use a C++ subset which can be easily converted to C. I don't have one for Ada or SPARK. So, among open kernels, the problem is indeed unique to Muen. AdaCore has commercial tech to handle it in a labor-intensive way, but I'd also consider making an equivalent C system side-by-side. Then one can use SPARK proofs, C verification methods (e.g. Frama-C, Astrée), and certified translation to object code.

Meanwhile, that the very-complex compiler can destroy the security argument is worth remembering and possibly testing/inspecting for. I'm working on the concept for an easily-built, robust Ada compiler. Once I'm done, I'll pitch it to the AdaCore people to see if they'll do it or assist an external project in doing it.

"Xen has the same problem (and by extension Qubes)"

Yes, it's why I call it low assurance. I told QubesOS team, "secure" appliance vendors, etc about alternatives with low TCB and strong security. A glimpse at the market will show how many people bought that. ;)

"even seL4 has it for all purposes that exceed running the kernel and sigma0"

They admit it. They also have many parallel projects ongoing at NICTA to fix that. They solved the compilation part recently in an unexpected way. Others they're currently working on are here:

https://ssrg.nicta.com.au/projects/TS/

The seL4 team and Gernot Heiser were clearer about the assumptions and limitations of their tech than most I've read. There were certainly no attempts to mislead. Refreshing compared to the marketing crap I had to deal with from the likes of Green Hills.

"The big innovation here is probably the easy general availability of a practical Separation Kernel system. And the other (not quite as big, but still significant) innovation is having a Separation Kernel that you can fix yourself, if anything looks off to you: "

I agree. I said the same in another comment here to 'hga.' It's why I promote and bring attention to Muen elsewhere so more people review, try and contribute to it. Medium assurance is certainly better than low and might get replaced with high given time.


This relies on Intel's VT-d, but there are some issues with VT-d:

http://invisiblethingslab.com/resources/2011/Software%20Atta...


x86 has some really weird ISA quirks that can lead to exceptions and, frequently, security holes. I wonder how good their ISA model is.


Exactly. They'd need to model the correct states, the known error states, and the likely error states (e.g. errata-inspired) of the processor. After validating that model, they'd need to model, test, and implement their kernel operations against it. Only then can we believe its implementation will be bug-free.

At least until another flaw is discovered on Intel's side. ;)

Note: This was the approach taken in the FM9001 and Verisoft bottom-up verification programs. Those were for correctness, whereas Rockwell Collins's AAMP7G processor also verified security (e.g. isolation). Personally, I don't trust anything on x86 as it's too flawed. SPARC would be my choice given that it's cleaner, an open standard, and has OSS implementations (e.g. Gaisler LEON, Oracle T1/T2).
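
To illustrate what I mean by verifying against a hardware model, a deliberately toy sketch (not any real ISA model): enumerate the machine conditions you expect, including error and errata-like states, then prove that kernel transitions never land in a state you haven't accounted for.

    --  Toy sketch only: a real model would cover the ISA, faults, and errata.
    package CPU_Model with SPARK_Mode is

       type Machine_State is
         (Running, Fault_Handled, Known_Errata, Undefined);

       --  A kernel operation modelled as a pure transition over the state;
       --  the postcondition says it never introduces an Undefined state.
       function Handle_Event (S : Machine_State) return Machine_State is
         (case S is
            when Running       => Running,
            when Fault_Handled => Running,
            when Known_Errata  => Fault_Handled,
            when Undefined     => Undefined)
       with
         Post => (Handle_Event'Result /= Undefined or else S = Undefined);

    end CPU_Model;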


On extremely brief inspection, they avoid a bunch of the problems by not actually running user mode (CPL3) code. That means no SYSRET garbage, no weird IRET states, no SWAPGS, etc.

On the other hand, using virtualization for everything means that you pay a full host/guest switch cost every time you context switch, and that's much more expensive than SYSCALL + SYSRET.

Their exception handler asm looks highly questionable, but maybe they really never have exceptions :)


The point was that they both created models of hardware, software, error states, and so on to ensure correctness of the software in every state. They even designed a safer C variant and compiler to attempt to knock out that risk. Past that, they used microkernels and modular software (especially for privileged code) to further reduce risk and ease analysis.

So, the takeaway was that whatever code you're looking at better represent what's actually going to happen on the machine when it works and when it doesn't. Bottom-up verification methodologies like the above help to achieve that. Merely coding what you think works and proving that code has no common defects isn't enough.


the site is marked as high risk >.> http://www.mcafee.com/threat-intelligence/domain/default.asp...

just sayin' :) not trying to imply anythin'. apart from that my gateway doesn't let me on it :P


Same thing happened to me on the corporate network.

Not sure why this happens. I would imagine the domain was previously owned by someone who got it onto the banned list? Or perhaps there's a blanket ban for certain registrars or something like that?


It's probably because it's a "security related" product, which caused some moronic knuckle dragger to classify it as a "hacking tool." This happens a lot for things related to networking, security, virtualization, etc.

Corporate IT is a low circle of hell.


And here I thought this was going to be a new kernel for machine learning :)



