Writing an OS in Rust: Advanced Paging (phil-opp.com)
336 points by ingve on Jan 28, 2019 | 141 comments



I started to create the counterpart, an x86_64 emulator written in Rust: https://github.com/fotcorn/x86emu

Disclaimer: Almost no comments, unidiomatic Rust, tons of hacks. It can currently run the decompression code of Linux and some initial startup code. Not much more than instruction decoding & execution and basic page handling is implemented; the rest of the hardware is missing.

Sadly, I do not really have time right now to work on it. I am still torn if Rust helps more than it makes everything more complicated, but recent Rust versions should have made some of the things I wrangled with much easier.


> I am still torn if Rust helps more than it makes everything more complicated

Do you have an example in your code of what is complicated? I took a quick look, and didn’t see anything crazy.


For someone who is completely not into OS programming but has heard a lot about safety and Rust etc. etc.

Assuming the rust safety lives up to the type.

Is this (or any other project) likely to ever become a "real", useful OS? What about including Rust modules in the Linux kernel? What are the barriers?


Is someone likely to build a Rust OS that is real and useful? Yes. Projects are already in progress, and even if they all shut down, I'm sure more will exist in the future.

Is that Rust OS going to experience "success" by traditional metrics, such as taking a significant chunk of Linux's marketshare away, and becoming a viable thing you can run on some real piece of hardware for some real purpose? I won't say "no" but it will be at least a decade before that will happen. Not because Rust is bad, or the resulting OS will be bad, but all the entrenched advantages existing OSes have are just staggeringly huge. If a perfect OS jewel of perfection appeared today, but with no other support for it (no software, drivers, applications that can only run on it, etc.) it would take a long, long time for it to get any traction.


The key to getting there (aside from the driver issue), I think, would be creating an environment that is simple, easy to work with, and friendly to creating new tools. The faster you can spin up new tooling, and the more you want to, the more willing people will be to move to the new system. Then you provide VMs, emulators, and/or compatibility environments for running software from "legacy" systems.

Do not attempt to make a new POSIX system, because we already have giant, entrenched, mature POSIX OSes and you'll bring nothing new or interesting enough to the table to warrant switching while being held back by the creaking old standard.

I'd be willing to make a lot of sacrifices to move to an OS that met my personal list of wants.


Maybe, but if you don't have a POSIX API you'll cut yourself off from a lot of applications. Even Windows is gradually admitting defeat with WSL.

You can only get adoption of a non-POSIX OS if it offers something really unique to the user, or is in a very different market space.


You can do it in a VM or compatibility layer. Your OS can support "personalities" like Windows uses to implement WSL, but with containers. So now you can have multiple POSIX containers on your non-POSIX system.

You should encourage people to make native software using the new paradigms and abstractions of the OS, built from the ground up with decades of hindsight, rather than encouraging crappy ports.

> You can only get adoption of a non-POSIX OS if it offers something really unique to the user

At this point, just not being built on 50 year old crappy standards would be pretty unique. I have my own personal wish-list of features that would lure me in even if the only hardware supported was an RPi.


You still end up in the OS/2 situation - even if non-native apps didn't always run great, had much worse memory usage, didn't feel native, etc... they were still good enough that nearly nobody (comparatively) wrote native OS/2 apps. It's not impossible to imagine an alternate past where OS/2 had worse DOS/Windows compatibility and ended up more successful! (In actuality, I think it'd probably have done even worse — showing that the strategic options to replace the foundation of an entrenched ecosystem are "bad" vs. "worse".)

I'd like to see other paradigms explored, too, but it seems unfortunately hard to get there from here.


Yeah, it is hard. So what? So is just about everything worth doing.

One thing is for certain: you will not replace the current players by cloning them.


Isn't that how Linux replaced most of the UNIX players, by cloning them?


I think that had more to do with the important innovation of being free and open.


There were plenty of free and open operating systems. Linux and FreeBSD had the advantage of also being similar, if not the same as, what they were replacing, as well as allowing a lot of the software that was designed for UNIX to also work with little or no change. In other words, they cloned the look, feel and compatibility (well, FreeBSD didn't really clone it, it IS it).


IIRC FreeBSD was legally nebulous at the time, and Minix couldn't be modified and redistributed. I don't recall any other significant players.


AFAIK, any possible nebulous legal situation with FreeBSD should have been resolved by late 1994[1], which is extremely early in Linux's lifetime if we're talking about supplanting major operating systems. I think you're right that there weren't a lot though.

That said, I can't imagine Linux actually having gone very far if it wasn't a UNIX clone. I think it would have been a real long uphill battle and it would have gotten handily beat by whatever open source UNIX clone came along next, for the pure reason that it's much easier to deal with POSIX compliance (and all the software it brings that actually makes it worthwhile to run the OS) when trying to copy a system that already did it and worked out the kinks.

I mean, if someone came out with a free (maybe open source) version of Windows at that time which was legal, we could easily be living in a world where that's the dominant free OS.

I guess what I'm saying is that being vastly better on one metric (in this case, free) doesn't necessarily mean you'll take over a market (because a free operating system that can't run any software isn't very useful), and just being a clone won't either, but being a clone that's generally as good on most metrics and vastly better on a few others (e.g. a UNIX which is also free and open source, instead of just some other UNIX) may.

1: https://en.wikipedia.org/wiki/FreeBSD#Lawsuit


That said, hiding the POSIX API behind a compatibility layer like WSL can have some major advantages because it means that updates to either side can be done independently. That would mean you can have the unique offering and the compatibility. You could then also look at turning WINE into a compatibility layer for the same functions and have a lot of other software work similarly.


You pretty much have to have a POSIX compatibility layer just to support stuff without resorting to VMs.

That said, there is a lot of room to innovate in OS/userspace boundaries that don't rely on POSIX APIs--see epoll/kqueue/iocp for one place where this innovation already occurred. I'd love to see better alternatives to POSIX signals, for example. So an OS that treats POSIX as a mere compatibility layer and not the primary driver of its design could be welcome.


See ATC'13 How to Run POSIX Apps in a Minimal Picoprocess (https://www.usenix.org/system/files/conference/atc13/atc13-h...). I would say it's a POSIX emulator OS.


This was the research that led to WSL.


Windows is not admitting defeat as such; rather, it's taking advantage of those unhappy Apple customers who just want a POSIX shell and don't care about writing Objective-C and Swift based software, the actual customer base Apple cares about.


WSL is more of an emulator though - it's isolated in its own box, and the rest of Windows doesn't see the POSIX view of the world. If you're writing a library, and you want it to be used on Win32, you can't assume POSIX.


Most of the action on desktop takes place in a browser though. So I think you could go a long way by getting to the point where you can run Firefox or Chrome (or some other webkit etc. based browser). Isn't this sort of what ChromeOS does already?


The whole point of a desktop PC is that you can produce, not only consume. Otherwise you might as well buy an iPad and browse the web/watch YouTube.


> Most of the action on desktop takes place in a browser though.

Then who cares about the OS? Why even bother. Just use ChromeOS.


The obvious answer is "because ChromeOS is spyware."

But the really fun possibility is having an OS where "Web" and "Posix" are equally-placed personalities that the OS ran, not only preventing rogue web pages from touching the posix system, but also preventing rogue posix processes from touching the browser (except where the end-user intentionally joins them through the system-provided Open File dialog or by opening the case and removing the debug pin to unlock cross-VM debugging). ChromeOS sort of implements that today, by running custom Linux environments in containers, but I'd rather trust Xen than Docker as the underlying isolation layer.


This is about the most boring thing I've ever heard. This is trivial to accomplish today, hell, QubesOS does the whole concept much better.


Fully agree.

I think that now is the time for not only new system level programming languages, but now is also the time for new OS implementations using said programming languages. That being said, "now" actually refers to a period of time that will be anywhere from 5 to 25 years long (and you might want to weight that towards the long end).


> a perfect OS jewel of perfection appeared today, but with no other support for it...

Sounds like seL4 to be honest, and yeah, it's taking a while to see adoption.


>"Yes. Projects are already in progress, and even if they all shut down,"

Might you have any recommendations of specific ones worth paying attention to right now?


The most advanced one is Redox https://www.redox-os.org/


Thanks for the link. Interesting: it looks like Redox is writing their own filesystem, or at least reimplementing ZFS:

https://github.com/redox-os/tfs

I wonder how that affects the time horizon for maturity. I've always just assumed that a filesystem takes a good 10 years to be really stable. Maybe my assumptions are way off? Or does the fact that it's a reimplementation make that matter much less? Didn't it take BTRFS around this length of time to be considered production stable?


TFS is very cool. Too sad that ticki has stopped working on it: http://ticki.github.io/blog/why_im_leaving_open_source/

Due to this, I don't know whether it will be ready at all.


Redox OS is the most mature Rust-based operating system that I know of. Exactly at which point you can call a project such as this useful or real is of course debatable, but Redox is advanced enough to run a GUI.

The Linux kernel is very unlikely to accept anything other than C. It would further complicate their build system, and raise the (already high) barrier to entry for developers. Rust also does not have the breadth of architecture support that the Linux kernel needs.


I think none of your points are really set in stone.

* The kernel already contains some C++ which (if you ask C programmers) is not C. And many C programmers are more likely to accept Rust than having to deal with C++.

* AFAIK there is no gcc front-end for Rust yet, but LLVM tools can compile the kernel, too. When Rust stabilizes, gcc support may come, too.

* The barrier to entry could be lowered for Rust programmers if Rust is employed in some specific modules. The resulting code could be clearer. Not everything in the kernel is pointer chasing and bit shuffling.

* The number of architectures that Rust supports is growing daily.


> The [Linux] kernel already contains some C++

Are you sure? I don't think the kernel even compiles with a C++ compiler [1]. Also, I just ran cloc on my linux repo and didn't get any actual C++ code.

Perhaps some external kernel modules are written in C++, but that seems like a bad idea too [2].

[1] https://www.phoronix.com/scan.php?page=news_item&px=45-Linux... [2] https://www.threatstack.com/blog/c-in-the-linux-kernel


That [1] was what I remembered. They were preparing for C++. But obviously it didn't get as far as actually including C++ code in the mainline kernel. Maybe it was a preparation in order to allow experimenting with C++ in the kernel.


Where in the (mainline) kernel is there C++ code?


I could swear that I read that they start allowing some C++ contributions. But with a search in the kernel source I could not come up with any evidence. If there is any C++ code in the kernel they hid it well.


You may be thinking of GCC


Pretty sure the Linux kernel is exclusively C. Actually the major OS kernels are pretty much all C - Windows, FreeBSD, macOS and other BSD derivatives. The GNU software and utils could be written in C++.


MacOS is not a BSD derivative when it comes to kernel.

And I'm pretty sure Windows has some C++ in kernel space. It might be limited (no exceptions etc), but that's another matter.


In OS programming, there is a strong bias towards favoring C as the only serious programming language. Rust is one of the few programming languages that has the potential to break through this barrier, especially because the ability to support bare-metal programming is one of its design goals.

But the key word there is "potential"--Rust, as it stands right now, is not ready to be used in production OSes. Features such as inline assembly or other developments for #[no_std] support need to be worked on and supported in non-nightly modes for any production OS to take notice, for example.


I feel like if C++ didn't break through this barrier then I wouldn't hold my breath for Rust.


I don't follow that argument.

C++ didn't break through because the benefits didn't outweigh the drawbacks compared to straight C (meaning you can implement C++ like concepts in straight C, and not have to deal with a lot of baggage C++ brings).

Rust is a whole other kettle of fish. It has more or less been proven by this point that people cannot write memory safe C, even experts in the language are still introducing memory-unsafe bugs in 2018. Therefore Rust is bringing features the C language cannot offer.

Which isn't to say I know that Rust will be "successful." I don't know that. But I do know it is worth TRYING to see if it can be successful, and C++'s lack of success isn't really a counter-argument.

I'm still sad that Microsoft's Singularity OS[0] never got further developed, interesting concept.

[0] https://en.wikipedia.org/wiki/Singularity_(operating_system)


> ... It has more or less been proven by this point that people cannot write memory safe C, ...

Or concurrency safe. There aren't many languages that can help with this class of bugs. Like race conditions. (Someone on HN said Pony is another language that has concurrency/thread safety guarantees.)


Bias is everything. Joe Duffy briefly mentioned at his RustConf keynote that many at WinDev did not believe Midori was possible, even when it was shown running in front of them.


I think this would be the link, if anyone is interested: https://www.youtube.com/watch?v=EVm938gMWl0


Not to shill but I'm working on a Singularity/Midori-inspired Rust JIT-compiled operating system. If that ever gets anywhere I'll post a link/Show HN.


Yeah, I was sad about Singularity too. As for C++, yes, it doesn't bring an absolute guarantee of memory safety, but C is so error prone that it's ridiculous to suggest anything below an absolute 100% guarantee wouldn't be a compelling improvement. C++ was a gigantic improvement in the safety of C (i.e. such that following good coding practices actually gives you legitimate confidence your code is memory-safe) while still having all of its flexibilities, and yet it still got shunned, so I'm unconvinced that a nitpicky language like Rust makes all the difference just because of its formal verification. If kernel folks wanted compiler help with memory safety they could've embraced C++ and gotten most of the way there.


C++ can bring very little baggage. That's why it's widely used in embedded environments.

And I'd wager that C++ can write far better optimized abstractions than a human in C ever could. In most cases.

Writing memory-safe C is clearly possible - many other environments are written in C. It comes down to using a safe subset.


> Writing memory-safe C is clearly possible - many other environments are written in C

Show me one major piece of C software that does not have frequent memory-safety bugs and I might believe that.


djbdns/qmail might qualify here.


qmail had LP64 memory corruption bugs.


One ever, right? I wouldn't count that as "frequent" - lots of programs in memory-safe languages have more/worse bugs than qmail.

Of course, the existence of one significant not-too-vulnerable C program doesn't really prove a lot either way...


I thought Georgi Guninski found several.


It did, just not on pure UNIX clones.

Symbian, BeOS, Haiku, Genode, ARM mbed, macOS IO Kit, Android (especially after Treble), Fuchsia, Windows, IBM mainframes (old PL/S code has been partially replaced with C++) ...


I guess by OS programming I assumed they meant the kernel. Fuchsia is hardly an established OS at the moment so I don't really compare against it, and for Windows, MS's documentation has repeatedly discouraged C++. I'm sure some use C++ in the kernel anyway but I'm not under the impression it's generally accepted.


All the examples I gave, except for Fuchsia, are examples of C++ in production for kernel code.

Windows has been slowly moving into C++ since Windows 8, as Microsoft considers C++ the future of systems programming in Windows.

https://docs.microsoft.com/en-us/cpp/build/reference/kernel-...

https://www.reddit.com/r/cpp/comments/4oruo1/windows_10_code...


Wow that's good to hear, thanks. (I wasn't familiar with the other OSes so that's why I didn't comment on them.)



Using C++ for kernel development means you first have to drop most of the useful parts of C++ and take usability hits with almost all the other features.

Rust and C have an advantage because both are fairly close to the metal, Rust in no_std mode isn't much different than C but the compiler is angry at you all the time.


>Using C++ for kernel development means you first have to drop most of the useful parts of C++ and take usability hits with almost all the other features.

This is commonly accepted wisdom, but it's not entirely true. (It's only a little bit true.)

The kernel is a freestanding environment where big chunks of the standard library are missing. However, the most useful parts, like unique_ptr<T>, can be reimplemented as needed.

The kernel absolutely can support RTTI and exceptions. Porting libcxxrt is relatively simple, for example, even if implementing the entire Itanium C++ ABI is a big task. Though, the kernel should still avoid throwing and catching exceptions in sensitive code like interrupt handlers. This is not radically different from other C++ projects. The use of exceptions is typically discouraged, for example, in tight inner loops.

It is true that enabling RTTI and exceptions causes code size to explode. This is a valid concern. While memory is cheap, cache misses are not cheap. Mark functions and methods with "noexcept" as needed where this becomes a problem.

Finally, it's absolutely true that exceptions make it impossible for the kernel to make solid real-time guarantees. Fortunately, most operating systems only have soft real-time constraints, if that.


> Using C++ for kernel development means you first have to drop most of the useful parts of C++ and take usability hits with almost all the other features.

What? No... the most useful parts of C++ are its compile-time facilities... templates and type utilities and RAII/smart pointers/etc. (I could go on and on) all of which are still there. You'd lose some runtime stuff like the ability to throw exceptions and some RTTI, which some people already advise each other to avoid in user mode too (not to say I agree, but they're still finding a lot of use in C++ without this). If losing those is equivalent to losing most of C++ for you then you're not really taking advantage of C++ to begin with.


> Rust in no_std mode isn't much different than C

This isn't really true; you still have memory safety, and generics and a sane stdlib.


Memory safety flies out the window when you write a kernel and the stdlib isn't available in no_std mode. You only get a very limited subset in the core lib.

Memory safety in kernels is hard because you might have to do things like switch stack or even address space and the rust compiler handles such actions rather poorly. Whether or not writing to the rsp register will crash the kernel or successfully pivot the stack is guesswork at best and only works reliably if you code it entirely in inline assembly.


Look at the embedded Rust stuff like RTFM - they find clever ways to build safe abstractions over unsafe primitives. If you can get 95% memory safe device drivers, that's already a huge win.
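
A minimal sketch of that pattern, with a made-up MMIO register address (the addresses and names are illustrative, not from any real driver):

    use core::ptr::write_volatile;

    // Hypothetical UART data register; illustrative only.
    const UART_DATA: *mut u8 = 0x1000_0000 as *mut u8;

    pub struct Uart { _private: () }

    impl Uart {
        /// The unsafe contract lives in exactly one place: the caller promises
        /// there is only one `Uart` value touching this device.
        pub unsafe fn take() -> Uart {
            Uart { _private: () }
        }

        /// Safe to call once the contract above has been upheld.
        pub fn write_byte(&mut self, byte: u8) {
            unsafe { write_volatile(UART_DATA, byte) }
        }
    }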


Correct, and in my own project I want to achieve 100% memory safety for non-primitive drivers (disks, GPU, Ethernet, USB devices) and 100% memory safety for any non-privileged code of primitive drivers (i.e., PCIe bus, SATA, USB, etc.).

Any privileged code can then be written with those 95% and that's the only weak point left.


Memory safety in kernels is also welcomed and is very present in mainframe OSes and high integrity systems where human lives are at stake.

Yes, there are a couple of areas where unsafety is required, there isn't any other way.

However 100% of the kernel source code doesn't need to be unsafe.


There are two types of memory safety that you talk about there, I think.

One is the memory safety that the rust compiler wants to hold and the other is the memory safety you program into a system.

Those can be entirely different with incompatible guarantees, depending on the mission statement of the software.

Some things a kernel must do are unsafe as far as the Rust compiler is concerned. Dereferencing a null pointer, for example, is basically a crash in most operating systems' userland.

In kernel space a null pointer may be valid memory that points to real data you need to use.

Setting the page tables is unsafe as well; it's almost impossible to statically guarantee that the instructions that follow will still be able to run. You have to validate the page tables you're about to load at runtime, and even then the result could be a process crashing.
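
To make that concrete, a minimal sketch of the operation I mean; it assumes the asm! macro of a current toolchain, and nothing about the new tables is checkable by the compiler:

    use core::arch::asm;

    /// Install a new level 4 page table by writing its physical address to CR3.
    /// The compiler cannot verify that the new tables still map the currently
    /// executing code and stack, so a mistake faults on the very next fetch.
    unsafe fn load_page_table(p4_phys_addr: u64) {
        asm!("mov cr3, {}", in(reg) p4_phys_addr, options(nostack));
    }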


> Rust and C have an advantage because both are fairly close to the metal, Rust in no_std mode isn't much different than C but the compiler is angry at you all the time.

Those things the compiler is angry about? Those are things that would be runtime errors in C.

`rustc` is pedantic because it needs to be in order to provide the assurances it gives. There is a subset of "valid" Rust code that `rustc` rejects because it isn't smart enough to understand it would be safe, but I would guess that those cases are less common than the cases of "seemingly valid" Rust code that is in reality problematic if it were accepted.

Beyond that, I would like to hear what errors have made you feel like the compiler "is angry" at you instead of feeling the compiler "is trying to help me". We spend an inordinate amount of effort on 1) accepting code that we should accept by improving ergonomics, 2) providing extensive suggestions when "seemingly correct" code hits the wall of `rustc`'s understanding, language design or things addressable by ergonomics, and 3) providing as good explanations as possible for all other cases. Because of this, I believe the experience of a newcomer today is much better than even a year ago, even though it can still be improved.


>Those things the compiler is angry about? Those are things that would be runtime errors in C.

I'm very very aware of that though this is not always true.

For example, Rust doesn't have a sane way to have a reentrant mutex without including std, which is bad because for interrupts in a kernel you need to either have very fine-grained locking and fallbacks or rely entirely on lockless operations. Interrupts can occur on top of each other, in which case things behave a lot worse than simply being reentrant.

rustc yells at me a lot about the solutions this requires, because the way I solve it violates Rust's ownership rules the hard way in order to make the code simpler and more understandable.

Other times it's when, for some bizarre reason, Rust wants the code to implement the Sync and Send traits despite the kernel running alone on the core (there is no threading in that kernel), and I would love to be able to tell rustc to just shut up about this and simply ignore Send, possibly even Sync, on a module level.


You shouldn't have to worry about Send and Sync if you aren't using generic types that require that their type parameters be Send and Sync. What such types are you using?


Various no_std libs use Send+Sync in places which makes things fun.

Another would be static mutable variables, which are perfectly safe in a single-threaded environment, doubly so if you map that memory on a per-core basis to ensure each kernel has a unique variable value. Especially if you need lazy_static, there aren't many options other than wrapping the data in a mutex for no good reason.
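
A sketch of the two options I mean, with made-up names and assuming the spin crate for the lock:

    use spin::Mutex;

    struct FrameAllocator { next_free: u64 }

    // What the ownership rules push you toward: a lock, even on a single core.
    static FRAME_ALLOCATOR: Mutex<Option<FrameAllocator>> = Mutex::new(None);

    // The `static mut` alternative: no lock, but every access is `unsafe` and
    // ruling out reentrance (an interrupt handler touching it mid-update) is
    // entirely on the programmer.
    static mut FRAME_ALLOCATOR_UNSAFE: Option<FrameAllocator> = None;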


If it's per-core you should look into making `#[thread_local]` statics work, which remove some of the restrictions.

`static mut` is not always safe even in a single-threaded environment, because of reentrance - see also https://github.com/rust-lang/rust/issues/53639.


I'm not sure if thread-local would work, since I'm not certain the bootloader can handle a TLS section in the ELF file. My kernel cannot do this either, as I've yet to implement threading.

Reentrance is not the issue here; I guard against this inside the data structure. Interrupts will be worst-case reentrance most of the time.


Then you can use the techniques in the thread I linked.

But without guarding mutability behind some way to indicate interrupts are disabled, or an outright lock, and without guaranteeing no data races (Sync), it's not actually safe.

And the compiler can't make `static mut` safe to use as any of those requirements could be broken and then it's not safe at all anymore.


If you know your types are Send and Sync, then you should implement Send and Sync for them. That's how you "tell rustc to just shut up about this."
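
For example, a sketch with a hypothetical type; this is only sound if the stated invariant actually holds:

    /// Hypothetical per-core structure holding a raw pointer, which makes it
    /// neither Send nor Sync by default.
    struct PerCoreState {
        scratch: *mut u8,
    }

    // SAFETY (assumption): the kernel only ever touches this from one core with
    // interrupts masked, so no data races are possible.
    unsafe impl Send for PerCoreState {}
    unsafe impl Sync for PerCoreState {}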


They aren't Send and Sync, i.e. not multithreading safe. They are, however, reentrant safe, which is all I need. rustc complains, and I don't want to implement traits on types that don't need them and don't adhere to the contract of the trait.


Linux's kernel runs on more than one core and is pre-emptible, so that's just not generally true.


In the context of a single core, this happens. Multiple cores make it harder because on top of an interrupt taking locks, other cores now also take locks and you can no longer rely on various promises that a single core gives you.

While the Linux kernel can do this, it's not easy to write from scratch, especially because Linux is in C so the compiler doesn't complain about weird things you do.


> Those things the compiler is angry about? Those are things that would be runtime errors in C.

> There is a subset of "valid" Rust code that `rustc` rejects because it isn't smart enough to understand it would be safe

Way to tell him he's wrong only to then proceed to directly contradict yourself...


> [...] but I would guess that those cases are less common than the cases of "seemingly valid" Rust code that is in reality problematic if it were accepted.

I would guess that most people starting to try out Rust are hitting the latter more often than the former. "Valid" Rust code that `rustc` rejects is uncommon (and for the most part, a bug), but I wanted to acknowledge it happens. A "common" case of it would be self-referencing structs, but that is considered a bug to be solved. I've only personally experienced problems with valid code being rejected when expressing complex trees involving associated types with their own independent lifetimes[1][2][3].

[1]: https://github.com/rust-lang/rust/issues/55756

[2]: https://github.com/rust-lang/rust/issues/54378

[3]: https://github.com/rust-lang/rust/issues/54895


Who says that C++ didn't break through that barrier, though? Sure, the big popular OS projects like (I'm assuming) Windows and Linux mostly stick to C in their kernels, but Haiku (for example) seems to be doing quite alright being written predominantly in C++, including in the kernel: https://git.haiku-os.org/haiku/tree/src/system/kernel

Outside the kernel, C++ seems to have more significantly broken through that barrier.


You've repeated one of the existing responses from 9 hours ago.


Sorry about that.

In my defense, I linked directly to example source code.


Here's the explanation from Linus for C++

http://harmful.cat-v.org/software/c++/linus


This is a prime example of the gatekeeping nature of C programming you see: the idea that, if you're not using C, you're not a serious programmer, so go away and play with your toy language.

The main actual objection to C++ is that, well, the exception handling model is controversial, and it is especially ill-suited to kernel programming. RTTI can also get thrown in the lump here. Standard libraries aren't going to work in the kernel anyways, and so if you strip that out, what benefits does C++ get you over C? The correct answer, actually, is that you get RAII for added safety, and tooling such as IDEs have better support for finding all implementations of a virtual function than they do function pointers, but for a lot of C programmers, this can be a sense of "real programmers shouldn't need such assistive technology."


>"The main actual objection to C++ is that, well, the exception handling model is controversial"

I am not familiar with this. Can you say why exactly it is considered controversial?


The implementation of C++ exceptions is what is often called "zero-cost exception handling." However, the zero-cost actually turns out to be a lot of cost, just cost omitted from normal accounting.

The way zero-cost exception handling works is you build this large table that says "if there's an exception where the program counter is in this range, jump to this point." There's no added code to the program in normal control flow (hence the name), but when you get an exception, you have to do a mildly expensive unwind and table lookup to get to the code. The table itself also comes with the price of requiring extra relocations for all of the PC offsets, which also comes with a startup hit.

One of the main costs is that unwinding through a function is assumed for all functions, unless proven otherwise. This means, even if you're not using exceptions yourself, the compiler still needs to generate code to catch an exception after every function call, run all destructors in reverse order, and then rethrow the exception. And remember that destructors and constructors are themselves function calls, which therefore might throw exceptions as well. Generating the code to actually do all of this destruction correctly requires adding code to the function to handle this that is not in the normal execution path, which can also cause issues with things such as instruction caches.

A final issue is that exception handling requires RTTI to be able to dispatch to different catch handlers. RTTI needs to be generated for every potential type (this includes POD structs, for example), which is again a codesize bloat issue. For types that have virtual functions, the RTTI information needs to be included in the vtable, which means they're not going to be eliminated by dead global elimination passes (as unreferenced POD structs would tend to be).

In short, RTTI and exception handling require a lot of extra tables that have to be generated even if you don't use them. Furthermore, a major concern for kernels, exception handling requires unwind support, which is generally not part of the kernel library repertoire and can be tricky to do manually. Compiling without exceptions and without RTTI is not unusual for large C++ applications for these reasons.


> This means, even if you're not using exceptions yourself, the compiler still needs to generate code to catch an exception after every function call, run all destructors in reverse order, and then rethrow the exception.

For every non-inlined function call (or one that it otherwise knows additional information about, e.g. with LTCG). For C++, this is a very big difference, since idiomatic C++ has a lot of inlined functions.


Going from memory here, since I can't find any references.

I believe there are problems in the handling of exceptions in constructors and destructors, which permeates into other parts of the language such as how arrays are constructed, etc. For one, the only way a constructor can fail is through an exception, and that's pretty heavy-handed for a language which markets itself as "don't pay for what you don't use." So you can't do RAII in C++ without exception handling. (Note that Rust solves this by not even having constructors.)

There are also some problems with exceptions being slow code paths, but that probably varies by compiler.


You can do RAII without exception handling... you just don't throw from the constructor. There's no stipulation you have to throw from your constructor.


So your constructor has to be infallible. That means either you can't allocate memory in your constructor, or that all classes that are backed by dynamic memory have to have a tacked-on error state in case the constructor "fails".

An operating system can't just crash if memory allocation fails, because it's actually normal for the operating system to run out of memory. If your OS is even a little bit like Windows 95 or System V, it uses all of the "left-over" memory for the file system cache, so every time it tries to allocate memory, it probably needs to flush a page to disk, which takes long enough that the OS will also want to go back into the scheduler to try to find some more work to do while that runs.

Ironically enough, C++ has fallible allocation in the standard library in spite of being ill-suited for it (unless you have exceptions; those can support fallible allocation just fine), while Rust went with the let-it-crash approach in its standard library, despite the fact that the language itself allows you to just return an Option from all the functions that allocate memory.


I was just saying you can have an error state, like iostreams already do. It doesn't imply you give up RAII.


If you're doing that, you will often end up needing an init() method, and you arguably don't get RAII if you're not doing initialization during construction. I mean, you can write safe code this way, but if C++ hadn't backed itself into a corner by trying to make user types act like builtin types, this wouldn't even be a question. Constructors wouldn't have to be a whole domain of study unto themselves; they could just be functions and you'd be done learning about them.


Huh? No. You can perfectly well have a constructor that just leaves the object in an error state if it fails to acquire its resources. And a destructor that doesn't destroy unless acquisition is successful. No need for init(), though you can have that too. And your "arguably" is not arguable, it's just plain wrong. You don't have to successfully acquire something on every single invocation in order to have RAII. Otherwise e.g. initializing a smart pointer with null would suddenly imply smart pointers are not RAII...


This is the first paragraph on wikipedia:

>In RAII, holding a resource is a class invariant, and is tied to object lifetime: resource allocation (or acquisition) is done during object creation (specifically initialization), by the constructor, while resource deallocation (release) is done during object destruction (specifically finalization), by the destructor. Thus the resource is guaranteed to be held between when initialization finishes and finalization starts (holding the resources is a class invariant), and to be held only when the object is alive. Thus if there are no object leaks, there are no resource leaks.

All of this is violated if you are constructing objects in an 'invalid' state. The whole point of RAII is that you don't do this. If you have to check if the object is in an error state at the start of every method call, then you're not benefiting from RAII.


No, you're just arguing and misinterpreting things for the sake of arguing. RAII doesn't mean you absolutely have to hold a resource all the time. Like I said, nobody (other than you) says e.g. shared_ptr isn't RAII just because it can be in a null state that holds no resources. If you understand RAII you know that no benefit of RAII goes out any window if a smart pointer or some other object is initialized in a state that happens to hold nothing.

If you want to redefine things to be different just to win an argument, you can, but it's not something I'm going to continue entertaining as I'm pretty tired of it now.


Look, I said it was arguably not RAII, so I can see how one could make the point that it still works if you do it that way. But you saying "I'm just plain wrong" to say that RAII actually means resources must be allocated on initialization... I don't know what to tell you. That's literally what the acronym means.

And just because some low-level pointer primitive allows itself to have a null state doesn't mean your Files and DBConnections, etc., should have free rein to do the same.


Thanks for the detailed explanations. Cheers.


Just replying to your first sentence: I'm actually curious how many NEW OS projects are being written in C. I would suspect your comment is more geared towards entrenched operating systems where C was already the groundwork.


The MesaLock Linux project aims to structurally replace parts of Linux with safer equivalents. For example OpenSSL: https://github.com/mesalock-linux/mesalink

I think projects such as MesaLock are interesting because they don't try and start from scratch[1], but instead incrementally improve the status quo.

[1]: Though I love Redox & pals. I'm definitely rooting for their success.


As far as I understand, the goal of the original post is to be an educational resource.

Redox OS on the other hand is a real OS in Rust, already at a basic useful level: https://www.redox-os.org/

As for Rust modules in the Linux kernel, I expect the barriers there to be primarily political. Other proposals for non-C modules in the main tree have not been very welcome.


Rust modules in Linux? Anyone could start to do it now, but it won’t be mainlined, probably not ever.

A real, useful OS in Rust? I think this is a complicated question. I don't see a Rust-based POSIX kernel ever taking off in an interesting way; the C-based kernels are stable and fairly safe, and the community quickly fixes issues when they come up. A new kernel won't have the architecture and driver support, so it will have those downsides, with the upside being maybe more safety, which is harder to measure. It's just not that compelling. Now a new hypervisor? Maybe some sort of unikernel core? Maybe some sort of container-hosting microkernel, or some new thing I'm not dreaming up right now? I could see that taking off. I think it would have to be a new thing.


Hypervisor / container hosting was what I had in mind yes. Seems that would be the place where a from-scratch OS, safe, would immediately gain usefulness.


So something like Firecracker?

https://firecracker-microvm.github.io/


The other way around. Firecracker is like a stripped down qemu-kvm.


At the moment, the barriers are all the unstable features you need to rely on, as well as Rust and LLVM bugs.

The LLVM bug I've encountered makes it impossible to properly handle interrupts with error codes unless you use #[naked], which is unreliable in my experience.

The Rust bugs are on one hand a None variable being initialized to a Some() value, and on the other a bug where Rust injects a few instructions between inline assembly blocks that corrupt the stack.

These problems can usually be worked around in some way, but it's annoying and means the Rust compiler is not quite there yet.

However, with Rust it's easier to reason about why an error occurred than in C, at least in my experience, although some things can only be found out by trial and error (corrupted stack or hitting undefined instructions). As a result, I would predict that Rust could lead to a lot more fun designs and architectures than simply writing a microkernel (Redox).


> a None variable being initialized to a Some() value

Did you file a bug about this? This seems extremely bad.


No, I'm waiting on a fix for the other bugs first since I've monkey patched this one so far by using some unsafe.


> The Rust bugs are on one hand a None variable being initialized to a Some() value

Wait. There has to be more going on there than just that—perhaps it's some interaction with naked functions?—or else we'd be seeing crashes everywhere from this.


Well, there is only a single place where this happens reproducibly. All other None values are fine in the same binary. The variable is not touched by any code that I haven't checked for this.

I don't have any naked functions in the relevant binary either.


Fuchsia isn’t pure Rust, but has a lot and it’s only increasing.


There is a demand for a FOSS, realtime, safety critical OS for embedded systems. Linux and QNX are the main options right now, but they both have their downsides (QNX being proprietary and Linux being monolithic).


FreeRTOS maybe? But I am not quite sure what the exact license is, or whether there are feature restrictions in the free version.


I doubt Linux fares that well against something like INTEGRITY OS.


AWS Firecracker, their VMM behind Fargate and Lambda, is written in Rust, if that counts as an "OS".

https://aws.amazon.com/blogs/aws/firecracker-lightweight-vir...


I think it would take a lot of persuasion for Linux to be written in anything but C (and assembly). All their tooling and expertise is geared towards GNU C.

Rust is a young language, and LLVM-based too, so I doubt it'll be accepted any time soon.


The barriers are mostly cultural, but even the technical ones are huge. Including non-C code into the Linux kernel would require a lot of internal momentum, because of inertia. On the technical side, it would require everyone downstream of the kernel to update their build process.

That said, there is little stopping you from experimenting with writing your own kernel modules in Rust that wouldn't need to make their way into mainline.


steveklabnik: "Not even Rust people think that the kernel should adopt Rust."

https://news.ycombinator.com/item?id=16547450


The Linux kernel, yes.


Well well well... as a special kind of user, I often wish for a tinier, even limited, OS that would depart from the C type system.

For Rust modules: BSD added Lua support for kernel modules, so surely Linux can tolerate GC-less Rust.


One of the areas still in development in Rust now is handling memory allocation failures. That's important for kernel development.


This is both kinda true and kinda not; the allocator work will help with using the standard library data structures on a kernel heap, but many kernels write their own specialized structures anyway, so you’d make the kernel allocator fallible and it all works out.


Yeah, the above will just make such development more accessible.


Why wouldn't it be a good solution to write something like Box, but with a new function that returns `Option<Box<T>>`? I'm not sure what Rust is lacking here.
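
Something like this sketch, built on the raw allocation API (the helper is hypothetical, not a standard one, and assumes a global allocator is set up and T is not zero-sized):

    extern crate alloc;

    use alloc::alloc::{alloc, Layout};
    use alloc::boxed::Box;

    /// Hypothetical fallible allocation: returns None instead of aborting when
    /// the allocator cannot satisfy the request.
    fn try_box<T>(value: T) -> Option<Box<T>> {
        let layout = Layout::new::<T>();
        // A null return means allocation failed.
        let ptr = unsafe { alloc(layout) } as *mut T;
        if ptr.is_null() {
            return None;
        }
        unsafe {
            ptr.write(value);
            Some(Box::from_raw(ptr))
        }
    }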


>>> Assuming the rust safety lives up to the type

^ this... :-)


Probably not, at least on the desktop. Good luck getting drivers working when you see the state of Linux / BSD after 20 years and thousands of engineers working on them. It's 2019 and we still have issues on Linux that were fixed in the Windows XP era.

Also, I think people are blinded by that "Rust safety" thing for an OS; it doesn't really matter. Redox, which is written in Rust, had a challenge a year ago about breaking it, and it took a few minutes for someone to kernel panic it with bad arguments to a user-space CLI. So how long do you think it's going to take to have something as secure / stable as Linux for Redox? 10 / 15 years, and at the same time you need all the features / drivers to have a usable OS. Just forget about it. The only way you can have a "successful" OS nowadays is to target a very specific niche / use case.


Given the amount of software that runs on servers with limited footprint needs, or orchestrating virtual machines or containers running similar/other software, I wouldn't count out the potential. It won't be on the desktop anytime soon, but there is room for an OS-as-a-service type of OS.


Creating recursive page tables is a bad idea.


Why?


It's not a general solution - you often need to access the page tables of other address spaces, and those aren't picked up by a recursive mapping.

And while a single recursive mapping is a relatively small fraction of the address space, you probably don't want to do it for all address spaces simultaneously.

So you need something more general anyway, and you may as well just use it all the time.


It is possible to access other address spaces, it is just a bit more complicated:

- Set the recursive entry of the active level 4 table to the (recursive) level 4 table of the other address space.

- Now the recursive addresses point to the other address space, so you can access its tables freely.

- The difficult part is to switch back to the active address space again, because you can no longer access the original level 4 entry. This can be solved with a temporary mapping to the active level 4 table or by using different level 4 entries for the recursive mapping of active and inactive address spaces.

However, I'm no longer sure whether recursive page tables are a good choice for the blog since they're complicated, difficult to learn and not portable. See my reply to DSMan.
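
To give an idea of what "complicated" means here, this is roughly the address computation that recursive paging requires just to reach a level 1 table (a sketch, assuming entry 511 is used as the recursive one):

    /// Index of the recursive entry in the level 4 table (an assumption here).
    const RECURSIVE_INDEX: u64 = 511;

    /// Virtual address at which the level 1 table for the given level 4/3/2
    /// indexes becomes accessible through the recursive entry.
    fn level_1_table_addr(p4: u64, p3: u64, p2: u64) -> u64 {
        let addr = (RECURSIVE_INDEX << 39) | (p4 << 30) | (p3 << 21) | (p2 << 12);
        // Sign-extend bit 47 so the address is canonical.
        (((addr << 16) as i64) >> 16) as u64
    }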


This does seem really complicated. My guess is that it doesn't save on TLB space, since the hierarchy is kind of baked into that cache. It saves a little bit of memory?


It really doesn't save anything, and makes some operations like finding the physical address for a virtual address more complicated. His description of "identity mapping" is also a bit too focused on page-tables and kinda misses the bigger picture. x86_64 has a 48-bit memory space, so you can literally get away with identity-mapping all of your physical memory into a higher-half location with no issues, because the virtual-address space is much larger than the amount of physical memory a 64-bit computer is going to have. Ergo, even if you went with his described approach and just made a single identity mapping for each page table, it would still use so little virtual memory space you could just allocate from elsewhere for your memory-mapped files and such with no worry of them touching each other. And by doing a single large allocation of all of physical memory, you can use very large page sizes to set up the identity map, significantly reducing the number of pages required.

There is also a presumption that you can only choose one mapping technique, which isn't the case with x86_64 since it has such a large virtual address space and you can map the same page in more than one location. So, as long as you keep track of what's happening with your pages, you can have an allocator that just allocates contiguous pages in the identity mapping, and also have an allocator that maps separate physical pages into a contiguous virtual mapping, and both allocators can coexist perfectly fine as long as you ensure pages are only allocated to one purpose at a time (which isn't very hard to do. You can even have your virtual allocator just call your physical allocator).


Thanks for the feedback!

I chose recursive page tables because the underlying mapping is very simple, as it only requires mapping a single level 4 page table entry. Thus, the bootloader does not need to force any address space layout on the kernel. If the kernel wants to use a different mapping scheme, it can use the recursive page table to perform that mapping and then unmap a single entry to undo the recursive mapping.

That being said, I like your proposal to map the complete physical address space to some higher-half location. It would allow introducing a simple `phys_to_virt` function, which completely avoids the bitwise operations required for recursive page tables. It would also make this blog post much easier, which is always a good thing.

The question is how we can support that approach in the `bootloader` and `x86_64` crates without breaking users who wish to continue using recursive paging. I opened an issue in the blog repository with some ideas: https://github.com/phil-opp/blog_os/issues/545
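
For illustration, `phys_to_virt` could look roughly like this (a sketch; the offset constant is whatever the bootloader ends up choosing and is made up here):

    /// Start of the virtual region where all of physical memory is mapped
    /// (hypothetical value picked by the bootloader).
    const PHYSICAL_MEMORY_OFFSET: u64 = 0xFFFF_8000_0000_0000;

    /// With the whole of physical memory mapped at a fixed offset, translating
    /// a physical frame address into a usable pointer is a single addition.
    fn phys_to_virt(phys_addr: u64) -> *mut u8 {
        (phys_addr + PHYSICAL_MEMORY_OFFSET) as *mut u8
    }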


Admittedly, when I made that comment I had not had a chance to fully read your article (And it's a good article by the way, I really like this series). I didn't realize that the mappings you're creating are just for your bootloader and the OS is going to throw them away. On that note, there's lots of things I want to mention/point out, hopefully it forms something of a coherent thought.

First, I would recommend taking a look at how bootloaders like GRUB handle this, since they do it dead simple and make it easy for kernels to know exactly what situation they're going to be in right when they start. For GRUB, when loading an `x86_64` ELF kernel, it loads the kernel's segments at whatever physical address they are linked at (ELF records separate physical and virtual addresses for segments), and then identity maps everything and jumps to the kernel's entry point. Note it doesn't do anything else, like set up a stack; it assumes the kernel will do that if it wants one.

At that point, there's a problem, because almost all kernels are going to be higher-half and thus linked at a different virtual address than their physical address (but GRUB ignored the virtual addresses). Thus, the kernel's entry point needs to have some type of stub which can run at the identity mapped location, create a new page table with both the identity mapping and the higher-half mapping, and then jump to the actual kernel at the virtual location.

Now with that, I want to point out that the kernel does not need to use the page table that GRUB set up, and in fact GRUB doesn't even provide a good way to do that. But they don't have to - the kernel can simply load a new one into `CR3` to completely replace GRUB's page table. And once they do that, they can jump to the higher-half and then remove the identity mappings. And they can do this without having to do any allocation, because they can simply make the initial page tables static variables, and then they'll already be in memory ready to be used once GRUB loads the kernel into the correct physical addresses.

You can see that in my kernel here[0], though it's a fair amount different being that it's 32-bit. The same idea applies though - my code is linked at a higher-half address, but GRUB jumps to the entry point while running at the lower physical address. This means that if I were to jump straight to my C code, it would be completely broken, because it was compiled assuming it was running at `0xC0100000`, not `0x00100000`. The assembly stub fixes stuff up just enough so that we can run at `0xC0100000`, and then later in the kernel we setup a more involved mapping.

This gets to another important point that I couldn't really figure out - how is your kernel linked? I looked at `cargo-xbuild` but I couldn't figure it out. I don't quite know what linking looks like for Rust, but I would think at some point you drop down to the LLVM linker and should have some linker script like this[1]. The `AT` specifies the physical address for those segments to be placed at, where the `. = KMEM_LINK` specifies the location the code will run from - that's how you achieve a higher-half kernel. I'm guessing that, if you don't have any linker scripts, then it just gets linked at whatever addresses the linker happens to pick for normal executables, which will cause you problems down the road since your kernel will be located at a weird location.

And now that I'm looking at the code further, I'm guessing you may have already run into this problem, since I noticed in `bootloader`[2] that you ignore the ELF file's physical addresses[3] and instead just load the kernel at a constant physical address. I would highly recommend against this: the ELF kernel should have correct physical addresses, and if you ignore them then the kernel will not be able to correctly create a new page table itself, since it will set up incorrect mappings based on the assumption of where it thinks the kernel is located in physical memory vs. where it was actually loaded. The identity mapping will also not work, since the stub to load the new page tables will attempt to use incorrect physical addresses.

This does make the bootloader a fair amount more challenging though, since you have to move the code that loads the kernel into memory into a piece of memory unused by the kernel. But I would say that for a first pass, simply moving the bootloader to a somewhat higher physical address and then assuming the kernel will be at a low physical address (and panicking if it's not) would be fine. Also, you still need to place the `BootInfo` somewhere, but you can just pick any random physical memory location. As long as the kernel knows the physical address of where it is, it can avoid writing over it (or more likely, the kernel will just copy the entire thing into a separate structure inside the kernel to keep things simple). Either way, the kernel can handle that pretty easily.

> The question is how we can support that approach in the `bootloader` and `x86_64` crates without breaking users who wish to continue using recursive paging.

I don't really have a good answer for you as for avoiding breakage (I think it's somewhat impossible here), but I would argue that any kernels that want to setup their own recursive setup should simply do it themselves. They can write the entry point to their kernel to do whatever they want, including setting up a recursive mapping.

If you have any questions please feel free to ask them! I could also try to condense this somewhat and put it on the `github` issue if you'd like. But to respond directly to the ideas in the `github` issue:

> The recursive_level_4_table feature enables the recursive mapping of a level 4 entry that we currently have. Provides a RECURSIVE_LEVEL_4_TABLE_ADDR constant to the kernel.

> The map_physical_memory feature maps the complete physical address space to some virtual address. Provides a PHYSICAL_MEMORY_OFFSET constant to the kernel.

If you respect the physical addresses in the ELF file and setup an identity mapping with them, then neither of these are necessary - the kernel doesn't need the `PHYSICAL_MEMORY_OFFSET` because it already knows it's at the locations specified in the ELF file, and the kernel doesn't need the location of the current page table at all because it can just replace it with a completely new one that also has the correct identity mappings for the current physical memory location.

[0] https://github.com/mkilgore/protura/blob/master/arch/x86/boo...

[1] https://github.com/mkilgore/protura/blob/master/arch/x86/boo...

[2] https://github.com/rust-osdev/bootloader/blob/master/src/mai...

[3] https://docs.rs/xmas-elf/0.6.2/xmas_elf/program/enum.Program...


Thanks for the detailed reply!

I don't like the traditional GRUB approach because it leaves so much work for the kernel (assembly entry point, stack setup, creation of new page tables, etc). Why not treat the kernel like any normal application and load it like a normal ELF loader? This has the following advantages:

- Each loadable program segment is mapped at its specified virtual address. To create a higher-half kernel, just set the desired virtual address in your linker script. If you don't need a higher-half kernel yet, simply use the default linker script. This is what we do at the moment since we don't have any userspace yet. Like in normal applications, the specified physical addresses do not matter.

- There's no assembly code required at the entry point. You can start directly in Rust because there's already a stack and everything lives at its expected address.

- Less startup overhead, since the page tables are only set up once in the bootloader. With the GRUB approach, the kernel needs to recreate the page tables since GRUB only created rudimentary mappings.

Like you said, the disadvantage is that the kernel doesn't know its physical addresses. But it can still find them out by traversing the page tables.

Overall, I think it's worth it because it makes the startup process much easier from the kernel's perspective. I believe that this is especially important for a teaching project because readers shouldn't have to understand assembly, linker scripts, virtual/physical addresses, higher half kernels, and page tables before they can even boot their Rust kernel. For people that don't like the bootloader's decisions we can add more configuration options or they can use a custom version of the bootloader (e.g. the nebulet project does this [1]).

[1]: https://github.com/nebulet/nebulet/blob/c0f3c993a87e72a95f85...
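
For reference, a sketch of what that "start directly in Rust" entry point looks like (the boot information structure is a placeholder here):

    #![no_std]
    #![no_main]

    use core::panic::PanicInfo;

    /// Placeholder for the boot information the bootloader hands over.
    pub struct BootInfo { /* memory map, etc. */ }

    /// The bootloader has already set up a stack and mapped every segment at
    /// its linked virtual address, so the first instruction can be plain Rust.
    #[no_mangle]
    pub extern "C" fn _start(_boot_info: &'static BootInfo) -> ! {
        loop {}
    }

    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }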


Previous discussion https://news.ycombinator.com/item?id=18903235

Edit: Discussion of previous post.


That's actually a different post.


As an exercise in optimistic parsing, the OP could've meant “Previous discussion of previous post”. It's a handy link in either case, thanks!


Whoops, so it is!



