
adding an mmu to a basic rv32e processor might double its size and power consumption, and more than double the verification effort; moreover, if you're targeting applications where deterministic execution time or even worst-case execution time (wcet) is a concern, the mmu is a very likely source of nondeterminism

so it's not so much that you don't care about it as that you might not be able to afford it. 'cheap and fast' depends on what scale of machine you're talking about; a 1¢ computer (not yet available) can afford less than a 10¢ computer like the pms150c or (rv32e) ch32v003, which can afford less than a 1-dollar computer like the stm32f103c8t6 or (rv32i) gd32vf103, which can afford less than a 10-dollar computer like a raspberry pi, which can afford less than a 100-dollar computer like a cellphone, which can afford less than a 1000-dollar computer like a gaming rig, which can afford less than a 10-kilobuck computer like a largish cpu server, which can afford less than a 100-kilobuck computer like a petabox

unix originally ran on the pdp-11, the relevant models of which had interprocess memory safety in the form of segmentation, but no paging. i've never used a pdp-11. adding paging (for example, on the vax, the sun-1, and the i386) enabled a variety of new unix features:

- as you point out, it enables a process to be larger than physical memory;

- fork() became immensely faster because it didn't have to copy all of process memory, just the page table, and mark the existing pages copy-on-write;

- execve() became immensely faster for a similar reason: it could 'demand-page' the program into memory as you executed parts of it, instead of waiting to start executing it until the whole thing had been loaded from disk;

- shared libraries became possible, so that executable code used by many programs at once could exist as only a single copy in memory (though you could do this without paging if all the processes share the same address space, perhaps with different permissions imposed by an mpu — this wasn't considered an option for unix in part because it would involve either giving up fork() or only having one process in memory at a time);

- similarly, it became possible for processes to communicate through shared memory buffers, which is commonly used to get images onto the screen quickly;

- it became possible to memory-map files, like on multics, so you can access data in them without first copying it into a userspace buffer, which normally roughly doubles the access time;

- it became possible for user programs to use the paging hardware to implement the write barriers for their garbage collectors by using mprotect(), though that's never been a very popular thing to do because sigsegv handlers are slow and usually nonportable;

- and, as veserv pointed out, it eased fragmentation, since physically scattered pages can be mapped to a contiguous virtual range.

non-unix systems used paging for a variety of even more creative purposes:

- efficient system image checkpointing as in keykos or eumel, by way of atomically marking all pages on the system copy-on-write and then streaming out the dirty ones to disk, so you never had to reboot; after a power failure or system crash, all the same programs would be running in the same state as at the last checkpoint. qemu can do this too, i think

- distributed single-address-space oses, where memory pages migrate around a cluster over a network according to where they're being accessed, so every program on the cluster is running in a single shared 64-bit address space; this didn't turn out to be as useful as it sounds

- insert your mindblowing creative idea here

anyway it's totally possible to implement memory protection without paging, and lots of computers have, past (with segmentation) and present (with mpus). but paging gives you a lot more than just memory protection



