This is a nice write-up. A couple of minor corrections:
First, in several places it talks about what a physical RAM "chip" looks like in terms of its memory layout. It would be better to simply think of an address space instead of a physical chip. Even without virtual memory, you don't know - and don't care - how the hardware addresses an actual chip. It may not even be a chip! The same principles would apply with old-fashioned magnetic core memory or any other kind of memory.
Also, this is completely wrong:
> Swapping at the granularity of 4KB chunks mitigates the issues caused by fragmentation (since they are small).
If you're comparing segmentation with paging, then paging doesn't just "mitigate" fragmentation. External fragmentation simply doesn't exist when you always allocate same-size memory blocks. It's not because the 4KB pages are "small" - segmented memory often used much smaller allocations than that. You could have pages of any size and it would be the same: with uniform blocks, this kind of fragmentation can't occur.
Consider the difference between a heap allocator that manages blocks of any requested size vs. one where all blocks are the same size. In the latter case, your allocator can simply consist of a list of free blocks. To allocate a block, you take one from the list. To free a block, you add it to the list. It's that simple, and there is no opportunity for fragmentation to happen here. And this is completely independent of the block size, as long as all blocks are the same size.
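To make that concrete, here's a minimal sketch of such a fixed-size-block allocator in C. (The pool size and the names block_alloc/block_free are made up for illustration, not taken from the write-up; a real page-frame allocator is more elaborate but rests on the same idea.)

```c
#include <stddef.h>
#include <stdio.h>

#define BLOCK_SIZE 4096
#define NUM_BLOCKS 16

/* Each free block stores the pointer to the next free block inside
   its own first bytes, so the free list needs no extra memory. */
typedef union block {
    union block *next;            /* valid only while the block is free */
    unsigned char data[BLOCK_SIZE];
} block_t;

static block_t pool[NUM_BLOCKS];
static block_t *free_list;

/* Thread every block in the pool onto the free list. */
static void pool_init(void) {
    for (size_t i = 0; i < NUM_BLOCKS - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[NUM_BLOCKS - 1].next = NULL;
    free_list = &pool[0];
}

/* Allocate: pop the head of the free list. O(1), no searching,
   and no way to leave an unusable hole behind. */
static void *block_alloc(void) {
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;
}

/* Free: push the block back onto the list. Also O(1). */
static void block_free(void *p) {
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}

int main(void) {
    pool_init();
    void *a = block_alloc();
    void *b = block_alloc();
    block_free(a);              /* any freed block is immediately reusable */
    void *c = block_alloc();    /* reuses a's slot - nothing fragments */
    printf("a=%p c=%p (same block: %s)\n", a, c, a == c ? "yes" : "no");
    block_free(b);
    block_free(c);
    return 0;
}
```

Note that the free list here costs nothing extra: each free block stores the link to the next one in its own first bytes, so there's no separate bookkeeping structure at all.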
Of course you could sub-allocate memory within or overlapping your pages, using some other kind of memory allocator like malloc(), and those allocations would suffer fragmentation just as segmented memory does. But again, that fragmentation is due to the fact that you're allocating different-sized blocks, and it's a separate question from whether the paging mechanism itself can become fragmented.