Hacker News

The fact that you can put everything, ever, into 128 bits means you can easily shove ten thousand 20-terabyte storage devices into a single 128-bit system's memory-mapped I/O space and still have 3 exabytes left over.

No more packet-switched serial storage I/O. You now have first-class ability to ask for any byte anywhere, really really really fast.

Because I/O request speed is now only limited by the memory controller (which already moves terabytes per second in x86 hardware), a fast storage controller now has the opportunity to optimize and batch requests downstream to storage devices much more efficiently. Because if the storage devices go at a certain speed but suddenly the addressing infrastructure is A LOT faster, your optimization window just went through the roof and you can coordinate much more effectively.

I forget the exact architecture, but one of IBM's 128-bit boxen already does this. Various random bits of the hardware use MMIO as a first-class addressing strategy. The OS does the rough equivalent of `disk = mmap2(/dev/sda)` at bootup. Maybe this is a z series thing.
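To make the `disk = mmap2(/dev/sda)` idea concrete, here's a minimal Python sketch of mapping a whole device into the address space and poking bytes directly. A regular file stands in for `/dev/sda`, since mapping the raw block device needs root; the access pattern is the same either way, and the file name and sizes are just placeholders.

```python
import mmap
import os
import tempfile

def map_device(path):
    """Map an entire device (or file) into the address space --
    the rough equivalent of `disk = mmap2(/dev/sda)` at bootup."""
    fd = os.open(path, os.O_RDWR)
    return mmap.mmap(fd, 0)  # length 0 maps the whole thing

# A regular file standing in for /dev/sda (mapping the real device
# needs root privileges).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(1 << 20)  # a 1 MiB "disk"
    path = f.name

disk = map_device(path)
# Byte-addressed write: plain indexing, no read()/write() syscalls.
disk[4096:4100] = b"boot"
```

The point is that once the device lives in the address space, "storage I/O" is just loads and stores; the kernel and hardware handle the rest.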




What about DMA over IPv6? Using a single 256-bit address, you would be able to address any byte in any IPv6-enabled computer directly.
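As a back-of-the-envelope sketch of that hypothetical scheme: pack the 128-bit IPv6 address into the high half of a 256-bit integer and a 128-bit byte offset into the low half. (This is purely illustrative; `make_dma_addr` and the layout are made up here, not any real protocol.)

```python
import ipaddress

OFFSET_BITS = 128  # low 128 bits: byte offset inside the target host

def make_dma_addr(host: str, offset: int) -> int:
    """Build a hypothetical 256-bit global DMA address:
    high 128 bits = IPv6 address, low 128 bits = byte offset."""
    assert 0 <= offset < 1 << OFFSET_BITS
    return (int(ipaddress.IPv6Address(host)) << OFFSET_BITS) | offset

def split_dma_addr(addr: int):
    """Recover (host, offset) from a packed 256-bit address."""
    host = ipaddress.IPv6Address(addr >> OFFSET_BITS)
    offset = addr & ((1 << OFFSET_BITS) - 1)
    return host, offset

addr = make_dma_addr("2001:db8::1", 0xDEADBEEF)
host, off = split_dma_addr(addr)
```

Every byte on every IPv6-reachable machine gets a unique flat address; routing the high half is exactly the IPv6 routing problem we already solve.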


The calculations are off. 128-bit addressing allows for more than 3.40 x 10^38 addresses, and your storage example is only 1.76 x 10^18 bits. That is, even if we addressed individual bits (we don't), we don't even have a name for the unit denoting the magnitude of addressable space left over when using 128-bit addresses.
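The arithmetic, spelled out (20 TB read as 20 TiB here):

```python
# 2^128 byte addresses vs. ten thousand 20 TiB storage devices.
addresses = 2 ** 128                    # ~3.40e38 addressable bytes
storage_bytes = 10_000 * 20 * 2 ** 40   # ~2.20e17 bytes of storage
storage_bits = storage_bytes * 8        # ~1.76e18 bits

# Even addressing every individual BIT of that array consumes a
# vanishing fraction of the 128-bit space.
fraction_used = storage_bits / addresses  # ~5e-21
```

So the leftover space isn't "3 exabytes"; it's essentially the entire 2^128 range, minus a rounding error.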




