
I don’t know, it seems excessive to me. I could see the case for cold storage maybe, with spanning storage pools (by my reckoning there were 10 TB drives in 2016 and the largest now are 20 TB, so 16 years from now the largest should be around 320 TB if capacity keeps doubling, which is still about 5 orders of magnitude below the 2^64-byte limit).
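For what it's worth, that projection checks out; here's the back-of-the-envelope math as a quick C sketch (the 4-year doubling period is just the rate the 320 TB figure implies, and the comparison point is the 2^64-byte limit, not anything from an actual drive roadmap):

  #include <math.h>
  #include <stdio.h>

  int main(void) {
      double tb = 1e12;                  /* 1 TB in bytes */
      double largest_today = 20 * tb;    /* biggest drive today, per the comment */
      int years = 16, years_per_doubling = 4;  /* assumption implied by the 320 TB figure */
      double projected = largest_today * pow(2.0, (double)years / years_per_doubling);
      double ceiling = ldexp(1.0, 64);   /* 2^64 bytes, the 64-bit limit */

      printf("projected largest drive: %.0f TB\n", projected / tb);
      printf("orders of magnitude below 2^64: %.1f\n", log10(ceiling / projected));
      return 0;
  }

That prints 320 TB and about 4.8 orders of magnitude of headroom.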

> Right now our limitations in these regards are addressed by distributed computing and databases, but in a hyper-connected world there may come a time when such huge address space could actually be used.

Used at the core of the OS itself? How do you propose to beat the speed of light exactly?

You don’t need a zettabyte-compatible kernel to run a distributed database (or even a file system, see ZFS), and trying to DMA to things on the other side of the planet sounds like the worst possible experience.

Hell, our current computers are not even close to using full 64-bit address spaces. The baseline is 48 bits of virtual address, and x86 and ARM are in the process of extending it (to 57 bits for x86 with 5-level paging, and 52 bits for ARM).
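If you're curious what your own machine actually supports, here's a minimal Linux/x86-only sketch that pulls the CPUID-reported widths out of /proc/cpuinfo (on a typical desktop today it prints 48 bits virtual):

  #include <stdio.h>
  #include <string.h>

  int main(void) {
      /* Linux/x86 exposes the CPUID address widths in /proc/cpuinfo,
         e.g. "address sizes : 39 bits physical, 48 bits virtual" */
      FILE *f = fopen("/proc/cpuinfo", "r");
      if (!f) { perror("fopen"); return 1; }

      char line[256];
      while (fgets(line, sizeof line, f)) {
          if (strstr(line, "address sizes")) {
              fputs(line, stdout);
              break;  /* identical for every core, one line is enough */
          }
      }
      fclose(f);
      return 0;
  }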

Thanks to Moore's law, you can assume that DRAM capacity will double every 1-3 years. Every time it doubles, you need one more address bit. So if we use 48 bits today, we have 16 bits left to grow, which gives us at least 16 years of margin, and maybe 48 years. (And it could be even longer if you believe that Moore's law is going to keep slowing down.)
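Spelled out, the margin arithmetic is just this (a trivial sketch; the 1-3 year doubling period is the assumption above, not a forecast):

  #include <stdio.h>

  int main(void) {
      /* each DRAM doubling consumes one more address bit, so the margin
         in years is (unused bits) x (years per doubling) */
      int bits_in_use = 48, bits_total = 64;
      int bits_left = bits_total - bits_in_use;  /* 16 */

      for (int period = 1; period <= 3; period += 2)
          printf("doubling every %d year(s): ~%d years of margin\n",
                 period, bits_left * period);
      return 0;
  }

That gives the 16- and 48-year bounds quoted above.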
