
So the important question is: how does this affect how we write applications?

Is the API different, or are we still reading/writing disk files?

Should we do memory mapping of the files or not?

Should we parallelize access to different sections of big files? Or write a ton of small files?

How does this affect database design? Current big-data apps emphasize large append-only writes and large sequential reads (think LSM trees). Does that still make sense?

What does disk caching mean in the context of these new drives?

The API is the same. Memory map if you prefer that access style and it suits your OS and language. The OS will almost certainly not let you map straight through to the device's PCI memory-mapped window, so you'll incur a copy-to-userspace penalty either way. Benchmark.

There is probably no longer any advantage to sequential writes, but you still pay per-syscall and per-IOP overhead, so one large write will be faster than N small ones.

Disk caching is still there and still reduces latency, but it's no longer as critical.
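A quick way to see the per-syscall overhead is to time one large write() against the same payload issued as many small write()s. Here's a minimal C sketch, assuming Linux/POSIX; the file names, payload size, and chunk size are arbitrary choices for illustration, and the fsync() is there so the timing includes the flush to the device rather than just the page cache:

    /*
     * Minimal sketch, not a rigorous benchmark: compare one large write()
     * against many small write()s of the same total payload, to show the
     * per-syscall / per-IOP overhead described above.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define TOTAL (64 * 1024 * 1024)   /* 64 MiB total payload (arbitrary) */
    #define SMALL (4 * 1024)           /* 4 KiB per small write (arbitrary) */

    static double now_sec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static double timed_writes(const char *path, size_t chunk) {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); exit(1); }

        char *buf = malloc(chunk);
        if (!buf) { perror("malloc"); exit(1); }
        memset(buf, 'x', chunk);

        double t0 = now_sec();
        for (size_t done = 0; done < TOTAL; done += chunk) {
            /* Assumes write() completes fully, which holds for
               regular files on Linux absent errors. */
            if (write(fd, buf, chunk) != (ssize_t)chunk) { perror("write"); exit(1); }
        }
        fsync(fd);   /* flush so we time the device, not just the page cache */
        double t1 = now_sec();

        free(buf);
        close(fd);
        return t1 - t0;
    }

    int main(void) {
        printf("1 x %d MiB write  : %.3f s\n",
               TOTAL >> 20, timed_writes("big.bin", TOTAL));
        printf("%d x %d KiB writes: %.3f s\n",
               TOTAL / SMALL, SMALL >> 10, timed_writes("small.bin", SMALL));
        return 0;
    }

Compile with gcc -O2 and run it on the drive in question. On a fast NVMe device the gap between the two numbers mostly reflects syscall and IOP overhead rather than seek time, which is exactly why batching writes still pays off even when sequential locality no longer matters.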



