
DRAM latency for a random access read can get into the low hundreds of cycles on modern multi-socket devices. But the streaming bandwidth remains very high. Cache systems will routinely prefetch the next block ahead of an access if they detect that memory is being used sequentially, eliminating a huge chunk of that pipeline stall.


