
A massive advantage of your new linear allocator is that it keeps your allocations contiguous in memory. This means the processor is more likely to have the most recently used memory locations already in cache.

You might see further improvements if you split your allocations across two (or more) allocators: one for memory you expect to remain hot (core to the compiler) and one for stuff you think is one-off. That might improve access locality further.



