
Windows doesn't overcommit memory, so without a pagefile the system commit limit is greatly reduced; you can "run out" of memory quite easily with certain applications that allocate memory but never touch it.
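A minimal C sketch of what "allocate but don't use" means here (the 1 GiB chunk size is arbitrary): MEM_COMMIT charges against the commit limit (RAM + pagefile) up front, even if the pages are never written, so the loop fails once the limit is reached, not when memory is actually used.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T chunk = (SIZE_T)1 << 30;  /* commit 1 GiB at a time */
        SIZE_T total = 0;

        for (;;) {
            /* Commit (but never touch) more virtual memory; this
               counts against the commit limit immediately. */
            void *p = VirtualAlloc(NULL, chunk,
                                   MEM_RESERVE | MEM_COMMIT,
                                   PAGE_READWRITE);
            if (p == NULL) {
                printf("commit failed after %llu GiB (error %lu)\n",
                       (unsigned long long)(total >> 30),
                       GetLastError());
                break;
            }
            total += chunk;
        }
        return 0;
    }

Reserving address space with MEM_RESERVE alone doesn't charge commit, which is why reserve-heavy applications only hit trouble once they actually commit.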



I've let Windows auto-manage the pagefile size for my current system. It picked 19GiB total. It's empty. With 128GiB of RAM, that's about 15% more commit headroom, but the vast majority of that RAM is free anyway. 15% certainly isn't "greatly reduced" when you max out the RAM capacity of a desktop motherboard.
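For reference, the commit limit is easy to inspect programmatically; a quick sketch using GlobalMemoryStatusEx (ullTotalPageFile is, despite the name, the whole commit limit: RAM plus pagefile):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        if (!GlobalMemoryStatusEx(&ms)) return 1;

        /* ullTotalPageFile = current commit limit (RAM + pagefile) */
        printf("physical RAM : %llu GiB\n", ms.ullTotalPhys >> 30);
        printf("commit limit : %llu GiB\n", ms.ullTotalPageFile >> 30);
        printf("commit in use: %llu GiB\n",
               (ms.ullTotalPageFile - ms.ullAvailPageFile) >> 30);
        return 0;
    }

On a 128GiB box with a 19GiB pagefile that should report a ~147GiB commit limit, which is where the ~15% figure comes from.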

Lots of the arguments for swap/paging seem to ignore the possibility of just buying "overkill" amounts of RAM.

On the other hand, if you have truly enormous data sets you can RAID four M.2 SSDs to max out a PCIe 6.0 x16 slot at roughly 121GB/s of bandwidth for a multiple-TiB swap file. It'll be a while until SSDs get big enough for this to max out the 256TiB virtual address space of x86_64, but you can get 8TiB M.2 SSDs now...
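Back-of-envelope check on those numbers (assuming the usual 48-bit x86_64 virtual address split and PCIe 6.0's 64 GT/s per lane; the flit payload figure is approximate):

    raw link   : 64 GT/s × 16 lanes ÷ 8 = 128 GB/s per direction
    after FLIT : ~121 GB/s usable (~242 payload bytes per 256-byte flit)
    addr space : 2^48 bytes = 256 TiB
    today's max: 4 × 8 TiB = 32 TiB, i.e. 1/8th of the address space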


I've got 128GB of RAM, and with a smaller page file I've run out of memory when doing LLM stuff mixed with other background workloads. System-managed is the way to go.



