
That's why I explicitly said you have to look at the correct column though: you need to look at Commit Size, not Working Set.



Commit size isn't the most important number either. You can commit large chunks of memory, and if you don't touch them, Windows won't allocate them, neither in physical memory nor in the page file.
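To make that mechanism concrete, here's a minimal sketch in terms of the Win32 API (Windows-only C; the 256 MiB size and the page-stride loop are just illustrative choices, not anything from the articles):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T size = (SIZE_T)256 * 1024 * 1024;            /* 256 MiB */

        /* MEM_COMMIT charges the system commit limit up front, but physical
           pages are only assigned on first access (demand-zero page faults). */
        unsigned char *p = VirtualAlloc(NULL, size,
                                        MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (!p) return 1;

        /* Until this loop runs, the region shows up in Commit Size
           but adds almost nothing to the Working Set. */
        for (SIZE_T i = 0; i < size / 4; i += 4096)
            p[i] = 1;               /* touching a page faults it into RAM */

        getchar();                  /* pause here and look at Task Manager */
        VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }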

Here's another article with more recent details: http://blogs.microsoft.co.il/sasha/2016/01/05/windows-proces...


> Commit size isn't the most important number either. You can commit large chunks of memory, and if you don't touch them, Windows won't allocate them, neither in physical memory nor in the page file.

No.

Short response: Try committing 1 TiB of memory without touching it and tell me how successful you are.

Long response: Unlike Linux, Windows doesn't overcommit. It is completely irrelevant whether physical pages have actually been allocated to back the virtual pages that are committed. The fact that the virtual pages are committed means that there are guaranteed to be physical pages available somewhere when the need arises for them to be allocated (whether they are in the page file or in physical memory is irrelevant; what matters is that the storage space exists one-to-one), i.e. the fact that some virtual pages are committed means you have already lost that much physical memory from the system... which is exactly the number you want to look at when you're trying to figure out how much memory a program is using (since the entire point is to see how much memory it'll leave for other programs).
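For concreteness, a small sketch of the "try 1 TiB" experiment (Windows-only C, linked against psapi.lib; whether the commit is refused depends on your RAM + page file, i.e. the commit limit):

    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    int main(void)
    {
        PERFORMANCE_INFORMATION pi = { sizeof(pi) };
        GetPerformanceInfo(&pi, sizeof(pi));
        printf("Commit limit: %zu MiB, currently committed: %zu MiB\n",
               pi.CommitLimit * pi.PageSize / (1024 * 1024),
               pi.CommitTotal * pi.PageSize / (1024 * 1024));

        /* Ask for 1 TiB of committed (but never touched) memory. */
        void *p = VirtualAlloc(NULL, (SIZE_T)1 << 40,
                               MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (!p)
            printf("1 TiB commit refused, error %lu\n", GetLastError());
        else
            printf("1 TiB committed; your commit limit really is that large\n");
        return 0;
    }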

(And shared memory is pretty much irrelevant for VSCode, so let's not go off on a tangent.)


> It is completely irrelevant whether physical pages have actually been allocated to back the virtual pages that are committed

It is in fact the only relevant thing about memory: physical memory is the constrained resource. If you're reduced to relying on swap space, you're going to spend the rest of the year swapping.

Overcommitting is neither here nor there; a failure to have a backing store for memory (whether in physical memory or page file) will result in OOM, but nobody is actually worried about OOM. Editors and systems lose responsiveness long before then. The failure mode from apps using too much memory is swapping, not OOM.

Part of the reason measuring memory usage from a process-stats perspective is so hard is that some memory is more important than other memory; in particular, access patterns matter. If a process is starting to swap, whether you see a cliff edge in performance or a more gradual decline comes down to the access pattern. The working set concept approximates the "frequently used" quantity of memory, which is why Task Manager uses it by default, but it's subtle because it's not a simple function of allocation.
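Roughly speaking, the two columns being argued about here map onto two per-process counters. A minimal sketch (Windows-only C, psapi.lib) that reads both for the current process; the mapping to Task Manager column names is approximate:

    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    int main(void)
    {
        PROCESS_MEMORY_COUNTERS_EX pmc = { sizeof(pmc) };
        if (GetProcessMemoryInfo(GetCurrentProcess(),
                                 (PROCESS_MEMORY_COUNTERS *)&pmc, sizeof(pmc))) {
            /* WorkingSetSize ~ Task Manager's "Working Set" (resident in RAM);
               PrivateUsage   ~ "Commit Size" (private committed bytes). */
            printf("Working set: %zu KiB\n", pmc.WorkingSetSize / 1024);
            printf("Commit size: %zu KiB\n", pmc.PrivateUsage / 1024);
        }
        return 0;
    }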



