In defence of swap: common misconceptions (2018) (chrisdown.name)
82 points by ingve 8 months ago | 97 comments



Swap is still actively harmful if placed on an unsuitable device.

A few days ago, I got a complaint about slow ops in a Ceph cluster, and one of the Ceph OSDs took a noticeably long time to respond to a simple "ceph tell osd.13 ops" query, which is the natural first step when debugging slow ops. Upon investigation, it turned out that one of the nodes had a significantly higher load average (50) than the others (14-18). In "top", out of the 256 GB of RAM that the node had, approximately 100 GB was cached and thus formally available, yet 6 GB of the 8 GB of swap was in use, and kswapd was consuming a non-negligible share of the CPU time.

The kernel is some 5.15.x Ubuntu kernel; the root disk (which also holds the swap file) is a cheap SATADOM, and SATADOMs in general (yes, I know there are exceptions) have miserable performance and low write endurance. I forgot to check the SMART statistics, so I cannot exclude that the drive had started failing, but I did check dmesg and there were no I/O errors.

The problem has been resolved by disabling the swap file. I should probably have set up swap on ZRAM instead, but a configuration without swap also works.
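
For anyone hitting something similar, the checks are quick; a rough sketch of what I looked at (device names are illustrative):

  free -h                 # how much swap is actually in use
  vmstat 1 5              # si/so columns show ongoing swap-in/swap-out traffic
  top -o %CPU             # kswapd0 near the top is a bad sign
  smartctl -a /dev/sda    # health/wear of the SATADOM, if it reports SMART at all
  sudo swapoff -a         # pulls everything back into RAM; can take a while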


That's a configuration issue on your part. You shouldn't put swap on devices that have basically no throughput.


That's on the customer's part, or, more properly said, on the Ubuntu installer's part. Defaults matter.


I just searched to see what Ceph recommends, and here is the funny part:

-- Configuring the operating system with swap to provide additional virtual memory for daemons is not advised for modern systems. Doing so may result in lower performance, and your Ceph cluster may well be happier with a daemon that crashes vs one that slows to a crawl. --

If a single ceph node crashes and restarts, I assume that the cluster will pick up the slack and heal/deal?


Correct. A node that crashed quickly (or where earlyoom killed something - I am looking at you, https://tracker.ceph.com/issues/48673) is visible only as a monitoring alert that goes away by itself, because all Ceph daemons are configured to be restarted after a crash by default. A slow/swapping node, on the other hand, leads to customer or end user complaints.


Ceph actually works very well with swap on zram. I have some older Ceph clusters that have 48GB zram devices used for swap.

It sometimes helps with defragmenting memory, memory pressure in general, and depending on your workload and other things running on the box, you could have better performance by being able to have more things in page cache.
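
For reference, a minimal swap-on-zram setup looks roughly like this (size, priority and compression algorithm are illustrative; zram-tools or systemd's zram-generator can do the same thing declaratively):

  sudo modprobe zram
  sudo zramctl --find --size 48G --algorithm zstd   # prints the device it grabbed, e.g. /dev/zram0
  sudo mkswap /dev/zram0
  sudo swapon --priority 100 /dev/zram0             # prefer zram over any disk-backed swap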


> Swap is still actively harmful if placed on an unsuitable device.

Reading that, I just realized it could be done over the network - that would be impressively glacial. Should be a good laugh.

Heck, why not over a db9 null modem slip connection? Start a compile and send it off to the Long Now Foundation - set it up next to John Cage's ASLSP organ.


You can laugh - and maybe should laugh - but this is in fact a real thing. Swapping to NFS was... I won't say common, but not unheard of once upon a time, almost always on diskless machines. You boot from the network, you mount your root filesystem and home over the network, you swap over the network. These days it's better to use NBD (network block device) for that, at least on Linux, but the option remains. And yes, it remains a pretty niche thing, because, yeah, the network isn't exactly ideal for performance, but it's available if you need it.


IIRC nbd even explicitly supports being used as a swap device (swap on some generic network FS is a bad idea because it is essentially guaranteed to deadlock the VM subsystem), and I have seen it used that way for diskless workstations in the early 00s.
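
If memory serves, the relevant flag is -swap; roughly (host and export name are made up, and the exact invocation differs between nbd-client versions):

  sudo modprobe nbd
  sudo nbd-client someserver -N swapexport /dev/nbd0 -swap   # -swap mlockall()s the client to dodge the deadlock
  sudo mkswap /dev/nbd0
  sudo swapon /dev/nbd0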


That's the vision of CXL :-) To be fair, the network is very very fast.

https://en.wikipedia.org/wiki/Compute_Express_Link


Oh this is a special cluster interconnect.

Yes, those have always been impressive and usually cost as much as a luxury sports car.

Hopefully the increased demand from the AI world will help bring those prices down.


infiniband is pretty cheap now!


Apps would crash before the device got the traffic back telling it what to swap out.


swap would be fast enough on NVMe-oF for anything not a DB, no?


I'd imagine latency matters more than throughput. Complicated disk operations over my 10Gb fiber network still suck like they did over 10baseT 30 years ago. Maybe there are smarter ways to set things up, but NFS/sshfs is still bogged down for me: 1/5th to ~1/100th of the speed, depending on the application.

Also I don't have fancy money. I was imagining the $20 gear on Amazon/eBay. Maybe that's my problem...


Oh, I meant putting the NVMe protocol on a wire, with such drives on the other side. Not traditional network app protocols.


But why do you have only 8 gigs of swap?

The OSD's main job is file serving, so you want to use as much RAM as possible as virtual tiered storage (caveats apply); having a large swap to hold useful but rarely used bits of RAM is really useful for performance.

Our file servers used to have ram+~30% as swap.


> This means that, in general, vm.swappiness is simply a ratio of how costly reclaiming and refaulting anonymous memory is compared to file memory for your hardware and workload.

But I think this ignores the likelihood that I want this particular memory reclaimed. It is not equal for files and anon memory. At least on my systems, if anon memory is dirty, it's because I did something with it, and if I haven't freed it, that means I'm still working with it. But there's all sorts of stuff that just kind of ends up in the disk cache, which I will never touch again. It is irrelevant to me that refaulting a swapped anon page is no slower than rereading a file, because the probability that the first will happen is so much greater than the second.

For all that systems try to guess about which pages are in use, or will be used again soon, I find they pretty much all get it wrong and as a result the system gets janky, and it's best to simply weight keeping anon memory in core much higher. I expect disk access to be slow when I read a file. I do not expect ctrl-tab to be slow, ever.
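
For what it's worth, that bias is more or less what a low vm.swappiness expresses; a hedged sketch (the exact value is a matter of taste):

  sysctl vm.swappiness                  # default is usually 60
  sudo sysctl -w vm.swappiness=10       # prefer dropping file cache over swapping out anon pages
  echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf   # persist across reboots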


>I do not expect ctrl-tab to be slow, ever.

I think ctrl-tab might cause a disk read even without swap, because some/all of the binary of the program you're running might have been evicted from memory (backed by the program file, not swap) and need to get faulted back in when you hit ctrl-tab. The disk read might occur even without eviction, if that part of the program had never been read into memory yet.


Yes.

Please realise that the things that can be evicted from RAM include the code and static data of the programs you are running. If you switch off swap, then you are disqualifying from eviction stuff like data pages that a program sets up at process start and then never touches again, which is sometimes a distressingly large amount. That then ties up RAM that could be used to provide a more responsive system.

The ideal contents of swap are those data pages that are never going to be touched again. On a normally running system that doesn't have massive memory pressure, that is basically all that will make it into swap, and that's fine because those genuinely are pages that you won't need, and they aren't causing the problems with alt-tab.

What probably is causing the problems with alt-tab is one of two possibilities. Either the system is over-burdened and it is having to swap out large amounts of working set (to which the solution is to install more RAM or reduce the load, not switch off swap), or you have been transferring large amounts of data through RAM, causing the disc cache to evict parts of a program you haven't used for a long time in favour of the files that another program keeps loading in. Switching off swap won't help much with that either, because the system is quite happy to evict parts of that program that aren't anonymous pages. One solution is to get the program that is handling lots of data to ask the OS kindly not to cache certain things, if it knows, for instance, that it will only load them once.
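
As a small concrete example of that last point, a one-off bulk read can opt out of the page cache entirely; a hedged sketch assuming GNU dd (the path is made up):

  # stream a big file without polluting the page cache (O_DIRECT wants largish, aligned block sizes)
  dd if=/path/to/huge.dump of=/dev/null bs=1M iflag=direct
  # a program can do the same per file descriptor with posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED)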

I do think that Linux's behaviour has improved massively over the last 10 years or so. A lot of these myths about swap were created before the improvements, and we should stop believing them.


Do I understand correctly that a low swappiness basically also means higher priority of anon pages over disk caching, simply because the former uses more space and the latter "runs out" eventually? I.e. low swappiness would achieve what you want, sort of?


The article is from 2018 and has had interesting discussion here before [1].

My conclusion is: In production, in a datacenter, when code is stable and compute-per-dollar efficiency matters? Yeah, sure, I can believe that swap makes sense.

On a dev machine? Not on mine for sure. If you think swap is a net positive on a dev machine, then try pasting the following (wrong) code in a bash prompt on Linux:

  echo '
  #include <stdlib.h>
  #include <stdio.h>
  #include <string.h>
  
  int main()
  {
    for (int i = 0; ; i++) {
      char *mem = malloc(1 << 30);
      if (mem == NULL)
        return 0;
      memset(mem, 42, 1 << 30);
      printf("%5d GiB\n", i);
    }
  }
  ' | cc -O3 -o crash_my_laptop -x c -
  ./crash_my_laptop
We can discuss the results in a couple of hours, when you recover control of your machine. (Note: with no swap and no z-ram, it crashes after 10 seconds; no side effects.)

[1]

https://news.ycombinator.com/item?id=40582029

https://news.ycombinator.com/item?id=39650114

https://news.ycombinator.com/item?id=38263901

https://news.ycombinator.com/item?id=31104126

https://news.ycombinator.com/item?id=29159755

https://news.ycombinator.com/item?id=23455051

https://news.ycombinator.com/item?id=16145294

https://news.ycombinator.com/item?id=16109058


> On a dev machine? Not on mine for sure. If you think swap is a net positive on a dev machine, then try pasting the following (wrong) code in a bash prompt on Linux:

> We can discuss the results in a couple of hours, when you recover control of your machine. (Note: with no swap and no z-ram, it crashes after 10 seconds; no side effects.)

That might be true for this contrived example. But my real-world experience is exactly the opposite.

In a case of memory over-usage (in my case because of working in a huge bazel project and several browsers open + some electron apps):

- without swap, the system at some point suddenly got so incredibly slow that the only thing I could reasonably still do was restart my machine, while

- with swap, the system (at higher memory usage) just got noticeably slower, and I could close some app / the browser to get back into the normal usable regime.


My real world experience is the same as GP's and the opposite of yours.

In case of memory over-usage:

With swap: system becomes unresponsive

Without swap: OOM kills offending process

This has been my real world experience 100% of the time on Linux.


I'll add that swap is required for certain sleep modes (hibernate) so there are reasons to use it. I typically create a swap partition that's 1.2x system RAM on laptops for this reason, but don't for desktops (which also usually have more system RAM).
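
If anyone wants to check their setup: hibernation needs swap at least as big as the memory image being written out, plus a resume= hint so the kernel knows where to read it back from. A rough sketch (the UUID is illustrative, and distros wire this up differently):

  swapon --show            # confirm the swap partition exists and check its size
  cat /sys/power/state     # should list "disk" if hibernation is supported
  # kernel command line needs something like: resume=UUID=<uuid-of-the-swap-partition>
  sudo systemctl hibernate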


Let's be honest, users of this website are literally the worst possible group to reason about "typical PC usage"


This reads like something from the 90s. Buy however much you think you need as swap as "more RAM" instead and call it a day. I've been running my desktop Linux without swap for maybe 20 years now, and I have never seen any of this "pathological behaviour at near-OOM".

And I have time to think about better problems than "You can achieve better swap behaviour under memory pressure and prevent thrashing by utilising memory.low and friends in cgroup v2."
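
(To be fair, the knob itself is a one-liner these days if you ever do want it; a hedged sketch with an illustrative slice and value:)

  # protect interactive sessions from reclaim before everything else gets squeezed
  sudo systemctl set-property --runtime user.slice MemoryLow=4G
  cat /sys/fs/cgroup/user.slice/memory.low    # the cgroup v2 file the setting lands in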

The core problem with swap is that nowadays you need gigabytes of swap to actually make a difference (when you have 32GB of RAM, 1GB of swap is only 1/32 of a difference). And you certainly don't want the system to swap in and out gigabytes of memory.


Given that the author works at a large company that has millions of machines, where any swap change has cost implications in the tens of millions of pounds, I suspect the article is based on data rather than vibes.

Swap is less critical now, and given that k8s has some pathological dislike of it (again, I suspect based on vibes), I can see why people don't have it.

However, when you are using close to 70% of your total RAM, swap improves performance significantly. Also, where I work, it stops the OOM killer from killing my repo VFS layer when I'm compiling something.

> The core problem with swap is that nowadays you need gigabytes of swap to actually make a difference

It was always the case, but then a TB of disk isn't that expensive anymore.


> Given that the author works at a large company that has millions of machines, where any swap change has cost implications in the tens of millions of pounds, I suspect the article is based on data rather than vibes.

Yes, but the author is most likely optimizing for a metric that I don't care that much about on my personal computer, while dismissing metrics that I do care about.

> It was always the case, but then a TB of disk isn't that expensive anymore.

But swapping out GB of RAM is expensive in terms of latency.


but it's rarely GBs at the same time. That's the point. It's a dumping ground for stuff that's not used very often, meaning that you can have a bigger VFS cache, which reduces latency


> but it's rarely GBs at the same time. That's the point. It's a dumping ground for stuff that's not used very often.

But it has to be GBs to actually make a difference. If your swap is not full of GBs of data, then obviously you don't need it (or it just becomes a reserve for when you are in an OOM situation - but that is somewhere you don't want to be even with swap available).


> > but it's rarely GBs at the same time. That's the point. It's a dumping ground for stuff that's not used very often.

> But it has to be GBs to actually make a difference. If your swap is not full of GBs of data, then obviously you don't need it (or it just becomes a reserve for when you are in an OOM situation - but that is somewhere you don't want to be even with swap available).

I think the parent meant that it rarely needs to read/write GBs at the same time, not that that much swap is never in use at the same time.

So you can use a lot of swap, but will never need to read it back in all at once.


indeed, thank you for explaining it clearer than me!


I have the same experience.

I stopped using swap on any of my computers, laptops, desktops or servers, more than two decades ago.

At that time, removing the swap was definitely a great improvement (with the condition of having installed enough real memory in the computers; but even with insufficient memory it was much better to have a few processes killed instead of having an unusable computer that had to be forcefully rebooted, losing everything that had not been already saved).

Perhaps the handling of swap has been improved meanwhile and in certain unusual niche environments having swap might provide some benefits.

Nevertheless, I doubt very much that this is true. I have never seen any such case where swap could have been useful.

When there is enough memory, there should be no writes to the swap, because such writes can only diminish the performance, and in an unpredictable way, because they can trigger the SSD garbage collector. If there are no writes to the swap, there is no reason for it to exist. If there isn't enough memory, the cheapest solution is to buy more memory, instead of trying to find a software workaround for that.

The memory pages that are not dirty can always be discarded by the operating system and read again later from the SSD if needed. No swap is needed for that. There exists no way in which swap can improve performance in comparison with having enough memory. Having to also discard dirty pages is just another way of saying that there isn't enough memory and you are willing to degrade the performance instead of paying for enough memory.

Swap memory is certainly never needed in personal computers, outside of temporary use in certain exceptional situations, when one would be willing to run very slowly a program that does not fit in the memory, due to lack of access to a computer with more memory.

Swap could be useful only when configured in a great number of servers that run a well characterized workload, where one could make a trade-off between application performance and total memory cost.


You're on a desktop.

In a server environment you don't have the luxury of running your app that needs 4 GB on a 128 GB machine - that would mean you have overprovisioned and are paying a stupid tax.

A modern server aims to run as close as possible to 100% CPU and RAM usage.

In those cases you need swap to kick out rarely used memory (initialization, ...)


> And you certainly don't want the system to swap in and out gigabytes of memory.

Why though?

Swapping 32GB of memory in and out of an NVMe SSD is much faster than swapping 1GB out of a spinning disk so I really don't get that argument.


> Swapping 32GB of memory in and out of an NVMe SSD is much faster than swapping 1GB out of a spinning disk so I really don't get that argument.

This is true, but it still takes time. What kind of workload do you have where you swap out 32GB of RAM? If you are in this situation you almost certainly need to buy these additional 32GB of RAM.


What if I have no more free DIMM slots, but I already have 1 TiB of NVMe and I am fine with my workload finishing slightly (or even noticeably) slower, as opposed to not being able to run it at all?

As an example of such workload: a server, one of the many, that runs financial calculations from many web users. It's fine for it to become somewhat slower: we'll notice the performance degradation in the metrics and stop accepting new connections for a while. It is not fine for it to just die with OOM ― the users automatically will reconnect to other servers, leading to a sort of a thundering herd scenario.


> What if I have no more free DIMM slots, but I already have 1 TiB of NVMe and I am fine with my workload finishing slightly (or even noticeably) slower, as opposed to not being able to run it at all?

Then you obviously have a workload where having swap makes sense (and you could even add swap on the fly in such a case). But that is not typical at all.
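
(Adding it on the fly really is just a few commands; a hedged sketch with an illustrative size - note that swapfiles on btrfs/ZFS need extra care:)

  sudo fallocate -l 16G /swapfile   # fine on ext4/xfs; btrfs wants chattr +C, ZFS doesn't do swapfiles
  sudo chmod 600 /swapfile
  sudo mkswap /swapfile
  sudo swapon /swapfile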


On a desktop/laptop, where most of the “used” RAM (browser tabs) isn't actually used at once.


Firefox (and I guess Chrome as well) throws away the memory of unused tabs.


You can't do that in all situations, since they could have important state.


Which is incredibly annoying and thankfully can be disabled.


But it is still more than an order of magnitude slower than RAM, and is harmful for the disk cells.

If you have enough money to provision NVMe and still swap multiple gigabytes, just get a 16GB RAM stick for $40.


>just get a 16GB RAM stick for $40

Laptops come with soldered RAM now where upgrades are not possible. And most people use laptops nowadays instead of desktops.

Buying another laptop to double your ram just because you hit the swap once in a blue moon is kinda wasteful.


If you are frequently swapping out 32GB, a laptop is not the right tool for the job.


Where did I say anything about swapping out entire 32GB? Why does everything need to be binary in this argument? As in you either swap 32GB or have swap disabled?

What if you from time to time you only swap a couple of GB? Isn't that better than having your system completely lock up and need rebooting?


> If you have enough money to provision NVMe and still swap multiple gigabytes, just get a 16GB RAM stick for $40.

Why should I pay $40 when I have way more than 16GB of free SSD storage on my computer already?!


Because:

> it is still more than an order of magnitude slower than RAM, and is harmful for the disk cells.

I mean, if that's not an issue for you, be my guest. But that's not the global optimum.


It's not slow enough for me to notice, and the disk cells will likely outlive the rest of the computer anyway. It's not a first-generation SSD anymore.

Reusing hardware you already have clearly feels more optimal than buying new one because it makes things unnoticeably slower.


It's not that hard to max out system capacity on desktop boards. I'm using the maximum possible 128 GB RAM on my desktop.


NVMe is cheap in that I don’t really need everything I bought. But I use all of the RAM I have.


SSD write endurance scares me more than running out of memory.


Why? Have you actually looked at the numbers and done the math, or are you scared by FUD, thinking the sky is falling every time write calls are made to the SSD?

I treat my 1TB SSD like a loaner, and after doing the math based on my write patterns over the last 2 years, the SSD should wear out in about ..checks notes.. 12 years. I'm a lot more likely to replace it in 2-4 years anyway, to upgrade to a bigger and faster model.
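
For anyone who wants to run the same numbers, the raw data is in SMART; a rough sketch for an NVMe drive (the attribute names differ on SATA, and the endurance rating comes from the vendor's spec sheet):

  sudo smartctl -a /dev/nvme0 | grep -i 'data units written'
  # each NVMe "data unit" is 1000 x 512 bytes, so 400,000,000 units is roughly 200 TB written
  # years left ~= rated TBW (e.g. a few hundred TBW for a typical 1 TB TLC drive) / your TB written per year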

Plus, all storage dies eventually, that's why backups are important. You paid for it, you might as well make the most use of it while it lasts instead of trying to "hypermile it" since you won't be leaving it as inheritance to your grandchildren.


You're correct, but I'm not sure it's FUD; it's more a legacy of the limitations of early SSDs, which were indeed quite vulnerable to wear.


But we don't live in legacy times, we live in the present. So is it too much to expect people on a tech forum to be at least a bit up to date on present tech instead of regurgitating fears 10 years out of date?

If you're wearing out your SSD life endurance in a few years, you have an issue.


I'm not disputing that (hence the “You're correct” at the beginning of my response), I'm just reacting to your mention of “FUD”.


Spend your $40 on backups first, not on more RAM.


Yeah, the only reason I've ever needed swap was to compile chonky C++ packages on SBC systems that simply don't have enough RAM to fit it all. With swap it works fine; otherwise the system just freezes entirely. Despite what OP claims, emergency memory is literally the only use case I've ever had for it over the years. On a desktop, buying more RAM is trivial.


Yeah. Some of the reasons on that list are debatable

I think swap is useful as a temporary memory overflow area, but that's it. Actually running while relying on swap is harmful.

Items 1 and 2 on the list sound like fiction. Item 4 sounds plausible (same for the Windows 9X series - sigh - where cache and swap would allegedly fight each other)


The problem isn't really with swap, it's that people try to use a very blunt instrument (disabling swap) to deal with an ill-defined problem (thrashing and the system slowing down to a crawl).

There's really no way to tune a system's swap size for ideal responsiveness. You can have a 32GB RAM + 32GB swap system perform great, and a 32 GB + 8 GB system perform miserably.

The problem is the working set, which is an often hard to predict quantity dependent on runtime factors. You can swap out 30GB of unused junk to disk and have excellent performance, but if you're constantly going through 33 GB of data on a 32 GB system, performance is going to be awful.

So really, what you want in order to avoid this last case hanging the system is to measure the system's responsiveness and act on that, instead of trying to get rid of the swap file. You can do that with something like systemd-oomd, which starts killing things when the system is actually getting slow rather than when some arbitrary usage number is reached.
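
For reference, systemd-oomd keys off the kernel's PSI (pressure stall) metrics rather than raw usage numbers; a hedged sketch of what enabling it looks like (the unit and limit are illustrative):

  cat /proc/pressure/memory     # the pressure numbers oomd watches
  sudo systemctl enable --now systemd-oomd
  # opt a unit in, e.g. the user session:
  sudo systemctl set-property user@1000.service ManagedOOMMemoryPressure=kill ManagedOOMMemoryPressureLimit=50%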


This is important knowledge. My past misconception was that without swap the system would be more efficient: programs would be killed when running out of memory instead of inefficiently running on disk-backed memory. In reality, Linux becomes totally unresponsive when running out of memory on systems without swap. This is because, instead of swapping out the least used parts of memory, it frees RAM by evicting executable program code and shared libraries, since those can be re-read from disk.


My experience is opposite.

Linux (long ago) always became unresponsive when running out of memory with swap. Removing the swap solved this problem and it remained perfectly responsive when running out of memory.

I have no longer used swap for many years, so I do not know the current behavior with swap, but I have seen out of memory cases on modern systems without swap and I have never seen again a case when the computer becomes unresponsive.


I wonder if there's knobs to fix that. One of the organizations I worked at had a very aggressive OOM killer.


I have yet to see substantial swap usage outside of memory pressure scenarios, which means the benefit is marginal. All processes seem to be reasonably optimized not to leave lots of never-accessed memory around, or they have a sparse access pattern that touches a lot of pages.


I think you're talking about your personal machines and this author is talking about data centres. If there's never any memory pressure in your data centre you're wasting money on RAM.


The article does not mention anything like that. And even in DCs, over-optimizing for memory pressure carries the risk of latency spikes or worse.


Excuse me for not reading the whole article with full attention - perhaps I missed an actual answer to my question - but what I never stop wondering about is how swap is even a useful part of the whole setup when I have what can only be described as ridiculously big RAM: far more than enough for anything I do, comparable in size to my whole SSD, and certainly bigger than the unused space on it. Also, how is having e.g. a 16 GB swap partition/file better than deleting it and installing an extra 32 GB DIMM instead? To me it seems the whole idea of swap should be deprecated (outside embedded kernels), as real RAM is affordable enough to buy any amount you may need. In fact, I have been just getting as much RAM as the motherboard would support (sometimes unofficially, sometimes with the help of BIOS code hacks) and disabling swap since the days of Windows 98.


Do my eyes deceive me, or is nobody talking about what the article actually says?

There is exactly one case where swap can be helpful: when

1. there are low traffic memory pages that can be swapped out in lieu of

2. page caches in the working set that would otherwise be dropped, and

3. you can't/don't want to pay for more memory.

In particular: swap is NOT for when your system is out of memory.
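
A quick way to tell which regime a box is in: swap that is merely parking cold pages shows next to no swap traffic, while an out-of-memory box churns constantly. Roughly:

  vmstat 1 5    # si/so near zero with swpd > 0: cold pages parked, fine
                # si/so persistently high: the working set doesn't fit, and swap won't save you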



Best use, and perhaps only use, of swap on a laptop or desktop: hibernation.


Swap is still mostly mysterious to me (and the article is mostly over my head). My current naïve understanding is that swap is slower memory, and a disk cache is faster disk. I remember back in the Windows XP days moving the pagefile to a RAM disk. That significantly decreased latency when (re)running programs. These days I disable swap on my 64 GiB Windows box and keep temp files on a RAM disk, mostly for security reasons.


I think what many people forget is that applications can ask for a bunch of memory and then never do anything with it (or just do something with it once).

Imagine you want to write something down, so you ask me for a notebook. Now you start writing very slowly such that I can tell you're not going to need more than one page of that notebook for quite a while.

I don't want to take the notebook away from you because you're fussy and you insist on having the entire notebook to yourself, but someone else comes along also wanting to write something down.

Me being the clever kernel that I am, decide to let you keep a "virtual" copy of your notebook, but really only the first page of it is "real" paper and I give the notebook to the next person who ALSO insists on having an entire notebook to write down a few words on a page.

I can now have 20 people all writing in their own "virtual" 100 page notebooks, while I really only have a single 100 page notebook to go around.

Only when the combined total of all needed pages REALLY hits 100 pages will I start to run into issues.


Surely we can do this without writing anything to disk?


I didn't really complete the analogy.

Once you hit 100 actual pages of your notebook, I could either give up completely or I can say "Hey that first page you wrote to, I noticed you haven't used it in a while... I'm just going to store that in the filing cabinet."

Now again we're fine, and we're even past 100 pages, but if you ever need to read the page I stored in the filing cabinet, I'll have to swap it back in, displacing another page, which will be time-consuming and inefficient.
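
The kernel's version of this bookkeeping is visible directly, for what it's worth; a small example using fields that exist in /proc/meminfo:

  grep -E 'MemTotal|CommitLimit|Committed_AS' /proc/meminfo   # promised "notebooks" vs. real pages
  cat /proc/sys/vm/overcommit_memory                          # 0 = heuristic overcommit, the default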


“moving the pagefile to a RAM disk.” is a hacky way to emulate disabling swap :P

In an ideal world, swap would automagically move stuff that you are not likely to use soon out of RAM, even when you use poorly written programs that don't do that explicitly. And the disk cache would automagically keep stuff in RAM that you are likely to need again soon, even when you are using poorly written programs that don't do that explicitly.

So, ideally, they work together to maximize the effectiveness of your RAM.


Wait, what? Moving the pagefile, which should cover large memory allocations (not necessarily large memory usage, just the allocation), to a memory-based disk. Well... that's interesting, to say the least!


Please don't do that anymore! Windows requires a page file. Page files should not be on a RAM disk; there is a hierarchy of memory. Temp files should be on a disk, where they belong.


> Swap is still mostly mysterious to me ... I remember back in the windows xp days

I mean no disrespect, but why in 20 years haven't you just read up on or watched a video explaining swap and disk cache?


None taken. I guess because I haven't really had the need to know more than I do about it. Maybe you can enlighten me. Can you explain to me why my xp installation behaved much more snappy after I moved the page file into a ram disk? Is swap really useful these days, or is the overhead too wasteful? Is it perhaps a crutch from the time it was necessary but now only lives on because of a kind of learned helplessness from programmers who don't need to care about modestly allocating memory?


You likely had much more RAM than the average person did, right? My understanding was that XP was designed around the needs of the average consumer, and the OS was written in such a way as to almost mandate swapping, rather than doing something smarter that could avoid swapping if the machine had more than enough RAM.

Because that swapping would occur regardless, there was a performance penalty even when you didn't actually need the swap. Pointing the swap back at RAM circumvented that.

I wouldn't say it's a crutch, as there are still plenty of users, machines and situations where RAM might run out, so swapping is better than the apps crashing. But yes, it's becoming increasingly obsolete.


Thanks, makes sense.


Although for different reasons (and different talking points), it reminds me of the discussion about swap (virtual memory / paging file) on Windows, and all those "great" guides suggesting to set it to a fixed size or to bluntly turn it off because "no one needs it anyway".


Fixing the page file's size made sense for the time.

The page file would have resided on HDDs; having the size fixed meant it would occupy one contiguous chunk of space on the platters, thereby maximizing access speed and minimizing latency. A page file of variable size would inevitably become fragmented, which manifests as horrible access speed and latency.

Nowadays with the page file residing on SSDs this advice isn't relevant anymore, but that doesn't mean it wasn't relevant during its time.

As an aside, I still stick to the RAMx1.5=Swap rule because I'm just far too lazy to really optimize it, drive space is cheap now, and I still don't trust Windows to be smart with sizing it. Yes, that means I have a 96GB page file in my Windows 11 machine with 64GB of RAM; yes, it works and I don't care any further. :V


Well, fixed pagefile sizes are (were) fine if you knew definitively which tasks / applications you use.

But for a lot of people (especially in gaming, but not limited to it), fixed pagefile sizes caused problems when applications overallocated memory beyond the pagefile size (not necessarily using it; just allocating it was already enough to trip an out-of-memory situation, I think).


Assuming that OP is correct about everything, it isn't shown anywhere that swap must reside in persistent storage. It could be stored in RAM, couldn't it? I mean, you always have a fixed maximum amount of swap, so if it fits in memory, then why not put it in memory? But then why bother with swap?


> then why not put it in memory? But then why bother with swap?

because RAM is expensive compared to disk.

RAM is normally fixed, whereas disk is easy to add.

Storing not-used but still-needed pages on disk allows for much greater performance with fewer resources; for example, it allows a bigger VFS cache.


So hypothetically, if ram was as cheap and easy to add as storage, swap would not make sense?


depending on OS and upper limit, yeah RAM all the way. But I suspect the kernel doesn't think like that. I know that OSX does things differently, so perhaps there are better ways


It can be, that's what zswap is all about.


I'm a desktop user, with most of my usage either being gaming or compiling. I don't typically run into memory limits in either usecase. Given that I almost always have free RAM available, what does swapping to the disk do for me? I have free RAM; why not use it?


On consumer machines with a lot of ram, there shouldn't be a need for swap.

If the OS is written so inefficiently that it performs worse without swap, put the swap on a ram drive to satisfy it without losing any performance.


I don't get this article. Especially not the things about unreclaimable memory, and the With swap / Without swap distinctions. It seems what OP is describing is just .. having more RAM.


Just use zramswap if you dislike the idea of swap


I found that helpful.


Common fallacy in sysadmin:

- A user has a system with X RAM and Y swap. They buy Y RAM. If one compares (X RAM + Y swap) versus (X+Y RAM), it's obvious that they don't need swap anymore.

Thing is, the correct comparison is (X RAM + Y swap) versus (X+Y RAM + Z swap), where Z is a new amount of swap.


What's the rationale for the downvotes? The line of thinking I've highlighted is very widespread, and it's flawed.



