How to set swappiness permanently to 0 on Proxmox 9

Ralf Wolbers

May 3, 2025
On Proxmox 8.4 the following two commands set swappiness permanently to 0:

sysctl -w vm.swappiness=0

echo "vm.swappiness=0" | tee -a /etc/sysctl.conf

The value stays 0 after a reboot. On Proxmox 9 it is not saved permanently; after a reboot it is 60 again.
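
In case it helps while debugging, a sketch of the drop-in approach (assuming Proxmox 9's systemd-based sysctl handling; the `99-` file name is just an example). Files in /etc/sysctl.d/ are applied in lexical order, so a later-sorting file wins, and something else on the system may be setting vm.swappiness back:

```shell
# Put the setting in its own drop-in instead of appending to /etc/sysctl.conf
echo "vm.swappiness=0" > /etc/sysctl.d/99-swappiness.conf

# Apply all configured sysctls now and verify the result
sysctl --system
cat /proc/sys/vm/swappiness

# Look for any other file that also sets vm.swappiness and may win
grep -r swappiness /etc/sysctl.d/ /usr/lib/sysctl.d/ /run/sysctl.d/ /etc/sysctl.conf 2>/dev/null
```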
 
Why would you do this? There are good reasons for swap even on hosts with more than enough memory:
https://chrisdown.name/2018/01/02/in-defence-of-swap.html

You could even use part of your RAM as a compressed swap device, reducing I/O times and giving more headroom before memory runs out: https://pve.proxmox.com/wiki/Zram

See also: https://forum.proxmox.com/threads/recommended-amount-of-swap.171319/post-798247
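
For what it's worth, on a Debian-based PVE node a compressed-RAM swap device can be set up with the zram-tools package (a sketch; the ALGO/PERCENT values are examples, see the wiki page above for details):

```shell
apt install zram-tools

# Allocate up to 25% of RAM as a zstd-compressed swap device (example values)
cat >> /etc/default/zramswap <<'EOF'
ALGO=zstd
PERCENT=25
EOF

systemctl restart zramswap.service
swapon --show    # should now list /dev/zram0
```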
I've seen that same article, @Johannes S.

One of the reasons messing with swap at all is so frustrating is that it's so dependent on the individual build.

Proxmox's default ZFS-based install doesn't even create swap at all (see the attached screenshot).
 
Why would you do this? There are good reasons for swap even on hosts with more than enough memory:
https://chrisdown.name/2018/01/02/in-defence-of-swap.html

You could even use part of your RAM as a compressed swap device, reducing I/O times and giving more headroom before memory runs out: https://pve.proxmox.com/wiki/Zram

See also: https://forum.proxmox.com/threads/recommended-amount-of-swap.171319/post-798247

There are lots of reasons you don't want to use swap day to day, and why it should be treated as a last-resort resource. I actually wrote an article on the matter: https://it.lanified.com/Articles/Youre-Using-Swap-Memory-Incorrectly

But in the case of a PVE node, the fact is that all data in swap, just like on every other OS (Windows, other Linux, etc.), is not in RAM; it is on a disk. This means that:
  1. That disk has increased wear that is avoidable
  2. The data is slower to interact with, as it is now back on storage instead of in RAM (the article gives numbers on the magnitude of this)
  3. You're using what should be an emergency buffer space day to day as a band-aid solution instead of resolving the root cause
  4. The systems that rely on the data in swap are running slower because that data is not in RAM

I've been working with Proxmox VE clusters for over 13 years now, and I have seen nothing but upsides to setting vm.swappiness to 0, while still having some amount of swap capacity available for emergencies. Additionally, using tools like zram is yet another band-aid solution that consumes more CPU to kick the actual problem down the road.

What does the proper solution look like? A handful of things:
  1. Make sure you actually have reliable metrics being pulled from all your systems so you can make informed decisions (the web GUI in PVE is horribly inaccurate for RAM usage _inside a VM_, but is useful for RAM insights on each PVE node itself). In one of my environments we use LibreNMS and pull all the data via SNMP, but this can be achieved with other tools like Grafana + Prometheus, etc. If you don't have accurate, reliable metrics, you can't make informed decisions.
  2. Review the ACTUAL RAM usage of each of your VMs and containers. I have worked in countless environments (Windows and Linux) with too much RAM allocated to some VMs and not enough to others. This can be derived from the metrics in #1, because you'll have historical data to work against when deciding how much RAM each system _actually_ cares about; some systems fluctuate their RAM usage over various periods of time, and some do not.
  3. Turn RAM ballooning off; it causes more performance and capacity problems than you realise. Windows and Linux do not properly release RAM through ballooning, so it doesn't even properly work for scaling down anyway. Additionally, it takes more resources on the hypervisor node to manage ballooning for each VM, and this has compounding performance effects. By turning ballooning off, you get stronger guarantees of expected capacity and eliminate a performance issue you did not realise you had.
  4. Once you have properly sized all your VMs and containers for RAM usage, you can determine the next step: install more RAM or not. Until the last few months, RAM has been very cheap. I don't know when sanity will return to the RAM market, but for the sake of explanation we're going to operate inside a sanity bubble where RAM prices are sane. If at this point you're still choked for RAM... INSTALL MORE. RAM is cheap and fast; disk (swap) is slow and wears out.
A few thoughts to add. I have email alerts sent to me when _ANY_ of my systems (physical, VMs, etc.) use >10% of their swap capacity, because it typically indicates something is wrong before it's gone wrong. I also have all my Linux VMs and desktops periodically flush swap, because it's really that big of a problem. But I always want swap to be present (except for k8s nodes, for reasons I won't get into here) so that if something really explodes, it can use it.
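
A minimal sketch of such a check (the 10% threshold is from the post; the alert hook is a placeholder for whatever your monitoring stack uses):

```shell
#!/bin/sh
# Read swap totals from /proc/meminfo and compute usage in percent
eval "$(awk '/^SwapTotal:/ {print "total_kb=" $2}
             /^SwapFree:/  {print "free_kb=" $2}' /proc/meminfo)"
used_kb=$((total_kb - free_kb))
if [ "$total_kb" -gt 0 ]; then
    pct=$((used_kb * 100 / total_kb))
else
    pct=0
fi
echo "swap used: ${used_kb} kB (${pct}%)"
if [ "$pct" -gt 10 ]; then
    # hook your alerting in here, e.g. mail / SNMP trap / webhook
    echo "ALERT: swap usage above 10% on $(hostname)"
fi
```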

I've literally spent over a decade studying swap in Windows and Linux; there are a lot of misconceptions around it.
 
Swap Memory always has to exist on a permanent storage device, which is significantly slower than RAM
ZRAM was mentioned above. The article you linked feels "bloated" as it repeats itself a lot.
 
ZRAM was mentioned above. The article you linked feels "bloated" as it repeats itself a lot.

People remember things through repetition, and referencing earlier aspects of the same article is done for the sake of explanation. And yes, I know ZRAM was mentioned; that does not invalidate what I speak to, nor what the article outlines.
 
Repetition doesn't improve an argument though. Maybe I missed something, but as far as I can see none of your arguments contradicts Chris Down's point that modern Linux kernels use swap for memory management and that without swap "memory management becomes harder to achieve" (Chris Down, https://chrisdown.name/2018/01/02/in-defence-of-swap.html ). The potential performance and wearout issues related to physical swap can easily be mitigated by utilizing zram, and zram doesn't need to be large.
 
Repetition doesn't improve an argument though. Maybe I missed something, but as far as I can see none of your arguments contradicts Chris Down's point that modern Linux kernels use swap for memory management and that without swap "memory management becomes harder to achieve" (Chris Down, https://chrisdown.name/2018/01/02/in-defence-of-swap.html ). The potential performance and wearout issues related to physical swap can easily be mitigated by utilizing zram, and zram doesn't need to be large.

The article, as well as the points made in this thread, clearly spells out the performance and hardware wear costs of using swap day to day. I don't comprehend how exactly this is missed when the article spells it out so plainly. There's literally a section titled "Example numbers demonstrating the magnitude" which shows the performance impact in some example scenarios.

I wrote that article because of the egregious amount of misconceptions, misunderstandings, and frankly ignorance around the actual implications of relying on data in swap day to day. To clarify a possible misunderstanding, I'm again _NOT_ saying to turn off swap, but to _NOT_ have data in swap as a day to day occurrence. It is literally data _on disk_ by its very definition, and that a) increases wear on the relevant disk(s), b) reduces performance when that data needs to be interacted with, and c) is redundant since the data was probably previously already on disk in the first place.

The proper solution is to have more RAM and correctly size the systems in the environment, which again I clearly spelled out _in this thread_ and is being... seemingly intentionally... overlooked.

So I've made it very convenient, and well spelled out why swap is a bad idea. It's not my problem if you don't read what I have to say, or are willing to simply dismiss it for whatever reason. I've been working with these systems at many different scales (as well as many other hypervisors and operating systems) for decades now. I have experienced the tangible gains from properly sizing systems and tuning environments to not rely on swap in the regards I speak to. An article from 2018 does not invalidate my industrial experience and expertise on this matter. Again, I wrote the article and I'm speaking up on the matter here because there is a lot of bad information out there one way or another.

The original question I was replying to was "Why would you do this? There are good reasons for swap even on hosts with more than enough memory" which I have more than adequately responded to. Do with the information as you see fit. I'll go make environments run like dreams, and if you want to misuse swap and CPU cycles with zram, then by all means, create more work for me to consult for. I love it.
 
You assume that people use physical swap, in which case the performance and wearout issue is real. But this doesn't cover using a (rather small) zram swap device as a kind of virtual swap so the kernel memory management system can do its thing.
Edit: I now see that you were already told this at some earlier point: https://forum.proxmox.com/threads/zram-why-bother.151712/
If you still think the same (which you obviously do), that is your prerogative, but OP @Ralf Wolbers and other interested people might want to read the arguments from that thread, where people disagreed with your conclusions, so they can make up their own minds.
 
Swap should be used sparingly, if at all, on a server. If you run in a memory-constrained environment, you will end up with unpredictable application performance.

I don't set up swap on servers at all. If I see OOMs I either move VMs or add RAM.
 
What I don't see mentioned here is that just monitoring the swap usage is not as good as monitoring the swap-in AND swap-out activity. You want to monitor that, because pages that get swapped out and are not read back in soon actually don't matter much. A lot of swap-in/swap-out activity means that you do not have enough memory available (or something is misconfigured), and your system is going to be slow while wear and tear hits your disks.
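
That activity is visible in /proc/vmstat (pswpin/pswpout are cumulative page counts since boot); the si/so columns of vmstat report the same thing as a rate. A small sketch:

```shell
# Cumulative pages swapped in/out since boot
grep -E '^pswp(in|out) ' /proc/vmstat

# Pages swapped out over a one-second window
a=$(awk '/^pswpout / {print $2}' /proc/vmstat)
sleep 1
b=$(awk '/^pswpout / {print $2}' /proc/vmstat)
echo "pages swapped out in the last second: $((b - a))"
```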

For most of my systems, I use a combination of swappiness=1, zram as a first-tier swap and, if necessary and available, a second one on disk. I could not eliminate swapping completely, and it is happening mostly in containers. They sometimes just need more RAM than they have and begin to swap; maybe you have experienced something similar.

Edit: adding example

Code:
root@proxmox ~ > uptime
 09:43:52 up 46 days, 14:27, 14 users,  load average: 0.35, 0.45, 0.56
 
 root@proxmox ~ > free -m
               total        used        free      shared  buff/cache   available
Mem:           64150       16178       43820         884        6325       47972
Swap:           9638         171        9467

root@proxmox ~ > cat /proc/swaps
Filename                                Type            Size            Used            Priority
/dev/nvme0n1p2                          partition       5676028         0               -2
/dev/zram0                              partition       4194300         175616         

root@proxmox ~ > sysctl -a | grep swap
vm.swappiness = 1
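
As a side note, the tiering can be made explicit with swap priorities (the higher-priority device is used first); a sketch using the devices from the output above:

```shell
# Prefer the compressed-RAM tier; spill to disk only under pressure
swapoff /dev/zram0 /dev/nvme0n1p2
swapon -p 100 /dev/zram0
swapon -p 10  /dev/nvme0n1p2
```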
 
What I don't see mentioned here is that just monitoring the swap usage is not as good as monitoring the swap-in AND swap-out activity. You want to monitor that, because pages that get swapped out and are not read back in soon actually don't matter much. A lot of swap-in/swap-out activity means that you do not have enough memory available (or something is misconfigured), and your system is going to be slow while wear and tear hits your disks.

For most of my systems, I use a combination of swappiness=1, zram as a first-tier swap and, if necessary and available, a second one on disk. I could not eliminate swapping completely, and it is happening mostly in containers. They sometimes just need more RAM than they have and begin to swap; maybe you have experienced something similar.


In my exhaustive testing over the decades, both Windows and Linux do not reliably move data back out of swap into RAM. This is one of the important details behind why I not only monitor for any usage of swap at all on anything I care about (and alert if enough swap is used, >10%), I also have periodic swap-off-and-back-on tasks on most of my systems (mostly VMs, really) so that they can generally correct themselves when occasional swapping happens. If the swap usage persists long enough, I get alerted, and that's historically been a reliable indication of "oh, something is wrong, human, take action!"

I would care about data moving back out of swap if it were actually a reliable mechanism, but in my exhaustive experience with many differently sized and differently behaving Linux systems, I do not have any confidence that I can rely on it happening in any way I can plan around. Hence the manual tasks to flush it, and the alerting for when that doesn't go so well.
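
A guarded version of such a flush task might look like this (a sketch; the 1 GiB headroom figure is an assumption, the idea being to flush only when everything in swap clearly fits back into RAM):

```shell
#!/bin/sh
# Flush swap back into RAM, but only when it is safe to do so
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
swap_used_kb=$(awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t-f}' /proc/meminfo)
headroom_kb=1048576   # keep ~1 GiB of RAM free after the flush (assumption)
if [ "$swap_used_kb" -gt 0 ] && [ "$avail_kb" -gt $((swap_used_kb + headroom_kb)) ]; then
    swapoff -a && swapon -a
fi
```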
 
I wrote a little bit about swap monitoring here. If you use disk-based swap anyway, I'd recommend zswap.
Note that swapon is basically the "human readable" version of cat /proc/swaps, and with sysctl vm.swappiness you don't need a pipe and can also set the value directly.
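
Concretely (read-only forms shown; the -w form needs root):

```shell
swapon --show                 # human-readable version of cat /proc/swaps
sysctl vm.swappiness          # read one key directly, no pipe needed
# sysctl -w vm.swappiness=1   # set it for the running system (runtime only)
cat /proc/sys/vm/swappiness   # the file behind the sysctl key
```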

In my experience I have found disk (partition?) based swapping to be less than ideal. I don't mean in a performance perspective, more a systems administration perspective. I've generally converted all my systems that were partition/disk to file-on-disk swap method instead.

Consider the following...

With disk/partition-based swap, you effectively need to turn the system off to change the size of the swap (in either direction), unless you're feeling insane and prefer to resize your partitions _while they're mounted_ (the OS mount, for example). Whenever I need to resize partitions, I'm a fan of gparted in a live Ubuntu Desktop environment.

However, with the file-on-disk swap method, you can resize your swap (as you see fit) without having to interrupt any operations on the target system. Swap off, remake the swap file with the new size, swap on, boom, you're back in business.
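
That cycle might look like this (a sketch; /swapfile and 8G are placeholder values):

```shell
swapoff /swapfile            # stop using the old file
rm /swapfile
fallocate -l 8G /swapfile    # recreate at the new size
# (on filesystems where fallocate'd files can't back swap, use instead:
#  dd if=/dev/zero of=/swapfile bs=1M count=8192)
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon --show                # confirm the new size is active
```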

Just sharing some thoughts for the room.
 
You assume that people use physical swap, in which case the performance and wearout issue is real. But this doesn't cover using a (rather small) zram swap device as a kind of virtual swap so the kernel memory management system can do its thing.
Edit: I now see that you were already told this at some earlier point: https://forum.proxmox.com/threads/zram-why-bother.151712/
If you still think the same (which you obviously do), that is your prerogative, but OP @Ralf Wolbers and other interested people might want to read the arguments from that thread, where people disagreed with your conclusions, so they can make up their own minds.

Swap always exists on disk in some way. A VM's disk storage is literally on a disk somewhere, whether it's a NAS, SAN, Ceph cluster, or even local LVM. All of that storage is backed by SSD, HDD, NVMe, or some other physical medium, because that is where your permanent storage is kept. Windows and Linux use the same storage the VM does for its swap; this is a 100% reliable assumption, because that's how it works in reality.

ZRAM is not swap; it is compressed RAM. If you're using ZRAM as "swap" then you're literally using RAM to RAM while you RAM; xzibit would love to have a conversation, I think. Swap is, by definition, _not_ RAM, because it's meant to be (by design) a fall-back buffer when there is RAM capacity pressure (Windows and Linux). Sure, you _CAN_ put swap on a ramdisk or, as you say, ZRAM, but why aren't you just using the RAM to begin with for... RAM functions? That's not actually worthwhile.

Also, if you're going to "call me out" in another forum thread, do yourself and myself a favour: actually read the last comment in that thread. It's by me, and it clarifies a lot, plus has support. Don't half-ass calling me out. In fact, YOU literally thumbs-up'd my last two comments in that thread, so... dunno what you're trying to point out there...
 