[SOLVED] No swap in container

proximity

Hi,

In the config file I have a swap entry:
swap: 100000

But when checking top in the container I get:
MiB Swap: 0.0 total

There is also no entry in /etc/fstab in the container. I have added swap to the host via /etc/fstab:
Code:
/dev/nvme0n1p1 none swap sw 0 0

which shows:
Code:
# swapon -s
Filename                                Type            Size    Used    Priority
/dev/nvme0n1p1                          partition       134216700       0       -2
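(for reference, a minimal sketch of setting up such a swap partition — assuming /dev/nvme0n1p1 is dedicated to swap, since mkswap overwrites whatever is on it:)

Code:
mkswap /dev/nvme0n1p1                                 # write the swap signature
echo '/dev/nvme0n1p1 none swap sw 0 0' >> /etc/fstab  # make it persistent
swapon /dev/nvme0n1p1                                 # activate it now
swapon --show                                         # verify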

Any idea why swap is not appearing in the container, and how to fix it? Thank you.
 
hi,

what is your pveversion -v output?

also please post the full config of the container: pct config CTID
 
pveversion -v:

Code:
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-8 (running version: 7.0-8/b1dbf562)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph-fuse: 15.2.13-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.0.0-1+pve5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-7
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-2
lxcfs: 4.0.8-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.1-1
proxmox-backup-file-restore: 2.0.1-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.2-4
pve-cluster: 7.0-3
pve-container: 4.0-5
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-7
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1

pct config 100:
Code:
arch: amd64
cores: 20
cpulimit: 20
cpuunits: 1024
hostname: server1
memory: 110000
mp0: data:subvol-100-disk-0,mp=/mnt/data/,size=1000G
mp1: data_zfs:subvol-100-disk-0,mp=/mnt/data2/,size=1G
net0: name=eth0,bridge=vmbr0,gw=xx.xx.xx.xx,hwaddr=xx:xx:xx:xx:xx:xx,ip=xx.xx.xx.xx/24,type=veth
onboot: 0
ostype: ubuntu
rootfs: local-zfs:subvol-100-disk-0,size=20G
swap: 100000
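(note: the memory and swap values in a container config are in MiB, so swap: 100000 requests roughly 97.7 GiB of swap for the container.)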
 
do you have any swap on your host?

if you've installed with ZFS as the root filesystem, swap is not set up by default, since it seemed to cause issues.
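(quickest check on the host:)

Code:
swapon --show
free -h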
 
Yes. I have ZFS as storage, but I added a swap partition on the second drive:

Code:
Device             Start        End    Sectors  Size Type
/dev/nvme0n1p1      2048  268435456  268433409  128G Linux swap
/dev/nvme0n1p2 268437504 3907029134 3638591631  1.7T Solaris /usr & Apple ZFS

Code:
# swapon -s
Filename                                Type            Size    Used    Priority
/dev/nvme0n1p1                          partition       134216700       0       -2

host top:
Code:
MiB Swap: 131071.0 total, 131071.0 free,      0.0 used.  90455.6 avail Mem
 
Do I have to activate something myself in that case? And if so, do you know how to do that?
your output from swapon shows it's already activated on the host; if you've added it to /etc/fstab then you shouldn't have to do anything else.

is your system using the legacy cgroup hierarchy or cgroupv2? you can check with cat /proc/cmdline (since PVE 7 we use cgroupv2 by default, but i'm curious if you changed it or not) and post it here.
if you see something like systemd.unified_cgroup_hierarchy=0 then you're using the legacy hierarchy [0]
if there's no such entry then you're using v2.
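(another quick check, independent of the kernel command line — cgroup2fs means v2, tmpfs means the legacy layout:)

Code:
stat -fc %T /sys/fs/cgroup/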

please post the output of the following from the PVE host:
Code:
cat /proc/meminfo
Code:
CTID=100
# legacy (cgroupv1)
cat /sys/fs/cgroup/memory/lxc/$CTID/memory.usage_in_bytes
cat /sys/fs/cgroup/memory/lxc/$CTID/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/lxc/$CTID/memory.memsw.usage_in_bytes
cat /sys/fs/cgroup/memory/lxc/$CTID/memory.memsw.limit_in_bytes
cat /sys/fs/cgroup/memory/lxc/$CTID/memory.stat
# cgroupv2
cat /sys/fs/cgroup/lxc/$CTID/memory.max
cat /sys/fs/cgroup/lxc/$CTID/memory.swap.max
cat /sys/fs/cgroup/lxc/$CTID/memory.swap.current
cat /sys/fs/cgroup/lxc/$CTID/memory.stat
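(on cgroupv2 you can also grab all of those in one go, e.g.:)

Code:
CTID=100
for f in memory.max memory.swap.max memory.swap.current memory.stat; do
    echo "== $f =="
    cat "/sys/fs/cgroup/lxc/$CTID/$f"
done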

and also from inside the container post the output of:
Code:
cat /proc/meminfo

[0]: https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup_compat
 
Code:
cat /proc/cmdline
initrd=\EFI\proxmox\5.11.22-1-pve\initrd.img-5.11.22-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs
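(no systemd.unified_cgroup_hierarchy entry there, so the host is on the default cgroupv2 layout.)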
host /proc/meminfo:
Code:
MemTotal:       131888292 kB
MemFree:        93068856 kB
MemAvailable:   92584932 kB
Buffers:             392 kB
Cached:         12957452 kB
SwapCached:            0 kB
Active:           959748 kB
Inactive:       12763612 kB
Active(anon):     706964 kB
Inactive(anon): 12666632 kB
Active(file):     252784 kB
Inactive(file):    96980 kB
Unevictable:       11944 kB
Mlocked:           11944 kB
SwapTotal:      134216700 kB
SwapFree:       134216700 kB
Dirty:               128 kB
Writeback:             0 kB
AnonPages:        777012 kB
Mapped:           139104 kB
Shmem:          12597836 kB
KReclaimable:     380608 kB
Slab:            1199952 kB
SReclaimable:     380608 kB
SUnreclaim:       819344 kB
KernelStack:       12624 kB
PageTables:         8152 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    200160844 kB
Committed_AS:   16366948 kB
VmallocTotal:   34359738367 kB
VmallocUsed:     1949932 kB
VmallocChunk:          0 kB
Percpu:            42752 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:     1643000 kB
DirectMap2M:    51742720 kB
DirectMap1G:    80740352 kB
cgroupv2 values for CT 100:
Code:
~# cat /sys/fs/cgroup/lxc/$CTID/memory.max
127926272000
~# cat /sys/fs/cgroup/lxc/$CTID/memory.swap.max
104857600000
~# cat /sys/fs/cgroup/lxc/$CTID/memory.swap.current
0
~# cat /sys/fs/cgroup/lxc/$CTID/memory.stat
anon 44064768
file 12940984320
kernel_stack 2113536
pagetables 3108864
percpu 8376832
sock 864256
shmem 12861505536
file_mapped 14598144
file_dirty 2027520
file_writeback 405504
anon_thp 0
file_thp 0
shmem_thp 0
inactive_anon 12183097344
active_anon 723959808
inactive_file 18923520
active_file 61366272
unevictable 0
slab_reclaimable 129934064
slab_unreclaimable 123851040
slab 253785104
workingset_refault_anon 0
workingset_refault_file 0
workingset_activate_anon 0
workingset_activate_file 0
workingset_restore_anon 0
workingset_restore_file 0
workingset_nodereclaim 0
pgfault 4919322
pgmajfault 58905
pgrefill 0
pgscan 0
pgsteal 0
pgactivate 14648750
pgdeactivate 0
pglazyfree 322839
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0
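for what it's worth, the cgroup swap limit does match the container config:

Code:
104857600000 B / (1024 * 1024) = 100000 MiB    # memory.swap.max == swap: 100000

so the limit is applied on the host side; it's just not showing up inside the container.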
and from inside the container:
Code:
~# cat /proc/meminfo
MemTotal:       124928000 kB
MemFree:        111992792 kB
MemAvailable:   124630472 kB
Buffers:               0 kB
Cached:         12637680 kB
SwapCached:            0 kB
Active:           766920 kB
Inactive:       11921184 kB
Active(anon):     706992 kB
Inactive(anon): 11902704 kB
Active(file):      59928 kB
Inactive(file):    18480 kB
Unevictable:           0 kB
Mlocked:           11944 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         49632 kB
Mapped:                0 kB
Shmem:          12560064 kB
KReclaimable:     380608 kB
Slab:               0 kB
SReclaimable:          0 kB
SUnreclaim:            0 kB
KernelStack:       13104 kB
PageTables:        10100 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    200160844 kB
Committed_AS:   16374088 kB
VmallocTotal:   34359738367 kB
VmallocUsed:     1950100 kB
VmallocChunk:          0 kB
Percpu:            42752 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:     1643000 kB
DirectMap2M:    51742720 kB
DirectMap1G:    80740352 kB
 
There is also no entry in /etc/fstab in the container.
for the record, this isn't necessary for swap to work in a container.
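(background: inside a container /proc/meminfo — including the swap numbers — is synthesized by lxcfs from the cgroup limits, which is why no fstab entry is needed. you can see the lxcfs overlay mounts from inside the container:)

Code:
grep lxcfs /proc/mounts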

but i can't seem to reproduce your issue with the ubuntu 20.04, 20.10, and 21.04 templates using privileged containers (as you do).

* is this the only container you're having this issue with?
* which template did you use when creating the container?
* what do you get when you run sysctl -a | grep swappiness on the PVE host?
* the contents of /lib/systemd/system/lxcfs.service
 
lxcfs: 4.0.8-pve1
also i've just noticed you're not on the latest version of lxcfs (4.0.8-pve2). since your version we've added a patch for swap handling in meminfo. can you try upgrading packages and rebooting to see if the issue still occurs?
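(a minimal upgrade sequence — the reboot is the simplest way to make sure running containers pick up the new lxcfs:)

Code:
apt update
apt full-upgrade    # should pull in lxcfs 4.0.8-pve2 among others
reboot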
 
Fixed: MiB Swap: 100000.0 total

I updated just before posting. What are the odds...
 