PDM causes OOM-killer on empty target-node during offline migration

beisser

Hi guys,

I have a really weird problem at the moment.

I have a 3-node Proxmox cluster plus 1 separate node, all managed by PDM. I am trying to migrate a TrueNAS VM from one of my cluster nodes to the separate node, and while the disk is being copied the target node invokes the OOM killer, which prevents the migration from completing.
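My only guess so far: the receiving tunnel endpoint seems to buffer the incoming disk stream in memory faster than it can write it out. A toy sketch of that failure mode (purely illustrative Python, not the actual PVE/PDM code; chunk counts and sizes are made up):

```python
CHUNK = 1024 * 1024                    # 1 MiB per read, like dd bs=1M
N_CHUNKS = 32                          # stand-in for a 32G disk image

def forward_unbounded(chunks):
    """No backpressure: everything read gets queued, peak memory ~ disk size."""
    queue = []
    for c in chunks:
        queue.append(c)                # the slow writer never drains in time
    return sum(len(c) for c in queue)  # peak buffered bytes

def forward_bounded(chunks, max_inflight=8 * CHUNK):
    """Backpressure: stop reading while the queue is full, peak ~ max_inflight."""
    inflight, peak = 0, 0
    for c in chunks:
        while inflight + len(c) > max_inflight:
            inflight -= CHUNK          # wait for the writer to drain one chunk
        inflight += len(c)
        peak = max(peak, inflight)
    return peak

data = [bytes(CHUNK) for _ in range(N_CHUNKS)]
print(forward_unbounded(data) // CHUNK)   # → 32  (the whole image buffered)
print(forward_bounded(data) // CHUNK)     # → 8   (bounded by the window)
```

If something like the first pattern is happening on the target, resident memory would grow roughly in step with the amount of data copied, which is what the graph below suggests.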

Here are a few bits of information.

VM to be migrated:

Code:
root@pve2:~# cat /etc/pve/qemu-server/106.conf
agent: 1
boot: order=scsi0;ide2;net0
cores: 4
cpu: host
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=10.1.2,ctime=1764169689
name: truenas
net0: virtio=BC:24:11:C8:ED:74,bridge=vmbr0
numa: 0
ostype: l26
scsi0: local-lvm:vm-106-disk-1,backup=0,cache=writeback,discard=on,iothread=1,replicate=0,size=32G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=9cd41c9e-780b-481d-8bb1-dbaef3070a0e
sockets: 1
startup: order=1,up=60
vmgenid: 16165d01-8b6e-46d4-b1c4-d1ce555a0b2d

Nothing special here.

pveversion -v of the source node:

Code:
root@pve2:~# pveversion -v
proxmox-ve: 9.1.0 (running kernel: 6.17.4-2-pve)
pve-manager: 9.1.4 (running version: 9.1.4/5ac30304265fbd8e)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.4-2-pve-signed: 6.17.4-2
proxmox-kernel-6.17: 6.17.4-2
proxmox-kernel-6.17.4-1-pve-signed: 6.17.4-1
proxmox-kernel-6.17.2-2-pve-signed: 6.17.2-2
proxmox-kernel-6.17.2-1-pve-signed: 6.17.2-1
proxmox-kernel-6.14.11-5-pve-signed: 6.14.11-5
proxmox-kernel-6.14: 6.14.11-5
proxmox-kernel-6.14.11-4-pve-signed: 6.14.11-4
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
amd64-microcode: 3.20250311.1
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.5
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.1.4
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.4
libpve-rs-perl: 0.11.4
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-3
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.1-1
proxmox-backup-file-restore: 4.1.1-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.5
pve-cluster: 9.0.7
pve-container: 6.0.18
pve-docs: 9.1.2
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.1.0
pve-i18n: 3.6.6
pve-qemu-kvm: 10.1.2-5
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.3
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1
root@pve2:~#

pveversion -v of the target node:

Code:
root@rhodan:~# pveversion -v
proxmox-ve: 9.1.0 (running kernel: 6.17.4-2-pve)
pve-manager: 9.1.4 (running version: 9.1.4/5ac30304265fbd8e)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.4-2-pve-signed: 6.17.4-2
proxmox-kernel-6.17: 6.17.4-2
proxmox-kernel-6.17.2-1-pve-signed: 6.17.2-1
ceph-fuse: 19.2.3-pve2
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx11
intel-microcode: 3.20251111.1~deb13u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.5
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.1.4
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.4
libpve-rs-perl: 0.11.4
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-3
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.1-1
proxmox-backup-file-restore: 4.1.1-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.5
pve-cluster: 9.0.7
pve-container: 6.0.18
pve-docs: 9.1.2
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.1.0
pve-i18n: 3.6.6
pve-qemu-kvm: 10.1.2-5
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.3
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1
root@rhodan:~#

Package versions on the PDM host:

Code:
proxmox-datacenter-manager-meta: 1.0.0 (running kernel: 6.17.4-2-pve)
proxmox-datacenter-manager: 1.0.2 (running version: 1.0.2)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.4-2-pve-signed: 6.17.4-2
proxmox-kernel-6.17: 6.17.4-2
proxmox-kernel-6.17.2-2-pve-signed: 6.17.2-2
proxmox-kernel-6.17.2-1-pve-signed: 6.17.2-1
proxmox-kernel-6.17.1-1-pve-signed: 6.17.1-1
proxmox-kernel-6.14.11-5-pve-signed: 6.14.11-5
proxmox-kernel-6.14: 6.14.11-5
proxmox-kernel-6.14.11-4-pve-signed: 6.14.11-4
proxmox-kernel-6.14.11-3-pve-signed: 6.14.11-3
proxmox-kernel-6.14.11-2-pve-signed: 6.14.11-2
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.11.11-2-pve-signed: 6.11.11-2
proxmox-kernel-6.11: 6.11.11-2
proxmox-kernel-6.11.11-1-pve-signed: 6.11.11-1
proxmox-kernel-6.8: 6.8.12-15
proxmox-kernel-6.8.12-15-pve-signed: 6.8.12-15
proxmox-kernel-6.8.12-14-pve-signed: 6.8.12-14
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-5-pve-signed: 6.8.12-5
ifupdown2: 3.3.0-1+pmx11
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
pve-xtermjs: 5.5.0-3
zfsutils-linux: 2.3.4-pve1

Task log of the migration in PDM:

Code:
Task Viewer: VM 106 Migrate
2026-01-13 13:51:25 remote: started tunnel worker 'UPID:rhodan:00000846:00001404:69663FCD:qmtunnel:106:root@pam!pdm-admin-pdm:'
tunnel: -> sending command "version" to remote
tunnel: <- got reply
2026-01-13 13:51:26 local WS tunnel version: 2
2026-01-13 13:51:26 remote WS tunnel version: 2
2026-01-13 13:51:26 minimum required WS tunnel version: 2
websocket tunnel started
2026-01-13 13:51:26 starting migration of VM 106 to node 'rhodan' (rhodan.catacombs.lan)
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
2026-01-13 13:51:26 found local disk 'local-lvm:vm-106-disk-1' (attached)
2026-01-13 13:51:26 copying local disk images
tunnel: -> sending command "disk-import" to remote
tunnel: <- got reply
tunnel: accepted new connection on '/run/pve/106.storage'
tunnel: requesting WS ticket via tunnel
tunnel: established new WS for forwarding '/run/pve/106.storage'

336592896 bytes (337 MB, 321 MiB) copied, 1 s, 337 MB/s
729350144 bytes (729 MB, 696 MiB) copied, 2 s, 365 MB/s
1150222336 bytes (1.2 GB, 1.1 GiB) copied, 3 s, 383 MB/s
1564999680 bytes (1.6 GB, 1.5 GiB) copied, 4 s, 391 MB/s
1980235776 bytes (2.0 GB, 1.8 GiB) copied, 5 s, 396 MB/s
2389770240 bytes (2.4 GB, 2.2 GiB) copied, 6 s, 398 MB/s
2785935360 bytes (2.8 GB, 2.6 GiB) copied, 7 s, 398 MB/s
3143172096 bytes (3.1 GB, 2.9 GiB) copied, 8 s, 393 MB/s
3531866112 bytes (3.5 GB, 3.3 GiB) copied, 9 s, 392 MB/s
3923771392 bytes (3.9 GB, 3.7 GiB) copied, 10 s, 392 MB/s
4314365952 bytes (4.3 GB, 4.0 GiB) copied, 11 s, 392 MB/s
4819714048 bytes (4.8 GB, 4.5 GiB) copied, 12 s, 402 MB/s
5257822208 bytes (5.3 GB, 4.9 GiB) copied, 13 s, 404 MB/s
5668012032 bytes (5.7 GB, 5.3 GiB) copied, 14 s, 405 MB/s
6079971328 bytes (6.1 GB, 5.7 GiB) copied, 15 s, 405 MB/s
6490619904 bytes (6.5 GB, 6.0 GiB) copied, 16 s, 406 MB/s
6904807424 bytes (6.9 GB, 6.4 GiB) copied, 17 s, 406 MB/s
7313555456 bytes (7.3 GB, 6.8 GiB) copied, 18 s, 406 MB/s
7731544064 bytes (7.7 GB, 7.2 GiB) copied, 19 s, 407 MB/s
8148877312 bytes (8.1 GB, 7.6 GiB) copied, 20 s, 407 MB/s
8566603776 bytes (8.6 GB, 8.0 GiB) copied, 21 s, 408 MB/s
8981774336 bytes (9.0 GB, 8.4 GiB) copied, 22 s, 408 MB/s
9319743488 bytes (9.3 GB, 8.7 GiB) copied, 23 s, 405 MB/s
9741729792 bytes (9.7 GB, 9.1 GiB) copied, 24 s, 406 MB/s
10146349056 bytes (10 GB, 9.4 GiB) copied, 25 s, 406 MB/s
10553065472 bytes (11 GB, 9.8 GiB) copied, 26 s, 406 MB/s
10915610624 bytes (11 GB, 10 GiB) copied, 27 s, 404 MB/s
11219435520 bytes (11 GB, 10 GiB) copied, 28 s, 401 MB/s
11335958528 bytes (11 GB, 11 GiB) copied, 29 s, 389 MB/s
11774197760 bytes (12 GB, 11 GiB) copied, 30 s, 392 MB/s
12296847360 bytes (12 GB, 11 GiB) copied, 31 s, 397 MB/s
12819496960 bytes (13 GB, 12 GiB) copied, 32 s, 401 MB/s
13135577088 bytes (13 GB, 12 GiB) copied, 33 s, 392 MB/s
13231390720 bytes (13 GB, 12 GiB) copied, 34 s, 389 MB/s
13297057792 bytes (13 GB, 12 GiB) copied, 35 s, 380 MB/s
13438091264 bytes (13 GB, 13 GiB) copied, 36 s, 373 MB/s
13800833024 bytes (14 GB, 13 GiB) copied, 37 s, 373 MB/s
14186643456 bytes (14 GB, 13 GiB) copied, 38 s, 373 MB/s
14585626624 bytes (15 GB, 14 GiB) copied, 39 s, 374 MB/s
14886567936 bytes (15 GB, 14 GiB) copied, 41 s, 366 MB/s
15055519744 bytes (15 GB, 14 GiB) copied, 41 s, 367 MB/s
15573516288 bytes (16 GB, 15 GiB) copied, 42 s, 371 MB/s
16078667776 bytes (16 GB, 15 GiB) copied, 43 s, 374 MB/s
16486629376 bytes (16 GB, 15 GiB) copied, 45 s, 368 MB/s
16586113024 bytes (17 GB, 15 GiB) copied, 45 s, 369 MB/s
16991846400 bytes (17 GB, 16 GiB) copied, 46 s, 369 MB/s
17401446400 bytes (17 GB, 16 GiB) copied, 47 s, 370 MB/s
17804623872 bytes (18 GB, 17 GiB) copied, 48 s, 371 MB/s
18218680320 bytes (18 GB, 17 GiB) copied, 49 s, 372 MB/s
18630836224 bytes (19 GB, 17 GiB) copied, 50 s, 373 MB/s
19047186432 bytes (19 GB, 18 GiB) copied, 51 s, 373 MB/s
19460063232 bytes (19 GB, 18 GiB) copied, 52 s, 374 MB/s
19877396480 bytes (20 GB, 19 GiB) copied, 53 s, 375 MB/s
20253900800 bytes (20 GB, 19 GiB) copied, 54 s, 375 MB/s
20645216256 bytes (21 GB, 19 GiB) copied, 55 s, 375 MB/s
20972765184 bytes (21 GB, 20 GiB) copied, 56 s, 375 MB/s
21248540672 bytes (21 GB, 20 GiB) copied, 57 s, 373 MB/s
21412249600 bytes (21 GB, 20 GiB) copied, 59 s, 361 MB/s
21412315136 bytes (21 GB, 20 GiB) copied, 59 s, 361 MB/s
21742157824 bytes (22 GB, 20 GiB) copied, 60 s, 362 MB/s
22203662336 bytes (22 GB, 21 GiB) copied, 61 s, 364 MB/s
22657171456 bytes (23 GB, 21 GiB) copied, 62 s, 365 MB/s
22790799360 bytes (23 GB, 21 GiB) copied, 64 s, 357 MB/s
22816817152 bytes (23 GB, 21 GiB) copied, 64 s, 357 MB/s
23190110208 bytes (23 GB, 22 GiB) copied, 65 s, 357 MB/s
23573430272 bytes (24 GB, 22 GiB) copied, 66 s, 357 MB/s
23957798912 bytes (24 GB, 22 GiB) copied, 67 s, 358 MB/s
24332140544 bytes (24 GB, 23 GiB) copied, 68 s, 358 MB/s
24699142144 bytes (25 GB, 23 GiB) copied, 69 s, 358 MB/s
25061163008 bytes (25 GB, 23 GiB) copied, 70 s, 358 MB/s
25397755904 bytes (25 GB, 24 GiB) copied, 71 s, 358 MB/s
25761218560 bytes (26 GB, 24 GiB) copied, 72 s, 358 MB/s
26140999680 bytes (26 GB, 24 GiB) copied, 73 s, 358 MB/s
26210729984 bytes (26 GB, 24 GiB) copied, 75 s, 347 MB/s
26210795520 bytes (26 GB, 24 GiB) copied, 75 s, 347 MB/s
26443972608 bytes (26 GB, 25 GiB) copied, 76 s, 348 MB/s
26787971072 bytes (27 GB, 25 GiB) copied, 77 s, 348 MB/s
27262910464 bytes (27 GB, 25 GiB) copied, 78 s, 350 MB/s
27666743296 bytes (28 GB, 26 GiB) copied, 79 s, 350 MB/s
28043313152 bytes (28 GB, 26 GiB) copied, 80 s, 351 MB/s
28406775808 bytes (28 GB, 26 GiB) copied, 81 s, 351 MB/s
28719644672 bytes (29 GB, 27 GiB) copied, 82 s, 350 MB/s
29034151936 bytes (29 GB, 27 GiB) copied, 83 s, 350 MB/s
29366681600 bytes (29 GB, 27 GiB) copied, 84 s, 350 MB/s
29694427136 bytes (30 GB, 28 GiB) copied, 85 s, 349 MB/s
30078730240 bytes (30 GB, 28 GiB) copied, 86 s, 350 MB/s
30458052608 bytes (30 GB, 28 GiB) copied, 87 s, 350 MB/s
30828527616 bytes (31 GB, 29 GiB) copied, 88 s, 350 MB/s
31180128256 bytes (31 GB, 29 GiB) copied, 89 s, 350 MB/s
31528321024 bytes (32 GB, 29 GiB) copied, 90 s, 350 MB/s
31676366848 bytes (32 GB, 30 GiB) copied, 94 s, 337 MB/s
31676432384 bytes (32 GB, 30 GiB) copied, 94 s, 337 MB/s
31676497920 bytes (32 GB, 30 GiB) copied, 94 s, 337 MB/s
31681282048 bytes (32 GB, 30 GiB) copied, 94 s, 337 MB/s
32163430400 bytes (32 GB, 30 GiB) copied, 95 s, 339 MB/s
32646168576 bytes (33 GB, 30 GiB) copied, 96 s, 340 MB/s
33130217472 bytes (33 GB, 31 GiB) copied, 97 s, 342 MB/s
33466548224 bytes (33 GB, 31 GiB) copied, 98 s, 341 MB/s
33785708544 bytes (34 GB, 31 GiB) copied, 99 s, 341 MB/s
34131279872 bytes (34 GB, 32 GiB) copied, 100 s, 341 MB/s
524288+0 records in
524288+0 records out
34359738368 bytes (34 GB, 32 GiB) copied, 101.02 s, 340 MB/s
tunnel: -> sending command "query-disk-import" to remote
tunnel: done handling forwarded connection from '/run/pve/106.storage'
2026-01-13 13:53:18 ERROR: no reply to command '{"cmd":"query-disk-import"}': reading from tunnel failed: got timeout
2026-01-13 13:53:18 aborting phase 1 - cleanup resources
tunnel: Tunnel to https://rhodan.catacombs.lan:8006/api2/json/nodes/rhodan/qemu/106/mtunnelwebsocket?ticket=PVETUNNEL%3A69663FCD%3A%3AupRea38WOp1dT9SdrQ1pFzZGyHawYolR%2FZeaLC2%2FZUCe5gYuoCz05VWeJHFWv175kQ1nDeng9R2l%2F1vj6idDXInT6zOYou%2FYjH9DmspURsqlV%2Bhzpb4K7nyQ%2FhTNJFFq%2Fe94hV59e%2Bg3zDicEvDNlrYMWYH8V3HgdBDTYzgj8m2PxURvcVSXX1CrzsRFuAoAXiPZwQFdrEwYzS4FzbokSugShzdvpLn0jx6NZoA%2FVE6zYUNq4AfjWSI0rPr6njMKKndnnvPZw9Zhht3hMcBRQ9NSBVRoV2afrYSVnMVwe7ggS7VSRNJzuT26k7LMJTzw0x1Bv09GxmR2YMHMgwUy8Q%3D%3D&socket=%2Frun%2Fqemu-server%2F106.mtunnel failed - WS closed unexpectedly
tunnel: Error: channel closed
CMD websocket tunnel died: command 'proxmox-websocket-tunnel' failed: exit code 1

2026-01-13 13:54:03 ERROR: no reply to command '{"cmd":"quit","cleanup":1}': reading from tunnel failed: got timeout
2026-01-13 13:54:03 ERROR: migration aborted (duration 00:02:38): no reply to command '{"cmd":"query-disk-import"}': reading from tunnel failed: got timeout
TASK ERROR: migration aborted
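To narrow this down, it helps to watch which process balloons on the target node while the copy runs (plain procps, nothing PVE-specific):

```shell
# Snapshot the top memory consumers on the target node; run repeatedly
# (e.g. under `watch -n 2`) while the migration is copying, to compare
# with the OOM report afterwards:
ps -eo pid,rss,vsz,comm --sort=-rss | head -n 6
```

In my case the OOM report already points at a pveproxy worker whose RSS grows roughly with the amount of data copied.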
 
OOM-killer output on the target node:

Code:
[  205.000527] pmxcfs invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
[  205.000534] CPU: 0 UID: 0 PID: 1899 Comm: pmxcfs Tainted: P           O        6.17.4-2-pve #1 PREEMPT(voluntary)
[  205.000538] Tainted: [P]=PROPRIETARY_MODULE, [O]=OOT_MODULE
[  205.000539] Hardware name: Intel(R) Client Systems NUC8i5BEH/NUC8BEB, BIOS BECFL357.86A.0097.2024.0221.1015 02/21/2024
[  205.000541] Call Trace:
[  205.000543]  <TASK>
[  205.000546]  dump_stack_lvl+0x5f/0x90
[  205.000552]  dump_stack+0x10/0x18
[  205.000554]  dump_header+0x48/0x1be
[  205.000557]  oom_kill_process.cold+0x8/0x87
[  205.000560]  out_of_memory+0x22f/0x4d0
[  205.000565]  __alloc_frozen_pages_noprof+0x1102/0x12a0
[  205.000572]  alloc_pages_mpol+0x80/0x180
[  205.000575]  folio_alloc_noprof+0x5b/0xc0
[  205.000578]  filemap_alloc_folio_noprof+0xe1/0xf0
[  205.000581]  __filemap_get_folio+0x1ee/0x340
[  205.000585]  filemap_fault+0x10c/0x13d0
[  205.000590]  __do_fault+0x3a/0x190
[  205.000594]  do_fault+0x325/0x550
[  205.000596]  __handle_mm_fault+0x95b/0xfd0
[  205.000602]  handle_mm_fault+0x119/0x370
[  205.000605]  do_user_addr_fault+0x2f8/0x830
[  205.000610]  exc_page_fault+0x7f/0x1b0
[  205.000614]  asm_exc_page_fault+0x27/0x30
[  205.000616] RIP: 0033:0x7c08a2350fc0
[  205.000621] Code: Unable to access opcode bytes at 0x7c08a2350f96.
[  205.000622] RSP: 002b:00007c089a7fb0e0 EFLAGS: 00010206
[  205.000625] RAX: 0000000000000000 RBX: 00005c62c5763b48 RCX: 0000000000000000
[  205.000626] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007c08a23b44a8
[  205.000628] RBP: 00007c08a0f87000 R08: 0000000000000007 R09: 0000000000000000
[  205.000629] R10: 00005c62c577e3f8 R11: 0000000000000000 R12: 00007c089a7fb35c
[  205.000630] R13: 0000000000000000 R14: 00005c62c5763b48 R15: 0000000000000002
[  205.000634]  </TASK>
[  205.000636] Mem-Info:
[  205.000643] active_anon:1601778 inactive_anon:6305303 isolated_anon:0
                active_file:84 inactive_file:102991 isolated_file:0
                unevictable:5605 dirty:0 writeback:102263
                slab_reclaimable:11017 slab_unreclaimable:49449
                mapped:4610 shmem:2813 pagetables:21632
                sec_pagetables:0 bounce:0
                kernel_misc_reclaimable:0
                free:50152 free_pcp:227 free_cma:0
[  205.000649] Node 0 active_anon:6407112kB inactive_anon:25221212kB active_file:336kB inactive_file:411964kB unevictable:22420kB isolated(anon):0kB isolated(file):0kB mapped:18440kB dirty:0kB writeback:409052kB shmem:11252kB shmem_thp:0kB shmem_pmdmapped:0kB anon_thp:0kB kernel_stack:4832kB pagetables:86528kB sec_pagetables:0kB all_unreclaimable? yes Balloon:0kB
[  205.000654] Node 0 DMA free:11264kB boost:0kB min:28kB low:40kB high:52kB reserved_highatomic:0KB free_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  205.000660] lowmem_reserve[]: 0 1858 31940 31940 31940
[  205.000666] Node 0 DMA32 free:124020kB boost:0kB min:3760kB low:5576kB high:7392kB reserved_highatomic:0KB free_highatomic:0KB active_anon:23668kB inactive_anon:1744900kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:1969100kB managed:1903244kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  205.000671] lowmem_reserve[]: 0 0 30081 30081 30081
[  205.000676] Node 0 Normal free:65324kB boost:0kB min:63788kB low:94580kB high:125372kB reserved_highatomic:0KB free_highatomic:0KB active_anon:6383364kB inactive_anon:23476392kB active_file:336kB inactive_file:411964kB unevictable:22420kB writepending:409052kB present:31424512kB managed:30803588kB mlocked:22288kB bounce:0kB free_pcp:868kB local_pcp:0kB free_cma:0kB
[  205.000682] lowmem_reserve[]: 0 0 0 0 0
[  205.000687] Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 2*4096kB (M) = 11264kB
[  205.000703] Node 0 DMA32: 133*4kB (UM) 222*8kB (UM) 243*16kB (UM) 616*32kB (UM) 349*64kB (UM) 314*128kB (UM) 139*256kB (UM) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 124020kB
[  205.000720] Node 0 Normal: 32*4kB (UE) 39*8kB (UME) 51*16kB (UME) 67*32kB (UME) 361*64kB (UME) 253*128kB (UE) 24*256kB (U) 1*512kB (M) 0*1024kB 0*2048kB 0*4096kB = 65544kB
[  205.000740] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[  205.000742] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  205.000743] 111891 total pagecache pages
[  205.000745] 3689 pages in swap cache
[  205.000746] Free swap  = 232kB
[  205.000747] Total swap = 8388604kB
[  205.000748] 8352401 pages RAM
[  205.000749] 0 pages HighMem/MovableOnly
[  205.000750] 171853 pages reserved
[  205.000751] 0 pages cma reserved
[  205.000752] 0 pages hwpoisoned
[  205.000753] Tasks state (memory values in pages):
[  205.000754] [  pid  ]   uid  tgid total_vm      rss rss_anon rss_file rss_shmem pgtables_bytes swapents oom_score_adj name
[  205.000770] [    407]     0   407     8663      920      288      631         1    94208        0          -250 systemd-journal
[  205.000775] [    422]     0   422    19323     5519     3270     2249         0    94208        0         -1000 dmeventd
[  205.000779] [    434]     0   434     9292     1281      608      673         0    94208        0         -1000 systemd-udevd
[  205.000785] [   1435]     0  1435    19904      407       32      375         0    57344        0             0 pvefw-logger
[  205.000789] [   1474]   103  1474     1642      831       96      735         0    57344        0             0 rpcbind
[  205.000792] [   1559]   992  1559     2081      906       64      842         0    61440        0          -900 dbus-daemon
[  205.000796] [   1569]     0  1569    69034      672       32      640         0    81920        0             0 pve-lxc-syscall
[  205.000799] [   1574]     0  1574   156507      805      160      645         0   155648        0             0 rrdcached
[  205.000802] [   1575]     0  1575     1806      347       51      296         0    57344        0             0 ksmtuned
[  205.000805] [   1578]     0  1578     2622     1174      416      758         0    65536        0             0 smartd
[  205.000808] [   1579]     0  1579     4775     1001      288      713         0    81920        0             0 systemd-logind
[  205.000811] [   1586]     0  1586      609      366        0      366         0    40960        0         -1000 watchdog-mux
[  205.000814] [   1596]     0  1596     1435      276       33      243         0    53248        0             0 qmeventd
[  205.000817] [   1602]     0  1602    42484     1014      320      694         0    86016        0             0 zed
[  205.000821] [   1604]     0  1604    39767      518       64      454         0    65536        0         -1000 lxcfs
[  205.000823] [   1638]     0  1638     1315      359       33      326         0    45056        0             0 blkmapd
[  205.000827] [   1660]   100  1660     4981      774      167      607         0    69632        0             0 chronyd
[  205.000830] [   1661]   100  1661     2899      458      139      319         0    65536        0             0 chronyd
[  205.000833] [   1727]     0  1727     1380      575       32      543         0    53248        0             0 lxc-monitord
[  205.000836] [   1742]     0  1742   143095     3894      900      522      2472   405504     1280             0 pmxcfs
[  205.000840] [   1748]     0  1748     2041      662       32      630         0    57344        0             0 agetty
[  205.000844] [   1753]     0  1753     2943      857      288      569         0    69632        0         -1000 sshd
[  205.000848] [   1883]     0  1883    10997      607      119      488         0    77824        0             0 master
[  205.000852] [   1884]   105  1884    11117      779      128      651         0    73728        0             0 pickup
[  205.000856] [   1885]   105  1885    11130      878      160      718         0    86016        0             0 qmgr
[  205.000860] [   1891]     0  1891     1717      604       32      572         0    57344        0             0 cron
[  205.000864] [   1892]     0  1892     3420      699      224      475         0    69632        0             0 proxmox-firewal
[  205.000867] [   1912]     0  1912    44327     2320     1877      443         0   331776    23872             0 pve-firewall
[  205.000870] [   1913]     0  1913    44669     3861     3253      576        32   352256    22848             0 pvestatd
[  205.000874] [   1940]     0  1940    54741     1598     1172      426         0   421888    35264             0 pvedaemon
[  205.000877] [   1941]     0  1941    56924     1999     1397      602         0   466944    35328             0 pvedaemon worke
[  205.000880] [   1942]     0  1942    56925     1948     1365      583         0   466944    35360             0 pvedaemon worke
[  205.000882] [   1943]     0  1943    56913     3153     2581      572         0   450560    34176             0 pvedaemon worke
[  205.000885] [   1950]     0  1950    51182     3616     3069      547         0   368640    26240             0 pve-ha-crm
[  205.000888] [   1953]    33  1953    55064     2459     2038      421         0   417792    34720             0 pveproxy
[  205.000891] [   1954]    33  1954 11831954  7868116  7867606      510         0 78299136  1877024             0 pveproxy worker
[  205.000894] [   1955]    33  1955    57335     4556     3990      566         0   442368    33216             0 pveproxy worker
[  205.000897] [   1956]    33  1956    57308     4784     4182      602         0   442368    32992             0 pveproxy worker
[  205.000900] [   1962]    33  1962    23029     3024     2550      474         0   184320    10304             0 spiceproxy
[  205.000903] [   1963]    33  1963    23062     3451     2999      452         0   188416     9920             0 spiceproxy work
[  205.000906] [   1965]     0  1965    50997     3652     3163      489         0   372736    26016             0 pve-ha-lrm
[  205.000909] [   1976]     0  1976    50145     3511     3140      371         0   372736    26976             0 pvescheduler
[  205.000912] [   2118]     0  2118    58625     1822     1257      565         0   450560    35456             0 task UPID:rhoda
[  205.000915] [   2119]     0  2119    38413     1034      137      897         0   344064    18272             0 pvesm
[  205.000919] [   2193]     0  2193      656      398       32      366         0    45056        0             0 dd
[  205.000922] [   2384]     0  2384     4942     1239      416      823         0    86016       32             0 sshd-session
[  205.000925] [   2439]     0  2439     5605     1286      608      678         0    86016        0           100 systemd
[  205.000928] [   2441]     0  2441     6093      704      415      289         0    77824        0           100 (sd-pam)
[  205.000931] [   2462]     0  2462     4972      992      409      583         0    86016       96             0 sshd-session
[  205.000934] [   2463]     0  2463     2216      693      128      565         0    57344      416             0 bash
[  205.000937] [   2607]     0  2607    22806     1035      174      861         0   225280     4640             0 apt
[  205.000939] [   2702]     0  2702     1396      444        0      444         0    53248        0             0 sleep
[  205.000942] [   2708]     0  2708     2984      785       64      721         0    73728        0             0 vgs
[  205.000945] [   2711]     0  2711      271       65        0       65         0    36864        0             0 iptables-save
[  205.000948] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=pve-cluster.service,mems_allowed=0,global_oom,task_memcg=/system.slice/pveproxy.service,task=pveproxy worker,pid=1954,uid=33
[  205.000965] Out of memory: Killed process 1954 (pveproxy worker) total-vm:47327816kB, anon-rss:31470552kB, file-rss:2040kB, shmem-rss:0kB, UID:33 pgtables:76464kB oom_score_adj:0
[  207.448182] oom_reaper: reaped process 1954 (pveproxy worker), now anon-rss:0kB, file-rss:340kB, shmem-rss:0kB

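Converting the page counts from that report (4 KiB pages) makes the scale obvious: the single pveproxy worker that got killed held roughly the node's entire RAM. A quick check:

```python
# The OOM task-state table counts memory in 4 KiB pages; convert the
# numbers for the killed pveproxy worker (pid 1954) and the node total:
PAGE = 4096                   # bytes per page
GIB = 2**30

total_vm_pages = 11_831_954   # "total_vm" column for pid 1954
rss_pages = 7_868_116         # "rss" column for pid 1954
ram_pages = 8_352_401         # "8352401 pages RAM"

print(f"worker virtual size: {total_vm_pages * PAGE / GIB:.1f} GiB")  # → 45.1 GiB
print(f"worker resident set: {rss_pages * PAGE / GIB:.1f} GiB")       # → 30.0 GiB
print(f"node RAM total:      {ram_pages * PAGE / GIB:.1f} GiB")       # → 31.9 GiB
```

That matches the "Killed process 1954 (pveproxy worker) ... anon-rss:31470552kB" line (~30 GiB of anonymous memory), i.e. the tunnel endpoint process itself, not the VM or the page cache, is what eats the memory.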
This is what the memory graph of the target node looks like during the migration:

[attached screenshot: memory usage on the target node spiking during the migration]

I'm not sure why it's eating such a ginormous amount of memory when migrating an offline VM. Maybe someone has an idea?
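Until this is understood, two stopgaps I'm considering (both untested guesses on my side): throttling the migration so the copy can't outrun the target, and capping the pveproxy service so a runaway worker can't take the whole node down with it.

```shell
# 1) Throttle migrations cluster-wide (value in KiB/s) via
#    /etc/pve/datacenter.cfg, e.g. ~100 MiB/s instead of the
#    ~400 MiB/s seen in the task log (unclear to me whether PDM
#    remote migrations honor this, but the task log does show a
#    "bwlimit" command being sent):
#
#      bwlimit: migration=102400

# 2) Cap the pveproxy service on the target so memory pressure is
#    handled inside its cgroup instead of triggering a global OOM
#    (the migration would presumably still fail, but the node should
#    stay responsive):
mkdir -p /etc/systemd/system/pveproxy.service.d
cat > /etc/systemd/system/pveproxy.service.d/memory.conf <<'EOF'
[Service]
MemoryMax=4G
EOF
systemctl daemon-reload
systemctl restart pveproxy
```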
 