Freeze on FreeBSD VM running in PVE 9

ktoczyski

New Member
Oct 13, 2025
Hi,

I have a very similar problem to https://forum.proxmox.com/threads/freeze-on-pfsense-vm-running-in-pve-9.171557/. I also upgraded from PVE 8 to 9 and started experiencing freezes of my FreeBSD 13.x and 14.x VMs (FreeBSD 9.1 works fine). It's not a guest OS freeze, and not a host freeze either; QEMU itself becomes unresponsive, and even the noVNC console / monitor won't work (timeout).

My configuration:

pve-manager/9.0.10/deb1ca707ec72a89 (running kernel: 6.14.11-2-pve)

VM config:

Code:
agent: 1
bios: ovmf
boot: order=scsi0
cores: 2
cpu: x86-64-v2-AES
efidisk0: pool1:vm-111-disk-0,efitype=4m,pre-enrolled-keys=1,size=528K
memory: 8192
name: XXX
net0: virtio=CA:A1:D9:4F:ED:D5,bridge=vmbr0,tag=3966
net1: virtio=0A:AD:82:A3:E5:ED,bridge=vmbr0,tag=3964
net2: virtio=42:FF:7F:C4:39:31,bridge=vmbr0,tag=3963
numa: 0
onboot: 1
scsi0: pool1:vm-111-disk-1,discard=on,iothread=1,size=8G
scsihw: virtio-scsi-single
smbios1: uuid=7f58ea5c-1834-4ec9-a3f1-618d0fbc0fbb
sockets: 1
tablet: 0
tags: Prod
vmgenid: 55258b0f-d3e4-4c4c-be43-f444ff827489

VM status:

Code:
cpus: 2
disk: 0
diskread: 0
diskwrite: 0
maxdisk: 8589934592
maxmem: 8589934592
mem: 5554894848
memhost: 5554894848
name: XXX
netin: 100353525
netout: 105250273
nics:
    tap111i0:
        netin: 98078858
        netout: 103142496
    tap111i1:
        netin: 1199037
        netout: 794694
    tap111i2:
        netin: 1075630
        netout: 1313083
pid: 2195686
pressurecpufull: 0
pressurecpusome: 0
pressureiofull: 0
pressureiosome: 0
pressurememoryfull: 0
pressurememorysome: 0
proxmox-support:
qmpstatus: running
status: running
tags: Prod
uptime: 171960
vmid: 111

Stack trace in attachment.

I observed that the VM freezes when PBS makes an automatic backup (a manual backup works fine). I tried changing the CPU type (Host to KVM), disabling/enabling iothread, changing storage (Ceph/local), enabling/disabling the QEMU guest agent, and the filesystem freeze in the QEMU agent. But nothing changed.
 


Hello,
Did you notice high I/O pressure around the time of the issue?
You can check the pressure via:
root@pve:~# head /proc/pressure/*
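To watch it live during the backup window, something like this also works (standard watch from procps; assuming it is installed on the host):
Code:
watch -n 5 'head /proc/pressure/*'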

How frequently do you observe this issue?
 
Hi,

Code:
head /proc/pressure/*
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1656264799
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=1.96 avg60=5.86 avg300=6.04 total=49770012009
full avg10=1.96 avg60=5.82 avg300=5.98 total=49161106663

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=157
full avg10=0.00 avg60=0.00 avg300=0.00 total=142

Every day. The freeze happens around 23:00, when the backup starts. In the morning I stop and start the VM.
 
Hello,
Did you notice anything in the log?
Could you please provide the syslog via journalctl, covering the problem time:
Code:
journalctl --since '2025-10-12' --until '2025-10-13 12:00' > $(hostname)-journal-$(date -I).txt

I think it would be helpful to monitor the load; for example, for the pressure you can set up a cron job to check it every 5 minutes from 22:00 to 23:55:
Code:
root@pve: ~# crontab -l
...
*/5 22 * * * head /proc/pressure/* >> /var/log/disk_stats.log 2>> /var/log/disk_stats.err
*/5 23 * * * head /proc/pressure/* >> /var/log/disk_stats.log 2>> /var/log/disk_stats.err
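A variant that timestamps each sample makes it easier to correlate with the backup log later; a sketch of the same idea, collapsing the two entries into one with an hour range (date -Is is GNU date, available on PVE):
Code:
*/5 22-23 * * * { date -Is; head /proc/pressure/*; } >> /var/log/disk_stats.log 2>> /var/log/disk_stats.err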
 
Hello,
Thank you for the provided syslog.
As you mentioned, as soon as the backup starts the VM becomes unresponsive:
Bash:
Oct 12 23:09:09 XXX pvescheduler[3413708]: INFO: Finished Backup of VM 106 (00:01:22)
Oct 12 23:09:11 XXX pvescheduler[3413708]: INFO: Starting Backup of VM 111 (qemu)
Oct 12 23:09:22 XXX pvestatd[1963]: status update time (6.512 seconds)
Oct 12 23:10:30 XXX pve-ha-lrm[3419422]: VM 111 qmp command failed - VM 111 qmp command 'query-status' failed - got timeout
Oct 12 23:10:30 XXX pve-ha-lrm[3419422]: VM 111 qmp command 'query-status' failed - got timeout
Oct 12 23:10:33 XXX pvestatd[1963]: VM 111 qmp command failed - VM 111 qmp command 'query-proxmox-support' failed - unable to connect to VM 111 qmp socket - timeout after 51 retries
Oct 12 23:10:37 XXX pvestatd[1963]: status update time (12.113 seconds)

Please share the task log for the backup, for example the one for the backup task UPID referenced in the provided syslog [0]:
Code:
~# pvenode task log UPID:XXX:003416CC:09838ED6:68EC16D2:vzdump::root@pam:
Please adapt it to the correct task UPID:
Oct 12 23:00:02 XXX pvescheduler[3413707]: <root@pam> starting task UPID:XXX:003416CC:09838ED6:68EC16D2:vzdump::root@pam:
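If the UPID is hard to pick out of the syslog, one way to list recent vzdump UPIDs is to grep them from the journal (plain journalctl and grep; adjust the time range to your backup window):
Code:
journalctl --since '2025-10-12 23:00' | grep -o 'UPID:[^ ]*:vzdump:[^ ]*' | sort -u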


[0] https://pve.proxmox.com/pve-docs/pvenode.1.html
 
Hi,

Code:
pvenode task log UPID:XXX:003416CC:09838ED6:68EC16D2:vzdump::root@pam:
INFO: starting new backup job: vzdump 104 200 114 115 107 106 111 113 --mailnotification always --notification-mode notification-system --quiet 1 --mode snapshot --notes-template '{{guestname}}' --fleecing 0 --storage backup_pl-pia02a
INFO: skip external VMs: 107, 113, 114, 115
INFO: Starting Backup of VM 104 (qemu)
INFO: Backup started at 2025-10-12 23:00:02
INFO: status = running
INFO: VM Name: YYY-ans01.ZZZ
INFO: include disk 'virtio0' 'pool1:vm-104-disk-0' 8G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/104/2025-10-12T21:00:02Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '3235f7ea-4262-49a4-8061-f2b3d91aeee4'
INFO: resuming VM again
INFO: virtio0: dirty-bitmap status: OK (1.1 GiB of 8.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 1.1 GiB dirty of 8.0 GiB total
INFO:  10% (112.0 MiB of 1.1 GiB) in 3s, read: 37.3 MiB/s, write: 37.3 MiB/s
INFO:  16% (184.0 MiB of 1.1 GiB) in 14s, read: 6.5 MiB/s, write: 6.5 MiB/s
INFO:  18% (200.0 MiB of 1.1 GiB) in 17s, read: 5.3 MiB/s, write: 5.3 MiB/s
INFO:  21% (240.0 MiB of 1.1 GiB) in 38s, read: 1.9 MiB/s, write: 1.9 MiB/s
INFO:  22% (252.0 MiB of 1.1 GiB) in 41s, read: 4.0 MiB/s, write: 4.0 MiB/s
INFO:  23% (256.0 MiB of 1.1 GiB) in 51s, read: 409.6 KiB/s, write: 409.6 KiB/s
INFO:  32% (356.0 MiB of 1.1 GiB) in 59s, read: 12.5 MiB/s, write: 12.0 MiB/s
INFO:  50% (564.0 MiB of 1.1 GiB) in 1m 2s, read: 69.3 MiB/s, write: 69.3 MiB/s
INFO:  51% (568.0 MiB of 1.1 GiB) in 3m 8s, read: 32.5 KiB/s, write: 32.5 KiB/s
INFO:  58% (652.0 MiB of 1.1 GiB) in 3m 27s, read: 4.4 MiB/s, write: 4.4 MiB/s
INFO:  59% (664.0 MiB of 1.1 GiB) in 3m 30s, read: 4.0 MiB/s, write: 4.0 MiB/s
INFO:  60% (668.0 MiB of 1.1 GiB) in 3m 50s, read: 204.8 KiB/s, write: 204.8 KiB/s
INFO:  62% (692.0 MiB of 1.1 GiB) in 4m, read: 2.4 MiB/s, write: 2.4 MiB/s
INFO:  63% (700.0 MiB of 1.1 GiB) in 4m 3s, read: 2.7 MiB/s, write: 2.7 MiB/s
INFO:  70% (776.0 MiB of 1.1 GiB) in 4m 20s, read: 4.5 MiB/s, write: 4.5 MiB/s
INFO:  93% (1.0 GiB of 1.1 GiB) in 4m 23s, read: 88.0 MiB/s, write: 88.0 MiB/s
INFO:  94% (1.0 GiB of 1.1 GiB) in 6m 38s, read: 30.3 KiB/s, write: 30.3 KiB/s
INFO: 100% (1.1 GiB of 1.1 GiB) in 7m, read: 2.9 MiB/s, write: 2.9 MiB/s
INFO: backup was done incrementally, reused 6.92 GiB (86%)
INFO: transferred 1.08 GiB in 457 seconds (2.4 MiB/s)
INFO: adding notes to backup
INFO: prune older backups with retention: keep-daily=2, keep-monthly=2
INFO: running 'proxmox-backup-client prune' for 'vm/104'
INFO: pruned 1 backup(s) not covered by keep-retention policy
INFO: Finished Backup of VM 104 (00:07:43)
INFO: Backup finished at 2025-10-12 23:07:45
INFO: Starting Backup of VM 106 (qemu)
INFO: Backup started at 2025-10-12 23:07:47
INFO: status = running
INFO: VM Name: YYY-pkg01.lan.ZZZ
INFO: include disk 'virtio0' 'pool1:vm-106-disk-0' 32G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/106/2025-10-12T21:07:47Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '4854ac77-1ff9-48e7-94c3-5ce9a32eb4d6'
INFO: resuming VM again
INFO: virtio0: dirty-bitmap status: OK (376.0 MiB of 32.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 376.0 MiB dirty of 32.0 GiB total
INFO:  95% (360.0 MiB of 376.0 MiB) in 3s, read: 120.0 MiB/s, write: 120.0 MiB/s
INFO:  96% (364.0 MiB of 376.0 MiB) in 48s, read: 91.0 KiB/s, write: 91.0 KiB/s
INFO: 100% (376.0 MiB of 376.0 MiB) in 51s, read: 4.0 MiB/s, write: 4.0 MiB/s
INFO: backup was done incrementally, reused 31.63 GiB (98%)
INFO: transferred 376.00 MiB in 69 seconds (5.4 MiB/s)
INFO: adding notes to backup
INFO: prune older backups with retention: keep-daily=2, keep-monthly=2
INFO: running 'proxmox-backup-client prune' for 'vm/106'
INFO: pruned 1 backup(s) not covered by keep-retention policy
INFO: Finished Backup of VM 106 (00:01:22)
INFO: Backup finished at 2025-10-12 23:09:09
INFO: Starting Backup of VM 111 (qemu)
INFO: Backup started at 2025-10-12 23:09:11
INFO: status = running
INFO: VM Name: YYY-ra02.lan.ZZZ
INFO: include disk 'scsi0' 'pool1:vm-111-disk-1' 8G
INFO: include disk 'efidisk0' 'pool1:vm-111-disk-0' 528K
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/111/2025-10-12T21:09:11Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'f7dd6f8c-37e6-41b6-adfe-e5eafc31d7d6'
INFO: resuming VM again
INFO: efidisk0: dirty-bitmap status: OK (drive clean)
INFO: scsi0: dirty-bitmap status: OK (908.0 MiB of 8.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 908.0 MiB dirty of 8.0 GiB total
INFO:  41% (380.0 MiB of 908.0 MiB) in 3s, read: 126.7 MiB/s, write: 126.7 MiB/s
INFO:  44% (408.0 MiB of 908.0 MiB) in 6s, read: 9.3 MiB/s, write: 9.3 MiB/s
INFO:  45% (412.0 MiB of 908.0 MiB) in 10s, read: 1.0 MiB/s, write: 1.0 MiB/s
ERROR: VM 111 qmp command 'query-backup' failed - got timeout
INFO: aborting backup job
ERROR: VM 111 qmp command 'backup-cancel' failed - unable to connect to VM 111 qmp socket - timeout after 5984 retries
INFO: resuming VM again
ERROR: Backup of VM 111 failed - VM 111 qmp command 'cont' failed - unable to connect to VM 111 qmp socket - timeout after 450 retries
INFO: Failed at 2025-10-12 23:31:06
INFO: Starting Backup of VM 200 (qemu)
INFO: Backup started at 2025-10-12 23:31:06
INFO: status = running
INFO: VM Name: YYY-vn02.ZZZ.lan
INFO: include disk 'scsi0' 'pool1:vm-200-disk-0' 20G
INFO: include disk 'scsi1' 'pool1:vm-200-disk-1' 100G
INFO: include disk 'efidisk0' 'pool1:vm-200-disk-2' 528K
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/200/2025-10-12T21:31:06Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'f759c88d-13c6-4eed-a0e4-5ae4d7a5f06d'
INFO: resuming VM again
INFO: efidisk0: dirty-bitmap status: OK (drive clean)
INFO: scsi0: dirty-bitmap status: OK (260.0 MiB of 20.0 GiB dirty)
INFO: scsi1: dirty-bitmap status: OK (3.9 GiB of 100.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 4.1 GiB dirty of 120.0 GiB total
INFO:   9% (400.0 MiB of 4.1 GiB) in 3s, read: 133.3 MiB/s, write: 133.3 MiB/s
INFO:  16% (708.0 MiB of 4.1 GiB) in 6s, read: 102.7 MiB/s, write: 102.7 MiB/s
INFO:  22% (944.0 MiB of 4.1 GiB) in 9s, read: 78.7 MiB/s, write: 78.7 MiB/s
INFO:  24% (1020.0 MiB of 4.1 GiB) in 13s, read: 19.0 MiB/s, write: 19.0 MiB/s
INFO:  28% (1.2 GiB of 4.1 GiB) in 16s, read: 58.7 MiB/s, write: 58.7 MiB/s
INFO:  31% (1.3 GiB of 4.1 GiB) in 19s, read: 41.3 MiB/s, write: 41.3 MiB/s
INFO:  36% (1.5 GiB of 4.1 GiB) in 25s, read: 38.7 MiB/s, write: 38.7 MiB/s
INFO:  38% (1.6 GiB of 4.1 GiB) in 28s, read: 20.0 MiB/s, write: 20.0 MiB/s
INFO:  42% (1.7 GiB of 4.1 GiB) in 31s, read: 56.0 MiB/s, write: 56.0 MiB/s
INFO:  46% (1.9 GiB of 4.1 GiB) in 34s, read: 58.7 MiB/s, write: 58.7 MiB/s
INFO:  47% (2.0 GiB of 4.1 GiB) in 37s, read: 22.7 MiB/s, write: 22.7 MiB/s
INFO:  52% (2.2 GiB of 4.1 GiB) in 40s, read: 68.0 MiB/s, write: 68.0 MiB/s
INFO:  54% (2.2 GiB of 4.1 GiB) in 43s, read: 24.0 MiB/s, write: 24.0 MiB/s
INFO:  59% (2.4 GiB of 4.1 GiB) in 46s, read: 66.7 MiB/s, write: 66.7 MiB/s
INFO:  64% (2.6 GiB of 4.1 GiB) in 50s, read: 52.0 MiB/s, write: 52.0 MiB/s
INFO:  70% (2.9 GiB of 4.1 GiB) in 53s, read: 86.7 MiB/s, write: 86.7 MiB/s
INFO:  72% (3.0 GiB of 4.1 GiB) in 56s, read: 26.7 MiB/s, write: 26.7 MiB/s
INFO:  74% (3.1 GiB of 4.1 GiB) in 59s, read: 38.7 MiB/s, write: 38.7 MiB/s
INFO:  80% (3.3 GiB of 4.1 GiB) in 1m 2s, read: 77.3 MiB/s, write: 77.3 MiB/s
INFO:  84% (3.5 GiB of 4.1 GiB) in 1m 5s, read: 54.7 MiB/s, write: 54.7 MiB/s
INFO:  88% (3.7 GiB of 4.1 GiB) in 1m 8s, read: 66.7 MiB/s, write: 66.7 MiB/s
INFO:  92% (3.8 GiB of 4.1 GiB) in 1m 11s, read: 50.7 MiB/s, write: 50.7 MiB/s
INFO:  96% (4.0 GiB of 4.1 GiB) in 1m 14s, read: 62.7 MiB/s, write: 62.7 MiB/s
INFO:  98% (4.1 GiB of 4.1 GiB) in 1m 17s, read: 17.3 MiB/s, write: 17.3 MiB/s
INFO: 100% (4.1 GiB of 4.1 GiB) in 1m 23s, read: 12.7 MiB/s, write: 12.7 MiB/s
INFO: backup was done incrementally, reused 115.87 GiB (96%)
INFO: transferred 4.13 GiB in 91 seconds (46.5 MiB/s)
INFO: adding notes to backup
INFO: prune older backups with retention: keep-daily=2, keep-monthly=2
INFO: running 'proxmox-backup-client prune' for 'vm/200'
INFO: pruned 1 backup(s) not covered by keep-retention policy
INFO: Finished Backup of VM 200 (00:01:33)
INFO: Backup finished at 2025-10-12 23:32:39
INFO: Backup job finished with errors
INFO: notified via target `Pushover`
TASK ERROR: job errors
 
Hi,

/var/log/disk_stats.log

Code:
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1731524414
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=3.54 avg60=6.72 avg300=6.52 total=52092638915
full avg10=3.53 avg60=6.60 avg300=6.42 total=51465210427

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1732167766
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=11.29 avg60=6.16 avg300=6.23 total=52112671977
full avg10=11.29 avg60=6.12 avg300=6.16 total=51485110950

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1732802271
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=18.34 avg60=10.08 avg300=6.64 total=52134164076
full avg10=18.26 avg60=10.02 avg300=6.58 total=51506439125

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1733430194
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=2.16 avg60=4.07 avg300=5.65 total=52151086610
full avg10=2.16 avg60=4.06 avg300=5.61 total=51523257846

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1734120410
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=1.97 avg60=3.36 avg300=5.57 total=52170903489
full avg10=1.97 avg60=3.34 avg300=5.51 total=51542785247

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1734752293
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=3.08 avg60=7.47 avg300=7.20 total=52197114494
full avg10=3.08 avg60=7.44 avg300=7.16 total=51568867927

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1735391642
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=1.69 avg60=6.65 avg300=6.49 total=52215422677
full avg10=1.68 avg60=6.61 avg300=6.46 total=51587071363

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1736059979
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=1.90 avg60=5.23 avg300=5.77 total=52232872028
full avg10=1.82 avg60=5.18 avg300=5.71 total=51604333478

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1736715956
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=2.68 avg60=2.79 avg300=5.01 total=52250110791
full avg10=2.68 avg60=2.76 avg300=4.95 total=51621426007

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1737353604
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=1.44 avg60=5.70 avg300=6.73 total=52275900376
full avg10=1.43 avg60=5.64 avg300=6.68 total=51647091528

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1737990781
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=1.89 avg60=4.14 avg300=5.84 total=52293731167
full avg10=1.89 avg60=4.13 avg300=5.80 total=51664798969

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1738647534
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=2.02 avg60=4.48 avg300=5.67 total=52312038074
full avg10=2.02 avg60=4.45 avg300=5.63 total=51682918441

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1739288799
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=2.38 avg60=4.15 avg300=5.63 total=52331164533
full avg10=2.33 avg60=4.06 avg300=5.56 total=51701817742

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1739936456
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=2.35 avg60=4.15 avg300=5.04 total=52347359159
full avg10=2.35 avg60=4.13 avg300=4.99 total=51717875322

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1740560771
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=10.41 avg60=7.67 avg300=5.14 total=52363613734
full avg10=10.28 avg60=7.56 avg300=5.08 total=51733957199

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1741186395
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=0.86 avg60=3.36 avg300=4.75 total=52379192139
full avg10=0.84 avg60=3.28 avg300=4.67 total=51749400007

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1741794124
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=1.80 avg60=4.83 avg300=4.43 total=52393253276
full avg10=1.79 avg60=4.76 avg300=4.36 total=51763297614

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1742444075
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=2.37 avg60=2.92 avg300=4.04 total=52406582183
full avg10=2.36 avg60=2.89 avg300=3.99 total=51776513154

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1743096317
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=2.55 avg60=3.41 avg300=4.79 total=52425069996
full avg10=2.55 avg60=3.38 avg300=4.73 total=51794858186

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1743740735
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=1.58 avg60=2.84 avg300=4.73 total=52442239668
full avg10=1.58 avg60=2.83 avg300=4.69 total=51811905917

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1744389478
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=1.85 avg60=3.00 avg300=4.73 total=52459831981
full avg10=1.85 avg60=2.99 avg300=4.70 total=51829398706

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1745031180
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=1.89 avg60=2.91 avg300=4.83 total=52477963827
full avg10=1.89 avg60=2.88 avg300=4.78 total=51847307128

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1745689919
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=3.20 avg60=3.03 avg300=4.78 total=52495793530
full avg10=3.19 avg60=3.00 avg300=4.73 total=51864986051

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144
==> /proc/pressure/cpu <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=1746333965
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=2.31 avg60=3.08 avg300=4.68 total=52512869866
full avg10=2.31 avg60=3.08 avg300=4.64 total=51881922058

==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=159
full avg10=0.00 avg60=0.00 avg300=0.00 total=144

disk_stats.err is clean.
 
Hello,
Thank you for the provided log.
Based on /var/log/disk_stats.log the pressure seems to be okay (not critical); some processes were fully stalled waiting on I/O, but only for short bursts.
Please remove the added cron job.


I see the issue also has some similarity to what was reported in this thread:
qmp command 'backup' failed - got timeout

Are you also using an NFS share for the backup storage?
 
One thing you could try is to configure Proxmox VE to skip the freeze-and-thaw cycle during backup [0][1] for the affected VM.
You can set this via the GUI: Datacenter → (VM 111) → Options → QEMU Guest Agent → uncheck "Freeze/thaw guest filesystems on backup for consistency".
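If you prefer the CLI, the same should be achievable with the freeze-fs-on-backup flag of the agent option (a sketch for VM 111; please verify against the qm man page for your version):
Code:
qm set 111 --agent enabled=1,freeze-fs-on-backup=0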

Please mind the warning mentioned in the wiki [0][2]:
Disabling this option can potentially lead to backups with inconsistent filesystems.

Do you have the guest agent installed on the VM?

[0] https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_qga_fsfreeze
[1] https://pve.proxmox.com/wiki/VM_Backup_Consistency
[2] https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_qemu_agent
 
Hi,

I tried these before posting on this forum. From my first post:

I tried changing the CPU type (Host to KVM), disabling/enabling iothread, changing storage (Ceph/local), enabling/disabling the QEMU guest agent, and the filesystem freeze in the QEMU agent. But nothing changed.
 
Did you check the backup task log when the QEMU guest agent was disabled?
Because from the above task log it seems the VM hangs shortly after the guest-agent 'fs-freeze' command is issued.

I ran a similar test and couldn't reproduce the issue, using kernel 6.14.11-3-pve and this VM config:
Bash:
agent: 1
boot: order=ide0;ide2;net0
cores: 1
cpu: x86-64-v2-AES
ide0: local-zfs:vm-100-disk-0,size=32G
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=10.0.2,ctime=1760445080
numa: 0
ostype: other
scsihw: virtio-scsi-single
smbios1: uuid=9612727b-c8d2-4972-973e-67e232769cc6
sockets: 1

Task log:
Bash:
INFO: starting new backup job: vzdump 100 --notification-mode notification-system --mode snapshot --quiet 1 --fleecing 0 --storage store1 --notes-template '{{guestname}}' --prune-backups 'keep-last=2'
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2025-10-14 16:40:03
INFO: status = running
INFO: VM Name: freebsd
INFO: include disk 'ide0' 'local-zfs:vm-100-disk-0' 32G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/100/2025-10-14T14:40:03Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '4086e472-5a73-4665-8380-29312d82e867'
INFO: resuming VM again
INFO: ide0: dirty-bitmap status: OK (48.0 MiB of 32.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 48.0 MiB dirty of 32.0 GiB total
INFO: 100% (48.0 MiB of 48.0 MiB) in 1s, read: 48.0 MiB/s, write: 48.0 MiB/s
INFO: backup was done incrementally, reused 31.95 GiB (99%)
INFO: transferred 48.00 MiB in 1 seconds (48.0 MiB/s)
INFO: adding notes to backup
INFO: prune older backups with retention: keep-last=2
INFO: running 'proxmox-backup-client prune' for 'vm/100'
INFO: pruned 1 backup(s) not covered by keep-retention policy
INFO: Finished Backup of VM 100 (00:00:01)
INFO: Backup finished at 2025-10-14 16:40:04
INFO: Backup job finished successfully
INFO: notified via target `mail-to-root`
TASK OK

Bash:
:~ # sysctl -n kern.osrelease kern.ostype
14.3-RELEASE
FreeBSD
 
Hi, I experienced the exact same problem. The issue started the night I upgraded from 8.4 to 9.

The software configuration is nearly identical: pfSense in a VM under Proxmox. Note: under Proxmox 8.4 everything ran flawlessly for months.
Every night the pfSense freezes, around backup time. I created a new VM with OPNsense (FreeBSD-based anyway): same result.

ktoczyski is not alone :/

 
Hi,

Indeed, he is / you are not alone.
The upgrade from 8.4 to 9 was the point where the trouble started. I have two VMs freezing, both FreeBSD derivatives: one OPNsense, one XigmaNAS. I have a second XigmaNAS VM running without problems even after the upgrade, and no clue why it is not affected.
I tried several VM hardware variations, but they freeze regardless of UEFI/BIOS, virtio NIC/E1000, SPICE or standard display, and with or without the QEMU agent.

What also changed is the memory usage. One can clearly see when I upgraded the host:
Bildschirmfoto vom 2025-10-15 07-35-12.png

This is the month average and looks the same on all FreeBSD VMs; memory usage is a little over the VM's memory setting:
Bildschirmfoto vom 2025-10-15 07-43-41.png

BTW, the only obvious difference between the freezing and non-freezing BSDs is the memory setting: the non-freezing one has only 3 GiB, the other two have 12 and 32 GiB.

Sorry for spamming all my thoughts on this, but maybe something is helpful...

Frank
 
Did you check the backup task log when the QEMU guest agent was disabled?
Because from the above task log it seems the VM hangs shortly after the guest-agent 'fs-freeze' command is issued.
I tried.

Code:
INFO: Starting Backup of VM 111 (qemu)
INFO: Backup started at 2025-10-15 08:10:27
INFO: status = running
INFO: VM Name: XXX
INFO: include disk 'scsi0' 'pool1:vm-111-disk-1' 8G
INFO: include disk 'efidisk0' 'pool1:vm-111-disk-0' 528K
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/111/2025-10-15T06:10:27Z'
INFO: skipping guest-agent 'fs-freeze', disabled in VM options
INFO: started backup task 'b7e1a44e-41c6-4852-adec-1f0a0a59f6fd'
INFO: resuming VM again
INFO: efidisk0: dirty-bitmap status: OK (drive clean)
INFO: scsi0: dirty-bitmap status: OK (524.0 MiB of 8.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 524.0 MiB dirty of 8.0 GiB total
INFO:  76% (400.0 MiB of 524.0 MiB) in 3s, read: 133.3 MiB/s, write: 133.3 MiB/s
INFO:  77% (404.0 MiB of 524.0 MiB) in 6s, read: 1.3 MiB/s, write: 1.3 MiB/s
ERROR: VM 111 qmp command 'query-backup' failed - got timeout

but nothing changed.

I have an idea. My PBS server is in a different location than the PVE cluster (connected via a 1 Gbit link). The backup starts at 23:00 for 10 VMs, split into 2 jobs. One job works fine, but the second one gets a timeout and freezes the VM (log above). Maybe the problem is between PVE v9 <> PBS v4?
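To get a rough idea whether the 1 Gbit link or the PBS side is the bottleneck, I could run the built-in PBS client benchmark from the PVE host (placeholders for my PBS user, host and datastore):
Code:
proxmox-backup-client benchmark --repository 'root@pam@<pbs-host>:<datastore>'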