Fighting weird issues - now backup failed with "err -5 - Input/output error"

Iacov

Member
Jan 24, 2024
Hey,

My second PVE host (pve2) has issues, but I couldn't nail them down.

It's a Minisforum N100-based machine with a "primary" 500 GB WD Red NVMe and a backup SATA SSD (also a 500 GB WD Red).
I have two VMs running on that device: 201 (Jellyfin) and 210 (Pi-hole).
I have passed the iGPU through to the Jellyfin VM via this guide: https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-passthrough-to-vm/
(device 00:02.0)
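For reference, the passthrough boils down to a single line in the VM config, roughly like this (a sketch; the exact options produced by the guide may differ):

```
# /etc/pve/qemu-server/201.conf (excerpt, sketch)
hostpci0: 0000:00:02.0
```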

Twice now, around or during a backup to my NAS (I can't tell exactly when), my pve/data thin pool has switched to read-only because of a metadata error (even though metadata usage is low).
I reinstalled PVE once, but the issue happened again after two or three weeks.
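(The metadata usage figure is from standard LVM tooling, roughly like this; a sketch, assuming the default pve volume group:)

```shell
# Show data and metadata fill level of the thin pool (run as root on the PVE host)
lvs -a -o lv_name,data_percent,metadata_percent pve
```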

After the second time, I simply "nuked" pve/data and re-established it via the command line.
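Roughly what I did was the following (a destructive sketch, assuming the default pve volume group; the metadata size is a placeholder):

```shell
# WARNING: destroys all data on the thin pool - sketch only, run as root
lvremove pve/data
# Recreate the thin pool in the remaining free space, with an explicit metadata size
lvcreate --type thin-pool -l 100%FREE --poolmetadatasize 1G -n data pve
```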

I did a local backup once for testing, and that worked fine,
but during the scheduled local backup tonight I got an "err -5 - Input/output error".

The task starts at 03:30, which is why I pulled the log around that time.
Can you help me see what the issue could be, or where I could investigate further?
Maybe it's a hint to my overall problem.
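(For reference, the log below was pulled with something like the following; the time window is approximate:)

```shell
# Journal around the scheduled backup window
journalctl --since "2024-06-29 03:29" --until "2024-06-29 03:50"
```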

Code:
Jun 29 03:30:02 pve2 pvescheduler[1695605]: <root@pam> starting task UPID:pve2:0019DF76:03AA5924:667F639A:vzdump::root@pam:
Jun 29 03:30:02 pve2 pvescheduler[1695606]: INFO: starting new backup job: vzdump --notes-template '{{guestname}}' --fleecing 0 --mode stop --storage bu_ssd --node pve2 --prune-backups 'keep-monthly=1,keep-weekly=6' --notification-mode notification-system --quiet 1 --compress zstd --all 1
Jun 29 03:30:02 pve2 pvescheduler[1695606]: INFO: Starting Backup of VM 201 (qemu)
Jun 29 03:30:03 pve2 qm[1695614]: shutdown VM 201: UPID:pve2:0019DF7E:03AA5972:667F639B:qmshutdown:201:root@pam:
Jun 29 03:30:03 pve2 qm[1695613]: <root@pam> starting task UPID:pve2:0019DF7E:03AA5972:667F639B:qmshutdown:201:root@pam:
Jun 29 03:30:52 pve2 kernel: tap201i0: left allmulticast mode
Jun 29 03:30:52 pve2 kernel: fwbr201i0: port 2(tap201i0) entered disabled state
Jun 29 03:30:52 pve2 kernel: fwbr201i0: port 1(fwln201i0) entered disabled state
Jun 29 03:30:52 pve2 kernel: vmbr0: port 2(fwpr201p0) entered disabled state
Jun 29 03:30:52 pve2 kernel: fwln201i0 (unregistering): left allmulticast mode
Jun 29 03:30:52 pve2 kernel: fwln201i0 (unregistering): left promiscuous mode
Jun 29 03:30:52 pve2 kernel: fwbr201i0: port 1(fwln201i0) entered disabled state
Jun 29 03:30:52 pve2 kernel: fwpr201p0 (unregistering): left allmulticast mode
Jun 29 03:30:52 pve2 kernel: fwpr201p0 (unregistering): left promiscuous mode
Jun 29 03:30:52 pve2 kernel: vmbr0: port 2(fwpr201p0) entered disabled state
Jun 29 03:30:52 pve2 qmeventd[582]: read: Connection reset by peer
Jun 29 03:30:52 pve2 qm[1695613]: <root@pam> end task UPID:pve2:0019DF7E:03AA5972:667F639B:qmshutdown:201:root@pam: OK
Jun 29 03:30:52 pve2 systemd[1]: 201.scope: Deactivated successfully.
Jun 29 03:30:52 pve2 systemd[1]: 201.scope: Consumed 54min 46.216s CPU time.
Jun 29 03:30:53 pve2 systemd[1]: Started 201.scope.
Jun 29 03:30:53 pve2 qmeventd[1695770]: Starting cleanup for 201
Jun 29 03:30:53 pve2 qmeventd[1695770]: trying to acquire lock...
Jun 29 03:30:53 pve2 kernel: tap201i0: entered promiscuous mode
Jun 29 03:30:53 pve2 kernel: vmbr0: port 2(fwpr201p0) entered blocking state
Jun 29 03:30:53 pve2 kernel: vmbr0: port 2(fwpr201p0) entered disabled state
Jun 29 03:30:53 pve2 kernel: fwpr201p0: entered allmulticast mode
Jun 29 03:30:53 pve2 kernel: fwpr201p0: entered promiscuous mode
Jun 29 03:30:53 pve2 kernel: vmbr0: port 2(fwpr201p0) entered blocking state
Jun 29 03:30:53 pve2 kernel: vmbr0: port 2(fwpr201p0) entered forwarding state
Jun 29 03:30:53 pve2 kernel: fwbr201i0: port 1(fwln201i0) entered blocking state
Jun 29 03:30:53 pve2 kernel: fwbr201i0: port 1(fwln201i0) entered disabled state
Jun 29 03:30:53 pve2 kernel: fwln201i0: entered allmulticast mode
Jun 29 03:30:53 pve2 kernel: fwln201i0: entered promiscuous mode
Jun 29 03:30:53 pve2 kernel: fwbr201i0: port 1(fwln201i0) entered blocking state
Jun 29 03:30:53 pve2 kernel: fwbr201i0: port 1(fwln201i0) entered forwarding state
Jun 29 03:30:53 pve2 kernel: fwbr201i0: port 2(tap201i0) entered blocking state
Jun 29 03:30:53 pve2 kernel: fwbr201i0: port 2(tap201i0) entered disabled state
Jun 29 03:30:53 pve2 kernel: tap201i0: entered allmulticast mode
Jun 29 03:30:53 pve2 kernel: fwbr201i0: port 2(tap201i0) entered blocking state
Jun 29 03:30:53 pve2 kernel: fwbr201i0: port 2(tap201i0) entered forwarding state
Jun 29 03:30:54 pve2 kernel: vfio-pci 0000:00:02.0: enabling device (0000 -> 0003)
Jun 29 03:30:55 pve2 qmeventd[1695770]:  OK
Jun 29 03:30:55 pve2 qmeventd[1695770]: vm still running
Jun 29 03:31:05 pve2 kernel: kvm: kvm [1695782]: ignored rdmsr: 0xc0011029 data 0x0
Jun 29 03:31:24 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:31:24 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x54f553c000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:31:24 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:31:24 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x1f04e6d000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:31:24 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:31:24 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x49f5631000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:31:24 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:31:29 pve2 kernel: dmar_fault: 851 callbacks suppressed
Jun 29 03:31:29 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:31:29 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x5d5d199000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:31:29 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:31:29 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x1ba1e18000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:31:29 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:31:29 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x662e6e7000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:31:29 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:31:31 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 387
Jun 29 03:31:31 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 387
Jun 29 03:31:31 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Jun 29 03:31:31 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 387
Jun 29 03:31:31 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 387
Jun 29 03:31:31 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Jun 29 03:31:31 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 387
Jun 29 03:31:31 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 387
Jun 29 03:31:31 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Jun 29 03:31:31 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 387
Jun 29 03:31:31 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 387
Jun 29 03:31:31 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Jun 29 03:31:31 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 387
Jun 29 03:31:31 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 387
Jun 29 03:31:31 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Jun 29 03:31:31 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 387
Jun 29 03:31:31 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 387
Jun 29 03:31:31 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Jun 29 03:31:31 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 387
Jun 29 03:31:31 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 387
Jun 29 03:31:31 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Jun 29 03:31:31 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 387
Jun 29 03:31:31 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 387
Jun 29 03:31:31 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 387
Jun 29 03:31:31 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Jun 29 03:31:31 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 387
Jun 29 03:31:31 pve2 kernel: device-mapper: btree spine: node_check failed: blocknr 0 != wanted 387
Jun 29 03:31:31 pve2 kernel: device-mapper: block manager: btree_node validator check failed for block 387
Jun 29 03:31:31 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Jun 29 03:31:31 pve2 kernel: device-mapper: thin: process_cell: dm_thin_find_block() failed: error = -15
Jun 29 03:31:32 pve2 pvescheduler[1695606]: ERROR: Backup of VM 201 failed - job failed with err -5 - Input/output error
Jun 29 03:31:32 pve2 pvescheduler[1695606]: INFO: Starting Backup of VM 210 (qemu)
Jun 29 03:31:33 pve2 qm[1696030]: shutdown VM 210: UPID:pve2:0019E11E:03AA7C8C:667F63F5:qmshutdown:210:root@pam:
Jun 29 03:31:33 pve2 qm[1696026]: <root@pam> starting task UPID:pve2:0019E11E:03AA7C8C:667F63F5:qmshutdown:210:root@pam:
Jun 29 03:31:35 pve2 kernel: tap210i0: left allmulticast mode
Jun 29 03:31:35 pve2 kernel: fwbr210i0: port 2(tap210i0) entered disabled state
Jun 29 03:31:35 pve2 kernel: fwbr210i0: port 1(fwln210i0) entered disabled state
Jun 29 03:31:35 pve2 kernel: vmbr0: port 3(fwpr210p0) entered disabled state
Jun 29 03:31:35 pve2 kernel: fwln210i0 (unregistering): left allmulticast mode
Jun 29 03:31:35 pve2 kernel: fwln210i0 (unregistering): left promiscuous mode
Jun 29 03:31:35 pve2 kernel: fwbr210i0: port 1(fwln210i0) entered disabled state
Jun 29 03:31:35 pve2 kernel: fwpr210p0 (unregistering): left allmulticast mode
Jun 29 03:31:35 pve2 kernel: fwpr210p0 (unregistering): left promiscuous mode
Jun 29 03:31:35 pve2 kernel: vmbr0: port 3(fwpr210p0) entered disabled state
Jun 29 03:31:35 pve2 qmeventd[582]: read: Connection reset by peer
Jun 29 03:31:36 pve2 qm[1696026]: <root@pam> end task UPID:pve2:0019E11E:03AA7C8C:667F63F5:qmshutdown:210:root@pam: OK
Jun 29 03:31:36 pve2 systemd[1]: 210.scope: Deactivated successfully.
Jun 29 03:31:36 pve2 systemd[1]: 210.scope: Consumed 22min 16.139s CPU time.
Jun 29 03:31:36 pve2 systemd[1]: Started 210.scope.
Jun 29 03:31:36 pve2 qmeventd[1696046]: Starting cleanup for 210
Jun 29 03:31:36 pve2 qmeventd[1696046]: trying to acquire lock...
Jun 29 03:31:36 pve2 kernel: tap210i0: entered promiscuous mode
Jun 29 03:31:36 pve2 kernel: vmbr0: port 3(fwpr210p0) entered blocking state
Jun 29 03:31:36 pve2 kernel: vmbr0: port 3(fwpr210p0) entered disabled state
Jun 29 03:31:36 pve2 kernel: fwpr210p0: entered allmulticast mode
Jun 29 03:31:36 pve2 kernel: fwpr210p0: entered promiscuous mode
Jun 29 03:31:36 pve2 kernel: vmbr0: port 3(fwpr210p0) entered blocking state
Jun 29 03:31:36 pve2 kernel: vmbr0: port 3(fwpr210p0) entered forwarding state
Jun 29 03:31:36 pve2 kernel: fwbr210i0: port 1(fwln210i0) entered blocking state
Jun 29 03:31:36 pve2 kernel: fwbr210i0: port 1(fwln210i0) entered disabled state
Jun 29 03:31:36 pve2 kernel: fwln210i0: entered allmulticast mode
Jun 29 03:31:36 pve2 kernel: fwln210i0: entered promiscuous mode
Jun 29 03:31:36 pve2 kernel: fwbr210i0: port 1(fwln210i0) entered blocking state
Jun 29 03:31:36 pve2 kernel: fwbr210i0: port 1(fwln210i0) entered forwarding state
Jun 29 03:31:36 pve2 kernel: fwbr210i0: port 2(tap210i0) entered blocking state
Jun 29 03:31:36 pve2 kernel: fwbr210i0: port 2(tap210i0) entered disabled state
Jun 29 03:31:36 pve2 kernel: tap210i0: entered allmulticast mode
Jun 29 03:31:36 pve2 kernel: fwbr210i0: port 2(tap210i0) entered blocking state
Jun 29 03:31:36 pve2 kernel: fwbr210i0: port 2(tap210i0) entered forwarding state
Jun 29 03:31:36 pve2 qmeventd[1696046]:  OK
Jun 29 03:31:36 pve2 qmeventd[1696046]: vm still running
Jun 29 03:31:43 pve2 kernel: kvm: kvm [1696056]: ignored rdmsr: 0xc0011029 data 0x0
Jun 29 03:31:52 pve2 pvescheduler[1695606]: INFO: Finished Backup of VM 210 (00:00:20)
Jun 29 03:31:52 pve2 pvescheduler[1695606]: INFO: Backup job finished with errors
Jun 29 03:31:52 pve2 pvescheduler[1695606]: job errors
Jun 29 03:36:00 pve2 kernel: dmar_fault: 11864 callbacks suppressed
Jun 29 03:36:00 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:36:00 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x1b7dc2c000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:36:00 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:36:00 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x324857d000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:36:00 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:36:00 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x67193d0000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:36:00 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:36:18 pve2 kernel: dmar_fault: 2 callbacks suppressed
Jun 29 03:36:18 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:36:18 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x7cb023c000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:36:18 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:36:18 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x23be166000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:36:18 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:36:18 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x191fe19000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:36:19 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:36:23 pve2 kernel: dmar_fault: 104 callbacks suppressed
Jun 29 03:36:23 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:36:23 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x14e79c7000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:36:23 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:36:23 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x56208df000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:36:23 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:36:23 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x29bbf67000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:36:23 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:40:52 pve2 smartd[580]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 52 to 51
Jun 29 03:46:11 pve2 kernel: dmar_fault: 23 callbacks suppressed
Jun 29 03:46:11 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:46:11 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x99ac6d000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:46:11 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:46:11 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x2a89b75000 [fault reason 0x05] PTE Write access is not set
Jun 29 03:46:11 pve2 kernel: DMAR: DRHD: handling fault status reg 3
Jun 29 03:46:11 pve2 kernel: DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x3d496aa000 [fault reason 0x05] PTE Write access is not set

Thank you very much!
 
(Sorry for adding this via a post, but it wouldn't let me edit the OP for length reasons.)
The backup log:

Code:
Details
=======
VMID    Name           Status    Time        Size         Filename                                                      
201     SV-Jellyfin    err       1min 30s    0 B          null                                                          
210     PiHole2        ok        20s         697.3 MiB    /mnt/bu_ssd/dump/vzdump-qemu-210-2024_06_29-03_31_32.vma.zst  

Total running time: 1min 50s
Total size: 697.3 MiB

Logs
====
vzdump --notes-template '{{guestname}}' --fleecing 0 --mode stop --storage bu_ssd --node pve2 --prune-backups 'keep-monthly=1,keep-weekly=6' --notification-mode notification-system --quiet 1 --compress zstd --all 1


201: 2024-06-29 03:30:02 INFO: Starting Backup of VM 201 (qemu)
201: 2024-06-29 03:30:02 INFO: status = running
201: 2024-06-29 03:30:02 INFO: backup mode: stop
201: 2024-06-29 03:30:02 INFO: ionice priority: 7
201: 2024-06-29 03:30:02 INFO: VM Name: SV-Jellyfin
201: 2024-06-29 03:30:02 INFO: include disk 'scsi0' 'local-lvm:vm-201-disk-0' 150G
201: 2024-06-29 03:30:03 INFO: stopping virtual guest
201: 2024-06-29 03:30:52 INFO: creating vzdump archive '/mnt/bu_ssd/dump/vzdump-qemu-201-2024_06_29-03_30_02.vma.zst'
201: 2024-06-29 03:30:52 INFO: starting kvm to execute backup task
201: 2024-06-29 03:30:55 INFO: started backup task '08d13dab-c907-4576-a3a6-090a9b7b4f1f'
201: 2024-06-29 03:30:55 INFO: resuming VM again after 52 seconds
201: 2024-06-29 03:30:58 INFO:   1% (2.3 GiB of 150.0 GiB) in 3s, read: 798.7 MiB/s, write: 277.5 MiB/s
201: 2024-06-29 03:31:02 INFO:   2% (3.2 GiB of 150.0 GiB) in 7s, read: 213.0 MiB/s, write: 200.2 MiB/s
201: 2024-06-29 03:31:08 INFO:   3% (4.7 GiB of 150.0 GiB) in 13s, read: 256.2 MiB/s, write: 227.8 MiB/s
201: 2024-06-29 03:31:12 INFO:   4% (6.3 GiB of 150.0 GiB) in 17s, read: 411.9 MiB/s, write: 181.5 MiB/s
201: 2024-06-29 03:31:15 INFO:   6% (10.0 GiB of 150.0 GiB) in 20s, read: 1.2 GiB/s, write: 123.6 MiB/s
201: 2024-06-29 03:31:18 INFO:   7% (10.7 GiB of 150.0 GiB) in 23s, read: 246.5 MiB/s, write: 205.4 MiB/s
201: 2024-06-29 03:31:26 INFO:   8% (12.3 GiB of 150.0 GiB) in 31s, read: 202.5 MiB/s, write: 186.0 MiB/s
201: 2024-06-29 03:31:31 INFO:   9% (13.8 GiB of 150.0 GiB) in 36s, read: 299.4 MiB/s, write: 280.5 MiB/s
201: 2024-06-29 03:31:32 INFO:   9% (13.9 GiB of 150.0 GiB) in 37s, read: 167.8 MiB/s, write: 154.6 MiB/s
201: 2024-06-29 03:31:32 ERROR: job failed with err -5 - Input/output error
201: 2024-06-29 03:31:32 INFO: aborting backup job
201: 2024-06-29 03:31:32 INFO: resuming VM again
201: 2024-06-29 03:31:32 ERROR: Backup of VM 201 failed - job failed with err -5 - Input/output error

210: 2024-06-29 03:31:32 INFO: Starting Backup of VM 210 (qemu)
210: 2024-06-29 03:31:32 INFO: status = running
210: 2024-06-29 03:31:32 INFO: backup mode: stop
210: 2024-06-29 03:31:32 INFO: ionice priority: 7
210: 2024-06-29 03:31:32 INFO: VM Name: PiHole2
210: 2024-06-29 03:31:32 INFO: include disk 'scsi0' 'local-lvm:vm-210-disk-0' 10G
210: 2024-06-29 03:31:32 INFO: stopping virtual guest
210: 2024-06-29 03:31:36 INFO: creating vzdump archive '/mnt/bu_ssd/dump/vzdump-qemu-210-2024_06_29-03_31_32.vma.zst'
210: 2024-06-29 03:31:36 INFO: starting kvm to execute backup task
210: 2024-06-29 03:31:36 INFO: started backup task 'ab11456b-f1bb-42b3-8f37-8d726848e3cb'
210: 2024-06-29 03:31:36 INFO: resuming VM again after 4 seconds
210: 2024-06-29 03:31:40 INFO:   8% (895.4 MiB of 10.0 GiB) in 4s, read: 223.8 MiB/s, write: 158.2 MiB/s
210: 2024-06-29 03:31:43 INFO:  17% (1.7 GiB of 10.0 GiB) in 7s, read: 291.2 MiB/s, write: 168.7 MiB/s
210: 2024-06-29 03:31:46 INFO:  24% (2.4 GiB of 10.0 GiB) in 10s, read: 235.6 MiB/s, write: 149.0 MiB/s
210: 2024-06-29 03:31:49 INFO:  50% (5.1 GiB of 10.0 GiB) in 13s, read: 914.7 MiB/s, write: 101.3 MiB/s
210: 2024-06-29 03:31:52 INFO: 100% (10.0 GiB of 10.0 GiB) in 16s, read: 1.6 GiB/s, write: 26.9 MiB/s
210: 2024-06-29 03:31:52 INFO: backup is sparse: 8.08 GiB (80%) total zero data
210: 2024-06-29 03:31:52 INFO: transferred 10.00 GiB in 16 seconds (640.0 MiB/s)
210: 2024-06-29 03:31:52 INFO: archive file size: 697MB
210: 2024-06-29 03:31:52 INFO: adding notes to backup
210: 2024-06-29 03:31:52 INFO: prune older backups with retention: keep-monthly=1, keep-weekly=6
210: 2024-06-29 03:31:52 INFO: removing backup 'bu_ssd:backup/vzdump-qemu-210-2024_06_27-08_37_34.vma.zst'
210: 2024-06-29 03:31:52 INFO: pruned 1 backup(s) not covered by keep-retention policy
210: 2024-06-29 03:31:52 INFO: Finished Backup of VM 210 (00:00:20)

Edit: a manually started backup job (after restarting the node) completed successfully and without issue.
The SMART reports for the NVMe and the SATA SSD look unsuspicious.
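(I checked SMART roughly like this, using smartmontools; the device names are assumptions and may differ on other setups:)

```shell
# Full SMART reports for the NVMe and the SATA backup SSD
smartctl -a /dev/nvme0
smartctl -a /dev/sda
```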
 
