I'm having an issue with my NVMe controller going offline: https://forum.proxmox.com/threads/p...ffff-pci_status-0x10.88604/page-2#post-471159
As you'll see in that thread, there are lots of possible causes, but I've now turned off all PCIe power state management:
Code:
cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-5.13.19-6-pve root=/dev/mapper/pve-root ro quiet video=efifb:off acpi_enforce_resources=lax nvme_core.default_ps_max_latency_us=0 pcie_aspm=off
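To double-check that these settings actually took effect, I believe the following should show APST disabled on the drive and ASPM disabled on the link (nvme-cli needs to be installed, and the PCI address is just a placeholder for my SSD's):
Code:
# feature 0x0c is Autonomous Power State Transition (APST)
nvme get-feature /dev/nvme0 -f 0x0c -H

# LnkCtl should report "ASPM Disabled" (substitute your SSD's PCI address)
lspci -vv -s 01:00.0 | grep 'LnkCtl:'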
Initially I only ever saw this issue while playing a game in VFIO, but I've actually been able to replicate it just by backing up the Windows VM while stopped:
Code:
INFO: starting new backup job: vzdump 101 --storage pve --mode stop --compress zstd --node pve --remove 0
INFO: Starting Backup of VM 101 (qemu)
INFO: Backup started at 2022-05-19 10:17:20
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: desktop
INFO: include disk 'scsi0' 'vms:vm-101-disk-1' 128G
INFO: include disk 'scsi1' 'vms:vm-101-disk-2' 256G
INFO: include disk 'efidisk0' 'vms:vm-101-disk-0' 4M
INFO: creating vzdump archive '/mnt/data/pve/dump/vzdump-qemu-101-2022_05_19-10_17_20.vma.zst'
INFO: starting kvm to execute backup task
INFO: started backup task 'd5cb495e-6ef2-4a2e-b804-acb19891d2fb'
INFO: 0% (786.6 MiB of 384.0 GiB) in 3s, read: 262.2 MiB/s, write: 246.4 MiB/s
INFO: 1% (3.9 GiB of 384.0 GiB) in 16s, read: 246.8 MiB/s, write: 226.9 MiB/s
...
INFO: 87% (335.1 GiB of 384.0 GiB) in 16m 37s, read: 22.9 MiB/s, write: 18.1 MiB/s
ERROR: job failed with err -125 - Operation canceled
INFO: aborting backup job
INFO: stopping kvm after backup task
trying to acquire lock...
OK
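In case it helps anyone reproduce this, I run the backup from one terminal and follow the kernel log from another; the vzdump command is the same one from the log above:
Code:
# terminal 1: follow kernel messages, watching for the controller reset
dmesg -wT | grep -i nvme

# terminal 2: the backup job that triggers the failure
vzdump 101 --storage pve --mode stop --compress zstd --node pve --remove 0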
In the logs, I see this is down to the SSD controller going offline:
Code:
[ 1216.472650] nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
[ 1216.512634] blk_update_request: I/O error, dev nvme0n1, sector 32597504 op 0x0:(READ) flags 0x80700 phys_seg 48 prio class 0
[ 1216.512662] blk_update_request: I/O error, dev nvme0n1, sector 32583552 op 0x0:(READ) flags 0x80700 phys_seg 40 prio class 0
[ 1216.512687] blk_update_request: I/O error, dev nvme0n1, sector 32591232 op 0x0:(READ) flags 0x80700 phys_seg 21 prio class 0
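For what it's worth, this is roughly how I check on the controller after it happens (the sysfs state file is an assumption on my part; I'd expect it to read "live" normally):
Code:
# does the controller still enumerate?
nvme list

# current controller state (e.g. live / resetting / dead)
cat /sys/class/nvme/nvme0/state

# any PCIe/AER errors logged around the same time?
journalctl -k -b | grep -iE 'nvme|aer'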
One thing surprised me while running the backup on the stopped VM with the "stop" backup mode: the kvm process starts up for that machine, and sometimes uses up to 800% CPU (i.e. an average of 8 of my 24 cores).
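(For reference, I measured that with something like the below; pidstat is from the sysstat package, and the pgrep pattern assumes the -id flag Proxmox puts on its kvm command lines.)
Code:
# sample CPU usage of the kvm process for VM 101 once a second
pidstat -u -p "$(pgrep -f 'kvm.*-id 101')" 1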
I don't suppose the backup process itself is broken, but I'd like to know what it's doing that might be exposing this problem.
Another important thing: the disk that's going offline is the one the VM disks are on. The backups are written to another NVMe that's working fine.