Warning: updating these packages broke my PCI passthrough.

Sorry to borrow this topic. After updating to the latest PVE-related components, the VM, which previously could not be started, now starts, but with warnings.
The messages are as follows:

error writing '1' to '/sys/bus/pci/devices/0000:01:00.0/reset': Inappropriate ioctl for device
failed to reset PCI device '0000:01:00.0', but trying to continue as not all devices need a reset
swtpm_setup: Not overwriting existing state file.
kvm: warning: host doesn't support requested feature: CPUID.01H:ECX.pcid [bit 17]

The swtpm message and the CPUID warning appear on all virtual machines. The PCI device is my NVIDIA graphics card; for now this does not affect normal VM startup or device passthrough.
 

Yes, the PCI warnings are new and intentional. Previously we tried, e.g., to reset the device, but failed silently without knowing whether it worked or not.
Those warnings are not bad per se, but they could indicate a problem if, e.g., something in the guest is not working right.
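For context, the reset being attempted is the generic sysfs one. On recent kernels (5.15 and later) you can check which reset methods, if any, the kernel advertises for a device before suspecting a passthrough problem. A rough sketch; the device address 0000:01:00.0 is just an example, substitute your own:

```shell
# Example only -- replace with your own device address.
dev=/sys/bus/pci/devices/0000:01:00.0

# Kernels >= 5.15 list the usable reset methods here, e.g. "flr bus".
# An empty file means the kernel has no working method for this device.
if [ -r "$dev/reset_method" ]; then
    cat "$dev/reset_method"
fi

# This is the write that produces the warning in the task log;
# "Inappropriate ioctl for device" means no reset method is available.
echo 1 > "$dev/reset"
```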
 
In particular, it should be fixed thanks to @dcsapak in libpve-common-perl >= 8.2.7 and qemu-server >= 8.2.6, both available in the testing repository at the time of this writing.

If you'd like to install the packages, you can temporarily enable the repository (e.g. via the Repositories section in the UI), run apt update, run apt install libpve-common-perl qemu-server, then disable the repository again and run apt update once more.
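On the command line, those steps look roughly like this. The repository line below assumes Proxmox VE 8 on Debian Bookworm; adjust the suite name if yours differs:

```shell
# Temporarily enable the pvetest repository
echo "deb http://download.proxmox.com/debian/pve bookworm pvetest" \
    > /etc/apt/sources.list.d/pvetest.list

apt update
apt install libpve-common-perl qemu-server

# Disable the repository again and refresh the package index
rm /etc/apt/sources.list.d/pvetest.list
apt update
```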

Just wanted to let the community know that the workaround Fiona suggested worked in my case.

I experienced issues on hosts with 2 CPU sockets when passing through two Nvidia L40 cards to a VM, where only one of the GPUs was recognised in the VM. I do not know exactly which update broke the passthrough, but adding the testing apt repository, updating the two packages Fiona suggested, and reverting back to the subscription repository, followed by an apt update and finally a host reboot, helped. Both GPUs show up in the VM now.
 
AMD iGPU 780M
task error: can't reset 'c6:00.0' PCI device
Same here with Proxmox 8.3.0:

Code:
error writing '1' to '/sys/bus/pci/devices/0000:c6:00.0/reset': Inappropriate ioctl for device
failed to reset PCI device '0000:c6:00.0', but trying to continue as not all devices need a reset
TASK ERROR: timeout waiting on systemd

It seems that RadeonResetBugFix does not work anymore: https://github.com/inga-lovinde/RadeonResetBugFix/tree/master

How can I fix that?
 
Hi,
Same here with Proxmox 8.3.0:

Code:
error writing '1' to '/sys/bus/pci/devices/0000:c6:00.0/reset': Inappropriate ioctl for device
failed to reset PCI device '0000:c6:00.0', but trying to continue as not all devices need a reset
TASK ERROR: timeout waiting on systemd

It seems that RadeonResetBugFix does not work anymore: https://github.com/inga-lovinde/RadeonResetBugFix/tree/master

How can I fix that?
please share the output of pveversion -v and the VM configuration qm config <ID>, as well as an excerpt of the system logs/journal from around the time the issue happened. Does it work if you start the VM with a higher timeout, e.g. qm start <ID> --timeout 900? If yes, how long does it take to actually start?
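Collected in one place, the requested information can be gathered like this (VM ID 100 is a placeholder, replace it with your own):

```shell
pveversion -v                 # package versions
qm config 100                 # VM configuration
qm start 100 --timeout 900    # retry the start with a 15-minute timeout
```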
 
Hello,
thank you for your support:

pveversion -v:
Code:
proxmox-ve: 8.3.0 (running kernel: 6.8.12-4-pve)
pve-manager: 8.3.0 (running version: 8.3.0/c1689ccb1065a83b)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-4
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
amd64-microcode: 3.20240820.1~deb12u1
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.2.9
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.0-1
proxmox-backup-file-restore: 3.3.0-1
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.3
pve-cluster: 8.0.10
pve-container: 5.2.2
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-1
pve-ha-manager: 4.0.6
pve-i18n: 3.3.2
pve-qemu-kvm: 9.0.2-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.0
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1

qm config <ID> (Win11 VM with iGPU passthrough):
Code:
agent: 1
args: -cpu 'host,-hypervisor,kvm=off'
bios: ovmf
boot: order=scsi0;net0
cores: 16
cpu: host
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:c6:00.0,pcie=1,romfile=vbios_8845.bin,x-vga=1
hostpci1: 0000:c6:00.1,pcie=1,romfile=AMDGopDriver_8845hs.rom
machine: pc-q35-9.0
memory: 16384
meta: creation-qemu=9.0.2,ctime=1732920848
name: win11
net0: virtio=BC:24:11:46:0D:C3,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsi0: local-lvm:vm-100-disk-1,cache=writeback,discard=on,iothread=1,size=200G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=3be659fd-57b6-4496-9f8b-37b4dc200bad
sockets: 1
tpmstate0: local-lvm:vm-100-disk-2,size=4M,version=v2.0
vga: none
vmgenid: 9cdc5539-d08b-473d-b970-a2b6bbc76522

The "excerpt of the system logs/journal around the time of the issue" only shows this message, same as before, even if I start it with the 900-second timeout:
Code:
error writing '1' to '/sys/bus/pci/devices/0000:c6:00.0/reset': Inappropriate ioctl for device
failed to reset PCI device '0000:c6:00.0', but trying to continue as not all devices need a reset
timeout waiting on systemd
With an HDMI monitor connected to the machine, after the Windows VM goes off, the monitor stays blank and does not return to the host...

thank you again for your time
 
The "excerpt of the system logs/journal around the time of the issue" only shows this message, same as before, even if I start it with the 900-second timeout:
Code:
error writing '1' to '/sys/bus/pci/devices/0000:c6:00.0/reset': Inappropriate ioctl for device
failed to reset PCI device '0000:c6:00.0', but trying to continue as not all devices need a reset
timeout waiting on systemd
With an HDMI monitor connected to the machine, after the Windows VM goes off, the monitor stays blank and does not return to the host...
That does not look like journalctl output, e.g. timestamp and process IDs are missing. There also should be log lines mentioning the qmstart task.
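For reference, a time-bounded excerpt with timestamps and PIDs can be pulled like this; the timestamps are placeholders, use a window around your failed start:

```shell
# Full journal around the failed start, including timestamps and PIDs
journalctl --since "2024-12-02 10:50" --until "2024-12-02 11:00"

# Or follow the journal live while reproducing the issue
journalctl -f
```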
 
Oh sorry, I didn't understand; I guess this:

Code:
Dec 02 10:52:57 proxmox login[166633]: ROOT LOGIN  on '/dev/pts/0'
Dec 02 10:52:59 proxmox qm[166665]: <root@pam> starting task UPID:proxmox:00028B0A:0047DD96:674D837B:qmstart:100:root@pam:
Dec 02 10:52:59 proxmox qm[166666]: start VM 100: UPID:proxmox:00028B0A:0047DD96:674D837B:qmstart:100:root@pam:
Dec 02 10:53:00 proxmox systemd[1]: 100.scope: Deactivated successfully.
Dec 02 10:53:00 proxmox systemd[1]: Stopped 100.scope.
Dec 02 10:53:00 proxmox systemd[1]: 100.scope: Consumed 5h 41min 49.973s CPU time.
Dec 02 10:53:20 proxmox qm[166666]: timeout waiting on systemd
Dec 02 10:53:20 proxmox qm[166665]: <root@pam> end task UPID:proxmox:00028B0A:0047DD96:674D837B:qmstart:100:root@pam: timeout waiting on systemd
Dec 02 10:54:22 proxmox qm[166950]: <root@pam> starting task UPID:proxmox:00028C27:0047FDF6:674D83CE:qmstart:100:root@pam:
Dec 02 10:54:22 proxmox qm[166951]: start VM 100: UPID:proxmox:00028C27:0047FDF6:674D83CE:qmstart:100:root@pam:
Dec 02 10:54:42 proxmox qm[166951]: timeout waiting on systemd
Dec 02 10:54:42 proxmox qm[166950]: <root@pam> end task UPID:proxmox:00028C27:0047FDF6:674D83CE:qmstart:100:root@pam: timeout waiting on systemd
 
That does not look like journalctl output, e.g. timestamp and process IDs are missing. There also should be log lines mentioning the qmstart task.
Just checking in to see if there has been any progress on this? I am also experiencing this issue; happy to help any way I can.
 
That does not look like journalctl output, e.g. timestamp and process IDs are missing. There also should be log lines mentioning the qmstart task.
I'm experiencing the same issue.

I’m using an AMD Ryzen 7 8745HS with a 780M iGPU, running PVE 8.3.2.

After applying the RadeonResetBugFix, every time I reboot or shut down the Windows VM, it seems the iGPU isn't released. The VM cannot restart, and the status shows the following error message:

Code:
error writing '1' to '/sys/bus/pci/devices/0000:65:00.0/reset': Inappropriate ioctl for device

The only way I can get the VM running again is by restarting the host machine.
 
error writing '1' to '/sys/bus/pci/devices/0000:65:00.0/reset': Inappropriate ioctl for device

I encounter the same issue with my TrueNAS Scale VM. The PCIe devices (JBOD controllers) are not released after a VM reboots, causing the error message above (and the VM not starting).

I need to reboot the host to boot the VM.
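When a device gets stuck like this, a remove/rescan cycle on the host sometimes frees it without a full reboot. No guarantees: whether it helps is hardware-dependent and it can wedge the bus further on some systems, so treat it as an experiment. The device address below is just an example:

```shell
# Example only -- substitute the stuck device's address.
dev=0000:65:00.0

# Detach the device from the PCI bus...
echo 1 > /sys/bus/pci/devices/$dev/remove

# ...then ask the kernel to re-enumerate it
echo 1 > /sys/bus/pci/rescan
```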
 
In my case I am passing through an AMD 7900 GRE; it passes through fine initially, then at some point the VM will freeze. When I try to start the VM again, I get:
Code:
error writing '1' to '/sys/bus/pci/devices/0000:26:00.0/reset': Inappropriate ioctl for device
failed to reset PCI device '0000:26:00.0', but trying to continue as not all devices need a reset
kvm: ../hw/pci/pci.c:1633: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed.
TASK ERROR: start failed: QEMU exited with code 1

The only way to start the VM again is to reboot the host.
 
In my case I am passing through an AMD 7900 GRE; it passes through fine initially, then at some point the VM will freeze. When I try to start the VM again, I get:
Code:
error writing '1' to '/sys/bus/pci/devices/0000:26:00.0/reset': Inappropriate ioctl for device
failed to reset PCI device '0000:26:00.0', but trying to continue as not all devices need a reset
kvm: ../hw/pci/pci.c:1633: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed.
TASK ERROR: start failed: QEMU exited with code 1

The only way to start the VM again is to reboot the host.
Same here. Do you think it is a Proxmox problem, or does RadeonResetBugFix on AMD not work anymore?
 
Hello,
thank you for your support:
[...]
thank you again for your time
Have you fixed that? What should I do?
 
In my case I am passing through an AMD 7900 GRE; it passes through fine initially, then at some point the VM will freeze. When I try to start the VM again, I get:
Code:
error writing '1' to '/sys/bus/pci/devices/0000:26:00.0/reset': Inappropriate ioctl for device
failed to reset PCI device '0000:26:00.0', but trying to continue as not all devices need a reset
kvm: ../hw/pci/pci.c:1633: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed.
TASK ERROR: start failed: QEMU exited with code 1

The only way to start the VM again is to reboot the host.
Every time!
 
Hi,

please share the output of pveversion -v and the VM configuration qm config <ID> as well as an excerpt of the system logs/journal around the time the issue happened. Does it work if you start the VM with a higher timeout, e.g. qm start <ID> --timeout 900? If yes, how long does it take to actually start?

Hey there, looking forward to your help on this one: I'm also getting this error when trying to do a GPU passthrough to a VM.
Code:
qm start 104
error writing '1' to '/sys/bus/pci/devices/0000:01:00.0/reset': Inappropriate ioctl for device
failed to reset PCI device '0000:01:00.0', but trying to continue as not all devices need a reset
swtpm_setup: Not overwriting existing state file.


Here's my pveversion -v:
Code:
proxmox-ve: 8.3.0 (running kernel: 6.2.16-5-pve)
pve-manager: 8.3.2 (running version: 8.3.2/3e76eec21c4a14a7)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-6
proxmox-kernel-6.8.12-6-pve-signed: 6.8.12-6
proxmox-kernel-6.8.12-1-pve-signed: 6.8.12-1
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
pve-kernel-6.2.16-5-pve: 6.2.16-6
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.2-1
proxmox-backup-file-restore: 3.3.2-2
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.3
pve-cluster: 8.0.10
pve-container: 5.2.3
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-2
pve-ha-manager: 4.0.6
pve-i18n: 3.3.2
pve-qemu-kvm: 9.0.2-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.3
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1

qm config 104:
Code:
root@jphomelabs:~# qm config 104
agent: 1
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=proxmox,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,hv_tlbflush,hv_ipi,kvm=off'
bios: ovmf
boot: order=scsi0;ide0;net0
bootdisk: scsi0
cores: 4
cpu: host
description:  CPU configuration for compatibility and optimization%0A Enable GPU passthrough%0A Enable QEMU Agent%0A UEFI BIOS for Windows 11%0A Boot options%0A VM hardware%0A VM Name%0A Network configuration%0A NUMA configuration%0A OS type%0A VirtIO SCSI for disk performance%0A TPM (for Windows 11 compatibility)%0A Storage%0A VM UUID%0A Sockets and VM generation ID
efidisk0: local-lvm:vm-104-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci1: 0000:01:00,pcie=1
ide0: local:iso/virtio-win-0.1.266.iso,media=cdrom,size=707456K
ide2: none,media=cdrom
machine: pc-q35-9.0
memory: 8096
meta: creation-qemu=9.0.2,ctime=1735468633
name: LLMVM-Windows11
net0: virtio=BC:24:11:5A:FC:7D,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsihw: virtio-scsi-pci
smbios1: uuid=fdf63a51-30db-45db-bd44-2e7c667431db
sockets: 1
tpmstate0: local-lvm:vm-104-disk-1,size=4M,version=v2.0
virtio0: local-lvm:vm-104-disk-2,iothread=1,size=200G
vmgenid: 9b8b7827-b678-435f-a168-14a55acfeda6

journalctl
Code:
Aug 05 23:29:09 jphomelabs kernel: Linux version 6.8.4-2-pve (build@proxmox) (gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC PMX 6.8.4-2 (2024-04-10T17:36Z) ()
Aug 05 23:29:09 jphomelabs kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.8.4-2-pve root=/dev/mapper/pve-root ro quiet
Aug 05 23:29:09 jphomelabs kernel: KERNEL supported cpus:
Aug 05 23:29:09 jphomelabs kernel:   Intel GenuineIntel
Aug 05 23:29:09 jphomelabs kernel:   AMD AuthenticAMD
Aug 05 23:29:09 jphomelabs kernel:   Hygon HygonGenuine
Aug 05 23:29:09 jphomelabs kernel:   Centaur CentaurHauls
Aug 05 23:29:09 jphomelabs kernel:   zhaoxin   Shanghai
Aug 05 23:29:09 jphomelabs kernel: BIOS-provided physical RAM map:
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000907ff] usable
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x0000000000090800-0x000000000009ffff] reserved
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x0000000000100000-0x000000006482ffff] usable
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x0000000064830000-0x0000000064830fff] ACPI NVS
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x0000000064831000-0x0000000078606fff] reserved
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x0000000078607000-0x0000000078648fff] ACPI data
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x0000000078649000-0x0000000078fbefff] ACPI NVS
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x0000000078fbf000-0x00000000795fefff] reserved
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Aug 05 23:29:09 jphomelabs kernel: BIOS-e820: [mem 0x0000000100000000-0x000000047f7fffff] usable
Aug 05 23:29:09 jphomelabs kernel: NX (Execute Disable) protection: active
Aug 05 23:29:09 jphomelabs kernel: APIC: Static calls initialized
 
Hi,
Hey there, looking forward to your help on this one: I'm also getting this error when trying to do a GPU passthrough to a VM
Code:
qm start 104
error writing '1' to '/sys/bus/pci/devices/0000:01:00.0/reset': Inappropriate ioctl for device
failed to reset PCI device '0000:01:00.0', but trying to continue as not all devices need a reset
that message by itself is not necessarily an issue: https://git.proxmox.com/?p=qemu-server.git;a=commit;h=458b487bed3f4f03cf55ed2b06620a5c84089530

What is the exact issue you are facing? E.g. VM starting up but not detecting the GPU? Or something else?

Your journalctl output shows the beginning of boot, not the interesting part.
 
Hi,

I’m experiencing the same issue as Kimbus and receiving identical alerts. I’ve configured a Windows 11 VM with GPU passthrough enabled. The VM runs smoothly and functions normally with passthrough working as expected.

However, the problem arises when I stop the VM and attempt to start it again. The VM fails to start, and I encounter the following alerts:

Code:
error writing '1' to '/sys/bus/pci/devices/0000:c6:00.0/reset': Inappropriate ioctl for device 
failed to reset PCI device '0000:c6:00.0', but trying to continue as not all devices need a reset 
timeout waiting on systemd

On the VM, I’ve installed the AMD RadeonResetBugFix as recommended to address this issue (since it's a known AMD problem requiring a restart/reboot to release the GPU for passthrough). Despite applying the patch, the issue persists.

I’ve just installed Proxmox to set up PCI passthrough and encountered this issue. I’m new to this, so I’m not sure whether it’s a Proxmox-related problem or an AMD issue.

Apologies if I’m missing something obvious, any assistance would be greatly appreciated.

Thank you!
 
