Sudden bulk stop of all VMs?

Excellent hint, Fiona. I just found out the same by chance, since one VM was using SCSI without iothread and didn't crash. Btw, since I was trying to upgrade the QEMU guest agent: is there any automatic procedure? Doing it manually, the VMs lost IPs everywhere and the NICs needed to be reconfigured by hand. Thank you, Franz
 
The guest agent is not provided by Proxmox VE, but by whatever OS you run inside the guest. So you need to use the guest's upgrade mechanism and there is no automated way (you'd need to script it for each guest OS differently). How are you configuring IPs via the guest agent?
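If you do want to semi-automate it for Linux guests, the agent itself can be used to trigger the in-guest package manager from the host. A minimal sketch, assuming Debian-based guests with a running agent (note that qm guest exec has a fairly short default timeout, so long apt runs may need the timeout option raised):

Code:
# upgrade the guest agent inside all running Debian-based VMs (rough sketch)
for vmid in $(qm list | awk '$3 == "running" {print $1}'); do
    qm guest exec "$vmid" -- sh -c 'apt-get update -qq && apt-get install --only-upgrade -y qemu-guest-agent'
done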
 
Code:
root@pve:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.4-3-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.4-3
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.1
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.2-1
proxmox-backup-file-restore: 3.2.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.6
pve-container: 5.1.10
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.7
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2
root@pve:~# uname -a
Linux pve 6.8.4-3-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.4-3 (2024-05-02T11:55Z) x86_64 GNU/Linux
I've got the same bug, not sure what to do about it. How easy is it to downgrade back to Proxmox 8? At least that didn't seem to cause me issues.

Can I just download the ISO and have it overwrite on top?
 
Hi,
I've got the same bug, not sure what to do about it. How easy is it to downgrade back to Proxmox 8? At least that didn't seem to cause me issues.
There are multiple issues reported in this thread. Please clarify what you mean by "the same bug" by describing the symptoms. Please also check your system logs/journal around the time the issue happens. If Proxmox VE 8.0 did not have the issue, you might want to try booting an older kernel to see if it's a regression there.
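If booting an older kernel helps, it can be pinned so it stays the default across reboots. A sketch using proxmox-boot-tool (pick a version that is actually installed, see pveversion -v):

Code:
# show installed kernels, pin one, and reboot into it
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.5.13-5-pve
reboot
# undo later with: proxmox-boot-tool kernel unpin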
 
I've got the same bug, not sure what to do about it. How easy is it to downgrade back to Proxmox 8? At least that didn't seem to cause me issues.

Can I just download the ISO and have it overwrite on top?
No, you can just run apt install pve-qemu-kvm=8.1.5-6 (or whatever version is stable for you). Don't forget to deactivate the test apt repo.
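Spelled out, and with the package held so a routine upgrade doesn't immediately pull the newer build back in (a sketch, assuming 8.1.5-6 is still available in your configured repositories):

Code:
apt install pve-qemu-kvm=8.1.5-6
apt-mark hold pve-qemu-kvm
# later, to allow upgrades again:
apt-mark unhold pve-qemu-kvm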
 
Hi,

There are multiple issues reported in this thread. Please clarify what you mean by "the same bug" by describing the symptoms. Please also check your system logs/journal around the time the issue happens. If Proxmox VE 8.0 did not have the issue, you might want to try booting an older kernel to see if it's a regression there.
Sorry if it wasn't clear; it seemed like there was only a single bug in this thread.

I have an AMD 7950X, like others in this thread. They are noticing random reboots with no useful information in the Proxmox logs, and that this did not happen previously; for example, I had 180 days of uptime at my best.

I have since downgraded the kernel and it has been stable for 10 hours, but that is meaningless because it's such a short period. I will keep updating, but the diagnosis from other similar threads on seemingly this exact issue is that it stems from the 6.8.x kernel.

Because of the lack of useful logs, people are just throwing things at the wall in the hope that something sticks (including me), but the shared commonality between us all seems to be that kernel and running the most up-to-date Proxmox.
 
@intelliIT Out of curiosity, on the hosts with this problem are you doing any kind of PCIe sharing?

Also, and separate to that, are the systems configured to use an IOMMU?

Also, for completeness, what's the kernel command line they're booting with?

For example on a 5950X system here:

Code:
# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction

That info might help figure out what's going wrong. :)
@justinclift
No PCIe sharing.
If by configured you mean enabled/ready for use, I don't think so, but I would have to check in the BIOS (dmesg | grep -e DMAR -e IOMMU prints nothing).
Kernel command line: initrd=\EFI\proxmox\6.5.11-7-pve\initrd.img-6.5.11-7-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs
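Small note on that grep: DMAR is the Intel VT-d marker; AMD hosts log AMD-Vi instead, so an empty result doesn't necessarily mean the IOMMU is off. A couple of checks that cover AMD too (sketch):

Code:
# AMD systems log AMD-Vi rather than DMAR
dmesg | grep -i -e iommu -e amd-vi
# if the IOMMU is active, the groups directory is populated
ls /sys/kernel/iommu_groups/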
 
I have since downgraded the kernel and it has been stable for 10 hours, but that is meaningless because it's such a short period. I will keep updating, but the diagnosis from other similar threads on seemingly this exact issue is that it stems from the 6.8.x kernel.
The original report by @ProxyUser was for kernel 6.5 though.
Because of the lack of useful logs, people are just throwing things at the wall in the hope that something sticks (including me), but the shared commonality between us all seems to be that kernel and running the most up-to-date Proxmox.
And also having the same CPU model. I'd suspect it's a quirk/bug specific to that CPU (maybe even in combination with something else), or I'd expect many more reports. We have some workstations with a Ryzen 7900X, but none of my coworkers ever complained about such issues. Unfortunately, we don't have a 7950X/reproducer, so it's not like we can hunt the issue down from our side.
 
The original report by @ProxyUser was for kernel 6.5 though.

And also having the same CPU model. I'd suspect it's a quirk/bug specific to that CPU (maybe even in combination with something else), or I'd expect many more reports. We have some workstations with a Ryzen 7900X, but none of my coworkers ever complained about such issues. Unfortunately, we don't have a 7950X/reproducer, so it's not like we can hunt the issue down from our side.
Ah, good spot; I'm probably posting in the wrong thread then.
root@pve:~# uname -a
Linux pve 6.5.13-5-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.13-5 (2024-04-05T11:03Z) x86_64 GNU/Linux

I've downgraded to this and I'll see how I get on; looking good so far though.
root@pve:~# uptime
11:13:37 up 1 day, 7:44, 1 user, load average: 4.59, 4.35, 4.33
 
Hi,

did you already try the 6.8 opt-in kernel? Do you have the latest BIOS updates and CPU microcode installed? See: https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_firmware_cpu ?

Is there anything in the system logs/journal (if not, you could still try to run journalctl -f from another system via SSH, as the logs might not make it to disk)?
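That could look like the following, run from a second machine so the last messages survive even if the host dies before they are written to disk (sketch; substitute your host's address):

Code:
ssh root@pve 'journalctl -f' | tee pve-journal.log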
@fiona
Got my hands on a test setup now: a 7950X and identical hardware components.
I ran an 8.1 installer and did updates/upgrades right away; now on: initrd=\EFI\proxmox\6.8.8-1-pve\initrd.img-6.8.8-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs. I assume this is the newest stable kernel and was opt-in at the time of this post.
I will set up a replica of my production environment and monitor the hosts.
Can you guide me through some dumps/logs you would like to see if the new kernel doesn't fix the issue?
I will try to somehow capture the physical screen; maybe something pops up there.
 
@fiona
Got my hands on a test setup now: a 7950X and identical hardware components.
I ran an 8.1 installer and did updates/upgrades right away; now on: initrd=\EFI\proxmox\6.8.8-1-pve\initrd.img-6.8.8-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs. I assume this is the newest stable kernel and was opt-in at the time of this post.
I will set up a replica of my production environment and monitor the hosts.
Can you guide me through some dumps/logs you would like to see if the new kernel doesn't fix the issue?
I will try to somehow capture the physical screen; maybe something pops up there.
Any kind of log would be better than nothing. To start out, you could try booting the kernel without the quiet command-line option.

What you can also do is set up kdump and try to obtain a kernel crash dump, see:
https://forum.proxmox.com/threads/kernel-crash-issue.100593/
and since your root is ZFS: https://forum.proxmox.com/threads/kernel-crash-issue.100593/post-435055
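For orientation, the rough shape of the kdump setup on a Debian-based host (a sketch; the linked threads have the details, including the ZFS-root specifics):

Code:
apt install kdump-tools
# set USE_KDUMP=1 in /etc/default/kdump-tools and reserve crash memory,
# e.g. add crashkernel=512M to the kernel command line, then update-grub
# (or proxmox-boot-tool refresh) and reboot
kdump-config show   # should report the crash kernel as loaded afterwards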
 
It might be helpful if people start posting their outputs from
uname -a
pveversion -v
and whatever additional GRUB command-line options they have (a one-liner that collects all of it is shown below).
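Something like this collects all of it in one paste-able block (sketch):

Code:
{ uname -a; pveversion -v; cat /proc/cmdline; grep ^GRUB_CMDLINE /etc/default/grub; } > host-report.txt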

Since I reverted back I managed to get 30+ days of uptime, but then randomly had a reboot.

Since then I've tried adding the ACS override options to GRUB. I don't know if that will change anything, but I kind of want to upgrade again and see if it does.

Mine are listed here:
Code:
GRUB_DEFAULT=0
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="ro quiet pcie_acs_override=downstream,multifunction amd_iommu=on iommu=pt initcall_blacklist=sysfb_init"
GRUB_CMDLINE_LINUX=""

This was the setup which gave me 30+ days of uptime. The only thing to note, I guess, is that I also use BTRFS as the backing base storage instead of ZFS, but I don't think that matters.

root@pve:~# uname -a
Linux pve 6.5.11-4-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.11-4 (2023-11-20T10:19Z) x86_64 GNU/Linux
root@pve:~# pveversion -v

proxmox-ve: 8.1.0 (running kernel: 6.5.11-4-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.0.9
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
proxmox-kernel-6.5: 6.5.11-4
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.4
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.0.4-1
proxmox-backup-file-restore: 3.0.4-1
proxmox-kernel-helper: 8.0.9
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-1
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.2
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve3
 
I believe I'm affected as well by random reboots, but on a slightly different CPU, an AMD Ryzen 9 5950X.

I have 2 nodes in my cluster; both are the exact same type of host from Hetzner, same kernel, same versions, but with the microcode hint from this thread I see a difference:

Working node, with 3 of my most important VMs totalling 20 vCPUs, all in host CPU mode:
VM1: 2sockets, 4 cores
VM2: 2sockets, 4 cores
VM3: 2sockets, 2 cores

root@n2:~# grep microcode /proc/cpuinfo | uniq
microcode : 0xa20102b

Problematic node, with 6 less important VMs totalling 32 vCPUs, all in host CPU mode:
root@n1:~# grep microcode /proc/cpuinfo | uniq
microcode : 0xa20120e
VM1: 8 (2sockets, 4 cores)
VM2: 4 (2sockets, 2 cores)
VM3: 4 (2sockets, 2 cores)
VM4: 4 (2sockets, 2 cores)
VM5: 8 (2sockets, 4 cores)
VM6: 4 (2sockets, 2 cores)

(I have now switched them all to qemu64 for testing, but I will lose about 20% performance, as measured with 7z b.)
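For anyone wanting to test the same, the CPU type can be switched from the CLI; a sketch with a placeholder VMID (takes effect on the next VM start):

Code:
qm set 100 --cpu qemu64
# and back once testing is done:
qm set 100 --cpu host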

root@n1:~# uname -a
Linux n1 6.8.8-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.8-2 (2024-06-24T09:00Z) x86_64 GNU/Linux
root@n1:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.8-2-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.8-2
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.2
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.3-1
proxmox-backup-file-restore: 3.2.3-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.6
pve-container: 5.1.10
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.12-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1
 
We have the same issue with Ryzen 7950X3D and 7900 nodes in clusters that randomly reboot without any hint in the logs.
We will pin them on kernel 6.5.13-3-pve to see if it helps.
Currently we are on Proxmox 8.2.2 with kernel 6.8.4-3-pve, with an average of one week between crashes of any given node.
No nested virtualisation; a mix of kvm64 Windows and Linux guests.

@intelliIT, have you had any improvement since your last post in this thread?
 
Just an update to this: it looks like I've resolved all the problems. I had already made sure that my BIOS was up to date, but I stumbled upon the PVE helper script for updating the processor microcode (https://tteck.github.io/Proxmox/#proxmox-ve-processor-microcode) and ran that.

Afterwards I was rock solid, so my guess is that at least my own problem stemmed from AMD and had nothing to do with Proxmox at all.
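For anyone who would rather not run a third-party script, the manual route from the official docs linked earlier in the thread is roughly this (a sketch for Debian Bookworm-based PVE 8):

Code:
# make sure the non-free-firmware component is enabled in /etc/apt/sources.list, then:
apt update && apt install amd64-microcode
reboot
# confirm which revision was loaded at boot
journalctl -k | grep -i microcode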

In addition to that, I also added the ACS override, so my guess is strongly within one of those two areas:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="ro quiet pcie_acs_override=downstream,multifunction amd_iommu=on iommu=pt initcall_blacklist=sysfb_init amd_iommu=force_enable"

Good luck to any others who have this problem and are looking to resolve it; try the stuff above if you want, and please report back if it fixed it for you too.

The following wouldn't have been possible previously; you can read my crying posts about it not working a few pages back.

Code:
root@pve:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.8-3-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.8-3
proxmox-kernel-6.8.8-3-pve-signed: 6.8.8-3
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
amd64-microcode: 3.20240116.2+nmu1
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.13-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.0-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.2
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1
root@pve:~# uname -a
Linux pve 6.8.8-3-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.8-3 (2024-07-16T16:16Z) x86_64 GNU/Linux
 
I've run the microcode updater, but it looks like I'm already on the latest one; it didn't change. I tried both deb versions in the script.

root@n1:~# journalctl -k | grep -E "microcode" | head -n 1
Aug 09 06:40:58 n1 kernel: microcode: Current revision: 0x0a20120e

I probably rather need a downgrade, as my other node with exactly the same hardware is running fine with 0x0a20102b (uptime 29 days).

I changed GRUB_CMDLINE_LINUX_DEFAULT from
GRUB_CMDLINE_LINUX_DEFAULT="quiet nmi_watchdog=1"

to

GRUB_CMDLINE_LINUX_DEFAULT="ro quiet pcie_acs_override=downstream,multifunction amd_iommu=on iommu=pt initcall_blacklist=sysfb_init amd_iommu=force_enable"
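One thing worth double-checking after editing /etc/default/grub: the change only takes effect once the boot config is regenerated and the host rebooted (on systems managed by proxmox-boot-tool, a refresh instead of update-grub):

Code:
update-grub                  # or: proxmox-boot-tool refresh
reboot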

Let's see how this goes.
 
Many thanks for your feedback. I will check out the microcode update, adapt the command line, and post feedback here too.
 
