Hello!
--------------------------
TL;DR:
Proxmox server with PCI passthrough to Windows and Linux VMs was working.
BIOS and/or microcode updates broke PCI passthrough to Linux VMs only.
Booting a Linux VM w/ PCI initially crashed the whole Proxmox server.
With some changes, Linux VMs w/ PCI now 'boot' without crashing, but cannot be pinged, show high CPU/RAM usage, and do not display on the monitor.
---------------------------
I have a bit of a head-scratcher going on, though maybe I'm missing something simple. I ran my Proxmox server for over a year with no problems, passing my GPU through to either a Linux or Windows VM. Unfortunately, the server would occasionally crash. I recently got around to updating the BIOS and the microcode, which seems to have totally fixed the crashes. I've been running my Windows VM with GPU passthrough just fine for a month or two now, but for the life of me I cannot get any of my Linux VMs to boot correctly when PCI hardware is attached. The machine has an ASUS X570-PLUS mobo, a Ryzen 7 CPU, and a GTX 1070 GPU.
Following the BIOS/microcode updates, I went through the BIOS settings and made the changes to enable virtualization and PCI passthrough, following the Proxmox PCI passthrough wiki. My Windows VM w/ PCI started working immediately, but my Ubuntu VM w/ PCI crashed the whole system immediately on startup. I made some changes, mostly following this guide. Now my Ubuntu and Mint VMs 'boot' without crashing the Proxmox server, but are unreachable: they both show high CPU/RAM usage in the Proxmox summary, do not display to my monitor, and cannot be pinged. Interestingly, my monitor wakes up when the VM starts, but never actually receives a signal.
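For reference, the host-side pieces from the wiki/guide that I believe are in play are roughly these (the device IDs below are placeholders for illustration, not my actual card; the real ones come from lspci -nn):

    # /etc/default/grub -- kernel cmdline, then run update-grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

    # /etc/modules -- load the vfio modules, then run update-initramfs -u -k all
    # (vfio_virqfd is only needed on older kernels)
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd

    # /etc/modprobe.d/vfio.conf -- bind the GPU and its HDMI audio to vfio-pci
    # (the 10de:xxxx IDs are placeholders)
    options vfio-pci ids=10de:xxxx,10de:xxxx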
I have been testing various fixes, in the BIOS settings, the GRUB variables, and the VM hardware options, but at this point I am stumped. I would appreciate any help or suggestions, and I can provide error logs, current settings, whatever is useful.
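If it helps anyone narrow things down, these are the commands I can run on the host to pull settings and logs (07:00 and 100 below are stand-ins for my actual GPU address and VM ID):

    # check that the IOMMU came up and list the groups
    dmesg | grep -i -e iommu -e amd-vi
    find /sys/kernel/iommu_groups/ -type l

    # confirm which driver the GPU is bound to (should be vfio-pci)
    lspci -nnk -s 07:00

    # dump the VM's hardware config, then watch the host log during VM startup
    qm config 100
    journalctl -f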
Thanks!