Going to add as much context as possible; one relevant detail is that I'm a scrub at Proxmox.
Got an old Dell T3610 with an Intel® Xeon® E5-1607; it has an integrated Intel 82579 NIC. I also had a PCI NIC lying around, a TEG-PCITXR with a Realtek RTL8169 on it (don't know if the hardware info is relevant).
When I first installed Proxmox VE 7.3.1 (kernel: 5.15.74-1-pve), I created a VM and tried adding a PCI device via the Hardware menu. When doing so, I got a highlighted alert saying "No IOMMU detected, please activate". I checked the BIOS and found that "Intel Virtualization Technology" and "VT for Direct I/O" were both enabled. When running cat /proc/cmdline, I was getting BOOT_IMAGE=/vmlinuz-5.15.74-1-pve root=/dev/mapper/pve-root ro quiet; it was missing intel_iommu=on.
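A quick sketch of the check I was doing (the has_iommu helper function is just mine for illustration, not a Proxmox tool):

```shell
# Check whether intel_iommu=on made it onto the kernel command line.
# has_iommu is an illustrative helper, not part of Proxmox.
has_iommu() {
  case "$1" in
    *intel_iommu=on*) return 0 ;;
    *) return 1 ;;
  esac
}

if has_iommu "$(cat /proc/cmdline)"; then
  echo "intel_iommu is enabled"
else
  echo "intel_iommu is missing"
fi
```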
When running proxmox-boot-tool refresh, I got:
Code:
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private
mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
I followed this thread's recommendation to switch from legacy boot to proxmox-boot-tool. I found the 512M ESP partition via lsblk, ran proxmox-boot-tool format /dev/sdb2 --force, then proxmox-boot-tool init /dev/sdb2. At that point I ran proxmox-boot-tool refresh and rebooted the system. I ran cat /proc/cmdline again and still didn't see intel_iommu=on, so I added it to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub.
I reran proxmox-boot-tool refresh, rebooted, and now intel_iommu=on shows up in cat /proc/cmdline, and I can add the PCI device via the web GUI at https://192.168.1.1:8006.
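For reference, the relevant line in /etc/default/grub now looks like this (the other options in the quotes are whatever your install already had; mine only had quiet):

```shell
# /etc/default/grub (excerpt) — extra kernel options get appended here
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
```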
however...
When I add the PCI device via Hardware and attempt to start the VM, the web GUI crashes. My question is: what specific logs should I be reading to find more info on why it's crashing? I can still run commands on the physical machine and can even still SSH into it. It's just the web GUI; it loads endlessly while that VM is in a running state. I then have to stop the VM from the command line before the web GUI will work again.
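For context, this is roughly what I have to do over SSH to get the GUI back (100 is just an example VMID, not my actual one):

```shell
# qm is Proxmox's VM management CLI; 100 is a placeholder VMID.
qm stop 100      # force the stuck VM off
qm status 100    # should report the VM as stopped again
```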