I recently installed Proxmox 8 in my homelab. I had used Proxmox 7.x in the past and easily configured PCI passthrough - passing a disk through to a Proxmox backup VM, etc.
I have 4 HP Z series machines - the latest being a Z6 G4. I had used the other 3 (Z440, Z240, Z230) with Proxmox 7 and enabled IOMMU with zero fuss - maybe 6 months ago? With the latest v8, I decided to set the lab up again and I was pulling my hair out trying to get IOMMU enabled. I used the default install - which defaults to ext4 - and I did it on all 4 machines, verifying BIOS configs before starting. I was even using sed to update /etc/default/grub and /etc/modules so I could copy/paste after triple-checking I had the right syntax and options, etc. I eventually even tried going back to a 7.4 and a 7.1 Proxmox install - on the Z6 at least. Nothing worked until I finally found that even though the install was using ext4, it was booting with systemd-boot, so my GRUB edits were being ignored...
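For reference, the GRUB-side procedure I was scripting was roughly the following (a sketch from memory, assuming the stock GRUB_CMDLINE_LINUX_DEFAULT="quiet" line on a fresh install and Intel CPUs, which all of these Z machines are):

# add the IOMMU flags to the kernel command line (GRUB path)
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"/' /etc/default/grub
update-grub

# load the VFIO modules at boot
# (vfio_virqfd was in the older 7.x instructions; as far as I know it was merged into vfio in newer kernels)
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
EOF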
I have never selected ZFS for the install (since I don't know it well), but at this point I tried it - using RAID1 on 2 NVMe drives. I followed the IOMMU enable instructions for systemd-boot and it worked the first time - after probably 20 different default installs across different disks and machines. The only thing I can think of is that the HP BIOS updates I applied made a difference? But when I installed Proxmox 7.1 and did not do any updates at all, I figured I would be using the same (or older) kernel that I used in the past, so it shouldn't be the kernel.
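The systemd-boot version of the change, in case it helps someone else searching - the kernel args live in a single line in /etc/kernel/cmdline instead of /etc/default/grub (the root= part below is just an example of what the ZFS install puts there):

# /etc/kernel/cmdline is one line, something like:
#   root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt
nano /etc/kernel/cmdline

# write the updated command line out to the ESP(s), then reboot
proxmox-boot-tool refresh

# after reboot, confirm the IOMMU is actually active
dmesg | grep -e DMAR -e IOMMU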
So I can probably now force the other 3 machines to use systemd-boot (ZFS) on install, but I'm curious whether anyone can explain why this is happening now - why the same procedure I used in the past, updating /etc/default/grub and /etc/modules, no longer has any effect - since I was using GRUB by default before?
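For anyone hitting the same wall, the quick check I wish I had done earlier, rather than assuming GRUB:

proxmox-boot-tool status   # shows whether the ESP(s) are set up for grub or uefi (systemd-boot)
efibootmgr -v              # on UEFI systems, shows which boot entry the firmware actually used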