As per the title, I've searched at length in the docs, on the forum and around the net, and could not find a proper complete guide on how to correctly expose a local host disk (say, an NVMe drive) to a VM. So I thought I'd start this post, hoping to define a guide for future use cases.
My use case: my VM needs to "natively" see the underlying disk, as the VM itself needs to manage the LVM on that disk. This is just for a second disk - it doesn't need to be the VM's OS boot disk. But I guess the answer might be generic and applicable to many.
Questions:
1. How do I achieve this while preserving full PCIe speed and hardware performance? My drive is a considerably fast NVMe (~50k IOPS) and will serve as the storage medium for a DB. (My tentative qm config is sketched below, after this list.)
2. Should I use i440fx or q35, given that it is a PCIe device?
3. Should I enable VFIO and hardware passthrough with iommu=pt? And what is the exact GRUB kernel parameter for AMD-Vi (amd_iommu)? (My tentative GRUB and VFIO configs are sketched below.)
4. Should I blacklist the PCI device at the host level, so that it does not appear as a PVE disk and cannot be used by the host? (See the vfio-pci binding sketch below.)
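For reference, here is what I currently have in mind for the kernel command line. I'm assuming a GRUB-booted host; on ZFS installs PVE may boot via systemd-boot, in which case /etc/kernel/cmdline is edited instead. As far as I understand, the AMD IOMMU is enabled by default on recent kernels, so amd_iommu=on may be redundant:

    # /etc/default/grub - assuming an AMD host (AMD-Vi)
    # amd_iommu=on may be redundant on newer kernels; iommu=pt enables passthrough mode
    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

Then run update-grub, reboot, and check that the IOMMU came up, e.g. with dmesg | grep -i amd-vi.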
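And this is my understanding of the VFIO side: load the modules at boot and bind the NVMe controller to vfio-pci so the host never claims it. The vendor:device ID below (144d:a808, a Samsung NVMe controller) is just an example; the real one comes from lspci -nn. On kernels 6.2 and later, vfio_virqfd is built in and that line is apparently no longer needed:

    # /etc/modules - load VFIO modules at boot
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd

    # /etc/modprobe.d/vfio.conf - bind the NVMe controller to vfio-pci
    # (144d:a808 is an example ID; find yours with: lspci -nn | grep -i nvme)
    options vfio-pci ids=144d:a808
    # Ensure vfio-pci claims the device before the host's nvme driver does
    softdep nvme pre: vfio-pci

Followed by update-initramfs -u -k all and a reboot. If I understand correctly, this replaces classic driver blacklisting (question 4), since blacklisting the nvme module outright would also kill any other NVMe disks on the host.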
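Finally, attaching the controller to the VM. The PCI address 0000:01:00.0 and VMID 100 are placeholders for my setup; as far as I can tell, pcie=1 requires the q35 machine type, which would partly answer my question 2:

    # Find the NVMe controller's PCI address
    lspci -nn | grep -i nvme

    # Attach it to VM 100 as a PCIe device (q35 machine type needed for pcie=1)
    qm set 100 -hostpci0 0000:01:00.0,pcie=1

Does this look like the right overall approach for full-performance passthrough?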
Thank you so much in advance for your support.