Hello there!
I have a fresh install of PVE 8.0 on a lab server and I'm trying to run some performance tests for different storage backends.
Server configuration (if it matters): Xeon W-2145 on a Supermicro X11SRA-RF with a pair of Micron 7300 Pro NVMe drives, 1.92 TB each.
nvme0n1 is used as an LVM PV with PVE installed + /boot
nvme1n1 is the drive I want to pass through to a VM and later use for some tests as qcow2/LVM
I've carefully read the https://pve.proxmox.com/wiki/PCI(e)_Passthrough instructions.
Following them step by step, I have all the required modules loaded and present:
Bash:
# lsmod | grep vfio
vfio_pci 16384 0
vfio_pci_core 94208 1 vfio_pci
irqbypass 16384 2 vfio_pci_core,kvm
vfio_iommu_type1 49152 0
vfio 57344 3 vfio_pci_core,vfio_iommu_type1,vfio_pci
iommufd 73728 1 vfio
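I assume IOMMU grouping matters here as well; for reference, this is the check I intend to use (a common snippet, not taken from the wiki verbatim, output not pasted here):
Bash:
# print every IOMMU group together with the device(s) it contains
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"
done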
But I've run into an issue that I have no idea how to fix yet:
Bash:
05:00.0 Non-Volatile memory controller [0108]: Micron Technology Inc 7300 PRO NVMe SSD [1344:51a2] (rev 01)
Subsystem: Micron Technology Inc 1920GB U.2 [1344:3000]
Kernel driver in use: nvme
Kernel modules: nvme
06:00.0 Non-Volatile memory controller [0108]: Micron Technology Inc 7300 PRO NVMe SSD [1344:51a2] (rev 01)
Subsystem: Micron Technology Inc 1920GB U.2 [1344:3000]
Kernel driver in use: nvme
Kernel modules: nvme
Since both drives have the same vendor/device ID and the same kernel driver and module in use, I'm sure it's a bad idea to blacklist the "nvme" driver or to assign 1344:51a2 or 1344:3000 to vfio-pci.
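(For clarity, this is the modprobe.d style of binding from the wiki that I'm trying to avoid, since the ids= match would grab both controllers at once:)
Code:
# /etc/modprobe.d/vfio.conf -- NOT what I want: this ID matches both drives
options vfio-pci ids=1344:51a2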
I've found a way to unbind a single device from the nvme driver:
Code:
echo 0000:06:00.0 > /sys/bus/pci/drivers/nvme/unbind
but binding it to vfio-pci by address (and keeping that binding across reboots) is where I'm stuck; a direct write to the bind file fails:
Code:
# echo 0000:06:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
-bash: echo: write error: No such device
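From searching around, driver_override looks like it might be the per-address mechanism I'm after, but I'm not sure it's the right or supported way on PVE, so treat this as an untested sketch:
Code:
# untested: ask the PCI core to prefer vfio-pci for this one address,
# then re-probe so vfio-pci picks the device up instead of nvme
echo vfio-pci > /sys/bus/pci/devices/0000:06:00.0/driver_override
echo 0000:06:00.0 > /sys/bus/pci/drivers/nvme/unbind
echo 0000:06:00.0 > /sys/bus/pci/drivers_probe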
Since the planned production cluster will have 4x identical NVMes per server, distributed across a pair of NUMA nodes, I want some of them dedicated to heavily loaded VMs (in case the performance difference turns out to be big) and the others used as shared storage.
Is there any way to have two identical NVMes with one exposed to the base system and the other working in passthrough mode?
P.S. With xcp-ng (which I have experience with) this would be a simple thing (just add xen-pciback.hide=(0000:06:00.0) to the GRUB options), and it works on the physical PCI address, not the vendor ID.
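If driver_override does turn out to be the mechanism, the closest boot-time equivalent I can think of on the PVE side is a small oneshot unit like the sketch below; the unit name and the ordering against pve-guests.service are my own guesses and completely untested:
Code:
# /etc/systemd/system/vfio-bind-0000-06-00-0.service -- hypothetical name, untested
[Unit]
Description=Bind the NVMe at 0000:06:00.0 to vfio-pci by PCI address
Before=pve-guests.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo vfio-pci > /sys/bus/pci/devices/0000:06:00.0/driver_override; echo 0000:06:00.0 > /sys/bus/pci/drivers/nvme/unbind; echo 0000:06:00.0 > /sys/bus/pci/drivers_probe'

[Install]
WantedBy=multi-user.target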