Passthrough individual NVMe drives to different VMs?

Nexalapp

Mar 21, 2024
I am experimenting with Proxmox for the first time by trying to virtualize my Unraid environment alongside a new Windows environment. I am using PCI passthrough on my HBA so that Unraid can see my disks no differently than when running on bare metal. I also have two NVMe SSDs that Unraid uses for a BTRFS RAID1 cache pool and a third NVMe SSD that I hoped to use with my new Windows VM. However, when I go to add the NVMe drives to their respective VMs, I am only seeing the one NVMe controller and not the individual drives:

Code:
root@nas:~# lspci
00:00.0 Host bridge: Intel Corporation Device a740 (rev 01)
00:01.0 PCI bridge: Intel Corporation Device a70d (rev 01)
00:02.0 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)
00:06.0 PCI bridge: Intel Corporation Raptor Lake PCIe 4.0 Graphics Port (rev 01)
00:14.0 USB controller: Intel Corporation Device 7a60 (rev 11)
00:14.2 RAM memory: Intel Corporation Device 7a27 (rev 11)
00:14.3 Network controller: Intel Corporation Device 7a70 (rev 11)
00:15.0 Serial bus controller: Intel Corporation Device 7a4c (rev 11)
00:15.1 Serial bus controller: Intel Corporation Device 7a4d (rev 11)
00:15.2 Serial bus controller: Intel Corporation Device 7a4e (rev 11)
00:15.3 Serial bus controller: Intel Corporation Device 7a4f (rev 11)
00:16.0 Communication controller: Intel Corporation Device 7a68 (rev 11)
00:17.0 SATA controller: Intel Corporation Device 7a62 (rev 11)
00:19.0 Serial bus controller: Intel Corporation Device 7a7c (rev 11)
00:19.1 Serial bus controller: Intel Corporation Device 7a7d (rev 11)
00:1b.0 PCI bridge: Intel Corporation Device 7a40 (rev 11)
00:1b.4 PCI bridge: Intel Corporation Device 7a44 (rev 11)
00:1c.0 PCI bridge: Intel Corporation Device 7a38 (rev 11)
00:1c.2 PCI bridge: Intel Corporation Device 7a3a (rev 11)
00:1c.4 PCI bridge: Intel Corporation Device 7a3c (rev 11)
00:1d.0 PCI bridge: Intel Corporation Device 7a30 (rev 11)
00:1d.4 PCI bridge: Intel Corporation Device 7a34 (rev 11)
00:1f.0 ISA bridge: Intel Corporation Device 7a04 (rev 11)
00:1f.3 Audio device: Intel Corporation Device 7a50 (rev 11)
00:1f.4 SMBus: Intel Corporation Device 7a23 (rev 11)
00:1f.5 Serial bus controller: Intel Corporation Device 7a24 (rev 11)
01:00.0 VGA compatible controller: NVIDIA Corporation Device 2702 (rev a1)
01:00.1 Audio device: NVIDIA Corporation Device 22bb (rev a1)
02:00.0 Non-Volatile memory controller: Realtek Semiconductor Co., Ltd. RTS5763DL NVMe SSD Controller (rev 01)
04:00.0 Non-Volatile memory controller: Realtek Semiconductor Co., Ltd. RTS5763DL NVMe SSD Controller (rev 01)
06:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
07:00.0 PCI bridge: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA (rev ca)
08:00.0 PCI bridge: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA (rev ca)
08:08.0 PCI bridge: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA (rev ca)
08:09.0 PCI bridge: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA (rev ca)
09:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
0b:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
0c:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
0c:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
0d:00.0 Non-Volatile memory controller: Realtek Semiconductor Co., Ltd. RTS5763DL NVMe SSD Controller (rev 01)
root@nas:~#

Is this expected? Or is there a way of doing passthrough at the drive level (as opposed to the controller level) for NVMe drives? Thanks!
 
Hi,

we have a dedicated guide for this: Passthrough Physical Disk to Virtual Machine (VM).
In the example, -scsiN is used to attach the disk as a SCSI device, but -virtioN should work as well.
Keep in mind, though, that with this approach the VM still works with a virtual disk and does not directly access the real physical NVMe SSD the way PCI passthrough of the SSD would.
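For reference, the guide's approach boils down to mapping the whole block device into the VM by its stable by-id path. A minimal sketch (the VM ID 100 and the device name after nvme- are placeholders, not from this system):

```shell
# Find a stable identifier for the NVMe drive; /dev/disk/by-id/ paths
# survive reboots, unlike /dev/nvme0n1 style names
ls -l /dev/disk/by-id/ | grep nvme

# Attach it to VM 100 as a virtual SCSI disk (ID and serial are examples)
qm set 100 -scsi1 /dev/disk/by-id/nvme-Example_SSD_SERIAL123

# Or attach it as a VirtIO block device instead
qm set 100 -virtio1 /dev/disk/by-id/nvme-Example_SSD_SERIAL123
```

Either way, QEMU sits between the guest and the physical drive, as described above.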

I am only seeing the one NVMe controller and not the individual drives
I see three different NVMe controllers.
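They are easy to miss in a long listing. The PCI class name for NVMe devices is "Non-Volatile memory controller", so a case-insensitive grep narrows the output down:

```shell
# Show only NVMe controllers in the PCI listing
lspci | grep -i 'non-volatile'
# In the output pasted above, this matches the three RTS5763DL
# controllers at 02:00.0, 04:00.0 and 0d:00.0
```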
 
Thanks to you both.

@Dunuin I had read through the list more times than I'd like to admit and didn't see the other NVMe controllers, but of course thanks to your second pair of eyes, I do see them now. I have no idea how I missed them.

However, before seeing this message I had already experimented with the instructions @cheiss shared and was able to get the system to boot.

Does the system access the virtual SCSI drives differently than if they are via passthrough? Any information you can share to help me understand the ramifications of PCI passthrough vs the SCSI approach would be much appreciated. Thanks!
 
With the disk passthrough cheiss mentioned, your VM is working with a virtual disk that is mapped to your physical disk. So your VM only accesses the SSD indirectly, with virtualization in between (that's why you see things like 512B/512B logical/physical sectors even if your physical disk uses 512B/4K, why health monitoring inside the VM won't work, and so on). In short: the VM writes to the QEMU virtual disk, and QEMU writes to the physical disk.
With PCI passthrough you give the VM exclusive and direct access to the real hardware. You skip the virtualization and the VM will directly read/write to that physical disk.
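One way to see this indirection for yourself is to compare what the host and the guest report for the same drive. A sketch using standard util-linux and smartmontools commands (device names are examples for a typical setup, not taken from this system):

```shell
# On the Proxmox host: the physical NVMe drive may report e.g. 512B
# logical / 4K physical sectors
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/nvme0n1

# Inside a VM using a mapped virtual disk: QEMU typically presents
# 512B/512B regardless of the underlying hardware
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sda

# SMART health data is only available where the real controller is
# visible: on the host, or inside the VM after PCI passthrough, but
# not through a mapped virtual disk
smartctl -a /dev/nvme0n1
```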
 
Thank you for the additional information. Would I be able to simply swap out the SCSI virtual drive with the passthrough NVMe controller? Or should I expect complications due to it once being a QEMU virtual disk? I finally got this VM working to my satisfaction, so I'm hesitant to make too many additional changes without having some level of confidence this would not cause additional issues. Another huge thanks for the continued support.
 
Or should I expect complications due to it once being a QEMU virtual disk?
I think that should work, but I'm not sure how well it will cope with the different sector sizes and so on, since the filesystems on it might now be optimized for 512B/512B.
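If you do attempt the swap, the usual sequence is to detach the virtual-disk mapping and pass the controller through instead. A hedged sketch (the VM ID 100 is a placeholder; the PCI address 0d:00.0 is one of the NVMe controllers from the lspci output above — substitute the correct one for your drive):

```shell
# Detach (but don't wipe) the mapped virtual disk from VM 100
qm set 100 --delete scsi1

# Pass the whole NVMe controller through to the VM;
# pcie=1 requires the q35 machine type
qm set 100 --hostpci0 0000:0d:00.0,pcie=1
```

Since the drive was previously written through QEMU as a whole-disk mapping, the on-disk partitions and data should still be there, but as noted above the guest may now see different sector sizes than it formatted with.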
 
I did try to use PCI passthrough with the drives I had set up via QEMU virtual disks, but I got a Windows launch error. It seemed to work well enough for Windows to attempt loading, but not well enough for it to finish. Documenting this for anyone else who finds this thread in the future.
 