[SOLVED] Proxmox PCIE Passthrough - My experience

DustinB

New Member
Nov 22, 2016
First off Specs -

CPU - Intel 5820K (Haswell-E)
RAM - 64GB
Motherboard - Asus X99 Deluxe (1st edition)
Video Card 1 - Nvidia Quadro FX 580 (PCI Express Slot 1)
Video Card 2 - Sapphire Nitro R9 380 (PCI Express Slot 3)
Keyboard 1 - Logitech K400 (KB w/ Trackpad built in)
Keyboard 2 - Corsair K70
Mouse - Logitech G502
Several Monitors...
9x 128GB SSDs - 1x for Proxmox/ISOs, 8x for ZFS RAID 0 (Yes.. I know...)
1x 240GB SSD (Running Windows 10 as a fallback drive disabled in the BIOS)

Bios Configuration -
CSM - Auto (all subsections set to EFI first)
Intel VT-X - Enabled
Intel VT-d - Enabled
ACS Control - Enabled (I don't know if this is required but...)

Proxmox VE version: 4.3

----------------- Possibly skip this section -----------------

This is my production machine so I need to have a good fallback in case this takes a whole weekend...
The idea is to disable specific drives in the BIOS so that I can flip back and forth without Proxmox touching Windows and without Windows touching Proxmox. (think disaster recovery or if my fancy ZFS pool craps the bed)

Proxmox will be installed to and run from the Samsung drive connected to SATA Port 1
A ZFS pool will be created to run on the drives running on SATA Ports 2-9
Windows will run from the Intel Drive connected to SATA Port 10

To set this up correctly I did the following:
Installed all of my SSDs taking note of where everything is plugged in. I know my Intel SSD is plugged into SATA10. I went into the BIOS and disabled SATA Ports 1-9. I then booted off my trusty Windows 10 USB key and did a standard install of Windows 10...

Once Windows was installed and running again...
Back to the BIOS!
Re-enable SATA Ports 1-9 (Note at this point all drives are enabled)
Boot back into Windows
Bonus because I'm lazy... All my SSDs had data on them and from experience it's just easier for me to wipe them using Diskpart in Windows... You can skip this part if your drives are already blank...

Use Diskpart to wipe all of the Samsung SSDs (from an admin command prompt):
Code:
diskpart
list disk
select disk X
clean
Use "list disk" to note the disk numbers of your non-Intel drives, then "select disk X" followed by "clean" for each one. Repeat until all of the Samsung drives are wiped.

DON'T skip this step: go to Device Manager and DISABLE all of the Samsung drives. (My logic here is that I don't want Windows to even think about touching my ZFS pool.)

----------------- Install Proxmox -----------------

Install Proxmox:

So let me start off with a disclaimer... I suck at working with Linux...

This is probably me being anal here, but it is important to me (especially with so many drives) to know exactly which drive(s) contain operating systems. Because of this (and because it burned me previously) I go into the BIOS and disable ALL drives except the one I am installing an operating system on. In this case we're installing Proxmox on SATA Port 1.

Go back into the BIOS
Disable SATA Ports 2-10

Install Proxmox using whatever installation media you have (in my case a USB Key)
Note here that the Proxmox GUI installer can't create a ZFS pool of more than 8 drives (this is why I disabled the drives... I'll just create it manually in a few minutes anyway)

I just clicked Next all the way through the installer (obviously you need to set a root password etc...)

ProTip: You will be using the console a LOT... Make sure you a.) remember the root password you set and b.) give yourself a static IP that you can remember easily...

Once Proxmox is installed and you are about to reboot, take a moment to go back into the bios and re-enable drives 2-9 (Note here that my Windows SSD is on Port 10 and will remain DISABLED from this point forward)


From here on out we're going to use putty (or whatever your preferred terminal software is...) from another computer...

SSH to your proxmox host and log in with your root credentials
Maybe take a moment to create another account and use sudo for any commands that require elevation (I'm lazy and just used root...)

----------------- ZFS Pool Configuration -----------------

Ok, first off, you would think that /dev/sda = SATA1, /dev/sdb = SATA2, etc... Well, apparently that isn't the case... Let's figure out which drives you can use in your ZFS pool...

Commands:
Code:
lsblk
zpool create Tank /dev/sda /dev/sdb /dev/sdc /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
zfs set compression=on Tank
vi /etc/modprobe.d/zfs.conf
zpool status
reboot

1.) lsblk lists the drives you have connected. Really you are just using it to determine which drive is your OS drive so you don't include it in your ZFS pool (or else you will get an error). It's pretty obvious which one is your OS drive because it will have several partitions; the gimme is looking for the drive with a mountpoint of /

This is what mine looks like:
Code:
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 119.2G  0 disk
├─sda1               8:1    0 119.2G  0 part
└─sda9               8:9    0     8M  0 part
sdb                  8:16   0 119.2G  0 disk
├─sdb1               8:17   0 119.2G  0 part
└─sdb9               8:25   0     8M  0 part
sdc                  8:32   0 119.2G  0 disk
├─sdc1               8:33   0 119.2G  0 part
└─sdc9               8:41   0     8M  0 part
sdd                  8:48   0 119.2G  0 disk
├─sdd1               8:49   0  1007K  0 part
├─sdd2               8:50   0   127M  0 part
└─sdd3               8:51   0 119.1G  0 part
  ├─pve-root       251:0    0  29.8G  0 lvm  /
  ├─pve-swap       251:1    0     8G  0 lvm  [SWAP]
  ├─pve-data_tmeta 251:2    0    68M  0 lvm
  │ └─pve-data     251:4    0  66.5G  0 lvm
  └─pve-data_tdata 251:3    0  66.5G  0 lvm
    └─pve-data     251:4    0  66.5G  0 lvm
sde                  8:64   0 119.2G  0 disk
├─sde1               8:65   0 119.2G  0 part
└─sde9               8:73   0     8M  0 part
sdf                  8:80   0 119.2G  0 disk
├─sdf1               8:81   0 119.2G  0 part
└─sdf9               8:89   0     8M  0 part
sdg                  8:96   0 119.2G  0 disk
├─sdg1               8:97   0 119.2G  0 part
└─sdg9               8:105  0     8M  0 part
sdh                  8:112  0 119.2G  0 disk
├─sdh1               8:113  0 119.2G  0 part
└─sdh9               8:121  0     8M  0 part
sdi                  8:128  0 119.2G  0 disk
├─sdi1               8:129  0 119.2G  0 part
└─sdi9               8:137  0     8M  0 part
zd0                230:0    0    40G  0 disk
├─zd0p1            230:1    0   500M  0 part
└─zd0p2            230:2    0  39.5G  0 part
zd16               230:16   0   150G  0 disk
├─zd16p1           230:17   0   450M  0 part
├─zd16p2           230:18   0   100M  0 part
├─zd16p3           230:19   0    16M  0 part
└─zd16p4           230:20   0 149.5G  0 part
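If you'd rather not eyeball the listing, you can pick the OS volume out programmatically. This sketch runs awk over a trimmed copy of the lsblk output above so it's self-contained; on the live host you'd pipe lsblk straight in:

```shell
# Find the entry whose MOUNTPOINT column is "/" - that volume (and the
# disk it lives on, sdd in my case) stays OUT of the zpool.
# lsblk_sample is a trimmed copy of the listing above.
lsblk_sample='sda                  8:0    0 119.2G  0 disk
sdd                  8:48   0 119.2G  0 disk
pve-root           251:0    0  29.8G  0 lvm  /'
echo "$lsblk_sample" | awk '$NF == "/" {print "OS volume: " $1}'
```

On the real host the same filter is just `lsblk | awk '$NF == "/"'`; whichever disk that volume sits under is the one to leave out.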

2.) the zpool create command is pretty simple: zpool create [name of your zpool] [type of zpool; without this it assumes raid0] [drive1] [drive2] [drive3] [drive...]
Note that I did not include /dev/sdd because for whatever reason SATA1 = /dev/sdd (this is why we use lsblk above)
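Since the kernel hands out sdX letters in whatever order it enumerates the drives, a safer habit (not something I did in the original build, so treat this as a suggestion) is to build the pool from /dev/disk/by-id names, which are stable symlinks to the same devices. A sketch of the idea using a mock directory and a made-up drive name:

```shell
# /dev/disk/by-id contains symlinks named after model/serial, each
# pointing at the current sdX node. Mocked in a temp dir here; the
# ata-... name below is illustrative, not a real drive.
mock=$(mktemp -d)
touch "$mock/sdd"
ln -s "$mock/sdd" "$mock/ata-Samsung_SSD_EXAMPLE_SERIAL"
basename "$(readlink "$mock/ata-Samsung_SSD_EXAMPLE_SERIAL")"
rm -rf "$mock"
```

On the real host, `ls -l /dev/disk/by-id/` shows the mapping, and `zpool create Tank /dev/disk/by-id/<name> ...` works exactly like the sdX form but survives the letters shuffling between boots.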

3.) Enable ZFS Compression (all signs point to this being a good thing... do your own research...)

4.) Add the following to /etc/modprobe.d/zfs.conf (this limits ZFS RAM usage to between 4 and 8GB; otherwise ZFS will eat ALL your RAM in short order)
Code:
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=8589934592
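Those two magic numbers are just 4GB and 8GB expressed in bytes; if you want different limits you can compute the values in the shell instead of copying mine:

```shell
# GiB -> bytes for the ARC limits (change the 4 and 8 to taste)
arc_min=$((4 * 1024 * 1024 * 1024))
arc_max=$((8 * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_min=$arc_min"
echo "options zfs zfs_arc_max=$arc_max"
```

That prints exactly the two lines above.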

If vi is not something you are familiar with check out this guide here: http://heather.cs.ucdavis.edu/~matloff/UnixAndC/Editors/ViIntro.html
(We are going to use vi A LOT)

5.) zpool status just lets you know the status of your zpool... you will either see your drives or that you don't have a zpool...

6.) I'm a fan of reboots...
 
----------------- PCI Express Passthrough Configuration -----------------

So I'm going to point out that this is a guide for Haswell-E / Asus X99 boards; the following may not apply to you exactly (check out the Proxmox wiki - it helped me get to this point)...

Commands:
Code:
vi /etc/default/grub
update-grub
dmesg | grep ecap
find /sys/kernel/iommu_groups/ -type l
lspci
lspci -n -s 02:00
echo "options vfio-pci ids=1002:6939,1002:aad8 disable_vga=1" >> /etc/modprobe.d/vfio.conf
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
reboot

1.) Use vi to add intel_iommu=on
Code:
Change: GRUB_CMDLINE_LINUX_DEFAULT="quiet"
To: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
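If you'd rather not make the edit interactively, the same change can be done with sed. Demonstrated here on a sample string rather than the real file, so you can see what it does before pointing it at /etc/default/grub:

```shell
# Append intel_iommu=on to the default kernel command line.
# On the host the equivalent in-place edit would be:
#   sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"/' /etc/default/grub
line='GRUB_CMDLINE_LINUX_DEFAULT="quiet"'
echo "$line" | sed 's/"quiet"/"quiet intel_iommu=on"/'
```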
2.) Update Grub - this regenerates the grub config so the new kernel command line is used; the intel_iommu=on change actually takes effect on the next boot

3.) This basically tells you if you have a board capable of supporting IOMMU without hacks... X99 + Haswell-E = skip this step... it supports it (assuming you configured your BIOS correctly)

4.) This command should return a list of IOMMU groups, however... if it comes up empty, reboot and try again... (again x99+Haswell-E = skip this step)
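The raw find output is a wall of symlink paths; a little awk makes it readable. The group number and paths below are illustrative, not from my box:

```shell
# Reformat `find /sys/kernel/iommu_groups/ -type l` output as
# "group N: device". The sample paths are illustrative.
sample='/sys/kernel/iommu_groups/22/devices/0000:02:00.0
/sys/kernel/iommu_groups/22/devices/0000:02:00.1'
echo "$sample" | awk -F/ '{printf "group %s: %s\n", $5, $7}'
```

The reason to look at this at all: every device in the same group as your GPU gets passed through together, so you want the card (and its HDMI audio) in a group by themselves.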

5.) lspci lists the location of everything on the pci bus... Find the devices you want to pass to the vm in the list and note the addresses:
Code:
00:1b.0 Audio device: Intel Corporation C610/X99 series chipset HD Audio Controller (rev 05)
02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Tonga PRO [Radeon R9 285/380] (rev f1)
02:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Tonga HDMI Audio [Radeon R9 285/380]

For now, just write the addresses down (note that in this case you can either refer to 02:00.0 and 02:00.1 separately OR they can both be referred to as 02:00)

6.) lspci -n -s [video card address] tells you the device IDs of your video card. We need to tell Proxmox to ignore this device, but to do so we need the device IDs

In my case I get:
Code:
02:00.0 0300: 1002:6939 (rev f1)
02:00.1 0403: 1002:aad8

The Device IDs are: 1002:6939 and 1002:aad8

7.) Edit this command to match your device IDs!

echo "options vfio-pci ids=1002:6939,1002:aad8 disable_vga=1" >> /etc/modprobe.d/vfio.conf
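If you want to avoid retyping IDs, the whole vfio.conf line can be generated straight from the lspci -n output. Here the sample output from step 6 is saved in a variable to keep the example self-contained:

```shell
# Third field of `lspci -n` output is the vendor:device ID;
# join the IDs with commas and wrap them in the vfio-pci options line.
lspci_n='02:00.0 0300: 1002:6939 (rev f1)
02:00.1 0403: 1002:aad8'
ids=$(echo "$lspci_n" | awk '{print $3}' | paste -sd, -)
echo "options vfio-pci ids=$ids disable_vga=1"
```

On the live host the middle step is just `lspci -n -s 02:00 | awk '{print $3}' | paste -sd, -` (substitute your own card's address).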

8-11.) Blacklist video drivers so they don't load at boot (this is a sledgehammer approach...):

echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf

12.) got any of them reboots?

----------------- Virtual Machine Creation -----------------

Log into the Proxmox web console and create a VM

Make a note of the VM ID as you will need that moving forward (in the examples below I created my VM with the ID of 101)
Under the OS tab select "Microsoft Windows 8.x/10/2012/r2"
For the CD/DVD Drive just point it at the ISO to install Windows (I'm assuming at this point you have already placed the windows 10 iso on the server)
For Hard Disk - Change the Bus/Device to "VirtIO"
For CPU / Memory just set it to whatever you intend (I chose 6 cores / 16GB Fixed RAM )
Change the Network to VirtIO as well

At this point I would suggest the following

Locate the latest VirtIO Driver ISO for Windows (google...)
Add a secondary CD/DVD drive and mount that iso there.
Use the console view to install/configure/update Windows 10 at this time, using the drivers on the ISO as needed. I would very much suggest updating up to and including the Anniversary Update. I would also suggest downloading the latest AMD Crimson driver, running it to extract the drivers (note the location, probably under C:\AMD), and then cancelling the setup. This will come in handy later...

Once the machine is updated we can start adding passthrough devices. You can pass through everything from the start, but at least in my case every video restart not only kills the VM but also takes down the host. The Anniversary Update even went so far as to completely hose the OS, leading to a reinstall...

Before continuing I would strongly advise you take a snapshot of the VM you just spent the better part of an afternoon configuring...

----------------- Determining and Passing through USB -----------------

Before you shut down your VM you need to identify the HW IDs of your keyboard and mouse

From the console:
Code:
qm monitor 101
from inside QM: info usbhost
from inside QM: quit
From the Proxmox web console: Shutdown your VM

1.) qm monitor [vmid] - puts you in VM monitoring mode. Your VM must be turned on for this to work.
2.) info usbhost will list all of your connected USB devices and their associated HW IDs
In my case:
Code:
Class 00: USB device 046d:c07d, Gaming Mouse G502
Class 00: USB device 1b1c:1b13, Corsair K70 RGB Gaming Keyboard

The Device IDs are 046d:c07d and 1b1c:1b13
3.) quit - drops out of the qm console
4.) We shut down the VM because changes to the .conf only take effect on power on (restarts don't count)
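The IDs can also be scraped out of the info usbhost text mechanically; below, the two lines from my session stand in for the live output:

```shell
# Field 5 of each `info usbhost` line is "vendor:product," - strip the
# trailing comma to get the IDs for the usb0:/usb1: lines in the VM .conf.
usbhost='Class 00: USB device 046d:c07d, Gaming Mouse G502
Class 00: USB device 1b1c:1b13, Corsair K70 RGB Gaming Keyboard'
echo "$usbhost" | awk '{sub(/,$/, "", $5); print $5}'
```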


----------------- Potatoes with Meat -----------------

Ok here we go. Remember how previously we took a few minutes to write down the BUS IDs of our devices? Now we will actually use them...


SSH to your Proxmox host and login as root
Edit /etc/pve/qemu-server/[VM ID from above].conf

Commands:
Code:
vi /etc/pve/qemu-server/101.conf

Add the following lines to the .conf:

Code:
bios: ovmf
hostpci0: 02:00
hostpci1: 00:1b
usb0: host=046d:c07d
usb1: host=1b1c:1b13
vga: qxl

(I put the bios line at the top of the file and the rest at the bottom, but I don't think it matters)

If you remember from above: Bus ID 02:00 = 02:00.0 (Video Card) + 02:00.1 (Video Card HDMI Audio), Bus ID 00:1b = Onboard Audio, 046d:c07d = Logitech G502 Mouse, 1b1c:1b13 = Corsair Keyboard

Once this is done you can save the file and launch the VM. With any luck it will boot up without issue... The reason we added vga: qxl to the .conf is so Windows boots with that as the primary graphics card but still recognizes that there is a PCI card installed... Keep in mind that as soon as you pass this keyboard and mouse to the VM they are dead to the host until you reboot... Get a second keyboard...

Once you are in the OS, the next step is to go to Device Manager and manually install the latest AMD Crimson driver. With any luck you should just see that the driver is installed (probably not initialized, for whatever reason, which is just fine). Shut down the VM.

use the console to change your vm .conf:
Code:
vi /etc/pve/qemu-server/101.conf
Code:
Change
vga: qxl
to
vga: none

Save the .conf

Cross your fingers and boot up your VM from the web console.

With any luck you should be good to go


----------------- Notes, Issues I ran into and divergence from the Proxmox wiki -----------------

The wiki is a bit confusing on a few topics, but these are the big things I noticed:

My build process actually began with installing Windows 7 and attempting to upgrade to Windows 10. Don't do this; it ends poorly... When you give up and create a new VM, take a moment to copy your "smbios1: uuid=" from your old .conf to the new one before imaging. I'm not saying this will make a huge difference, but activation was less painful...

Appending ,pcie=1 and ,x-vga=on actually seemed to do more harm than good for my video card. Obviously YMMV, so it might not be a bad idea to start out with:
machine: q35 (required if you want to use pcie=1) also causes issues...
hostpci0: 02:00,pcie=1,x-vga=on and lop off options if you run into issues (pcie=1, for example, = horrible graphics corruption, and x-vga doesn't seem to make any difference)
If you are feeling adventurous you can pass the entire USB controller to the VM; just keep in mind that for whatever reason ctrl+alt+delete causes the host to reboot outside your VM...
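To make that "lop off options" advice concrete, here are the hostpci0 variants as .conf fragments, from most aggressive to what actually worked on my hardware (a VM uses exactly one of these lines, not all three; the first also requires machine: q35):

```
hostpci0: 02:00,pcie=1,x-vga=on
hostpci0: 02:00,x-vga=on
hostpci0: 02:00
```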

This is what worked for me... Hopefully its helpful to others.
 
How about audio passthrough from the graphics card? HDMI? Is the audio perfect (without crackle)?
I might be able to check on Monday. All I have for HDMI audio is crappy monitor speakers... I passed through the onboard audio and it's perfect so far.
 
I don't have any onboard audio unfortunately. I have a Dell PE T410 server, so I would have to buy a PCI card for that (which I may do if I can get some kind of solid response that I will not have audio issues ;-P). Thanks for your write-up... I wish it was posted before all my trial and error.
 
