If you use q35 you can only use IDE slots 0 and 2:
kvm: -device ide-hd,bus=ide.0,unit=1,drive=drive-ide1,id=ide1,bootindex=100: Can't create IDE unit 1, bus supports only 1 units
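In other words, on q35 each IDE bus only takes one unit, so ide1/ide3 (unit 1 of a bus) fail with the error above. A sketch of a valid layout (the ISO names are placeholders):

```
# q35: only ide0 and ide2 (unit 0 of each bus) are usable
ide0: local:iso/Windows10.iso,media=cdrom
ide2: local:iso/virtio-win.iso,media=cdrom
```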
The complete node freezes, and the only way to recover is to restart the node manually.
I had the same problem with my RX480, which is pretty much the same card. The scary thing is that this can bring the whole node down. I wonder if this is because I don't have proper ACS on my system (Intel C236) and the card is trying to talk to something it should not, even though it is in its own IOMMU group and should not affect the host OS in any way?

Same problem here (RX 580, guest OS is Windows 10). It boots up, and after two minutes everything (guest and host) freezes. No problem without GPU passthrough.
Aim:
To host a headless VM with full access to a modern GPU, in order to stream games from it.
Assumptions:
- Recent CPU and motherboard that support VT-d and interrupt remapping.
- Recent GPU with a UEFI BIOS.
Instructions:
1) Enable in BIOS: UEFI, VT-d, Multi-monitor mode
This is done via the BIOS. It can be confirmed using dmesg (search for EFI strings), the existence of /sys/firmware/efi on the filesystem, and "vmx" in /proc/cpuinfo. Multi-monitor mode had to be enabled in my BIOS, otherwise the card wasn't detected at all (even by the host using lspci).
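A quick sketch of those checks from a shell on the host (the paths are standard on PVE/Debian):

```shell
# Step 1 sanity checks, run on the Proxmox host
if [ -d /sys/firmware/efi ]; then echo "UEFI boot: yes"; else echo "UEFI boot: no"; fi
if grep -qw vmx /proc/cpuinfo; then echo "VT-x (vmx): yes"; else echo "VT-x (vmx): no"; fi
```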
2) Enable IOMMU via grub (Repeat post upgrade!)
edit /etc/default/grub and change
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
to
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off"
then run update-grub
Confirm using dmesg | grep -e DMAR -e IOMMU - this should produce output.
As of PVE 5, I had to also disable efifb.
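The edit can also be scripted; here is a sketch, shown against a throwaway copy of the file so it is safe to dry-run (on the real host you would target /etc/default/grub and then run update-grub):

```shell
# Dry-run of the cmdline edit against a temporary copy of the file
f=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > "$f"
# Add the IOMMU flags (intel_iommu=on; AMD boards use amd_iommu=on instead)
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet"$/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off"/' "$f"
cat "$f"
rm -f "$f"
```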
3) Blacklist radeon/nouveau/nvidia so that Proxmox doesn't load the card (Repeat post upgrade!)
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
Run update-initramfs -u to apply the above. Confirm using lspci -v: the output will tell you whether a driver has been loaded for the VGA adapter.
4) Load kernel modules for virtual IO
Add to /etc/modules the following:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
I'm not sure how to confirm the above.
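One way to confirm: after a reboot, lsmod | grep vfio should list the loaded modules. The /etc/modules entries themselves can be checked with a loop like this (demonstrated here against a sample file; on the host, point it at /etc/modules):

```shell
# Verify all four vfio entries are present (a sample file stands in for /etc/modules)
modfile=$(mktemp)
printf 'vfio\nvfio_iommu_type1\nvfio_pci\nvfio_virqfd\n' > "$modfile"
for m in vfio vfio_iommu_type1 vfio_pci vfio_virqfd; do
    grep -qx "$m" "$modfile" && echo "$m: present" || echo "$m: MISSING"
done
rm -f "$modfile"
```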
5) Get GPU IDs and addresses
Run lspci -v to list all the devices in your PC. Find the relevant VGA card entry. For example:
01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1) (prog-if 00 [VGA controller])
You may also have an audio device (probably for HDMI sound):
01:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
Take note of the numbers at the front, in this case 01:00.0 and 01:00.1.
Using this number, run lspci -n -s 01:00. This will give you the vendor and device IDs. For example:
01:00.0 0000: 10de:1b81 (rev a1)
01:00.1 0000: 10de:10f0 (rev a1)
Take note of these ID pairs, in this case 10de:1b81 and 10de:10f0.
6) Assign GPU to vfio
Use this to create the file that assigns the HW to vfio:
echo "options vfio-pci ids=10de:1b81,10de:10f0" > /etc/modprobe.d/vfio.conf
After rebooting, running lspci -v will confirm that the GPU and Audio device are using the vfio driver:
Kernel driver in use: vfio-pci
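The lookups in steps 5 and 6 can be chained; here is a sketch that turns sample lspci -n output into the vfio.conf options line (the sample text is from the examples above — substitute your own bus address and run the real command on the host):

```shell
# Build the vfio-pci ids= line from lspci -n output
# (sample text used here; on the host: sample=$(lspci -n -s 01:00))
sample='01:00.0 0300: 10de:1b81 (rev a1)
01:00.1 0403: 10de:10f0 (rev a1)'
ids=$(printf '%s\n' "$sample" | awk '{print $3}' | paste -sd, -)
echo "options vfio-pci ids=$ids"
```

This prints the exact line to write into /etc/modprobe.d/vfio.conf.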
7) Create VM (but do not start it!)
Do this as normal, using SCSI VirtIO, VirtIO net and balloon virtual hardware. Also add the following to the vm's conf file (/etc/pve/qemu-server/<vmid>.conf):
bios: ovmf
machine: q35
8) Install Windows 10 in the VM
You can now install Win10, with it being aware of the UEFI bios. You may (will) need to provide VirtIO drivers during install.
Once up and running, TURN ON REMOTE DESKTOP. Passing through the GPU will disable the virtual display, so you will not be able to access it via Proxmox/VNC. Remote desktop will be handy if you don't have a monitor connected or keyboard passed through.
9) Pass through the GPU!
This is the actual installing of the GPU into the VM. Add the following to the vm's conf file:
hostpci0: <device address>,x-vga=on,pcie=1
In the examples above, using 01:00 as the address will pass through both 01:00.0 and 01:00.1, which is probably what you want. x-vga will do some compatibility magic, as well as disabling the basic VGA adaptor.
You can verify the passthrough by starting the VM and entering info pci into the respective VM monitor tab in the Proxmox webui. This should list the VGA and audio device, with an id of hostpci0.0 and hostpci0.1.
Windows should automatically install a driver. You can allow this and confirm in device manager that the card is loaded correctly (ie without any "code 43" errors). Once that's done continue to set up the card (drivers etc).
# modprobe vfio
# modprobe vfio_pci
# echo 10de 1c82 | tee /sys/bus/pci/drivers/vfio-pci/new_id
10de 1c82
# echo 10de 0fb9 | tee /sys/bus/pci/drivers/vfio-pci/new_id
10de 0fb9
# dmesg | grep -i vfio
[ 2810.602064] VFIO - User Level meta-driver version: 0.3
[ 2817.223859] vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 2817.241650] vfio_pci: add [10de:1c82[ffff:ffff]] class 0x000000/00000000
[ 2817.241654] vfio_pci: add [10de:0fb9[ffff:ffff]] class 0x000000/00000000
# lspci -nnk -d 10de:1c82
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1)
Subsystem: Micro-Star International Co., Ltd. [MSI] GP107 [GeForce GTX 1050 Ti] [1462:8c96]
Kernel driver in use: vfio-pci
Kernel modules: nvidiafb, nouveau
# lspci -nnk -d 10de:0fb9
01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fb9] (rev a1)
Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:8c96]
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
# dmesg | grep -i vfio
nothing
Kernel driver in use is not displayed or is not vfio-pci.
# lspci -vnn
...
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1) (prog-if 00 [VGA controller])
Subsystem: Micro-Star International Co., Ltd. [MSI] GP107 [GeForce GTX 1050 Ti] [1462:8c96]
Flags: bus master, fast devsel, latency 0, IRQ 11
Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
Memory at e0000000 (64-bit, prefetchable) [size=256M]
Memory at f0000000 (64-bit, prefetchable) [size=32M]
I/O ports at e000 [size=128]
Expansion ROM at 000c0000 [disabled] [size=128K]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Legacy Endpoint, MSI 00
Capabilities: [100] Virtual Channel
Capabilities: [250] Latency Tolerance Reporting
Capabilities: [128] Power Budgeting <?>
Capabilities: [420] Advanced Error Reporting
Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
Capabilities: [900] #19
Kernel modules: nvidiafb, nouveau
01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fb9] (rev a1)
Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:8c96]
Flags: bus master, fast devsel, latency 0, IRQ 17
Memory at f7080000 (32-bit, non-prefetchable) [size=16K]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
...
# uname -r
4.13.13-2-pve
# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/7/devices/0000:00:1c.0
/sys/kernel/iommu_groups/5/devices/0000:00:1a.0
/sys/kernel/iommu_groups/3/devices/0000:00:16.3
/sys/kernel/iommu_groups/3/devices/0000:00:16.0
/sys/kernel/iommu_groups/11/devices/0000:01:00.1
/sys/kernel/iommu_groups/11/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/8/devices/0000:00:1c.1
/sys/kernel/iommu_groups/6/devices/0000:00:1b.0
/sys/kernel/iommu_groups/4/devices/0000:00:19.0
/sys/kernel/iommu_groups/12/devices/0000:03:00.0
/sys/kernel/iommu_groups/2/devices/0000:00:14.0
/sys/kernel/iommu_groups/10/devices/0000:00:1f.3
/sys/kernel/iommu_groups/10/devices/0000:00:1f.2
/sys/kernel/iommu_groups/10/devices/0000:00:1f.0
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/9/devices/0000:00:1d.0
The "/etc/default/grub" file has:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off pcie_acs_override=downstream"
# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.13.13-2-pve
Found initrd image: /boot/initrd.img-4.13.13-2-pve
Found memtest86+ image: /ROOT/pve-1@/boot/memtest86+.bin
Found memtest86+ multiboot image: /ROOT/pve-1@/boot/memtest86+_multiboot.bin
done
# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-4.13.13-2-pve
The "/etc/modprobe.d/vfio.conf" file:
options vfio-pci ids=10de:1c82,10de:0fb9
The "/etc/modprobe.d/blacklist.conf" file:
blacklist radeon
blacklist nouveau
blacklist nvidia
I tried again, but this time the vfio-pci driver is not in use unless the commands shown in the first block above are executed after each restart.
Otherwise, what I get is the second block above: dmesg shows no vfio messages, and "Kernel driver in use" is either not displayed or is not vfio-pci.
The third block above shows what I have: kernel version, IOMMU groups, the GRUB command line, and the modprobe configuration files.
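If the ids= line alone is not enough because another module (nouveau/nvidiafb in this case) grabs the card first at boot, a softdep in the same file is a common way to force the ordering. A sketch using the IDs from this thread (run update-initramfs -u and reboot afterwards):

```
# /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1c82,10de:0fb9
# make vfio-pci load before the modules that would otherwise claim the GPU
softdep nouveau pre: vfio-pci
softdep nvidiafb pre: vfio-pci
```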
# modprobe vfio
# modprobe vfio_pci
# echo 10de 1c82 | tee /sys/bus/pci/drivers/vfio-pci/new_id
# echo 10de 0fb9 | tee /sys/bus/pci/drivers/vfio-pci/new_id
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off pcie_acs_override=downstream"
balloon: 0
bios: ovmf
boot: dc
bootdisk: scsi0
cores: 4
cpu: host
efidisk0: local-zfs:vm-100-disk-2,size=128K
machine: q35
memory: 16384
net0: e1000=##:##:##:##:##:##,bridge=vmbr0
numa: 0
ostype: win10
sata2: local:iso/Windows10.iso,media=cdrom,size=3553536K
sata3: local:iso/virtio-win-0.1.126.iso,media=cdrom,size=152204K
scsi0: local-zfs:vm-100-disk-1,size=500G
scsihw: virtio-scsi-pci
smbios1: uuid=########-####-####-####-############
sockets: 1
hostpci0: 01:00,pcie=1,x-vga=on,romfile=vbios.bin
I just want to thank you. An NVIDIA GeForce GTX 1060 (Zotac brand) works like a charm with:
bios: ovmf
machine: q35
hostpci0: 82:00,x-vga=on,pcie=1
balloon: 3072
bios: ovmf
bootdisk: scsi0
cores: 4
cpu: host
efidisk0: local-lvm:vm-130-disk-2,size=128K
hostpci0: 01:00.0,pcie=1
hostpci1: 00:1f.3,pcie=1
ide0: local:iso/virtio-win-0.1.141.iso,media=cdrom,size=309208K
ide2: local:iso/Win10_1803_English_x64.iso,media=cdrom
machine: q35
memory: 4096
name: Cindy
net0: virtio=4A:D7:94:CE:91:B0,bridge=vmbr0,tag=11
numa: 1
ostype: win10
scsi0: local-lvm:vm-130-disk-1,size=160G
scsihw: virtio-scsi-pci
smbios1: uuid=51f589b7-aa7b-4271-82f9-e9fecf1b5b47
sockets: 1
usb0: host=046d:c52b
usb1: host=413c:2011
usb2: host=0b0e:034a
usb3: host=1-3,usb3=1
usb4: host=1-6.2.3
Hi, I have a few questions. My CPU (i5 4670K) has no VT-d, only VT-x. Does the CPU need VT-d? Does GPU passthrough also work in a container?

Yes (VT-d is needed) and no (it doesn't work in a container); have a look here: https://pve.proxmox.com/wiki/Pci_passthrough
Hi, I have successfully set up GPU passthrough, but audio is messed up.
hostpci0: 01:00.0,pcie=1
hostpci1: 00:1f.3,pcie=1
as from the GUI, PVE 5.3 adds rombar=0/1,romfile=undefined, which prevents the VM from booting with the error "do not find file 'undefined' in directory ..."

Yes, this is a bug, and it is already fixed in git (so it should be fixed with the next pve-manager package update).

now I'm trying to trick Proxmox into thinking my GTX is not my primary card

You could add a second virtual GPU in the 'args' line.
Maybe I'll extend the config so that we can pass through a GPU and still have 'hv_vendor_id' set for NVIDIA; then it would only be a matter of omitting the 'x-vga' part in the config.
as per rombar=0/1:
0 means you won't see the boot screen until the OS is ready, thus hiding the Proxmox boot logo
1 means you want to see the Proxmox boot log to know your VM is firing up

No, this just controls whether the ROM BAR of the device gets mapped into guest memory; what you describe may just be a side effect.

I don't know if it is expected, or related to my hardware not having a 'ROM BAR' to get mapped,
but this is NOT what I have on my system.
On MY system:
rombar=0 hides the Proxmox boot logo and the system only shows up at the login screen
rombar=1 shows the Proxmox boot/BIOS logo and progress bar, then GRUB, then the OS boot sequence, then the login screen
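For reference, the flag is set per device in the VM conf; a sketch (device address taken from the examples earlier in the thread):

```
# rombar=1 (default): the device's ROM BAR is mapped into the guest,
# so OVMF can run the card's ROM and early boot output may appear
# rombar=0: ROM BAR hidden; no display output until the guest driver starts
hostpci0: 01:00,pcie=1,x-vga=on,rombar=0
```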