Hi all,
I have been following the guide in the Proxmox Wiki on the topic, but I have run into issues.
First off, let me share my plan:
My Proxmox box will also serve as an HTPC for two different TVs, using two separate GeForce GT 720 cards passed through to VMs running Ubuntu and Kodi. Since it is the latest LTS, I have started my testing with Ubuntu 16.04 in my VM.
I know IOMMU is working properly and passthrough is functioning (I previously passed through a couple of LSI SAS controllers to a different guest), but this GPU is giving me trouble.
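For reference, this is roughly how I verified IOMMU on the host (just the standard checks):
Code:
$ dmesg | grep -e DMAR -e IOMMU          # should report the DMAR/IOMMU being enabled
$ find /sys/kernel/iommu_groups/ -type l # the GPU and its audio function should show up in an IOMMU group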
Next, the wiki says that the OVMF method is recommended, but doesn't say why. Can anyone share more information here on why this is the preferred method?
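For what it's worth, my understanding from the wiki is that the OVMF variant would mostly amount to a config change along these lines (untested on my end, since this card apparently lacks a UEFI ROM):
Code:
bios: ovmf
machine: q35
hostpci0: 06:00.0,pcie=1,x-vga=on
hostpci1: 06:00.1,pcie=1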
I did the ROM test, and came up with negative results. (At least I think so: I got the first few lines of output, then it ended with an error about reaching the end of the file; I forgot to write it down.)
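In case it matters, the test I ran was essentially the wiki's rom-parser check against the card's ROM (PCI address as seen on my host), something like:
Code:
# dump the video BIOS of the card at 06:00.0 and check it for an EFI (type 3) image
cd /sys/bus/pci/devices/0000:06:00.0/
echo 1 > rom
cat rom > /tmp/vbios.rom
echo 0 > rom
./rom-parser /tmp/vbios.rom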
While the GT 720 is certainly new enough to support UEFI, it seems this particular model doesn't have it in the firmware. Either way, my host is an LGA1366 Xeon, pre-UEFI, so I don't think it would have worked anyway.
Because of this, I first tried the SeaBIOS PCIe method, followed by the PCI method, and neither is working for me.
nouveau is properly blacklisted and not running in either the VM or the host, and both the NVIDIA GPU and its audio function are bound to pci-stub on the host.
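Concretely, the host side is just the usual kernel command line change, followed by update-grub and a reboot (the vendor:device IDs below are what I believe lspci -nn reports for my GT 720 and its HDMI audio; double-check them against your own output):
Code:
# /etc/default/grub on the host -- pci-stub claims the GPU and its HDMI audio at boot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pci-stub.ids=10de:128b,10de:0e0f"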
Here's my vmid.conf:
Code:
args: -machine pc,max-ram-below-4g=1G
bootdisk: virtio0
cores: 2
cpu: host
cpuunits: 4096
hostpci0: 06:00.0,pcie=1,x-vga=on
hostpci1: 06:00.1,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 2048
name: htpc1
net0: bridge=vmbr0,virtio=3A:63:62:33:32:36
net1: bridge=vmbr1,virtio=36:64:37:39:34:31
numa: 0
ostype: l26
smbios1: uuid=e0ccb955-9f1d-4810-9a6e-332f1fce5a94
sockets: 1
virtio0: local:151/vm-151-disk-1.qcow2,cache=writethrough,size=32G
Both the GPU and sound successfully pass through to the guest and are visible in lspci as follows:
Code:
$ lspci |grep -i nv
01:00.0 VGA compatible controller: NVIDIA Corporation GK208 [GeForce GT 720] (rev a1)
02:00.0 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1)
I actually get console output from the Ubuntu 16.04 VM on the screen connected to the GPU, so it is clearly passed through and functioning (albeit only as a VGA console right now).
Then when I install the NVIDIA binary driver blob (361.42, the latest in the Ubuntu repository), it appears to work, but afterwards the X server fails to start, and nvidia-smi complains as follows (install command sketched after the output):
Code:
$ nvidia-smi
No devices were found. Please make sure /dev/nvidia* files are readable by current user.
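For completeness, the driver came straight from the stock Ubuntu repository, installed roughly like this (from memory, so the exact package name may be slightly off):
Code:
$ sudo apt-get update
$ sudo apt-get install nvidia-361   # 361.42 as of this writing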
The /dev/nvidia* files are present, and the permissions look right:
Code:
$ ls -l /dev/nv*
crw-rw-rw- 1 root root 195, 0 May 3 21:18 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 May 3 21:18 /dev/nvidiactl
crw-rw-rw- 1 root root 246, 0 May 3 21:18 /dev/nvidia-uvm
But a look at dmesg shows that there is some sort of problem:
Code:
[ 8.891906] nvidia: module license 'NVIDIA' taints kernel.
[ 8.963904] nvidia: module verification failed: signature and/or required key missing - tainting kernel
[ 9.081268] nvidia-nvlink: Nvlink Core is being initialized, major device number 247
[ 9.083513] [drm] Initialized nvidia-drm 0.0.0 20150116 for 0000:01:00.0 on minor 0
[ 9.083527] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 361.42 Tue Mar 22 18:10:58 PDT 2016
[ 9.158369] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:1c.1/0000:02:00.0/sound/card1/input5
[ 9.158467] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:1c.1/0000:02:00.0/sound/card1/input6
[ 9.712317] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 361.42 Tue Mar 22 17:29:54 PDT 2016
[ 9.743507] nvidia-uvm: Loaded the UVM driver in lite mode, major device number 246
[ 10.459331] NVRM: RmInitAdapter failed! (0x25:0x40:1170)
[ 10.459458] NVRM: rm_init_adapter failed for device bearing minor number 0
[ 12.573279] NVRM: RmInitAdapter failed! (0x25:0x40:1170)
[ 12.573503] NVRM: rm_init_adapter failed for device bearing minor number 0
[ 1079.558553] NVRM: RmInitAdapter failed! (0x25:0x40:1170)
[ 1079.558827] NVRM: rm_init_adapter failed for device bearing minor number 0
At this point I am totally stuck, and would appreciate any suggestions anyone might have.
If I were able to get UEFI working (I could possibly get firmware from a different GT 720 vendor), would that help? Since that is the recommended mode, is it more reliable? Would it even work on my older non-UEFI server?
I'll take any suggestions I can get at this point.
Much obliged,
Matt