Redirecting the Proxmox KVM console to a TightVNC server running on a VM with GPU passthrough

serhiy1

Oct 31, 2020
Hi guys,

I have a VM that has an RTX5000 passed through to it.

I understand that when you pass through a GPU to a VM, the integrated Proxmox KVM console no longer works. But I was wondering if I could work around this limitation by telling Proxmox to connect to a TightVNC server running on the VM itself.

I've tried following the guide found here: https://pve.proxmox.com/wiki/VNC_Client_Access, but I get the following error message:
Code:
Error: VNC display not active

This seems to happen no matter what IP and port settings I put in, so I'm wondering if this is just a hard check for GPU passthrough that stops this from working.
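
For reference, as far as I understand that page, its method boils down to adding an extra -vnc argument to the VM config, e.g. (the display number is just an example I tried):
Code:
# /etc/pve/qemu-server/<vmid>.conf (display number is only an example)
args: -vnc 0.0.0.0:77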

I want this to work mainly for convenience, since I will have 4 other VMs in an identical configuration, and managing 5 instances of remote desktop is a bit of a nightmare.
 
No, this is not possible. Our GUI connects only to the VNC instance started by QEMU, and if you enable GPU passthrough (well, if you disable "VGA"; you can *theoretically* keep it enabled and it might do something, but be prepared for lots of weirdness), then QEMU can't show you an image anymore. That is not changeable either: QEMU simply cannot access the GPU when it is passed through, the IOMMU forbids it, and allowing it would completely break the isolation model of the VM.

I've tried following the guide found here: https://pve.proxmox.com/wiki/VNC_Client_Access, but I get the following error message:
The guide is for connecting an external VNC client to the QEMU provided server, but as I said, the problem is that this server can't work for passthrough, so no client can save you.

You can of course install a regular VNC server on your VM and then use TightVNC or whatever to connect, but it won't work via the PVE GUI.
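
For example, on a Debian/Ubuntu guest that could look roughly like this (package name and display number are only an example; on a Windows guest you would install the TightVNC server service instead):
Code:
# inside the guest (example only):
apt install tightvncserver
vncserver :1                     # listens on TCP port 5901
# from your workstation, bypassing the PVE GUI entirely:
vncviewer <vm-ip>:1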
 
@Stefan_R

Totally understand that this is impossible for GPU passthrough.

Is there a way I can see a vGPU virtual display in Proxmox? This is supported by QEMU and can be configured in libvirt, for example.

Currently, when I set up an NVIDIA mdev and disable VGA, I don't see any console output.

In libvirt, for reference, it can be configured like so:

Ensure the mediated device's XML configuration includes the display='on' parameter. For example:
Code:
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'>
  <source>
    <address uuid='ba26a3e2-8e1e-4f39-9de7-b26bd210268a'/>
  </source>
</hostdev>
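
For context, assigning the mediated device itself already works on the PVE side; my VM config looks roughly like this (PCI address and mdev type are placeholders), it is only the display part that seems to be missing:
Code:
# /etc/pve/qemu-server/<vmid>.conf (relevant lines only; address and type are placeholders)
hostpci0: 0000:01:00.0,mdev=nvidia-63
vga: none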
 
That is currently not supported in PVE. If QEMU plays nicely it should be doable though, so if you want you can open an enhancement request on our bugtracker.

If you're prepared to do some experimenting yourself, you can try to do it manually and see what happens. Apply the following (entirely untested, written off the top of my head) patch to /usr/share/perl5/PVE/QemuServer/PCI.pm:

Code:
diff --git a/PCI.pm b/PCI.pm
index 2ee142f..93de065 100644
--- a/PCI.pm
+++ b/PCI.pm
@@ -451,6 +451,8 @@ sub print_hostpci_devices {
         $devicestr .= ",multifunction=on" if $multifunction;
         $devicestr .= ",romfile=/usr/share/kvm/$d->{romfile}" if $d->{romfile};
         $devicestr .= ",bootindex=$bootorder->{$id}" if $bootorder->{$id};
+        $devicestr .= ",display=on" if $d->{mdev};
+        $devicestr .= ",ramfb=on" if $d->{mdev}; # optional
        }
 
        push @$devices, '-device', $devicestr;

...and restart pveproxy.service and pvedaemon.service.

Then start the VM and see what happens. No guarantees, don't use in production, this is just what I found after quickly looking at how libvirt and QEMU do what they do.
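
If the patch applies, the mdev hostpci entry should end up producing a device argument roughly like this (the UUID is a placeholder and the remaining device properties are omitted; ramfb may only be accepted by the vfio-pci-nohotplug variant, so be prepared to drop it):
Code:
# roughly what the patched print_hostpci_devices() should emit for an mdev entry
# (UUID is a placeholder, other properties omitted):
-device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<uuid>,id=hostpci0,display=on,ramfb=on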
 

Thanks a lot! I remember I was already fighting with this, but my knowledge of the PVE source code is very limited :).

I just tried it, and it is almost working.
Two comments:
- ramfb did not work, so I had to delete that line.
- One more change is required to make it work: removing the -nographic parameter (or making it configurable) from the QEMU command line. I started my VM manually via SSH by running /usr/bin/kvm directly with all parameters, including display=on and without -nographic, and I do see my vGPU display in the Proxmox GUI! (See the sketch below for one way to reproduce that manual run.)
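
Roughly, one way to reproduce that manual run without patching anything (VMID is a placeholder; the showcmd output may need minor quoting tweaks before it runs):
Code:
# dump the command line PVE would generate for the VM:
qm showcmd <vmid> > /tmp/kvm-cmd.sh
# edit /tmp/kvm-cmd.sh by hand: delete "-nographic" and append ",display=on"
# to the vfio-pci mdev device entry, then start the VM manually:
bash /tmp/kvm-cmd.sh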

P.S. I now recall that I had already created an enhancement request, and I found it: https://bugzilla.proxmox.com/show_bug.cgi?id=2959
However, I am not sure if I laid it out properly.
 
@Stefan_R: this is what I mean by removing the -nographic parameter:
Code:
diff --git a/./QemuServer.pm b/usr/share/perl5/PVE/QemuServer.pm
index 43b11c3..3b81836 100644
--- a/./QemuServer.pm
+++ b/usr/share/perl5/PVE/QemuServer.pm
@@ -3320,7 +3320,6 @@ sub config_to_command {
        push @$cmd,  '-vnc', "unix:$socket,password";
     } else {
        push @$cmd, '-vga', 'none' if $vga->{type} eq 'none';
-       push @$cmd, '-nographic';
     }

     # time drift fix

Now the PVE console display is working with every VM with a vGPU!
 
Hi, may I know where you inserted this? I have the same problem. I already finished configuring the vGPU unlock and installed an NVIDIA Tesla T4 in my VM; it appears in Device Manager, but my display settings show no GPU, only the default display, so when I used Moonlight there was an error. Would it work if the display were on?
 
You need to edit two files of the Proxmox code: /usr/share/perl5/PVE/QemuServer.pm (remove the push @$cmd, '-nographic'; line) and /usr/share/perl5/PVE/QemuServer/PCI.pm (add $devicestr .= ",display=on" if $d->{mdev}; so that Proxmox adds this parameter for your mdev device).

Edit both and restart the PVE daemons (rough sketch below); in theory it should still work on PVE 8. In theory, because I have not tried it on version 8 yet: I only ran a VM via the command line with the required parameters (-nographic absent and display=on added for the mdev device) and the VM's screen showed up in the GUI.

Reminder: this is all very much experimental and definitely not a supported way to run VMs.
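
Roughly, the steps after editing (VMID is a placeholder; qm showcmd only prints what PVE would run, it does not start the VM):
Code:
systemctl restart pvedaemon.service pveproxy.service
# check the command line PVE will now generate for the VM:
qm showcmd <vmid> --pretty | grep -E 'nographic|display=on'
# then stop and start the VM so the new command line is actually used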
 
Another note: this works for a vGPU setup with NVIDIA drivers installed on both the host and the guest. In passthrough mode it would just be a GPU attached directly to the guest, exactly like any other GPU, so you would need to use physical display outputs, which the T4, as far as I know, does not have.
 
