virglrenderer for 3d support

Hi,

AFAIK one must add several new options for Venus to work, according to the docs:

`virtio-gpu-gl,hostmem=8G,blob=true,venus=true`
(the 8G is just an example)

But I tried it here on a test machine (with a Radeon RX 560 on the host) and I always got `error_out_of_host_memory` when running vulkaninfo in the guest.
Since the docs are not very extensive, I'm not sure whether something is missing (Vulkan drivers are installed on host and guest; the kernels should be new enough, and QEMU and Mesa as well...)
 
How did you do it? I still get:
Code:
kvm: -device virtio-gpu-gl,hostmem=8G,blob=true,venus=true: old virglrenderer, blob resources unsupported
So I am not even able to start the VM. Can you show me your QEMU version, so I know it's not a QEMU problem?
 
Ah, that explains it. I'm still on 8 until 9 is final (and Venus is fixed). Maybe you could test DRM native context as well? (Should be `-device virtio-vga-gl,blob=true,context_init=true,hostmem=4G`)
 
That property (context_init) does not exist here; I'll have to check whether there is some compile flag that has to be enabled...
 
It is October 5, 2025, and Proxmox 9.x has been released. Has Proxmox been able to test DRM Native Context successfully, and will full, documented support be made available in a 9.x patch upgrade?

DRM Native Context will be a game changer for Proxmox Workstation.

If virtio-gpu Venus is relatively straightforward to implement right now, is this another option that could be made available to users?
Cheers.
 
I've managed to get Vulkan running in PVE 9:
  • the package virgl-server additionally needs to be installed. See also this Debian bug.
  • /usr/libexec/virgl_render_server needs to be symlinked to /builds/virgl/virglrenderer/install/libexec/virgl_render_server, as this is the default path libvirglrenderer.so expects it at (see `strings /usr/lib/x86_64-linux-gnu/libvirglrenderer.so.1 | grep -i render_server`). There is a RENDER_SERVER_EXEC_PATH environment variable, but I haven't found a quick way to set it via qm.conf (which would be so much less ugly than that symlink).
  • The VM's qm.conf needs to be adapted as follows:
    • args: -device virtio-vga-gl,hostmem=8G,venus=on,blob=on -display egl-headless,gl=on (you might want to use less than 8G here; mine is for LLM experiments with llama.cpp)
    • vga: none
While my host is a bit messy, I think this should be enough to get Vulkan running in the VM (of course you'll need up-to-date Mesa and lib(e)gl and so on). My VM is Ubuntu 25.04, btw.
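The symlink step above can be sketched as a couple of shell commands. The paths are taken from the post; the sketch below runs in a scratch directory so it is safe to try anywhere, with the real host paths noted in the comments:

```shell
# Sketch of the render-server symlink workaround described above.
# On a real PVE 9 host the paths would be:
#   REAL     = /usr/libexec/virgl_render_server            (from virgl-server)
#   EXPECTED = /builds/virgl/virglrenderer/install/libexec/virgl_render_server
ROOT="$(mktemp -d)"                              # stand-in for / on the host
REAL="$ROOT/usr/libexec/virgl_render_server"
EXPECTED="$ROOT/builds/virgl/virglrenderer/install/libexec/virgl_render_server"

mkdir -p "$(dirname "$REAL")" && touch "$REAL"   # pretend virgl-server is installed
mkdir -p "$(dirname "$EXPECTED")"
ln -sfn "$REAL" "$EXPECTED"                      # the actual workaround: one symlink

readlink "$EXPECTED"                             # prints the real binary's path
```

On the real host you would of course run the mkdir/ln against / (as root) after installing virgl-server, rather than in a temp directory.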
 
I've managed to get Vulkan running in PVE 9:
  • the package virgl-server additionally needs to be installed. See also this Debian bug.
  • /usr/libexec/virgl_render_server needs to be symlinked to /builds/virgl/virglrenderer/install/libexec/virgl_render_server, as this is the default path libvirglrenderer.so expects it at (see `strings /usr/lib/x86_64-linux-gnu/libvirglrenderer.so.1 | grep -i render_server`). There is a RENDER_SERVER_EXEC_PATH environment variable, but I haven't found a quick way to set it via qm.conf (which would be so much less ugly than that symlink).
  • The VM's qm.conf needs to be adapted as follows:
    • args: -device virtio-vga-gl,hostmem=8G,venus=on,blob=on -display egl-headless,gl=on (you might want to use less than 8G here; mine is for LLM experiments with llama.cpp)
    • vga: none
While my host is a bit messy, I think this should be enough to get Vulkan running in the VM (of course you'll need up-to-date Mesa and lib(e)gl and so on). My VM is Ubuntu 25.04, btw.
Gave this a try on PVE 9.0.10 with the latest enterprise updates. When I run `strings /usr/lib/x86_64-linux-gnu/libvirglrenderer.so.1 | grep -i render_server`, it shows the correct path of `/usr/libexec/virgl_render_server`, so the symlink is no longer needed.

In the VM (Arch Linux, up to date) with the vulkan-virtio package installed, everything seems to work! vulkaninfo shows the GPU, vkcube works, etc. So all you need to do to get this working is install packages (the Vulkan driver, virgl-server, etc. on the Proxmox host, then the Vulkan driver in an up-to-date VM) and it "just works"!
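For anyone reproducing this, a minimal guest-side sanity check might look like the following (hedged: package names vary by distro; vulkaninfo comes from vulkan-tools on Debian/Ubuntu, while vulkan-virtio is Arch's Venus driver package):

```shell
# Quick guest-side check: is a Venus Vulkan device visible?
VENUS_OK=0
if command -v vulkaninfo >/dev/null 2>&1 \
   && vulkaninfo --summary 2>/dev/null | grep -qi venus; then
    VENUS_OK=1
fi
echo "venus device visible: $VENUS_OK"   # 1 on a working Venus setup
ls /dev/dri 2>/dev/null || true          # expect card0/renderD128 nodes in the guest
```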
 
Does anyone here know of a way to debug this, on either the host or the guest?

I have changed my qm.conf file to use virtio-vga-gl with Venus, and the VM does boot and I can access the guest remotely. However, when I try to determine which graphics driver is in use, it still reports llvmpipe (so it is still using the CPU renderer).

EDIT: the issue appears to be caused by gnome-remote-desktop, which doesn't appear to pick up the GPU. Very weird.
 
Does anyone here know of a way to debug this, on either the host or the guest?

I have changed my qm.conf file to use virtio-vga-gl with Venus, and the VM does boot and I can access the guest remotely. However, when I try to determine which graphics driver is in use, it still reports llvmpipe (so it is still using the CPU renderer).

EDIT: the issue appears to be caused by gnome-remote-desktop, which doesn't appear to pick up the GPU. Very weird.
Actually I had the same issue with Pop!_OS inside Proxmox; don't ask me why I chose this distro, I should have just gone with Debian server lol...
But you should verify the permissions and make sure your current user has access to these groups. That's what did it for me.

Code:
sudo adduser $USER render
sudo adduser $USER video  # Add this too, just in case
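To verify the group change took effect, something like this should do (a sketch; "render" and "video" are the usual group names on Debian-family distros, and you typically need to log out and back in before new group memberships apply):

```shell
# List the current user's groups; "render" (and "video") should appear
# after re-logging in.
GROUPS_NOW="$(id -nG)"
echo "$GROUPS_NOW"
case " $GROUPS_NOW " in
    *" render "*) echo "render: yes" ;;
    *)            echo "render: no (re-login after adduser?)" ;;
esac
# ls -l /dev/dri/renderD128   # shows which group owns the render node
```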

Here is my config with 64G of RAM for the GPU.
By the way, lemonade-server and AI workloads run just as well as if they weren't behind a hypervisor.
It's only missing one feature.

Code:
ggml_vulkan: 0 = Virtio-GPU Venus (AMD Radeon Graphics (RADV GFX1151)) (venus) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: none

Code:
args: -device virtio-vga-gl,blob=true,hostmem=64G,venus=true -display egl-headless,gl=on -object memory-backend-memfd,id=mem1,size=64G -machine memory-backend=mem1
balloon: 8192
boot: order=scsi0;ide2;net0
cores: 4
cpu: x86-64-v2-AES
hugepages: 0
ide2: local:iso/pop-os_22.04.iso,media=cdrom,size=2624M
keephugepages: 0
machine: q35
memory: 65536
meta: creation-qemu=10.1.2,ctime=1763248587
net0: virtio=BC:24:11:C5:BB:20,bridge=vmbr1
numa: 0
ostype: l26
scsi0: local-lvm:vm-100-disk-1,iothread=1,size=384G
scsihw: virtio-scsi-single
smbios1: uuid=3dd09f26-b080-4b20-9abe-b0158816b3d0
sockets: 1
vga: none
vmgenid: 4a506f2c-f08d-4667-baa1-95709a0aee25
 
Code:
Nov 26 05:08:53 pve QEMU[1433652]: EGL is not supported on this platform
Nov 26 05:08:53 pve QEMU[1433652]: failed to initialize vrend winsyskvm: virgl could not be initialized: -1

What does this mean? I used the line `-device virtio-vga-gl,hostmem=8G,venus=on,blob=on -display egl-headless,gl=on`.
Also, do I need to start virgl_test_server manually?
 
Code:
Nov 26 05:08:53 pve QEMU[1433652]: EGL is not supported on this platform
Nov 26 05:08:53 pve QEMU[1433652]: failed to initialize vrend winsyskvm: virgl could not be initialized: -1

What does this mean? I used the line `-device virtio-vga-gl,hostmem=8G,venus=on,blob=on -display egl-headless,gl=on`.
Also, do I need to start virgl_test_server manually?
Are you trying to run a Windows guest? Please provide the entire config.
 
Code:
agent: 1
args: -device virtio-vga-gl,blob=true,hostmem=16G,venus=true -display egl-headless,gl=on -object memory-backend-memfd,id=mem1,size=16G -machine memory-backend=mem1
bios: ovmf
boot: order=scsi0;net0
cores: 16
cpu: host
efidisk0: zpool-nfs:110/vm-110-disk-0.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
memory: 16384
meta: creation-qemu=8.1.5,ctime=1718311224
name: archlinux
net0: virtio=BC:24:11:13:F9:49,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: zpool-nfs:110/vm-110-disk-1.qcow2,discard=on,iothread=1,size=256G
scsihw: virtio-scsi-single
smbios1: uuid=5a595ab9-72a4-46e1-a395-06135be6c769
sockets: 1
vga: none

No, the guest is Arch Linux.
 
OK, sorry. The issue was me. I had an old self-compiled version of libvirglrenderer.so in /usr/local/share. Removed it, and now it works.
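For anyone hitting the same "virgl could not be initialized" error, a hedged sketch for hunting down shadowing copies of the library (the directory list is just a set of common install prefixes; adjust it for your layout):

```shell
# Search common library prefixes for copies of libvirglrenderer; more than
# one hit usually means a stale self-built copy is shadowing the distro one.
DIRS="/usr/local/lib /usr/local/share /usr/lib /usr/lib/x86_64-linux-gnu"
found=0
for dir in $DIRS; do
    [ -d "$dir" ] || continue
    for f in $(find "$dir" -name 'libvirglrenderer.so*' 2>/dev/null); do
        echo "found: $f"
        found=$((found + 1))
    done
done
echo "copies found: $found"
```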