Hi
EDIT: This post is a bit OT, sorry about that. My real question is: on the Proxmox/KVM side, are there any pointers about getting GPU passthrough working with a monitor connected/x-vga=1? Thanks!
----
Just looking at Proxmox as a hybrid cloud solution, and so far so good. I actually used it a long time ago before migrating onto GCP. It's great to see the progress, and I'm looking forward to setting up a big pre-prod experiment. Everyone keeps telling me I'm mad and should be going with ESXi, but I'm a FOSS/Debian guy, and the only real argument I keep hearing is market share. Puh.
We'll have some AI workloads, and so far we've had success doing CUDA work via PCIe passthrough. It's looking promising; I'm going to play with LXC passthrough later (sketch of what I plan to try below) and will probably ask around to see if anyone else is doing GPU-heavy stuff.
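For later, the LXC route I plan to try is the usual bind-the-device-nodes recipe rather than vfio. A minimal sketch, assuming the standard NVIDIA device major (195 for the GPU nodes); the nvidia-uvm major is dynamic, so the 511 below is an assumption, check ls -l /dev/nvidia* on the host:
Code:
# /etc/pve/lxc/<vmid>.conf (hypothetical container ID)
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 511:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file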
As a side project, I'm looking to do some game streaming from Windows and have hit a snag. I've set up a couple of nodes, each with 2 GPUs, and had the same results on both. We have a 3060, a P40, an M40 and a 1070 here. I'm using KVM/QEMU VMs; an example conf is attached below. I believe the setup ticks most boxes: UEFI (OVMF) BIOS, q35 machine type. I have /proc/cmdline as follows:
Code:
BOOT_IMAGE=/boot/vmlinuz-5.15.102-1-pve root=/dev/mapper/pve-root ro quiet nomodeset video=vesafb:off video=efifb:off
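For completeness, the rest of the passthrough plumbing follows the stock Proxmox recipe. A minimal sketch of where that cmdline and the vfio modules live, assuming a GRUB boot (on a systemd-boot/ZFS install it would be /etc/kernel/cmdline instead):
Code:
# /etc/default/grub (run update-grub after editing)
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset video=vesafb:off video=efifb:off"

# /etc/modules (vfio modules loaded at boot; vfio_virqfd still exists on 5.15)
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd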
Everything is working apart from Sunshine; it seems this is a catch-22 problem around the physical monitor connection, but I'm happy to be enlightened otherwise.
This is a fresh install of 7.4, and I haven't really touched much. From my understanding, the 5.15 kernel introduced a change in how GPU passthrough is handled depending on whether a monitor is hooked up to the GPU.
The only way I've been able to get KVM PCIe GPU passthrough working is without x-vga=1 and with no monitor hooked up to the GPU. This is the case for both Windows guests and Ubuntu 22.04.
With the Ubuntu guest, Sunshine has worked flawlessly, but under Windows 10 it has trouble detecting the GPU, even though the Nvidia drivers install and recognise the GPUs. I've read that Sunshine does need a monitor connected for things to work, but connecting the monitor or setting x-vga=1 prevents the VM from starting (the failing variant is shown after the conf below).
Both cluster nodes have a consumer GPU and an accelerator card, so I've tried a few different combos.
Thank you for taking the time to read this; it's appreciated. Apologies if I've missed any important info or should be posting this somewhere else.
Code:
bios: ovmf
boot: order=ide0;ide2;net0
cores: 4
efidisk0: local-lvm:vm-106-disk-0,efitype=4m,size=4M
hostpci0: 0000:02:00,pcie=1
ide0: local-lvm:vm-106-disk-1,size=32G
ide2: local:iso/Win10_22H2_EnglishInternational_x64.iso,media=cdrom,size=5969910K
machine: pc-q35-7.2
memory: 8192
meta: creation-qemu=7.2.0,ctime=1680374044
name: gd3
net0: e1000=96:D0:8D:E6:33:82,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsihw: virtio-scsi-single
smbios1: uuid=dd9cd5e4-0ff6-4148-8107-80abb890dc85
sockets: 1
vmgenid: 6e92726f-7589-4ccd-87de-7341179060d1
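For reference, the variant that refuses to start differs only in the hostpci0 line (same line as above, with x-vga added):
Code:
hostpci0: 0000:02:00,pcie=1,x-vga=1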