No, unfortunately VMs with a vGPU attached cannot be live migrated; you have to shut them down to migrate them. Someone can correct me if I'm wrong, but my understanding is that there's currently no way to live migrate the memory occupied on the GPU over to the new host.
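For anyone who lands here later, the workaround is just an offline migration. A rough sketch with the stock qm/pvesh CLI (the VM ID 101 and target node pve2 are placeholders, not from this thread):

qm shutdown 101                                  # clean shutdown releases the vGPU
qm migrate 101 pve2                              # offline migration is allowed once the VM is stopped
pvesh create /nodes/pve2/qemu/101/status/start   # start it on the target node (or just use the web UI)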
Not overly concerned about the licensing and cost. I need something that's commercially supported and easy to use for the end user. I'm migrating away from the VMware Horizon View platform and already have a budget for VDI.
Is Deskpool still around? They don't seem to be responding to queries.
I'm testing UDS from Virtual Cable right now. It seems to work well with Proxmox, but the UI could use a serious overhaul.
I figured this out. It was a case of staring at the problem for an entire day and not seeing the obvious solution: I hadn't created the resource mappings. Everything else was correct.
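For anyone else who gets stuck on the same thing: the mappings live under Datacenter -> Resource Mappings in the web UI. Roughly the equivalent pvesh call, from memory - the mapping name, node, PCI address, and vendor:device ID below are invented placeholders, so check pvesh help on your version:

pvesh create /cluster/mapping/pci --id vdi-gpu \
  --map node=pve1,path=0000:41:00.0,id=10de:2235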
Hey all,
This is a long post. Hopefully I'm giving enough information to go on.
I'm trying to complete a trial of a VDI solution that we're hoping to migrate to from VMware Horizon View. The software has support for Proxmox, which we've used on our primary virtualization cluster for a few...
I posted this over on /r/Proxmox as well.
The organization where I work as Director of IT is heavily invested in VMware Horizon View - we have 14 Dell servers, each with dual EPYC CPUs and 3 datacenter-class Nvidia GPUs carrying GRID vGPU licenses. We host, on average, 200...
Getting the following error while attempting to renew a Let's Encrypt cert using CloudFlare DNS verification:
Loading ACME account details
Placing ACME order
Order URL: https://acme-v02.api.letsencrypt.org/acme/order/****/****
Getting authorization details from...
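For reference, here's roughly how the cert was set up and the renewal triggered, using the stock pvenode tool (the plugin ID, credentials file, and domain below are placeholders, not my exact config):

pvenode acme plugin add dns cloudflare --api cf --data /root/cf-creds.txt   # Cloudflare DNS plugin
pvenode config set --acme domains=pve.example.com                           # domain on the node cert
pvenode acme cert order --force                                             # this is the step that fails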
It's definitely a bug - not in Proxmox, but in QEMU itself. I've been running mine on the aforementioned pve-qemu-kvm 6.0.0-4 package without a single problem.
This might help:
https://gitlab.com/qemu-project/qemu/-/issues/649
I ran into the same I/O errors on an iSCSI-backed Proxmox deployment, but my Fibre Channel one seems unaffected. Downgrading to QEMU 6.0.0 (pve-qemu-kvm 6.0.0-4) resolves the issue without having to change from virtio...
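If anyone needs the exact steps, the downgrade is just a pinned apt install plus a hold; a quick sketch, assuming 6.0.0-4 is still available from your configured Proxmox repo or apt cache:

apt install pve-qemu-kvm=6.0.0-4   # pull in the older build
apt-mark hold pve-qemu-kvm         # keep upgrades from pulling 6.1 back in
# apt-mark unhold pve-qemu-kvm     # release the hold once the upstream fix lands

Keep in mind a running VM stays on whatever QEMU binary it booted with, so each guest needs a stop/start (or a migration) to actually pick up the downgraded version.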
Incidentally, I'm having the same issue, but only on one of my Proxmox clusters.
In my home lab, where the problem exists, I'm running two PVE 7 hosts and one PBS 1.1 host (haven't upgraded it yet). The PVE hosts' VM and container storage is connected via LVM over iSCSI on 10GbE. I have about 9 QEMU...