This is in the release notes for v5.3:
vGPU/MDev and PCI passthrough. GUI for configuring PCI passthrough, which also enables the use of vGPUs (aka mediated devices) like Intel KVMGT (aka GVT-g) or Nvidia's vGPUs.
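For reference, a minimal sketch of what the resulting VM config can look like once a device is mapped; the PCI addresses, the VM ID 101 and the mdev type name are assumptions (available mdev types depend on the GPU and are listed under /sys/bus/pci/devices/<address>/mdev_supported_types):

# /etc/pve/qemu-server/101.conf -- plain PCI passthrough of an assumed GPU at 01:00.0
hostpci0: 01:00.0,pcie=1
# or a mediated device (vGPU) slice of an assumed Intel iGPU at 00:02.0
hostpci0: 00:02.0,mdev=i915-GVTg_V5_4
# the same can be set from the host CLI, e.g.:
# qm set 101 -hostpci0 00:02.0,mdev=i915-GVTg_V5_4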
Has anyone tried sharing a vGPU on Proxmox? I'm particularly interested in having a few KVMs...
Yes, I've installed the balloon service, and memory usage is now much lower and accurately reported on the host, matching what the Windows performance monitor shows in the guest. VM startup is also much faster because more memory is available to the host now.
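As a minimal sketch, ballooning can also be adjusted per VM from the host CLI; the VM ID 101 and the memory sizes below are assumptions:

qm set 101 --memory 4096 --balloon 1024   # allow the guest to balloon down to 1 GiB
qm set 101 --balloon 0                    # disable ballooning entirely for this VM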
After a few days of testing on the older PVE 4.4, I think I may have found out why Windows guest VM boot time is so long on PVE 5. The discussion and a possible solution are in this thread:
https://forum.proxmox.com/threads/pve-5-windows-guest-vm-slow-to-start.50924/#post-238931
I think I may have resolved the slow boot-up issue by unchecking the default ballooning option in the memory settings.
Boot-up seems much faster without ballooning.
I suspect it's a bug in PVE 5, but no one else seems to have similar issues. :(
I will try to install a new host with PVE 4 to see if the Windows guest VM runs faster. I already have a few hosts on PVE 4 and some on PVE 5, and only the KVMs on PVE 5 show the slow boot. Moreover, the hosts' hardware is...
Support for vGPU, for sharing a GPU just like vCPUs share the CPU.
Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.3
vGPU/MDev and PCI passthrough. GUI for configuring PCI passthrough and also enables the use of vGPUs (aka mediated devices) like Intel KVMGT (aka GVT-g) or Nvidia's...
Hello,
I'm running PVE 4 and PVE 5. I noticed that 'Windows Server 2012 R2' VMs on PVE 5 are slower to start than on PVE 4. Is anyone else experiencing this problem?
I'm using template VMs migrated from PVE 4 to PVE 5. They boot quickly on PVE 4 but really slowly on PVE 5.
I have a limited number of public IPs and would like to know whether it is possible to share one public IP for the same service port across several VMs with different private IP addresses.
Hi,
Is it possible for the Proxmox host to use iptables to share a single public IP and redirect incoming traffic to multiple VMs on the same port number as well? I know it is possible if the incoming port numbers are different. However, if the incoming port is fixed, how can it be...
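A minimal sketch of the case that does work (different external ports, same guest port), assuming the public interface is vmbr0 and two hypothetical VMs at 10.10.10.11 and 10.10.10.12 both serving port 443:

# forward public port 8443 to VM1's 443, and 9443 to VM2's 443 (assumed addresses)
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8443 -j DNAT --to-destination 10.10.10.11:443
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 9443 -j DNAT --to-destination 10.10.10.12:443
# allow the forwarded traffic through
iptables -A FORWARD -p tcp -d 10.10.10.11 --dport 443 -j ACCEPT
iptables -A FORWARD -p tcp -d 10.10.10.12 --dport 443 -j ACCEPT

With a single fixed incoming port, iptables alone has no way to choose a destination VM, so that case generally needs something protocol-aware (e.g. a reverse proxy) in front of the VMs.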
Yes, this helps, thanks. If that is the case due to high latency, could you create multiple clusters, one in each DC, for central management?
Any idea whether the cluster supports offline migration of linked clones?
Linked clones are linked to a base VM template, so migrating a linked clone across nodes requires the base template to be migrated as well. As I found out, backing up a linked clone with vzdump is possible, but a restore will turn the linked clone into a full clone.
Does anyone know if it is possible to...
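For context, a minimal sketch of the difference from the host CLI; the template ID 9000 and new VM IDs are assumptions:

# linked clone: requires a template and a storage that supports it, stays tied to the base image
qm clone 9000 123 --name win2012-linked
# full clone: an independent copy that can be moved or backed up on its own
qm clone 9000 124 --name win2012-full --full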
Interesting topic under discussion. Is it feasible to set up a cluster across a WAN, just for central management (a single web GUI) and offline VM migration across nodes? No need for HA.
In particular, does a Proxmox cluster support offline migration of 'linked clones' across nodes, i.e. moving...
Using the latest Proxmox 4.1-13 and facing problems cloning a Win10 Pro VM. It seems the NIC MAC address still uses the original/template VM's address and is not automatically updated to a new MAC for the cloned VM. Using the latest VirtIO drivers (virtio-win-0.1.112).
The workaround is to manually remove/re-add the...
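A minimal sketch of that remove/re-add workaround from the host CLI; the VM ID 123 and the bridge name are assumptions (omitting macaddr makes PVE generate a fresh MAC):

qm set 123 --delete net0
qm set 123 --net0 virtio,bridge=vmbr0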