Search results

  1. vGPU on Proxmox

    This is in the release notes for v5.3: vGPU/MDev and PCI passthrough. GUI for configuring PCI passthrough, which also enables the use of vGPUs (aka mediated devices) like Intel KVMGT (aka GVT-g) or Nvidia's vGPUs. Has anyone tried to share a vGPU on Proxmox? I'm particularly interested in having a few KVMs...
  2. PVE 5 Windows guest VM slow to start

    Yes, I've installed the balloon service, and memory usage is much lower and accurately reported on the host, matching the memory usage shown in the Windows OS performance monitor. VM startup is also much faster due to more memory being available to the host now.
  3. Slow Windows Server 2012 R2 guests (becoming faster during interactive use)

    After a few days of testing on the older PVE 4.4, I think I may have found out why Windows guest VM boot time is so long on PVE 5. The discussion and possible solution are in this thread: https://forum.proxmox.com/threads/pve-5-windows-guest-vm-slow-to-start.50924/#post-238931
  4. PVE 5 Windows guest VM slow to start

    I think I may have resolved the slow boot-up issue by unchecking the default balloon option in the memory settings. Boot-up seemed much faster without ballooning.
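
    If ballooning turns out to be the culprit, the same change can be made from the CLI; a minimal sketch, assuming a VM with ID 100 (substitute your own VMID):

    ```shell
    # Disable the balloon device for VM 100.
    # Setting balloon to 0 turns the device off, giving the guest
    # a fixed memory size instead of a dynamic one.
    qm set 100 --balloon 0
    ```

    This is equivalent to a `balloon: 0` line in the VM's config file (`/etc/pve/qemu-server/100.conf`).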
  5. PVE 5 Windows guest VM slow to start

    Hi Dominik, is unchecking ballooning in v5 the same as the pre-v4.4 "use fixed sized memory" option?
  6. PVE 5 Windows guest VM slow to start

    Now I suspect the default memory ballooning checkbox that was introduced in v4.4 may be the cause of this issue.
  7. What I would like to see in Proxmox

    Bring back the storage summary page showing the 'available' disk space remaining. It disappeared somewhere around PVE 4.4.
  8. Slow Windows Server 2012 R2 guests (becoming faster during interactive use)

    I suspected it's a bug in PVE 5, but no one else seemed to have similar issues. :( I will try to install a new host with PVE 4 to see if the Windows guest VM runs faster. I already have a few hosts on PVE 4 and some on PVE 5, and only the KVMs on PVE 5 show slow boot. Moreover, the hosts' hardware is...
  9. What I would like to see in Proxmox

    Support for vGPU for sharing a GPU, just like vCPU for sharing a CPU. Release notes: https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.3 vGPU/MDev and PCI passthrough. GUI for configuring PCI passthrough, which also enables the use of vGPUs (aka mediated devices) like Intel KVMGT (aka GVT-g) or Nvidia's...
  10. Slow Windows Server 2012 R2 guests (becoming faster during interactive use)

    I didn't face this slow-boot issue on PVE 4. The VM boots up within a few seconds on PVE 4. On PVE 5, it can take up to a few minutes to boot.
  11. Slow Windows Server 2012 R2 guests (becoming faster during interactive use)

    I'm also having similar problems with a Windows 2012 R2 guest VM migrated from PVE 4 to PVE 5 booting really slowly.
  12. PVE 5 Windows guest VM slow to start

    Hello, I'm running PVE 4 and PVE 5. I noticed 'Windows Server 2012 R2' VMs on PVE 5 are slower to start than on PVE 4. Is anyone else experiencing this problem? I'm using template VMs migrated from PVE 4 to PVE 5. They are fast to boot on PVE 4 but really slow to boot on PVE 5.
  13. Port forwarding same port to multiple VMs using same port as well

    I have limited public IPs and would like to know if it is possible to share one public IP for the same service port in each VM, with each VM having a different private IP address.
  14. Port forwarding same port to multiple VMs using same port as well

    Hi, is it possible for the Proxmox host to use iptables to share a single public IP and redirect incoming traffic to multiple VMs using the same port number as well? I know it is possible if the incoming traffic port numbers are different. However, if the incoming port is fixed, how can it be...
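
    One common approach, sketched below with made-up addresses, is DNAT on the host: each VM keeps the same internal port but is exposed on a different external port, since a single IP cannot listen on one port for multiple backends without a layer-7 proxy in front:

    ```shell
    # Assumptions: public interface vmbr0, VMs at 10.0.0.11 and 10.0.0.12,
    # both running a service on internal port 443.
    # Each VM gets its own external port; the internal port stays 443.
    iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8443 \
      -j DNAT --to-destination 10.0.0.11:443
    iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 9443 \
      -j DNAT --to-destination 10.0.0.12:443
    ```

    To keep a single fixed external port for all VMs, a reverse proxy that routes on hostname (e.g. HAProxy or nginx with SNI/Host-based routing) would be needed instead of plain NAT.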
  15. Windows 10 pro VM clone - NIC not updating

    Update: there seems to be a bug in the VirtIO NIC drivers for Win10. There are no issues if you select the default e1000 NIC for Win10.
  16. Cluster over high latency WAN?

    Yes, this helps, thanks. If that is the case due to high latency, could you create multiple sets of clusters, one in each DC, for central management? Any idea if a cluster can support linked-clone offline migration?
  17. Migrating a linked clone doesn't show as linked clone

    Linked clones are linked to a base VM template, so migrating a linked clone across nodes requires the base template to be migrated as well. As I found out, backing up a linked clone with vzdump is possible, but a restore will turn the linked clone into a full clone. Does anyone know if it is possible to...
  18. Cluster over high latency WAN?

    Interesting topic. Is it feasible to set up a cluster across a WAN, just for central management (single web GUI) and offline VM migration across nodes? No need for HA. In particular, does a Proxmox cluster support offline migration of linked clones across nodes, i.e. moving...
  19. Windows 10 pro VM clone - NIC not updating

    Using the latest Proxmox 4.1-13 and facing problems cloning a Win10 Pro VM. It seems the clone's NIC is still using the original/template VM's MAC address instead of being automatically updated to the new clone's MAC address. Using the latest VirtIO drivers (virtio-win-0.1.112). The workaround is to manually remove/re-add the...
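
    The manual remove/re-add workaround can also be done from the CLI; a sketch, assuming VM ID 101 and bridge vmbr0 (both are placeholders for your own setup):

    ```shell
    # Delete the cloned NIC, then re-add it without specifying a MAC;
    # Proxmox generates a fresh MAC address automatically when none is given.
    qm set 101 --delete net0
    qm set 101 --net0 virtio,bridge=vmbr0
    ```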
  20. [SOLVED] No Console Access via noVNC - Unable to create RFB client

    I'm getting this error when accessing noVNC in the PVE 4.0 GUI: "Unable to create RFB client -- ReferenceError: inflator is not defined"
