Search results

  1. PVE 4.0 Beta2 + qemu + acpi

    Same issue on my DTAP cluster (Dell R730s).
  2. Windows Clients Freeze on Proxmox 3.4-6

    Please use the pve-kernel-3.10.0 kernel; it fixed my problems.
  3. official full encryption support

    Hi, what if you install just a clean Debian 7 OS and turn on encryption? Does that work? Regards, MeyRNL
  4. Windows 2008 R2 + HOST CPU = BSOD

    Have you tried with kernel 3.10.0-8-pve or 3.10.0-9-pve (updated yesterday)?
  5. Live migration of VM network issue

    Hi Udo, thank you for your reply. I've tried with the predecessor kernels of the 2.6 and 3.10 series, and unfortunately it is not working. I am not able to downgrade, because then I would have to reboot all of my customers' VMs (>150). Kind regards, MeyRNL
  6. web interface not working?

    Hi Rrenno1, please run the following on your hypervisor:
    # pvecm updatecerts
    # /etc/init.d/pveproxy restart
    Does this solve your problem? Kind regards, MeyRNL
  7. Live migration of VM network issue

    Is there no one from Proxmox who can help us with this issue?
  8. Very high load of the node

    How do you want us to measure the load then? In my opinion, this difference in the Unix 'load' alone already says something is not right.
  9. Multiple disk restore

    Hi Tom, unfortunately it is not possible to select which drives need to be restored, for example everything except the OS drive. When configuring the hard drives for the VM, you can choose the option to disable backups for that specific drive. Kind regards, MeyRNL
  10. Backup of larger VM, second sync takes too long time

    Hi Postcd, does your current setup allow using LVM2 for backups? LVM2 snapshots are zero-downtime. If not, I think the reason your system is down for so long is that it needs to recalculate how many files need to be checked (especially for VMs with a lot of files in the journal). Please...
  11. Very high load of the node

    The results after migrating VMs of hypervisor07 to hypervisor02, and vice versa:
    hypervisor02 load average: 6.37, 7.22, 5.77 <- running kernel pve-kernel-3.10.0-8-pve - VMs: 32
    hypervisor07 load average: 0.68, 1.02, 0.26 <- running kernel pve-kernel-2.6.32-37-pve - VMs: 31
    It's evident...
  12. Live migration of VM network issue

    I did several tests changing the Linux bridge options bridge_stp off and bridge_fd 0, but it still does not work. A tcpdump shows that the VM sends out an ARP correctly, but it simply doesn't come through. Now I have to do an arping after every migration, and even this was not needed before version 3.4.
  13. Very high load of the node

    I am using proxmox-ve-2.6.32: 3.4-150 (running kernel: 3.10.0-8-pve). Below is a list of all our hypervisors:
    hypervisor01 load average: 7.90, 6.79, 7.84 - VMs: 35
    hypervisor02 load average: 3.37, 4.06, 4.23 - VMs: 31
    hypervisor03 load average: 5.22, 5.32, 5.74 - VMs: 27
    hypervisor04 load...
  14. [HOW-TO] Separate migration network - dirty fix

    Hi all, on the forum I saw topics asking how to let a Proxmox cluster use a different network for migration traffic. For all my clusters I made a code change in QemuServer.pm to change the (unfortunately hard-coded) listening IP of the migration task. Currently I have 5...
  15. Windows 2012 R2 RDP slowness

    We have a similar setup and are experiencing the same issues. One of our most important issues is this: http://forum.proxmox.com/threads/21139-Live-migration-of-VM-network-issue. Currently we have ~70 Windows instances running; 20 of them were migrated to 3 hypervisors running version 3.3, and the issues...
  16. Live migration of VM network issue

    Update on this: I reinstalled 3 hypervisors with version 3.3 (a06c9f73) and the problem does NOT exist between those reinstalled hypervisors. It is definitely related to the latest release, so please check it.
  17. Live migration of VM network issue

    Hi all, since the upgrade to version 3.4 I have been experiencing serious problems with the (live) migration of my VMs to hypervisors in the same cluster. At the moment of the hypervisor switch-over, the VM's network becomes unavailable and does not reply to any ICMP packets. After...
  18. Proxmox VE 3.4 released!

    After upgrading all my hypervisors to version 3.4 I am not able to do any migrations, because the virtual server becomes unavailable and in some cases completely freezes! First I thought it had something to do with the ARP time-out, but this is not the case. Before the upgrade, migrating was...
  19. How does a migration affect a VM's network connection?

    I have the same issue after upgrading all hypervisors to version 3.4. The “Here I am” stage of the QEMU migration process appears not to be triggered. Before the upgrade the functionality worked fine without any problems. Could this be related to the - E1000/disconnected -...
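
The certificate and proxy fix suggested in result 6 can be run as a short root session on the affected node. The commands are exactly those quoted in the post; they assume a PVE 3.x node, where pveproxy is still managed by a SysV init script (newer releases use systemd units instead):

```shell
# Run as root on the affected hypervisor.
# Regenerate the cluster SSL certificates used by the web interface.
pvecm updatecerts

# Restart the web/API proxy so it serves the regenerated certificates.
/etc/init.d/pveproxy restart
```

If the web interface is still unreachable afterwards, checking that port 8006 is listening (e.g. with `netstat -tlnp`) helps distinguish a certificate problem from a proxy that failed to start.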
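
The zero-downtime LVM2 approach mentioned in result 10 can be sketched as a snapshot-then-copy sequence. The volume group, logical volume name, snapshot size, and backup path below are illustrative assumptions, not values taken from the thread:

```shell
# Assumption: the VM disk is the logical volume /dev/vg0/vm-100-disk-1.
# Create a copy-on-write snapshot; the guest keeps running while we back up.
lvcreate --snapshot --size 1G --name vm-100-snap /dev/vg0/vm-100-disk-1

# Stream the frozen snapshot to a compressed raw image.
dd if=/dev/vg0/vm-100-snap bs=4M | gzip > /backup/vm-100.raw.gz

# Remove the snapshot so its copy-on-write space is released.
lvremove -f /dev/vg0/vm-100-snap
```

The snapshot size only needs to hold the blocks the guest rewrites while the copy runs; if it fills up, LVM invalidates the snapshot, so size it generously for busy VMs.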
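
The manual arping workaround described in results 12 and 17 amounts to sending gratuitous ARP from the guest after migration, so that upstream switches and neighbors relearn which port the VM's MAC address now lives behind. The interface name and address below are placeholders, and this uses the iputils arping tool:

```shell
# Assumption: eth0 / 192.0.2.10 are the guest's interface and IP address.
# -A sends unsolicited ARP replies (gratuitous ARP); -c 3 sends three of them.
arping -c 3 -A -I eth0 192.0.2.10
```

Running this from inside the guest (or from a post-migration hook) is only a workaround; as the thread notes, QEMU is expected to announce the new location itself during the migration handover.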
