VM freezes for a few minutes after migration and gets time offset

ScOut3R

Member
Oct 2, 2013
Hi,

I experience VM lockups after live migration. These lockups usually last for 2-5 minutes with 100% CPU usage, the VM is not responding, and the network and disk graphs show a 200-400 petabyte spike. After the lockup the clock inside the VM is off by 16 to 360 seconds. The VMs are Ubuntus with 3.8, 3.11 and 3.13 kernels or Debians with 3.10 and 3.14 kernels. It's a bit hard to reproduce on demand because it seems to happen randomly, but at least 50% of the migrations end up like this. I've been experiencing this problem since 3.2; with 3.1 running 2.6.32 I didn't have this lockup problem. I moved to the 3.10 host kernel because I have a few FreeBSD VMs which weren't able to boot on a host with the 2.6.32 kernel.
The VMs use the kvm64 CPU type and the hosts have identical CPUs. I have frequency scaling enabled, but I've tested the migration with scaling disabled and the issue was still present. Do you have any suggestions I should try?
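For reference, the clocksource a guest is actually using can be checked (and switched for testing) through sysfs; the paths below should be the same on these 3.x guest kernels:

Code:
# Inside the guest: which clocksource is currently driving the clock
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# Which clocksources the guest kernel offers (kvm-clock is the expected one under KVM)
cat /sys/devices/system/clocksource/clocksource0/available_clocksource

# Switch to kvm-clock for testing only (not persistent across reboots)
echo kvm-clock > /sys/devices/system/clocksource/clocksource0/current_clocksource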

Best regards,
Mate

Code:
proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-4-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-3.10.0-4-pve: 3.10.0-17
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-3.10.0-5-pve: 3.10.0-19
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-3.10.0-2-pve: 3.10.0-10
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-3.10.0-3-pve: 3.10.0-11
pve-kernel-2.6.32-31-pve: 2.6.32-132
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
 
Upgrade your system:

# vim /etc/apt/sources.list
add:
deb http://download.proxmox.com/debian wheezy pvetest

# apt-get update && apt-get -y dist-upgrade && apt-get install -y pve-kernel-3.10.0-7-pve

Reboot your node into this kernel if you don't use OpenVZ.

Thanks.
 
I suggest you test with the latest pve-qemu-kvm from the pvetest repository.

I could upgrade one of the five production nodes. Is it possible to downgrade back to stable using a similar method? Even if the testing goes fine I would like to downgrade back to stable (we have the community subscription), knowing that the fix will make its way into the stable repo someday.
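In case it helps, a single package can usually be rolled back by installing an explicit older version that is still available from the stable repo; a rough sketch (the version string below is only a placeholder for whatever apt-cache policy actually reports):

Code:
# Show which versions of pve-qemu-kvm apt knows about and which repo they come from
apt-cache policy pve-qemu-kvm

# Install a specific older version explicitly (placeholder version string)
apt-get install pve-qemu-kvm=2.1-10

# Afterwards drop the pvetest line from /etc/apt/sources.list and refresh
apt-get update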
 
I'm seeing a similar issue with very high CPU usage in userland by the KVM process at boot, shutdown and resume operations after running apt-get upgrade against the pvetest repo. I am still on kernel 2.6.32-37 though, not the 3.10 one. Also, apt-get upgrade held back some packages, among them some qemu/kvm things, but I assumed they might have been from the standard Debian repo and that it was okay to hold them back rather than overwrite the similar pve packages.

Wondering if it is the pve-qemu-kvm 2.1-5 package, since it is the kvm process that burns userland CPU, and whether I should/could revert to the previous version. Any hints appreciated to solve this, TIA!


My VM console shows issues like this when booting/resuming a VM:
BUG: soft lockup - CPU#1 stuck for 66s! [udevd:96]
BUG: soft lockup - CPU#0 stuck for 66s! [udevadm:98]
BUG: soft lockup - CPU#1 stuck for 64s! [udevd:96]
BUG: soft lockup - CPU#1 stuck for 70s! [udevd:96]
BUG: soft lockup - CPU#0 stuck for 67s! [udevadm:103]
BUG: soft lockup - CPU#1 stuck for 67s! [udevd:96]

and like this when shutting down a VM:

BUG: soft lockup - CPU#0 stuck for 75s! [S01halt:7357]
BUG: soft lockup - CPU#1 stuck for 75s! [S01halt:6998]
BUG: soft lockup - CPU#0 stuck for 64s! [S01halt:7357]





root@node7:~# dpkg -l | egrep qemu\|pve
ii clvm 2.02.98-pve4 amd64 Cluster LVM Daemon for lvm2
ii corosync-pve 1.4.7-1 amd64 Standards-based cluster framework (daemon and modules)
ii dmsetup 2:1.02.77-pve4 amd64 Linux Kernel Device Mapper userspace library
ii fence-agents-pve 4.0.10-2 amd64 fence agents for redhat cluster suite
ii libcorosync4-pve 1.4.7-1 amd64 Standards-based cluster framework (libraries)
ii libdevmapper-event1.02.1:amd64 2:1.02.77-pve4 amd64 Linux Kernel Device Mapper event support library
ii libdevmapper1.02.1:amd64 2:1.02.77-pve4 amd64 Linux Kernel Device Mapper userspace library
ii liblvm2app2.2:amd64 2.02.98-pve4 amd64 LVM2 application library
ii libopenais3-pve 1.1.4-3 amd64 Standards-based cluster framework (libraries)
ii libpve-access-control 3.0-16 amd64 Proxmox VE access control library
ii libpve-common-perl 3.0-22 all Proxmox VE base library
ii libpve-storage-perl 3.0-28 all Proxmox VE storage management library
ii lvm2 2.02.98-pve4 amd64 Linux Logical Volume Manager
ii novnc-pve 0.4-7 amd64 HTML5 VNC client
ii openais-pve 1.1.4-3 amd64 Standards-based cluster framework (daemon and modules)
ii pve-cluster 3.0-15 amd64 Cluster Infrastructure for Proxmox Virtual Environment
ii pve-firewall 1.0-17 amd64 Proxmox VE Firewall
ii pve-firmware 1.1-3 all Binary firmware code for the pve-kernel
ii pve-kernel-2.6.32-37-pve 2.6.32-146 amd64 The Proxmox PVE Kernel Image
ii pve-libspice-server1 0.12.4-3 amd64 SPICE remote display system server library
ii pve-manager 3.3-1 amd64 The Proxmox Virtual Environment
ii pve-qemu-kvm 2.1-5 amd64 Full virtualization on x86 hardware
ii qemu-server 3.3-14 amd64 Qemu Server Tools
ii redhat-cluster-pve 3.2.0-2 amd64 Red Hat cluster suite
ii resource-agents-pve 3.9.2-4 amd64 resource agents for redhat cluster suite
ii tar 1.27.1+pve.1 amd64 GNU version of the tar archiving utility
ii vzctl 4.0-1pve6 amd64 OpenVZ - server virtualization solution - control tools
 
I'm starting to think that this issue might be related to the guest kernel. I'll try to narrow it down, but my guess is that it happens with guests running a kernel older than 3.13.

UPDATE: nope, it happens with a 3.13 guest kernel too.
 
My guests are on 2.6.32-504.8.1.el6.x86_64, the latest CentOS 6.6, and I did not see this before patching against the latest pvetest repo, so I believe it might be in the newer/latest pve-qemu-kvm, for example. I will try to revert to the previous package if possible...
 
Not easily done, as other packages depend on it of course... it su..s to be new to Debian package managing :)

Though it seems I have pending updates for these packages:

root@node7:~# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages have been kept back:
ceph-common grub-common grub-pc grub-pc-bin grub2-common librados2 librbd1
pve-manager pve-qemu-kvm python-ceph
0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
root@node7:~#

which somehow conflict with other installed packages...

It would be nice to get the pve-* stuff patched to see if this fixes any of this CPU wasteland 'feature'.


aptitude 0.6.8.2
--\ Upgradable Packages (10)
--\ admin - Administrative utilities (install software, manage users, etc) (7)
--\ main - The main Debian archive (7)
i ceph-common 0.80.5-1~bpo70 0.87-1~bpo70+1
i grub-common 1.99-27+deb7u2 2.02~bpo70+3
i grub-pc 1.99-27+deb7u2 2.02~bpo70+3
i grub-pc-bin 1.99-27+deb7u2 2.02~bpo70+3
i grub2-common 1.99-27+deb7u2 2.02~bpo70+3
i pve-manager 3.3-1 3.3-15
i pve-qemu-kvm 2.1-5 2.1-12
--\ libs - Collections of software routines (2)
--\ main - The main Debian archive (2)
i librados2 0.80.5-1~bpo70 0.87-1~bpo70+1
i librbd1 0.80.5-1~bpo70 0.87-1~bpo70+1
--\ python - Python programming language and libraries (1)
--\ main - The main Debian archive (1)
i python-ceph 0.80.5-1~bpo70 0.87-1~bpo70+1
--- Installed Packages (507)




How does one make the package manager update such held-back packages forcefully? By doing an apt-get dist-upgrade... I will test these newly updated packages....
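For reference, packages that apt-get upgrade keeps back can normally be pulled in either with a dist-upgrade (which is allowed to add or remove dependencies) or by naming them explicitly; the package names below are the ones from the list above:

Code:
# dist-upgrade may install new dependencies or remove packages, so it can
# apply the updates that plain 'apt-get upgrade' keeps back
apt-get update && apt-get dist-upgrade

# Or install the kept-back packages by name so apt resolves their new dependencies
apt-get install pve-manager pve-qemu-kvm librados2 librbd1 ceph-common python-ceph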
 
It does not seem to be any better with those latest packages.

#> dpkg -l | egrep qemu\|pve
ii clvm 2.02.98-pve4 amd64 Cluster LVM Daemon for lvm2
ii corosync-pve 1.4.7-1 amd64 Standards-based cluster framework (daemon and modules)
ii dmsetup 2:1.02.77-pve4 amd64 Linux Kernel Device Mapper userspace library
ii fence-agents-pve 4.0.10-2 amd64 fence agents for redhat cluster suite
ii libcorosync4-pve 1.4.7-1 amd64 Standards-based cluster framework (libraries)
ii libdevmapper-event1.02.1:amd64 2:1.02.77-pve4 amd64 Linux Kernel Device Mapper event support library
ii libdevmapper1.02.1:amd64 2:1.02.77-pve4 amd64 Linux Kernel Device Mapper userspace library
ii liblvm2app2.2:amd64 2.02.98-pve4 amd64 LVM2 application library
ii libopenais3-pve 1.1.4-3 amd64 Standards-based cluster framework (libraries)
ii libpve-access-control 3.0-16 amd64 Proxmox VE access control library
ii libpve-common-perl 3.0-22 all Proxmox VE base library
ii libpve-storage-perl 3.0-28 all Proxmox VE storage management library
ii lvm2 2.02.98-pve4 amd64 Linux Logical Volume Manager
ii novnc-pve 0.4-7 amd64 HTML5 VNC client
ii openais-pve 1.1.4-3 amd64 Standards-based cluster framework (daemon and modules)
ii pve-cluster 3.0-15 amd64 Cluster Infrastructure for Proxmox Virtual Environment
ii pve-firewall 1.0-17 amd64 Proxmox VE Firewall
ii pve-firmware 1.1-3 all Binary firmware code for the pve-kernel
ii pve-kernel-2.6.32-32-pve 2.6.32-136 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-2.6.32-37-pve 2.6.32-146 amd64 The Proxmox PVE Kernel Image
ii pve-libspice-server1 0.12.4-3 amd64 SPICE remote display system server library
ii pve-manager 3.3-15 amd64 The Proxmox Virtual Environment
ii pve-qemu-kvm 2.1-12 amd64 Full virtualization on x86 hardware
ri qemu-server 3.3-14 amd64 Qemu Server Tools
ii redhat-cluster-pve 3.2.0-2 amd64 Red Hat cluster suite
ii resource-agents-pve 3.9.2-4 amd64 resource agents for redhat cluster suite
ii tar 1.27.1+pve.1 amd64 GNU version of the tar archiving utility
ii vzctl 4.0-1pve6 amd64 OpenVZ - server virtualization solution - control tools



The KVM process on the hypervisor host just burns CPU and uses much more memory than I would expect it to.

Tasks: 225 total, 1 running, 224 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.6 us, 0.6 sy, 0.0 ni, 98.4 id, 0.4 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem: 24685180 total, 15333092 used, 9352088 free, 76964 buffers
KiB Swap: 8912892 total, 0 used, 8912892 free, 10561740 cached


PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
771755 root 20 0 12.7g 2.8g 8796 S 106.7 12.0 13:46.07 kvm
775973 root 20 0 23260 1616 1100 R 6.3 0.0 0:00.01 top
1 root 20 0 10608 824 688 S 0.0 0.0 0:01.57 init

kvm PID 771755 is just booting a VM image here, which takes minutes, and the guest status in the PVE web manager says the VM is using 230MB of memory, but the kvm process seen on the hypervisor host is +3GB resident and +12GB virtual. Why this huge difference? :(
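(As an aside, the virtual size of a kvm process maps the full configured guest RAM plus QEMU's own overhead, while the resident size only grows as pages are actually touched, so a large gap between the two is normal. A quick way to watch both for the PID from top above:)

Code:
# VmSize = total virtual mapping, VmRSS = pages actually resident in host RAM
grep -E 'VmSize|VmRSS' /proc/771755/status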

# This is what kvm is spending all those CPU cycles on
#> strace -p 771755
...
clock_gettime(CLOCK_MONOTONIC, {173390, 592374390}) = 0
ppoll([{fd=22, events=POLLIN|POLLERR|POLLHUP}, {fd=10, events=POLLIN|POLLERR|POLLHUP}, {fd=3, events=POLLIN|POLLERR|POLLHUP}, {fd=6, events=POLLIN}, {fd=28, events=POLLIN}, {fd=7, events=POLLIN}, {fd=5, events=POLLIN}], 7, {0, 0}, NULL, 8) = 0 (Timeout)
read(6, 0x7ffffd191880, 16) = -1 EAGAIN (Resource temporarily unavailable)
clock_gettime(CLOCK_MONOTONIC, {173390, 592514783}) = 0
clock_gettime(CLOCK_MONOTONIC, {173390, 592554076}) = 0
clock_gettime(CLOCK_MONOTONIC, {173390, 592594270}) = 0
clock_gettime(CLOCK_MONOTONIC, {173390, 592633105}) = 0
futex(0x7f7951dfeb80, FUTEX_WAKE_PRIVATE, 1) = 1
ppoll([{fd=22, events=POLLIN|POLLERR|POLLHUP}, {fd=10, events=POLLIN|POLLERR|POLLHUP}, {fd=3, events=POLLIN|POLLERR|POLLHUP}, {fd=6, events=POLLIN}, {fd=28, events=POLLIN}, {fd=7, events=POLLIN}, {fd=5, events=POLLIN}], 7, {0, 494405730}, NULL, 8) = 1 ([{fd=5, revents=POLLIN}], left {0, 494402232})
tgkill(771755, 771814, SIGUSR1) = 0
futex(0x7f7951dfeb44, FUTEX_CMP_REQUEUE_PRIVATE, 1, 2147483647, 0x7f7951dfeb80, 3392288) = 1
write(6, "\1\0\0\0\0\0\0\0", 8) = 8
read(5, "\2\0\0\0\0\0\0\0", 512) = 8
clock_gettime(CLOCK_MONOTONIC, {173390, 592962992}) = 0
clock_gettime(CLOCK_MONOTONIC, {173390, 593005661}) = 0
ppoll([{fd=5, events=POLLIN|POLLERR|POLLHUP}], 1, {0, 0}, NULL, 8) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {173390, 593177761}) = 0
clock_gettime(CLOCK_MONOTONIC, {173390, 593222281}) = 0
write(6, "\1\0\0\0\0\0\0\0", 8) = 8
clock_gettime(CLOCK_MONOTONIC, {173390, 593304957}) = 0
clock_gettime(CLOCK_MONOTONIC, {173390, 593344224}) = 0
clock_gettime(CLOCK_MONOTONIC, {173390, 593384541}) = 0
...
 
I've upgraded one of the nodes with the packages from pvetest and migration to that node seems to be fine for now.

Anyway, could unsynchronized system clocks in the VMs cause this migration issue?
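For what it's worth, the offset inside a guest can be checked and stepped right after a migration; a rough sketch, assuming ntpdate is installed in the guest and an NTP server is reachable:

Code:
# Query an NTP server without touching the clock, just to see the current offset
ntpdate -q pool.ntp.org

# Step the clock once (stop ntpd first, since it holds the NTP socket)
service ntp stop
ntpdate pool.ntp.org
service ntp start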
 
I patched to the latest on pvetest on Friday the 13th ;) and got these:

Code:
root@node7:~# dpkg -l | egrep pve\|qemu
ii  clvm                             2.02.98-pve4                  amd64        Cluster LVM Daemon for lvm2
ii  corosync-pve                     1.4.7-1                       amd64        Standards-based cluster framework (daemon and modules)
ii  dmsetup                          2:1.02.77-pve4                amd64        Linux Kernel Device Mapper userspace library
ii  fence-agents-pve                 4.0.10-2                      amd64        fence agents for redhat cluster suite
ii  libcorosync4-pve                 1.4.7-1                       amd64        Standards-based cluster framework (libraries)
ii  libdevmapper-event1.02.1:amd64   2:1.02.77-pve4                amd64        Linux Kernel Device Mapper event support library
ii  libdevmapper1.02.1:amd64         2:1.02.77-pve4                amd64        Linux Kernel Device Mapper userspace library
ii  liblvm2app2.2:amd64              2.02.98-pve4                  amd64        LVM2 application library
ii  libopenais3-pve                  1.1.4-3                       amd64        Standards-based cluster framework (libraries)
ii  libpve-access-control            3.0-16                        amd64        Proxmox VE access control library
ii  libpve-common-perl               3.0-24                        all          Proxmox VE base library
ii  libpve-storage-perl              3.0-30                        all          Proxmox VE storage management library
ii  lvm2                             2.02.98-pve4                  amd64        Linux Logical Volume Manager
ii  novnc-pve                        0.4-7                         amd64        HTML5 VNC client
ii  openais-pve                      1.1.4-3                       amd64        Standards-based cluster framework (daemon and modules)
ii  pve-cluster                      3.0-16                        amd64        Cluster Infrastructure for Proxmox Virtual Environment
ii  pve-firewall                     1.0-18                        amd64        Proxmox VE Firewall
ii  pve-firmware                     1.1-3                         all          Binary firmware code for the pve-kernel
ii  pve-kernel-2.6.32-32-pve         2.6.32-136                    amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-2.6.32-37-pve         2.6.32-147                    amd64        The Proxmox PVE Kernel Image
ii  pve-libspice-server1             0.12.4-3                      amd64        SPICE remote display system server library
ii  pve-manager                      3.3-19                        amd64        The Proxmox Virtual Environment
ii  pve-qemu-kvm                     2.1-12                        amd64        Full virtualization on x86 hardware
ii  qemu-server                      3.3-17                        amd64        Qemu Server Tools
ii  redhat-cluster-pve               3.2.0-2                       amd64        Red Hat cluster suite
ii  resource-agents-pve              3.9.2-4                       amd64        resource agents for redhat cluster suite
ii  tar                              1.27.1+pve.1                  amd64        GNU version of the tar archiving utility
ii  vzctl                            4.0-1pve6                     amd64        OpenVZ - server virtualization solution - control tools

which seems to work so much better; live migration now seems to work flawlessly again. Great news for PoC testing...
 
