Hi,
We have configured two servers with Proxmox VE 6.2, both using local storage:
hypervisor 1
CPU = 2x Xeon E5-2630 v4
Memory = 128 GB
Network = 2x 10G
1x Public Interface (10G)
1x Private interface for clustering (10G)
Storage = Using local storage
RAID1 (HW RAID controller) 1TB SATA LVM
RAID1 (HW RAID controller) 4TB SSD LVM-thin
hypervisor 2
CPU = 2x Xeon E5-2630 v4
Memory = 128 GB
Network = 2x 10G
1x Public Interface (10G)
1x Private interface for clustering (10G)
Storage = Using local storage
RAID1 (HW RAID controller) 1TB SATA LVM
RAID1 (HW RAID controller) 4TB SSD LVM-thin
root@hv1:~# cat /etc/pve/datacenter.cfg
bwlimit: default=10240,migration=10240
keyboard: en-us
migration: insecure,network=10.28.28.101/24
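As far as we understand, the bwlimit values are in KiB/s, so default=10240 and migration=10240 should cap transfers at roughly 10 MiB/s. As a next step we are considering adding a separate cap for restores as well; a minimal sketch of the line we would change (the restore value is just a placeholder we have not applied yet):
bwlimit: default=10240,migration=10240,restore=10240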
root@hv1:~# cat /etc/pve/qemu-server/2708.conf
agent: 1
bootdisk: scsi0
cores: 4
cpu: qemu64
cpulimit: 1
ide0: none,media=cdrom
memory: 8192
name: xxxxxxxx
net0: virtio=2e:e0:8d:82:56:3d,bridge=vmbrxxx,rate=125
numa: 0
onboot: 1
ostype: win10
scsi0: hv1storage:vm-2708-disk-0,cache=writethrough,discard=on,format=raw,mbps_rd=5,mbps_rd_max=5,mbps_wr=5,mbps_wr_max=5,size=200G,ssd=1
scsi1: hv1storage:vm-2708-disk-1,cache=writethrough,discard=on,format=raw,mbps_rd=5,mbps_rd_max=5,mbps_wr=5,mbps_wr_max=5,size=1T,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=9729b8f4-57a5-4efd-843a-df3a5b3ea528
sockets: 1
vcpus: 4
vmgenid: 62171785-fe08-4cbf-aef3-8eebdb5674dc
[Issue]
When we perform a live migration from hypervisor 1 to hypervisor 2, hypervisor 2 uses all of its CPU cores for the migration. As a result, the VMs already running on hypervisor 2 become unavailable until the migration finishes or is canceled. We tried limiting the disk read/write rates and the CPU limit of the VM, but this did not resolve the problem.
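As a further test we are thinking about overriding the bandwidth limit per migration on the command line instead of relying only on datacenter.cfg; a minimal sketch, where the VM ID, target node name and the 5120 KiB/s value are only examples:
qm migrate 2708 hv2 --online --with-local-disks --bwlimit 5120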
Update:
We also see this issue when restoring a VM from a dump with qmrestore.
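For the restore case we would try throttling it in the same way; a sketch of the command we have in mind (archive path, VM ID, storage name and the limit are placeholders):
qmrestore /var/lib/vz/dump/vzdump-qemu-2708.vma.lzo 2708 --storage hv1storage --bwlimit 5120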