VM migration between local-lvm storage kills I/O on destination server

carles89

Renowned Member
May 27, 2015
Hi,

On a two-server cluster, offline-migrating a 50GB VM from one node to the other kills I/O on the destination node.

Both nodes have hardware RAID-10 with enterprise SSDs (they are OVH servers) and a 10Gb connection between them, and the VM is stored on lvm-thin (recently converted from "classic" LVM).

I think the 10Gbps transfer speed is what's killing my I/O... Maybe there's a way to set a speed limit on these high-I/O tasks?
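
What I have in mind is something like a per-task bandwidth cap. Just a sketch of what I mean, assuming a PVE version that supports the bwlimit setting (values are in KiB/s; the VM ID 101 and node name "node2" are made-up examples):

# /etc/pve/datacenter.cfg - cluster-wide limits for migrations and disk moves (assumed syntax)
bwlimit: migration=102400,move=102400

# or a one-off limit for a single offline migration (101 / node2 are placeholders)
qm migrate 101 node2 --bwlimit 102400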

Here is pveperf output on the destination node:

CPU BOGOMIPS: 83202.88
REGEX/SECOND: 1831512
HD SIZE: 19.25 GB (/dev/sda2)
BUFFERED READS: 684.25 MB/sec
AVERAGE SEEK TIME: 0.15 ms
FSYNCS/SECOND: 6329.57
DNS EXT: 23.89 ms
DNS INT: 39.36 ms

For now, as a workaround, I'm moving the VM disk from local storage to the shared (SATA-based) storage we use for backups, doing the migration, and then moving the disk back to local storage. This works fine, I guess because the backup storage is slower (SATA disks + 1Gbps instead of 10Gbps).
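
In case it helps, this is roughly what the workaround looks like on the command line (syntax from memory, so it may differ by PVE version; 101, scsi0, "backups" and "node2" are placeholders for my VM ID, its disk, the shared SATA storage and the target node):

# move the disk from local lvm-thin to the shared SATA storage, dropping the old copy
qm move_disk 101 scsi0 backups --delete
# offline migration is now gentle on local I/O, since the disk lives on shared storage
qm migrate 101 node2
# move the disk back onto local lvm-thin on the destination node
qm move_disk 101 scsi0 local-lvm --delete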

Thank you all
