Hi,
On a two-server cluster, offline-migrating a 50GB VM from one node to the other kills I/O on the destination node.
Both nodes have HW RAID-10 with enterprise SSDs (they are at OVH) and a 10Gbps link between them, and the VM is stored on lvm-thin (recently converted from "classic" LVM).
I think the 10Gbps transfer speed is what's killing my I/O... Is there a way to set a speed limit on these high-I/O tasks?
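If I understand the docs correctly, there's a bwlimit option in /etc/pve/datacenter.cfg that throttles migration and disk-move traffic (values in KiB/s). Something like this is what I had in mind, the numbers are just examples and not tested on my setup:

# /etc/pve/datacenter.cfg
# cap migrations at ~200 MB/s and disk moves at ~100 MB/s (values are KiB/s)
bwlimit: migration=204800,move=102400

It also looks like it can be overridden per migration with something like "qm migrate <vmid> <targetnode> --bwlimit 204800", but I haven't tried that yet.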
Here is pveperf output on the destination node:
CPU BOGOMIPS: 83202.88
REGEX/SECOND: 1831512
HD SIZE: 19.25 GB (/dev/sda2)
BUFFERED READS: 684.25 MB/sec
AVERAGE SEEK TIME: 0.15 ms
FSYNCS/SECOND: 6329.57
DNS EXT: 23.89 ms
DNS INT: 39.36 ms
For now, as a workaround, I'm moving the VM disk from local to the shared (SATA-based) storage we use for backups, doing the migration, and then moving the disk back to local storage. This works fine, I guess because the backup storage is slower (SATA disks + 1Gbps instead of 10Gbps).
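For reference, the CLI equivalent of what I'm doing looks roughly like this (VM ID, disk name, and storage names are placeholders for my setup, and the exact command name may differ by PVE version):

qm move_disk 100 scsi0 backup-sata    # move the disk onto the slow shared storage
qm migrate 100 node2                  # offline migration (VM is stopped)
qm move_disk 100 scsi0 local-lvm      # run on the target node: move the disk back to local lvm-thin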
Thank you all