Slow qcow2 disk migration between NFS storages

jnkraft

Member
Mar 30, 2021
Compute nodes: 2x10 Gbit LACP, NFS mounted with nconnect=16
Storage nodes: 4x10 Gbit LACP, mdadm over 8x8 TB enterprise SSDs, NFS daemon count increased to 256
Network: OVS, because of lots of different VM VLANs
Migration network is in a separate VLAN and CIDR on top of the OVS bond
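For reference, the NFS side of that setup looks roughly like this (storage name, export path and server IP below are just placeholders; nconnect needs a kernel that supports it, and on some distros the nfsd thread count lives in /etc/default/nfs-kernel-server instead):

    # /etc/pve/storage.cfg on the compute nodes
    nfs: nfs-store01
            export /export/vmdata
            path /mnt/pve/nfs-store01
            server 10.10.10.11
            content images
            options vers=4.2,nconnect=16

    # /etc/nfs.conf on the storage nodes
    [nfsd]
    threads = 256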
I can almost max out fio to the limit of the underlying SSD mdadm array between a compute node and a storage node, can get about 7-8 Gbit/s with dd, and have decent IOPS and throughput within VMs with the same benchmarks, considering the qcow2 penalty. But I have a feeling these speeds are only that good because of the heavily tuned NFS.
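In case it matters, the fio runs were something along these lines (mount path and sizes are just examples, adjust to your setup):

    # sequential throughput against the NFS mount from a compute node
    fio --name=nfs-seq --directory=/mnt/pve/nfs-store01 \
        --rw=write --bs=1M --size=16G --direct=1 \
        --ioengine=libaio --iodepth=32 --numjobs=4 --group_reporting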
I'm in the middle of a maintenance task that includes draining two of three storage nodes to the third one, and migrating disks at an average speed of 2-3 Gbit/s for 30 TB of VM data is SLOW. I understand that the migration is done over some kind of SSH tunnel, but maybe there are some hints or hacks to speed it up? Live migration of VM RAM has one: set the migration channel to insecure. After that, RAM migration speed went from the mentioned 2-3 Gbit/s to a full single channel's 10 Gbit/s.
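For completeness, the RAM-migration tweak I mean is the datacenter-wide migration setting, roughly like this in /etc/pve/datacenter.cfg (the CIDR is just my migration VLAN as an example):

    migration: type=insecure,network=10.20.30.0/24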