Compute nodes: 2x 10 Gbit LACP, NFS mounted with nconnect=16
Storage nodes: 4x 10 Gbit LACP, mdadm over 8x 8 TB enterprise SSDs, NFS daemon (nfsd) thread count increased to 256
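For reference, the relevant tuning looks roughly like this (server IP, export path and mount point below are placeholders, not my exact values):

    # compute node: NFS mount with 16 TCP connections per mount
    mount -t nfs -o vers=4.2,nconnect=16,hard,noatime 10.0.40.21:/export/vmstore /mnt/vmstore

    # storage node: nfsd thread count, set in /etc/nfs.conf
    [nfsd]
    threads=256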
Network: OVS, because of lots of different VM VLANs
Migration network is in a separate VLAN and CIDR on top of the OVS bond
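The OVS side is plain ifupdown2-style config, roughly like this (interface names, VLAN tag and addresses are placeholders):

    auto bond0
    iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds ens1f0 ens1f1
        ovs_options bond_mode=balance-tcp lacp=active

    auto vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 migr0

    # internal port for the migration network, tagged into its own VLAN
    auto migr0
    iface migr0 inet static
        address 10.10.50.11/24
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=50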
I can almost max out fio to the underlying SSD mdadm limit between a compute node and a storage node, get about 7-8 Gbit/s with dd, and have decent IOPS and throughput within VMs with the same benchmarks, considering the qcow2 penalty. But I have a feeling these speeds are only this good because of the aggressively tuned NFS.
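The benchmarks were along these lines (paths and sizes are illustrative, not my exact runs):

    # sequential throughput against the NFS mount from a compute node
    fio --name=seqwrite --directory=/mnt/vmstore --rw=write --bs=1M \
        --size=20G --numjobs=4 --iodepth=32 --ioengine=libaio --direct=1 \
        --group_reporting

    # rough streaming-write sanity check
    dd if=/dev/zero of=/mnt/vmstore/ddtest bs=1M count=20480 oflag=direct status=progress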
I'm in the middle of a maintenance task that includes draining two of three storage nodes to the third one. And migrating roughly 30 TB of VM disk data at an average speed of 2-3 Gbit/s is SLOW. I understand that the migration is done via some kind of ssh, but maybe there are some hints or hacks to speed it up? Live migration of VM RAM has one: set the migration channel to insecure; after that, RAM migration speed went from the mentioned 2-3 Gbit/s to a full single channel's 10 Gbit/s.
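For reference, the RAM-migration tweak I mean is this single line in Proxmox's /etc/pve/datacenter.cfg (the CIDR below is a placeholder); I'm hoping there is an equivalent knob for the disk/storage side:

    # /etc/pve/datacenter.cfg
    # route migration traffic over the dedicated VLAN and skip the ssh/TLS tunnel
    migration: type=insecure,network=10.10.50.0/24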