Live disk migration limit speed

check-ict

Member
Apr 19, 2011
93
1
8
Hi,

We have a 6-node cluster, each node connected via a 2x1Gbit trunk to a ZFS NFS server that has a 4x1Gbit trunk and is loaded with SSDs.

The disk IO on the ZFS server is really fast; it has 12 SSDs in RAIDZ2 (comparable to RAID6).

We have another ZFS SSD server, also connected with 4x1Gbit.

Now we want to migrate some disks to this new storage server with live migration. When we start the migration, all VMs on the node that initiates the move become really slow. Logging in over RDP takes about 5 minutes on any VM, while this normally takes a few seconds. Web servers crash or respond very slowly.

So when we move 1 disk of 1 VM, all VMs on that node become unusable.

How can we limit the disk move so it only uses, say, 80% of the 1000Mbit link? The SSDs are almost idle during the move; the bottleneck is the network link.

The cost of replacing all 1Gbit links with 10Gbit, including a redundant managed switch, is really high, so that's not an option.
 

dcsapak

Proxmox Staff Member
Staff member
Feb 1, 2016
4,089
369
88
31
Vienna
You can set a 'migrate_speed' option in the VM config; see:
Code:
man qm.conf
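For reference, a minimal sketch of that option in the VM config file (the VMID 159 and the value are examples; per the qm.conf man page, `migrate_speed` is given in MB/s, with 0 meaning unlimited). Note that, as the thread below finds, it may not throttle the "Move disk" job itself:

```
# /etc/pve/qemu-server/159.conf  (example VMID, adjust to yours)
# cap migration traffic at ~100 MB/s, roughly 80% of a 1Gbit link
migrate_speed: 100
```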
 

Alwin

Proxmox Staff Member
Staff member
Aug 1, 2017
3,185
273
88

check-ict

Hi,

I tried migrate_speed: 2 (MB/s), but the move still runs at 100-120 MB/s, so my 1000Mbit network link stays fully in use during the move.

pve-manager/4.4-5/c43015a5 (running kernel: 4.4.35-2-pve)
Debian 8
 

Alwin

As I said above, either limit the process or copy the disk directly from one storage server to the other.
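A hedged sketch of such a direct storage-to-storage copy with a bandwidth cap, assuming both boxes run ZFS and `pv` is installed (the pool/dataset names and the `newzfs` hostname are hypothetical, not from this thread):

```shell
# Snapshot the zvol, then stream it with pv capping throughput at 100 MB/s
# (~80% of the 1Gbit link), leaving bandwidth free for the NFS traffic.
zfs snapshot tank/vm-159-disk-1@move
zfs send tank/vm-159-disk-1@move \
  | pv -q -L 100m \
  | ssh root@newzfs "zfs receive tank2/vm-159-disk-1"
```

An incremental `zfs send -i` of a second snapshot, taken after shutting the VM down, can then keep the actual downtime short.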
 

check-ict

When I start the move, it only shows the following process:
task UPID:proxmox05:00002F0C:40FF4B4C:5A018D66:qmmove:159:root@pam:

How can I limit this process? I don't see any qemu-nbd or port being used.

Otherwise I will copy it from one ZFS server to the other, but that will require the VM to be shut down for some hours; about 2 TB needs to be moved.
 

Alwin

The usual, grep & ps; the VMID will also be in the disk name.
 

check-ict

Yes, I used ps aux; nothing there.

So I'll move it offline this weekend; there seems to be no other option.
 

guletz

Renowned Member
Apr 19, 2017
1,070
150
68
Brasov, Romania
So moving offline this weekend, no other option it seems
But you can use this: pve-zsync
You can replicate the virtual HDD to the designated node using zfs send/receive, and pve-zsync can be run with a bandwidth limit (as you want). After pve-zsync finishes, you can move the VM config file from the source node to the destination node.

This is all that you need to do.
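A hedged sketch of what that could look like (the hostname and pool are hypothetical; `--limit` is in kBytes/s per the pve-zsync documentation):

```shell
# Replicate VM 159's disks to the destination pool, capped at ~80 MB/s.
# Re-run it to send incremental snapshots; stop the VM only for the last run.
pve-zsync sync --source 159 --dest newzfs:tank2 --limit 80000 --verbose
```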
 

guletz

The costs to replace all 1Gbit to 10Gbit, including a redundant managed switch, is really high. So that's not a option.

If you look at the proper device... ;) You can get a layer-7 switch with 16 SFP+ ports at $400/unit. Maybe that's not such a high price for you ;) But it is very expensive for me... maybe you are luckier ;)
 
