Live disk migration limit speed

Discussion in 'Proxmox VE: Installation and configuration' started by check-ict, Nov 7, 2017.

  1. check-ict

    check-ict Member

    Joined:
    Apr 19, 2011
    Messages:
    93
    Likes Received:
    1
    Hi,

    We have a 6-node cluster, each node connected with a 2x1Gbit trunk to a ZFS NFS server that has a 4x1Gbit trunk and is loaded with SSDs.

    The disk IO on the ZFS server is really high; it has 12 SSDs in RAIDz2 (RAID6).

    We have another ZFS SSD server, also connected with 4x1Gbit.

    Now we want to migrate some disks to this new storage server with live migration. When we start the migration, all VMs on the node that initiates the move get really slow. Logging in over RDP takes about 5 minutes on any VM, while this normally takes several seconds. Webservers crash or respond really slowly.

    So when we move 1 disk of 1 VM, all VMs on that node become unusable.

    How can we limit the disk move so it only uses about 80% of the 1000 Mbit link? The SSDs are almost idle while the disk moves; the network link is the bottleneck.

    The cost of replacing all 1 Gbit links with 10 Gbit, including a redundant managed switch, is really high. So that's not an option.
     
  2. dcsapak

    dcsapak Proxmox Staff Member
    Staff Member

    Joined:
    Feb 1, 2016
    Messages:
    3,592
    Likes Received:
    325
    you can set a 'migrate_speed' option in the vm config
    see
    Code:
    man qm.conf
    
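    For example, to cap it at 80 MB/s (a sketch only; VMID 100 is a placeholder, and this option targets migration traffic, so whether it also throttles an online disk move may depend on your version):
    Code:
    qm set 100 --migrate_speed 80
    or put the line directly into the VM config file:
    Code:
    migrate_speed: 80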
     
  3. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,309
    Likes Received:
    206
    Either limit the process itself or copy the disk directly from one storage server to the other.
  4. check-ict

    check-ict Member

    Joined:
    Apr 19, 2011
    Messages:
    93
    Likes Received:
    1
    Hi,

    I tried migrate_speed: 2 (MB/s), however it keeps transferring at 100-120 MB/s, so my network card is still fully in use at 1000 Mbit during the move.

    pve-manager/4.4-5/c43015a5 (running kernel: 4.4.35-2-pve)
    Debian 8
     
  5. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,309
    Likes Received:
    206
    As I said above, either limit the process or copy the disk directly from one storage server to the other.
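     If nothing honours the limit, one workaround would be to shape the traffic at the network layer with tc on the node. A sketch only, assuming eth1 is the storage-facing NIC and 10.0.0.20 is the new NFS server's IP; adjust interfaces, addresses and rates to your setup:
     Code:
     # cap egress to the new storage server at ~800 Mbit/s, leave everything else at full rate
     tc qdisc add dev eth1 root handle 1: htb default 10
     tc class add dev eth1 parent 1: classid 1:10 htb rate 1000mbit
     tc class add dev eth1 parent 1: classid 1:20 htb rate 800mbit ceil 800mbit
     tc filter add dev eth1 parent 1: protocol ip prio 1 u32 match ip dst 10.0.0.20/32 flowid 1:20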
     
  6. check-ict

    check-ict Member

    Joined:
    Apr 19, 2011
    Messages:
    93
    Likes Received:
    1
    When I start the move, it only shows the following process:
    task UPID:proxmox05:00002F0C:40FF4B4C:5A018D66:qmmove:159:root@pam:

    How can I limit this process? I don't see any qemu-nbd or port being used.

    Otherwise I will copy it from one ZFS server to the other, but that will require the VM to be shut down for some hours. It's about 2 TB that needs to be moved.
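    For that server-to-server copy, a zfs send piped through pv would give a bandwidth cap. A sketch only; the dataset names, the destination host, and the 80 MB/s rate are placeholders, and pv must be installed on the sending side:
    Code:
    # on the old ZFS server: snapshot first, then send with a rate limit
    zfs snapshot tank/vmdata@move
    zfs send tank/vmdata@move | pv -L 80m | ssh root@new-zfs-server "zfs receive tank2/vmdata"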
     
  7. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,309
    Likes Received:
    206
    The usual: ps and grep, as the VMID will also be in the disk name.
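    For example (159 being the VMID from the task log above):
    Code:
    ps aux | grep 159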
     
  8. check-ict

    check-ict Member

    Joined:
    Apr 19, 2011
    Messages:
    93
    Likes Received:
    1
    Yes, I used ps aux; nothing there.

    So I'll move it offline this weekend; there seems to be no other option.
     
    #8 check-ict, Nov 7, 2017
    Last edited: Nov 7, 2017
  9. guletz

    guletz Active Member

    Joined:
    Apr 19, 2017
    Messages:
    884
    Likes Received:
    119
    But you can use pve-zsync for this.
    You can replicate the virtual HDD to the destination node using zfs send/receive.
    And pve-zsync can be run with a bandwidth limit (as you want). After pve-zsync finishes, you can move the VM config file from the source node to the destination node.

    This is all you need to do.
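    A sketch of what that could look like; the destination host and pool are placeholders, and as far as I know the --limit value is in KB/s, so check the pve-zsync man page for the exact option:
    Code:
    pve-zsync sync --source 159 --dest 10.0.0.20:tank2 --limit 80000 --verbose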
     
  10. guletz

    guletz Active Member

    Joined:
    Apr 19, 2017
    Messages:
    884
    Likes Received:
    119

    If you look at the proper device... ;) You can get a layer 7 switch with 16 SFP+ ports at $400/unit. Maybe that is not such a high price for you ;) But it is very expensive for me... maybe you are luckier than me ;)
     