Live-Migration and Snapshot-Backups - possible with NFS?

Discussion in 'Proxmox VE: Installation and configuration' started by philister, Jun 18, 2012.

  1. philister

    philister Member

    Joined:
    Jun 18, 2012
    Messages:
    31
    Likes Received:
    0
    Hello all,

    I started evaluating Proxmox VE 2.1 in our test environment a couple of days ago, and we're thinking about using it in the datacenter. Especially live migration, snapshot backup and clustering seem wonderfully easy if you know ESXi (which is nice, too, of course).

    I've come across one thing, though, that I couldn't sort out myself so far: if I want to be able to live-migrate, I have to store the VM on shared storage, which is NFS in our case (it was NFS with our ESXi hosts, too, and worked very well). But when I use NFS storage I can't do a snapshot backup; it always gives me "mode failure – unable to detect lvm volume group" and falls back to "suspend mode", which means downtime for the VM. It seems snapshot mode somehow relies on LVM, which isn't there in the case of an NFS volume. Is this assumption right?
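
    As far as I understand it, "snapshot mode" boils down to an LVM snapshot of the volume holding the VM data, roughly like this (just a sketch to illustrate the idea; volume and path names are made up):

    Code:
    # rough sketch of an LVM-snapshot backup (made-up names)
    lvcreate --snapshot --size 1G --name vzsnap-test /dev/vg001/vm-disks   # point-in-time view of the data
    dd if=/dev/vg001/vzsnap-test of=/backup/vm-103.raw bs=1M               # copy the frozen data while the VM keeps running
    lvremove -f /dev/vg001/vzsnap-test                                     # drop the snapshot again

    An NFS share only gives the host a directory, not an LVM volume, so there would be nothing to run lvcreate --snapshot against – which would explain the fallback to suspend mode.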

    How can this be solved? What kind of setup is required, if I want to use both features, live migration and snapshot backup?

    Thank you very much.

    Cheers,
    P.
     
  2. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,551
    Likes Received:
    405
    yes, live-backup is not possible if you store disk images on an NFS share.
     
  3. philister

    philister Member

    Joined:
    Jun 18, 2012
    Messages:
    31
    Likes Received:
    0
    Thanks for your quick response.

    Is this something you're planning to add, or what setup would you suggest that supports both live migration and live backup?

    Best Regards
     
  4. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,551
    Likes Received:
    405
    the current snapshot backup does not work on NFS. there are some ideas on the KVM mailing list but nothing usable in the short run.

    what about using iSCSI?
     
  5. philister

    philister Member

    Joined:
    Jun 18, 2012
    Messages:
    31
    Likes Received:
    0
    A) Generally speaking, iSCSI would be possible. We're using NetApp filers in the datacenter and they support iSCSI. In the test lab I could set up an iSCSI target with Debian. I have no deep knowledge of iSCSI though and feel much more at home with NFS. But if I understand you right, iSCSI is the only way to go if I want to use shared storage and be able to make snapshot backups. Is that correct?

    B) I understand that the live backup mechanism relies on LVM functionality to create the snapshot, right? So what do I have to take care of when creating the iSCSI target? Must the target volume somehow be managed by LVM (from the client's standpoint)?

    C) Would you recommend such a setup (do you use it in your environment, would you call it "best practice"):
    1. Proxmox server cluster with Proxmox OS installed on local RAID1 disks (real Hardware RAID by HP)
    2. VM disk images stored on shared storage (NetApp iSCSI)
    3. ISOs and backups stored on shared storage (NetApp NFS)

    D) That way live migration and live backups would be possible, right?

    Please excuse my many questions and thanks very much for your advice.

    Best Regards,
     
  6. solliecomm

    solliecomm New Member

    Joined:
    Jun 16, 2012
    Messages:
    4
    Likes Received:
    0
    We use a storage server that is linked to several VM hosts via ATA over Ethernet (AoE). With bonded interfaces and jumbo frames we get better speeds than with iSCSI. With an HA failover the storage moves easily with the VM host (AoE is layer 2).
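
    Roughly, the bonded storage interface looks like this on Debian (a sketch only; interface names and bonding mode are just examples, and it needs the ifenslave package):

    Code:
    # /etc/network/interfaces - bonded link with jumbo frames for the AoE network
    auto bond0
    iface bond0 inet manual
        bond-slaves eth1 eth2
        bond-mode balance-rr
        bond-miimon 100
        mtu 9000
    # AoE runs directly on layer 2, so the bond does not even need an IP address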

    If interested, I can post my howto; maybe it is a nice addition to Proxmox.
     
  7. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,551
    Likes Received:
    405
    yes, a howto for using AoE in our wiki would be helpful. AoE is nice but, for some reason, not very well known.
     
  8. philister

    philister Member

    Joined:
    Jun 18, 2012
    Messages:
    31
    Likes Received:
    0
    I've created a new VM on iSCSI shared storage. I first created a new iSCSI target via the web GUI and then created a new LVM group on it (also via the web GUI).

    However, when I try to live-migrate the vm I get the following message:

    Jun 20 14:37:23 starting migration of VM 103 to node 'pmx0' (xxx.yyy.zzz.aaa)
    Jun 20 14:37:23 copying disk images
    Jun 20 14:37:23 ERROR: Failed to sync data - can't do online migration - VM uses local disks
    Jun 20 14:37:23 aborting phase 1 - cleanup resources
    Jun 20 14:37:23 ERROR: migration aborted (duration 00:00:01): Failed to sync data - can't do online migration - VM uses local disks
    TASK ERROR: migration aborted

    The VM's disk is stored on the shared iSCSI drive, it's not a local disk.

    Can you give me a hint on where to dig deeper? Thanks.

    Cheers,
     
  9. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,551
    Likes Received:
    405
    post storage.cfg and VMID.conf:

    Code:
    cat /etc/pve/storage.cfg
    cat /etc/pve/qemu-server/VMID.conf
     
  10. philister

    philister Member

    Joined:
    Jun 18, 2012
    Messages:
    31
    Likes Received:
    0
    I found the error. I had to tick the "verteilt" checkbox in the properties dialogue of the LVM storage which resides on the iSCSI device. Now it works very well. Thanks.
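
    For anyone else running into this, the resulting /etc/pve/storage.cfg should look roughly like this (just a sketch; the storage IDs, portal, target and base volume are made up, the important bit is the "shared" flag on the LVM storage):

    Code:
    iscsi: nas-iscsi
        portal 192.168.1.50
        target iqn.2012-06.lab.example:storage.vmtarget
        content none

    lvm: vm-lvm
        vgname vg001
        base nas-iscsi:0.0.0.scsi-36001405abcdef
        shared
        content images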
     
  11. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,551
    Likes Received:
    405
    yes, the translation is a bit misleading. "shared" is better.
     
  12. philister

    philister Member

    Joined:
    Jun 18, 2012
    Messages:
    31
    Likes Received:
    0
    btw: How come both nodes of my Proxmox cluster can access the same iSCSI LUN? Normally that wouldn't work, right? How is this accomplished technically? What kind of filesystem is created on the iSCSI target when I add an LVM group on it through the Proxmox web GUI?
     
  13. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,551
    Likes Received:
    405
    we implemented our own locking here - if you need all the details, check the source code. and we do not need any file system here, as we use the LVM volumes directly for the guests (block devices).
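
    To illustrate (the volume group name is just an example): every virtual disk is simply one logical volume on the shared VG, and KVM uses it directly as a block device.

    Code:
    # list the logical volumes on the shared volume group (VG name is an example)
    lvs vg001
    # expect one LV per virtual disk, e.g. vm-103-disk-1 - no filesystem on top, the guest sees a raw block device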
     
  14. Ruben Waitz

    Ruben Waitz Member

    Joined:
    Aug 6, 2012
    Messages:
    38
    Likes Received:
    1
    Dear Solliecomm,

    Can you post some thoughts/hints on how to configure an AoE setup? We got stuck on making online snapshot (LVM) backups of our OpenVZ containers. We tried NFS, but that doesn't support LVM. Then we tried iSCSI, but LUNs in an LVM group aren't shareable across Proxmox cluster nodes.

    Thanks,
    Ruben
     
  15. solliecomm

    solliecomm New Member

    Joined:
    Jun 16, 2012
    Messages:
    4
    Likes Received:
    0
    Ruben,

    It's true, this promise was still open. Since you're asking for a hint, I'll post my notes, made while testing the setup. I still haven't had time to check them and write them up nicely, so I can't guarantee there are no faults in this howto. In our setup the KVM guest with Apache2 and Samba connects over AoE to our storage server. Our web design department has been using this without complaints for over a month.

    On the storage server (in my setup 4 disks with software RAID):
    # use jumbo frames (ifconfig ethX mtu 9000)
    # use multiple ethernet cards (that support jumbo frames)
    # see: http://www.metsahovi.fi/en/vlbi/vsib-tests/aoe-nbd/index

    apt-get install vblade lvm2
    # create physical volume
    pvcreate /dev/md9
    # create volume group
    vgcreate vg001 /dev/md9
    # create logical volumes
    lvcreate -n data1 --size 250g vg001
    lvcreate -n data2 --size 250g vg001
    lvcreate -n data3 --size 250g vg001
    # format logical volumes
    mkfs.ext4 /dev/vg001/data1
    mkfs.ext4 /dev/vg001/data2
    mkfs.ext4 /dev/vg001/data3
    lvdisplay
    # export the logical volumes over AoE (shelf 0, slots 1-3) and start the exports
    vblade-persist setup 0 1 eth0 /dev/vg001/data1
    vblade-persist setup 0 2 eth0 /dev/vg001/data2
    vblade-persist setup 0 3 eth0 /dev/vg001/data3
    vblade-persist start 0 1
    vblade-persist start 0 2
    vblade-persist start 0 3

    On the KVM guest (so not on Proxmox itself, but on the virtual machine):
    # The Proxmox server has two extra interfaces which are bridged, so the KVM guests can use them for AoE
    # Some issues:
    # Jumbo frames can be set on the KVM guest but do not work; I think I have to raise the MTU on the bridge on the Proxmox server as well (see the sketch after these notes)
    # The KVM guest can be moved over to another Proxmox server, the storage will still be there
    modprobe aoe
    apt-get install aoetools
    aoe-discover
    aoe-stat (should show the available AoE exports)
    mkdir /mountpoint
    mount /dev/etherd/e0.1 /mountpoint (replace e0.1 with the appropriate device from aoe-stat)
    or add this to /etc/fstab
    /dev/etherd/e0.1 /mountpoint ext4 defaults,auto,_netdev 0 0
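
    About the jumbo-frame issue mentioned above, this is the sketch I still want to try on the Proxmox host (not verified yet; vmbr1 and eth1 are just example names):

    Code:
    # /etc/network/interfaces on the Proxmox host - bridge with jumbo frames for the AoE network
    auto eth1
    iface eth1 inet manual
        mtu 9000

    auto vmbr1
    iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
        mtu 9000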

    Regards,

    Edwin
     
  16. Ruben Waitz

    Ruben Waitz Member

    Joined:
    Aug 6, 2012
    Messages:
    38
    Likes Received:
    1
    Thanks Edwin!

    I will look at it and see if I can adapt it to our situation (our setup is quite different).

    Regards,
    Ruben
     