napp-it / omnios - how to set up for backup storage

Discussion in 'Proxmox VE: Installation and configuration' started by RobFantini, Apr 14, 2016.

  1. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,488
    Likes Received:
    21
    Hello,
    I want to set up a mirrored 4-drive zpool for vzdump on a napp-it SAN.

    should the zfs be shared using NFS?

    any suggestions on zpool / zfs settings to optimize the zfs for just backups?

    any other suggestions?
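
    For reference, something like this is roughly what I have in mind on the omnios side (the disk names are just placeholders):
    Code:
    # 4 drives as two mirrored pairs, plus a dataset for the dumps
    zpool create tank2 mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
    zfs create tank2/bkup
    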
     
  2. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,480
    Likes Received:
    96
    For backups, NFS is best, and if the NFS share is used only for backups, the default ACL on the NFS folder should suffice.
    If you get access permission problems, add noacl to the NFS mount options in Proxmox.
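
    In Proxmox that would be the 'options' line of the NFS storage in /etc/pve/storage.cfg, roughly like this (the storage name, server and paths below are only placeholders):
    Code:
    nfs: backup
            server 192.168.1.10
            path /mnt/pve/backup
            export /tank/backup
            content backup
            options vers=3,noacl
            maxfiles 1
    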
     
  3. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,488
    Likes Received:
    21
    OK, I've got NFS up and working for backups.

    You mentioned ACL. Is there a way to set ACLs that work with Linux? I get ACL errors during rsync.
     
  4. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,480
    Likes Received:
    96
    Try:
    zfs set aclmode=passthrough zpool/dataset
    zfs set aclinherit=passthrough-x zpool/dataset

    zpool/dataset is the name of your pool and the dataset which is exported through NFS.
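
    You can check the current values afterwards with zfs get, e.g. (replace zpool/dataset with your own pool/dataset name):
    Code:
    zfs get aclmode,aclinherit zpool/dataset
    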
     
  5. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,488
    Likes Received:
    21
    There are still owner and permission differences compared to a dump stored on Linux.
    This may or may not cause an issue with some uses like lxc on lvm.

    The following ls -l listings were done from the CLI on pve, not omnios.

    here is a dump to local pve zfs:
    Code:
    -rw-r--r-- 1 root root  734185960 Feb 20 02:02 vzdump-qemu-22104-2016_02_20-02_00_03.vma.lzo
    
    a dump on nfs/omnios before the changes you suggested:
    Code:
    -rwxrwxrwx  1 4294967294 4294967294 2974056716 Apr 16 07:34 vzdump-lxc-4501-2016_04_16-07_31_20.tar.lzo*
    
    after the changes, a dump at nfs/omnios:
    Code:
    -rw-rw-rw-+ 1 4294967294 4294967294  915418839 Apr 17 08:37 vzdump-lxc-3033-2016_04_17-08_37_42.tar.lzo
    
    The user and group 4294967294 is not in /etc/passwd on omnios.

    I am not sure what the permissions and ownership are supposed to be.

    Here is a test of moving a file from pve local zfs to nfs/omnios.
    Code:
    # mv vzdump-qemu-5104-2016_01_17-18_45_01.log /mnt/pve/bkup/dump/
    mv: failed to preserve ownership for ‘/mnt/pve/bkup/dump/vzdump-qemu-5104-2016_01_17-18_45_01.log’: Operation not permitted
    
    is there a way to preserve ownership?

    PS: thank you for all the help getting omnios/napp-it working. KVM and backup storage - it works really well.
     
    #5 RobFantini, Apr 17, 2016
    Last edited: Apr 17, 2016
  6. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,480
    Likes Received:
    96
    Use /bin/ls -V to see extended permissions.

    Are you using NFSv3 or NFSv4?

    What does the following show on omnios: /usr/sbin/exportfs
     
  7. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,488
    Likes Received:
    21
    nfs version from pve storage.cfg :
    Code:
    nfs: bkup
            server 10.2.2.41
            path /mnt/pve/bkup
            export /tank2/bkup
            content vztmpl,iso,backup
            options vers=3
            maxfiles 1
    

    Code:
    OmniOS 5.11     omnios-c91bcdf  February 2016
    
    root@sys4:/root# /usr/sbin/exportfs
    -@tank2/bkup    /tank2/bkup   rw   ""  
    -@tank2/dump-s  /tank2/dump-save   rw   ""  
    
     
  8. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,480
    Likes Received:
    96
    This is how it looks at my end:
    Code:
    # exportfs
    -@vMotion/nfs   /vMotion/nfs   sec=sys,rw=@172.16.2.0/24:10.0.1.0/24:@10.0.2.0/24,root=@172.16.2.0/24:@10.0.1.0/24:@10.0.2.0/24   ""
    
    In OmniOS under 'ZFS Filesystems', click the NFS field on the datasets which have NFS enabled and add the following after 'sharenfs=':
    Code:
    rw=@[network1]:@[network2]:@...,root=@[network1]:@[network2]:@...
    
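
    From the OmniOS shell the equivalent would be something along these lines (the dataset name and networks are only examples, adjust to your pool and subnets):
    Code:
    zfs set sharenfs='rw=@10.2.2.0/24,root=@10.2.2.0/24' tank2/bkup
    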
     
  9. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,488
    Likes Received:
    21
    that led to the solution here, thank you!

    here is an rsync to omnios before the new setting.
    Code:
    sys7  ~ # rsync -a obnam-logs/  /mnt/pve/bkup/obnam-logs/
    rsync: chown "/mnt/pve/bkup/obnam-logs/." failed: Operation not permitted (1)
    rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam-pro4.log.HTT0oW" failed: Operation not permitted (1)
    rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam-pro4.log.0.JonBWQ" failed: Operation not permitted (1)
    rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam-pro4.log.1.G6YKxL" failed: Operation not permitted (1)
    rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam-pro4.log.2.aRCy9F" failed: Operation not permitted (1)
    rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam-pro4.log.3.rIqWLA" failed: Operation not permitted (1)
    rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam.log.umiVov" failed: Operation not permitted (1)
    rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam.log.0.V30y2p" failed: Operation not permitted (1)
    rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam.log.1.NL8QGk" failed: Operation not permitted (1)
    rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam.log.2.27XMlf" failed: Operation not permitted (1)
    rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam.log.3.yAGC19" failed: Operation not permitted (1)
    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1]
    
    then set this:
    Code:
    rw=@10.2.2.0/24:@10.1.0.0/16,root=@10.2.2.0/24:10.1.0.0/16
    
    also, before and after, the ls -l output:
    Code:
    sys7  /mnt/pve/bkup # ll
    total 6
    drwxrwxrwx  2 4294967294 4294967294 80 Apr 18 13:04 dump/
    drwxr-xr-x+ 2 root  root  12 Feb 20 17:11 obnam-logs/
    drwxrwxrwx  4 4294967294 4294967294  4 Dec 22 16:41 template/
    sys7  /mnt/pve/bkup # chown -R root:root *
    sys7  /mnt/pve/bkup # ll
    total 6
    drwxrwxrwx  2 root root 80 Apr 18 13:04 dump/
    drwxr-xr-x+ 2 root root 12 Feb 20 17:11 obnam-logs/
    drwxrwxrwx  4 root root  4 Dec 22 16:41 template/
    
    The next rsync worked without an error.

    Next on the to-do list: when I was trying to use lxc I had ACL errors that prevented anything but a stop-mode backup. I'll post info on that soon. There is probably a zfs setting to fix that.
     
    #9 RobFantini, Apr 18, 2016
    Last edited: Apr 18, 2016
  10. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,488
    Likes Received:
    21
    Mir -

    What happens to pve VMs using omnios iscsi storage when omnios is shut down to add hardware?

    As of now I plan to shut down the VMs first. Is that what you would do?
     
  11. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,480
    Likes Received:
    96
    If you shut down iscsi while it is serving running VMs, the VMs will freeze/hang, since the IO system will wait forever for commits to disk. If no IO requires commits while iscsi is shut down, there is a chance the VM will keep running as if nothing had happened.
    Shutting down VMs that have disks on the iscsi storage is highly recommended. If VMs need to stay available while the iscsi storage is shut down, you can do an online move of the disks to another storage - either local (not recommended) or another shared storage. In my case I use a small Qnap station for backups - never keep backups in the same place as the data - and on this Qnap I have both an NFS share and LVM-over-iscsi exposed as shared storage for Proxmox. When I have a service window on Omnios, the disks for core services (DNS, DHCP, mail server, and a simple webserver announcing the service window for all public services) are moved to the Qnap while the service window takes place (this of course means lower performance for those services).
     
  12. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,488
    Likes Received:
    21
    Normal backups use a tmp directory on the pve host.

    We have lxc running on omnios. I am also storing backups on omnios.

    The backup goes from omnios to the pve tmpdir, then back to omnios - useless extra network traffic.

    So I tried to set the tmpdir to this, on the omnios NFS backup share:
    Code:
    tmpdir: /mnt/pve/bkup/vzdump-tmp.
    
    However there is an issue with ACL: "temporary directory is on NFS, disabling xattr and acl support, consider configuring a local tmpdir via /etc/vzdump.conf"

    that ended in a failed backup:
    Code:
    INFO: starting new backup job: vzdump 3039 --storage bkup --compress lzo --mode snapshot --node sys5 --remove 0
    INFO: Starting Backup of VM 3039 (lxc)
    INFO: status = running
    INFO: mode failure - some volumes do not support snapshots
    INFO: trying 'suspend' mode instead
    INFO: backup mode: suspend
    INFO: bandwidth limit: 500000 KB/s
    INFO: ionice priority: 7
    temporary directory is on NFS, disabling xattr and acl support, consider configuring a local tmpdir via /etc/vzdump.conf
    INFO: starting first sync /proc/32179/root// to /mnt/pve/bkup/vzdump-tmp/vzdumptmp29472
    INFO: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1]
    ERROR: Backup of VM 3039 failed - command 'rsync --stats --numeric-ids -aH --delete --no-whole-file --inplace --one-file-system --relative '--bwlimit=500000' /proc/32179/root///./ /mnt/pve/bkup/vzdump-tmp/vzdumptmp29472' failed: exit code 23
    INFO: Backup job finished with errors
    TASK ERROR: job errors
    
    This is a minor issue; still, it would be good to avoid the extra disk I/O and network traffic by using a tmpdir on the target storage.
     
  13. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,193
    Likes Received:
    494
    That is not as easy as it sounds. For this to work, we would need to know that the VM storage and backup storage are on the same physical machine, would need to have access to that machine, know how to find and access the images locally on the storage machine, and basically run a local version of vzdump there. This is not possible without very tight integration with the storage machine. Just using a tmpdir on NFS does not help here - the data is still copied via the PVE node.

    In your setup you might be better off by scripting a zfs backup yourself:
    1. lock container on PVE node ("pct set ID -lock backup")
    2. freeze container on PVE node ("lxc-freeze -n ID")
    3. create zfs snapshot on omnios ("zfs snapshot ...")
    4. unfreeze container on PVE node ("lxc-unfreeze -n ID")
    5. unlock container on PVE node ("pct unlock ID")
    The lock and freeze are to ensure data consistency. Note that this is just a rough draft and in no way officially supported ;) I know that other users in the forum use similar snapshotting solutions on their storage servers. A sketch of such a script is shown below.
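
    A rough sketch in shell, assuming ssh access from the PVE node to the storage box (the container id, host address and dataset name are placeholders, and there is no error handling):
    Code:
    #!/bin/bash
    # rough sketch only - adjust container id, storage host and dataset to your setup
    CTID=3039                 # example container id
    STORAGE=10.2.2.41         # example address of the omnios box
    DATASET=tank2/lxc         # example dataset holding the container volumes
    
    pct set $CTID -lock backup                                               # 1. lock container on the PVE node
    lxc-freeze -n $CTID                                                      # 2. freeze container processes
    ssh root@$STORAGE "zfs snapshot $DATASET@backup-$(date +%Y%m%d-%H%M%S)"  # 3. create zfs snapshot on omnios
    lxc-unfreeze -n $CTID                                                    # 4. unfreeze container
    pct unlock $CTID                                                         # 5. unlock container
    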
     
  14. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,488
    Likes Received:
    21
    Fabian - thank you for the answer, that makes perfect sense. I'll use a local tmpdir for now.
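
    For anyone following along, the local tmpdir goes into /etc/vzdump.conf, something like this (the directory is only an example - any local path with enough free space should work):
    Code:
    tmpdir: /var/tmp
    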
     
  15. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,488
    Likes Received:
    21
    I've been monitoring our current backups in progress.

    It looks like only the virtual machine .conf file is put into tmpdir. From an in-progress pve backup:
    Code:
    # ls -laR /bkup/vzdumptmp45380/
    
    /bkup/vzdumptmp45380/:
    total 18
    drwxr-xr-x 3 root root 3 Apr 22 20:35 ./
    drwxr-xr-x 8 root root 9 Apr 22 20:35 ../
    drwxr-xr-x 3 root root 3 Apr 22 20:35 etc/
    
    /bkup/vzdumptmp45380/etc:
    total 2
    drwxr-xr-x 3 root root 3 Apr 22 20:35 ./
    drwxr-xr-x 3 root root 3 Apr 22 20:35 ../
    drwxr-xr-x 2 root root 3 Apr 22 20:35 vzdump/
    
    /bkup/vzdumptmp45380/etc/vzdump:
    total 10
    drwxr-xr-x 2 root root  3 Apr 22 20:35 ./
    drwxr-xr-x 3 root root  3 Apr 22 20:35 ../
    -rw-r--r-- 1 root root 513 Apr 22 20:35 pct.conf
    
    If tmpdir is only used for the .conf file then there is no issue.
     
    #15 RobFantini, Apr 23, 2016
    Last edited: Apr 23, 2016
  16. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,488
    Likes Received:
    21
    Mir -

    I've got two napp-it iscsi systems. The main one needs to go offline tomorrow to add an HBA.

    There are a couple of KVMs that need to go to the 2nd system.

    Do you just move the disk from PVE while the KVM is on?

    Or shut down the KVM first?
     
  17. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,480
    Likes Received:
    96
    I just move the disk from PVE while the system is on. There is no need to shut down the VM while moving a disk.
     
  18. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,488
    Likes Received:
    21
    I assume it makes sense to delete the source disk.

    Or does it make sense to leave it there for a faster zfs send/recv on return?
     
  19. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,480
    Likes Received:
    96
    You should delete it, as moving it back will not use or overwrite the old disk.
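
    From the CLI that can be done in one step, something like this (the VM id, disk and target storage names are only examples):
    Code:
    qm move_disk 5104 virtio0 san2-iscsi --delete
    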
     