nappit / omnios - how to set up for backup storage

RobFantini

Renowned Member
May 24, 2012
1,972
94
68
Boston,Mass
Hello,
I want to set up a 4-drive mirrored zpool for vzdump on a napp-it SAN.

should the zfs be shared using NFS?

any suggestions on zpool / zfs settings to optimize the zfs for just backups?

any other suggestions?
 

mir

Famous Member
Apr 14, 2012
3,559
120
83
Copenhagen, Denmark
For backups, NFS is best, and if the NFS share is used only for backups, the default ACL on the NFS folder should suffice.
If you get access-permission problems, add noacl to the NFS mount options in Proxmox.
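As a sketch, the Proxmox side of that could look like the following in /etc/pve/storage.cfg; the storage name, server address, and export path here are placeholders, not values from this thread:

```
# /etc/pve/storage.cfg -- hypothetical NFS backup entry; adjust name,
# server, and export to your setup. noacl disables the NFSv3 ACL sideband.
nfs: omnios-bkup
        server 192.168.1.40
        export /tank/backup
        path /mnt/pve/omnios-bkup
        content backup
        options vers=3,noacl
```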
 

RobFantini

OK, I've got NFS up and working for backups.

You mentioned ACLs. Is there a way to set ACLs that work with Linux? I get ACL errors during rsync.
 

mir

Try:
zfs set aclmode=passthrough zpool/dataset
zfs set aclinherit=passthrough-x zpool/dataset

zpool/dataset is the name of your pool and of the dataset that is exported through NFS.
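From the OmniOS shell that could look like the following; tank2/bkup is used here only as an example dataset name (it happens to match the export that appears later in this thread):

```shell
# Set ACL passthrough on the dataset exported over NFS
zfs set aclmode=passthrough tank2/bkup
zfs set aclinherit=passthrough-x tank2/bkup

# Verify that both properties took effect
zfs get aclmode,aclinherit tank2/bkup
```

These must be run on the storage box itself, not on the Proxmox node.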
 

RobFantini

there are still owner and permission differences compared to a dump stored on Linux.
this may or may not cause an issue with some uses, like lxc on lvm.

the following ls -l were done from cli at pve not omnios

here is a dump to local pve zfs:
Code:
-rw-r--r-- 1 root root  734185960 Feb 20 02:02 vzdump-qemu-22104-2016_02_20-02_00_03.vma.lzo

a dump on nfs/omnios before changes you suggested:
Code:
-rwxrwxrwx  1 4294967294 4294967294 2974056716 Apr 16 07:34 vzdump-lxc-4501-2016_04_16-07_31_20.tar.lzo*

after the changes, a dump at nfs/omnios:
Code:
-rw-rw-rw-+ 1 4294967294 4294967294  915418839 Apr 17 08:37 vzdump-lxc-3033-2016_04_17-08_37_42.tar.lzo

the user and group 4294967294 are not in /etc/passwd on omnios. (4294967294 is 2^32 - 2, i.e. -2 as an unsigned 32-bit value: the "nobody" fallback uid, which usually means the server is squashing root on the NFS export.)

I am not sure what the permissions and ownership are supposed to be.

here is a test moving a file from pve local zfs to nfs/omnios:
Code:
# mv vzdump-qemu-5104-2016_01_17-18_45_01.log /mnt/pve/bkup/dump/
mv: failed to preserve ownership for ‘/mnt/pve/bkup/dump/vzdump-qemu-5104-2016_01_17-18_45_01.log’: Operation not permitted
is there a way to preserve ownership?

PS: thank you for all the help getting omnios/nappit working. kvm and backup storage - it works really well.
 

mir

Use /bin/ls -V to see extended permissions.

Are you using NFSv3 or NFSv4?

What does the following show on omnios: /usr/sbin/exportfs
 

RobFantini


nfs version from pve storage.cfg :
Code:
nfs: bkup
        server 10.2.2.41
        path /mnt/pve/bkup
        export /tank2/bkup
        content vztmpl,iso,backup
        options vers=3
        maxfiles 1


Code:
OmniOS 5.11     omnios-c91bcdf  February 2016

root@sys4:/root# /usr/sbin/exportfs
-@tank2/bkup    /tank2/bkup   rw   ""  
-@tank2/dump-s  /tank2/dump-save   rw   ""
 

mir

For comparison, here is how it looks at my end:
Code:
# exportfs
-@vMotion/nfs   /vMotion/nfs   sec=sys,rw=@172.16.2.0/24:10.0.1.0/24:@10.0.2.0/24,root=@172.16.2.0/24:@10.0.1.0/24:@10.0.2.0/24   ""
In the OmniOS 'ZFS Filesystems' view, click the NFS field on each dataset that has NFS enabled and add the following after 'sharenfs=':
Code:
rw=@[network1]:@[network2]:@...,root=@[network1]:@[network2]:@...
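The same thing can be done from the CLI instead of the napp-it web UI; the networks and dataset name below are placeholders:

```shell
# Allow rw and root access from two client networks (placeholders);
# root= prevents the server from squashing root to "nobody"
zfs set sharenfs='rw=@192.168.1.0/24:@10.0.0.0/24,root=@192.168.1.0/24:@10.0.0.0/24' tank2/bkup

# Check the resulting export
/usr/sbin/exportfs
```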
 

RobFantini

that led to the solution here, thank you!

here is an rsync to omnios before the new setting.
Code:
sys7  ~ # rsync -a obnam-logs/  /mnt/pve/bkup/obnam-logs/
rsync: chown "/mnt/pve/bkup/obnam-logs/." failed: Operation not permitted (1)
rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam-pro4.log.HTT0oW" failed: Operation not permitted (1)
rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam-pro4.log.0.JonBWQ" failed: Operation not permitted (1)
rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam-pro4.log.1.G6YKxL" failed: Operation not permitted (1)
rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam-pro4.log.2.aRCy9F" failed: Operation not permitted (1)
rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam-pro4.log.3.rIqWLA" failed: Operation not permitted (1)
rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam.log.umiVov" failed: Operation not permitted (1)
rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam.log.0.V30y2p" failed: Operation not permitted (1)
rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam.log.1.NL8QGk" failed: Operation not permitted (1)
rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam.log.2.27XMlf" failed: Operation not permitted (1)
rsync: chown "/mnt/pve/bkup/obnam-logs/.obnam.log.3.yAGC19" failed: Operation not permitted (1)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1]

then I set this:
Code:
rw=@10.2.2.0/24:@10.1.0.0/16,root=@10.2.2.0/24:10.1.0.0/16

also, before and after with ls -l:
Code:
sys7  /mnt/pve/bkup # ll
total 6
drwxrwxrwx  2 4294967294 4294967294 80 Apr 18 13:04 dump/
drwxr-xr-x+ 2 root  root  12 Feb 20 17:11 obnam-logs/
drwxrwxrwx  4 4294967294 4294967294  4 Dec 22 16:41 template/
sys7  /mnt/pve/bkup # chown -R root:root *
sys7  /mnt/pve/bkup # ll
total 6
drwxrwxrwx  2 root root 80 Apr 18 13:04 dump/
drwxr-xr-x+ 2 root root 12 Feb 20 17:11 obnam-logs/
drwxrwxrwx  4 root root  4 Dec 22 16:41 template/

the next rsync worked without an error.

next on the to-do list: when I was trying to use lxc, I had ACL errors that prevented anything but a stop-mode backup. I'll post info on that soon; there is probably a zfs setting to fix it.
 

RobFantini

Mir -

What happens to pve VMs using omnios iSCSI storage when omnios is shut down to add hardware?

As of now I plan to shut down the VMs first. Is that what you would do?
 

mir

Mir -

what happens to pve vm's using omnios iscsi storage when omnios is shutdown to add hardware?
If you shut down iSCSI while it is serving running VMs, the VMs will freeze/hang, since the IO system will wait forever for commits to disk. If no IO requires commits while iSCSI is shut down, there is a chance the VM will carry on as if nothing had happened.
as of now I plan to shut down the vm's 1ST. is that what you would do?
Shutting down VMs that have disks on the iSCSI storage is highly recommended. If VMs need to stay available while the iSCSI storage is shut down, you can do an online move of their disks to another storage: local (not recommended) or another shared storage. In my case I use a small Qnap station for backups (never keep backups in the same place as the data), and on this Qnap I expose both an NFS share and LVM-over-iSCSI as shared storage for Proxmox. When I have a service window on OmniOS, the disks for core services (DNS, DHCP, mail server, and a simple web server announcing the service window for all public services) are moved to the Qnap for the duration. This of course means lower performance for those services.
 

RobFantini

Normal backups use a tmp directory on the pve host.

We have lxc running on omnios. I am also storing backups on omnios.

The backup goes from omnios to the pve tmpdir and then back to omnios: useless extra network traffic.

So I tried to set the tmpdir to this, on the omnios NFS backup share:
Code:
tmpdir: /mnt/pve/bkup/vzdump-tmp.
however, there is an ACL warning: "temporary directory is on NFS, disabling xattr and acl support, consider configuring a local tmpdir via /etc/vzdump.conf"

that ended in a failed backup:
Code:
INFO: starting new backup job: vzdump 3039 --storage bkup --compress lzo --mode snapshot --node sys5 --remove 0
INFO: Starting Backup of VM 3039 (lxc)
INFO: status = running
INFO: mode failure - some volumes do not support snapshots
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: bandwidth limit: 500000 KB/s
INFO: ionice priority: 7
temporary directory is on NFS, disabling xattr and acl support, consider configuring a local tmpdir via /etc/vzdump.conf
INFO: starting first sync /proc/32179/root// to /mnt/pve/bkup/vzdump-tmp/vzdumptmp29472
INFO: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1]
ERROR: Backup of VM 3039 failed - command 'rsync --stats --numeric-ids -aH --delete --no-whole-file --inplace --one-file-system --relative '--bwlimit=500000' /proc/32179/root///./ /mnt/pve/bkup/vzdump-tmp/vzdumptmp29472' failed: exit code 23
INFO: Backup job finished with errors
TASK ERROR: job errors

This is a minor issue; still, it would be good to avoid the extra disk I/O and network traffic by using a tmpdir on the target storage.
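For reference, the local tmpdir the warning suggests would be a one-line change in /etc/vzdump.conf; the path below is a placeholder and must exist on the PVE node with enough free space for a full container sync:

```
# /etc/vzdump.conf -- hypothetical local tmpdir on the PVE node
tmpdir: /var/tmp/vzdump
```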
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
7,861
1,506
164
That is not as easy as it sounds. For this to work, we would need to know that the VM storage and the backup storage are on the same physical machine, would need access to that machine, and would need to know how to find and access the images locally on the storage machine, and then basically run a local version of vzdump there. This is not possible without very tight integration with the storage machine. Just using a tmpdir on NFS does not help here: the data is still copied via the PVE node.

In your setup you might be better off by scripting a zfs backup yourself:
  1. lock container on PVE node ("pct set ID -lock backup")
  2. freeze container on PVE node ("lxc-freeze -n ID")
  3. create zfs snapshot on omnios ("zfs snapshot ...")
  4. unfreeze container on PVE node ("lxc-unfreeze -n ID")
  5. unlock container on PVE node ("pct unlock ID")
The lock and freeze are there to ensure data consistency. Note that this is just a rough draft and in no way officially supported ;) I know that other users in the forum use similar snapshotting solutions on their storage servers.
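A minimal sketch of those five steps as a script, assuming key-based ssh from the PVE node to the storage box; the container ID, hostname, and dataset name are all placeholders:

```shell
#!/bin/sh
# Rough sketch only -- not officially supported. Placeholders: CTID,
# STORAGE_HOST, DATASET. Run on the PVE node hosting the container.
set -eu
CTID=100
STORAGE_HOST=omnios.example.com
DATASET=tank2/vmdata
SNAP="$DATASET@vzbackup-$(date +%Y%m%d-%H%M%S)"

pct set "$CTID" -lock backup                    # 1. lock the container
# Ensure we always thaw and unlock, even if the snapshot step fails
trap 'lxc-unfreeze -n "$CTID"; pct unlock "$CTID"' EXIT
lxc-freeze -n "$CTID"                           # 2. freeze its processes
ssh root@"$STORAGE_HOST" zfs snapshot "$SNAP"   # 3. snapshot on the storage box
# 4. and 5. (unfreeze + unlock) run via the trap on exit
```

The trap keeps the freeze window as short as the snapshot itself and guarantees the container is never left frozen if something errors out.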
 

RobFantini

I've been monitoring our current backups in progress.

It looks like only the virtual machine's .conf file is put in the tmpdir. From an in-progress pve backup:
Code:
# ls -laR /bkup/vzdumptmp45380/

/bkup/vzdumptmp45380/:
total 18
drwxr-xr-x 3 root root 3 Apr 22 20:35 ./
drwxr-xr-x 8 root root 9 Apr 22 20:35 ../
drwxr-xr-x 3 root root 3 Apr 22 20:35 etc/

/bkup/vzdumptmp45380/etc:
total 2
drwxr-xr-x 3 root root 3 Apr 22 20:35 ./
drwxr-xr-x 3 root root 3 Apr 22 20:35 ../
drwxr-xr-x 2 root root 3 Apr 22 20:35 vzdump/

/bkup/vzdumptmp45380/etc/vzdump:
total 10
drwxr-xr-x 2 root root  3 Apr 22 20:35 ./
drwxr-xr-x 3 root root  3 Apr 22 20:35 ../
-rw-r--r-- 1 root root 513 Apr 22 20:35 pct.conf

If the tmpdir is only used for the .conf file, then there is no issue.
 

RobFantini


Mir -

I've got two napp-it iSCSI systems. The main one needs to go offline tomorrow to add an HBA.

There are a couple of KVMs that need to go to the second system.

Do you just move the disk from PVE while the KVM is on?

Or shut down the KVM first?
 

RobFantini

I assume it makes sense to delete the source disk.

Or does it make sense to leave it there for a faster zfs send/recv on the way back?
 
