[SOLVED] ZFS Backup with snapshot mode fails

Kadrim

Well-Known Member
Hi there,

I am currently testing ZFS instead of my "old" setup (mdadm RAID) and wanted to start the usual backup cycle.

My QEMU VMs get backed up perfectly, but my LXCs don't.

Here is the log:
Code:
Virtual Environment 6.1-3
INFO: starting new backup job: vzdump 100 --remove 0 --compress lzo --storage backup --mode snapshot --node pve
INFO: filesystem type on dumpdir is 'zfs' -using /var/tmp/vzdumptmp5837 for temporary files
INFO: Starting Backup of VM 100 (lxc)
INFO: Backup started at 2019-12-18 17:47:03
INFO: status = running
INFO: CT Name: vUbuntu
INFO: excluding bind mount point mp0 ('/storage/dropbox') from backup
INFO: found old vzdump snapshot (force removal)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
filesystem 'rpool/data/subvol-100-disk-0@vzdump' cannot be mounted, unable to open the dataset
umount: /mnt/vzsnap0/: not mounted.
command 'umount -l -d /mnt/vzsnap0/' failed: exit code 32
ERROR: Backup of VM 100 failed - command 'mount -o ro -t zfs rpool/data/subvol-100-disk-0@vzdump /mnt/vzsnap0//' failed: exit code 1
INFO: Failed at 2019-12-18 17:47:04
INFO: Backup job finished with errors
TASK ERROR: job errors

As I am quite new to ZFS, I have no idea why the command mount -o ro -t zfs rpool/data/subvol-100-disk-0@vzdump /mnt/vzsnap0// fails.

The snapshot is actually there.

Here is the output of zfs list -t snapshot:

Code:
NAME                                 USED  AVAIL     REFER  MOUNTPOINT
tank/data/subvol-100-disk-0@vzdump  11.3M      -     26.1G  -

This is the output of zfs list:

Code:
NAME                              USED  AVAIL     REFER  MOUNTPOINT
tank                             5.82T  1.22T       96K  none
tank/data                         407G  1.22T      152K  /rpool/data
tank/data/subvol-100-disk-0      26.1G  17.9G     26.1G  /rpool/data/subvol-100-disk-0
tank/data/subvol-101-disk-0       921M  3.10G      921M  /rpool/data/subvol-101-disk-0
tank/data/subvol-104-disk-0       647M  1.37G      646M  /rpool/data/subvol-104-disk-0
tank/data/subvol-104-disk-1      32.0G  64.0G     32.0G  /rpool/data/subvol-104-disk-1
tank/data/subvol-108-disk-0      6.60G  9.40G     6.60G  /rpool/data/subvol-108-disk-0
tank/data/subvol-109-disk-0      1.49G  14.5G     1.49G  /rpool/data/subvol-109-disk-0
tank/data/subvol-110-disk-0       743M  7.28G      742M  /rpool/data/subvol-110-disk-0
tank/data/subvol-111-disk-0       825M  3.19G      825M  /rpool/data/subvol-111-disk-0
tank/data/subvol-113-disk-0      1.53G  6.48G     1.52G  /rpool/data/subvol-113-disk-0
tank/data/subvol-114-disk-0      2.19G  1.81G     2.19G  /rpool/data/subvol-114-disk-0
tank/data/subvol-115-disk-0      1.47G  2.53G     1.47G  /rpool/data/subvol-115-disk-0
tank/data/subvol-117-disk-0      7.25G  24.8G     7.25G  /rpool/data/subvol-117-disk-0
tank/data/subvol-118-disk-0      4.31G  11.7G     4.31G  /rpool/data/subvol-118-disk-0
tank/data/subvol-119-disk-0       605M  1.41G      603M  /rpool/data/subvol-119-disk-0
tank/data/subvol-120-disk-0      1.70G  6.30G     1.70G  /rpool/data/subvol-120-disk-0
tank/data/subvol-121-disk-0       638M  1.38G      638M  /rpool/data/subvol-121-disk-0
tank/data/subvol-123-disk-0      3.21G  60.8G     3.21G  /rpool/data/subvol-123-disk-0
tank/data/subvol-125-disk-0      1.23G  14.8G     1.23G  /rpool/data/subvol-125-disk-0
tank/data/subvol-126-disk-0       993M  7.03G      993M  /rpool/data/subvol-126-disk-0
tank/data/subvol-127-disk-0       711M  3.31G      710M  /rpool/data/subvol-127-disk-0
tank/data/subvol-129-disk-0       539M  7.47G      538M  /rpool/data/subvol-129-disk-0
tank/data/subvol-130-disk-0      3.50G  28.5G     3.50G  /rpool/data/subvol-130-disk-0
tank/data/vm-102-disk-0            76K  1.22T       76K  -
tank/data/vm-102-disk-1          80.3G  1.22T     80.3G  -
tank/data/vm-103-disk-0          32.0G  1.22T     32.0G  -
tank/data/vm-105-disk-0          56.3G  1.22T     56.3G  -
tank/data/vm-105-disk-1            80K  1.22T       80K  -
tank/data/vm-106-disk-0          18.1G  1.22T     18.1G  -
tank/data/vm-107-disk-0          1.34G  1.22T     1.34G  -
tank/data/vm-112-disk-0          21.5G  1.22T     21.5G  -
tank/data/vm-112-disk-1          57.3G  1.22T     57.3G  -
tank/data/vm-116-disk-0          1.64G  1.22T     1.64G  -
tank/data/vm-122-disk-0          22.6G  1.22T     22.6G  -
tank/data/vm-124-disk-0          5.63G  1.22T     5.63G  -
tank/data/vm-128-disk-0          11.4G  1.22T     11.4G  -

Any hints as to why this is happening?

 
OK, I've got it now.

I should have inspected the command mount -o ro -t zfs rpool/data/subvol-100-disk-0@vzdump /mnt/vzsnap0// more carefully:

My pool is named tank (because that is the commonly used name in ZFS pool creation guides online), but Proxmox had issues creating LXCs that way, and I found out that the mountpoint for that dataset has to be /rpool/...

The backup command above does not respect the pool name and instead always uses rpool as the pool name. Either this is hardcoded or it depends entirely on the mountpoint; both are, in my opinion, bugs in the Proxmox scripting, so I am also labeling this issue as a bug.

As a workaround I did the following:

Exported the current pool:
Code:
zpool export tank

Stopped most PVE services (I don't remember which ones); I think the most important one is pvestatd.

Reimported the pool under the new name (rpool):
Code:
zpool import tank rpool

Adjusted the storage settings in /etc/pve/storage.cfg,

and then rebooted.
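
For reference, here is a condensed sketch of the whole workaround. The exact set of services to stop is an assumption on my part (pvestatd, pvedaemon, pveproxy), and the storage.cfg edit has to be done by hand:
Code:
systemctl stop pvestatd pvedaemon pveproxy   # assumed service set, adjust as needed
zpool export tank
zpool import tank rpool                      # re-import the pool under the new name
nano /etc/pve/storage.cfg                    # point the zfspool entry at rpool/... and /rpool/...
reboot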
 
The mount point doesn't have to be rpool. Backing up containers on my tank01 pool, which is mounted under the same name, works just fine.

Can you explain how you set up your server and storage configuration? Having a pool called tank mounted at /rpool strikes me as odd.
 
OK, I think the simple explanation is this:

Proxmox expects the mountpoint to match the pool/dataset name.

So if the pool is called tank0 and the dataset is called test, there must be a mountpoint /tank0/test/.

This is where all the confusion originally came from, because I don't like having my mountpoints directly under / and instead place them under /mnt/.

So I think many Proxmox scripts don't actually check the mountpoint for a given dataset and instead just take <pool>/<dataset> and assume the mountpoint is /<pool>/<dataset>/.

Am I correct with that assumption? If yes, then this seems wrong; the Proxmox scripts should instead fetch the real mountpoint via zfs get mountpoint.
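
For the hypothetical tank0/test example above, that lookup would be something like this (just an illustration):
Code:
# print only the value of the mountpoint property for the dataset
zfs get -H -o value mountpoint tank0/test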
 
Can you show your /etc/pve/storage.cfg? I guess you don't have the configuration from the time when the backup did not work?

My guess is that something went wrong there.

Having a mount point that differs from the pool name works fine.

The pool was initially created as tank via the GUI. Then the container was created, the pool renamed, and the mount point manually changed.
Code:
# zfs list
NAME                       USED  AVAIL     REFER  MOUNTPOINT
foobar                    6.85M  30.5G       96K  /baz
foobar/subvol-100-disk-0  6.01M  1.99G     6.01M  /baz/subvol-100-disk-0

The config for the pool in the /etc/pve/storage.cfg now looks like this:
Code:
zfspool: tank
    pool foobar
    content images,rootdir
    mountpoint /baz
    nodes zfspooltest

Backups work fine.
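
For example, a snapshot-mode backup of the test container goes through without the mount error; the storage name here is just a placeholder for whatever storage has backup content enabled:
Code:
# CT 100 from the zfs list output above; use any storage configured for backups
vzdump 100 --mode snapshot --storage local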
 
Can you show your /etc/pve/storage.cfg? I guess you don't have the configuration from the time when the backup did not work?

Yes, here it is:

Code:
dir: local
        disable
        path /var/lib/vz
        content images,iso,rootdir,vztmpl
        maxfiles 0
        shared 0

zfspool: zfs-data
        pool rpool/data
        content rootdir,images
        mountpoint /rpool/data
        sparse 1

dir: zfs-templates
        path /mnt/zfs/templates
        content vztmpl,iso,snippets
        shared 0

dir: backup
        path /mnt/unsecured/backup
        content backup
        maxfiles 6
        shared 0

As you suspected, I don't have the state where it did not work readily available.

But I did do another test setup (not entirely related to backups), which clears the whole thing up for me (thanks @aaron for pointing me in this direction!):

1. Create a new dataset (side note: I did not use the Proxmox GUI to set up the pool and datasets)
Code:
zfs create rpool/test

2. Add this dataset as a storage via the Proxmox GUI
(screenshot: zfs-storage-add.png)

3. Create a new LXC container via the Proxmox GUI and set its rootfs to reside on the newly created storage, test (a CLI equivalent of steps 2 and 3 is sketched after step 4)
(screenshot: zfs-add-lxc.png)

4. This fails:
Code:
mounting container failed
TASK ERROR: unable to create CT 131 - cannot open directory //rpool/test: No such file or directory
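
(For completeness, steps 2 and 3 could also be reproduced from the CLI; storage name and container ID as above, the template path is just an example:)
Code:
# register the dataset as a zfspool storage (same as the GUI dialog in step 2)
pvesm add zfspool test --pool rpool/test --content images,rootdir
# create a container whose rootfs lives on that storage (8 GB volume)
pct create 131 local:vztmpl/ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz --rootfs test:8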

And here was the confusion!

Because the log output told me it wanted to use the path //rpool/test, I thought I had to set the mountpoint to exactly match the dataset name within ZFS.

What I actually found out is that Proxmox only reads the mountpoint at the moment the storage is added (when the ZFS dataset is assigned to a Proxmox storage).

So at exactly this stage, storage.cfg looks like this:
Code:
zfspool: test
        pool rpool/test
        content images,rootdir
        mountpoint none
        sparse 0

But if I set the mountpoint for this dataset afterwards, that setting is ignored by the Proxmox scripts because it is not reflected in storage.cfg; more specifically, the mountpoint entry still says none even though ZFS has the value set.

So to sum things up:
always set the mountpoint first and then add the storage,
or
if the mountpoint is changed later, change it in storage.cfg as well (see the sketch below).
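
A rough sketch of both options, using the dataset and storage names from my test above; I have not double-checked whether pvesm set accepts the mountpoint option in every version, so take this as an illustration only:
Code:
# option 1: set the mountpoint first, then add the storage (GUI or pvesm add, as above),
#           so the current mountpoint gets recorded in storage.cfg
zfs set mountpoint=/rpool/test rpool/test

# option 2: the mountpoint was changed afterwards -> update the storage entry as well
pvesm set test --mountpoint /rpool/test   # or edit /etc/pve/storage.cfg by hand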

Maybe a suggestion for an upcoming Proxmox version: if a ZFS storage has no mountpoint in storage.cfg, try to detect the current mountpoint from ZFS. Alternatively, instead of reporting that the mount was not successful, throw an error message stating that this ZFS storage has no mountpoint set.

I will mark this thread as solved now.
 
