LXC backup fails - unable to open the dataset 'vzdump'.

Tassir

Active Member
Hi,

I have noticed an issue where all of my LXC containers fail to back up, while all of the VMs back up successfully. This is similar to This Thread, which has been marked "Solved", though the solution there sounds more like a workaround.

This is the error I am seeing:
Code:
INFO: Starting Backup of VM 110 (lxc)
INFO: Backup started at 2020-02-03 01:21:38
INFO: status = running
INFO: CT Name: plex
INFO: excluding bind mount point mp0 ('/mnt/media') from backup
INFO: found old vzdump snapshot (force removal)
zfs error: could not find any snapshots to destroy; check snapshot names.
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
filesystem 'mnt/nvme_pool/subvol-110-disk-0@vzdump' cannot be mounted, unable to open the dataset
umount: /mnt/vzsnap0/: not mounted.
command 'umount -l -d /mnt/vzsnap0/' failed: exit code 32
ERROR: Backup of VM 110 failed - command 'mount -o ro -t zfs mnt/nvme_pool/subvol-110-disk-0@vzdump /mnt/vzsnap0//' failed: exit code 1
INFO: Failed at 2020-02-03 01:21:38


Looking at this, it seems the backup script is using the ZFS pool's mount point (probably taken from storage.cfg?) when mounting the @vzdump snapshot, where it should be using the pool's name. In other words,
mount -o ro -t zfs mnt/nvme_pool/subvol-110-disk-0@vzdump /mnt/vzsnap0// should be mount -o ro -t zfs nvme_pool/subvol-110-disk-0@vzdump /mnt/vzsnap0//, and the latter does work and mounts the snapshot as expected.
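To see the mismatch directly, you can compare the dataset name with its mount point (using my pool name nvme_pool here; substitute your own):
Code:
# Dataset name - what 'mount -t zfs' expects on the left-hand side:
zfs list -H -o name nvme_pool/subvol-110-disk-0
# nvme_pool/subvol-110-disk-0

# Mount point - what the backup script seems to be using (minus the leading '/'):
zfs get -H -o value mountpoint nvme_pool/subvol-110-disk-0
# /mnt/nvme_pool/subvol-110-disk-0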

This issue is not exposed if the pool has the default ZFS mount point directly under /, since the mount point (minus the leading slash) is then identical to the pool's name; I have tested this and it works.

As for why this affects only LXC containers and not VMs, I think it is because LXC subvolumes are always mounted, while VM disks are not. I am assuming there is a fallback somewhere that uses the pool's name if get_pool_mount_point fails?
Code:
nvme_pool                     808G  90.5G       31K  /mnt/nvme_pool
nvme_pool/subvol-110-disk-0   759M  1.26G      753M  /mnt/nvme_pool/subvol-110-disk-0
nvme_pool/subvol-111-disk-0   571M  1.44G      569M  /mnt/nvme_pool/subvol-111-disk-0
nvme_pool/subvol-112-disk-0   605M  1.41G      603M  /mnt/nvme_pool/subvol-112-disk-0
nvme_pool/subvol-113-disk-0   719M  1.31G      705M  /mnt/nvme_pool/subvol-113-disk-0
nvme_pool/subvol-114-disk-0   362M  1.65G      362M  /mnt/nvme_pool/subvol-114-disk-0
nvme_pool/subvol-115-disk-1   789M  1.28G      742M  /mnt/nvme_pool/subvol-115-disk-1
nvme_pool/vm-100-disk-0      20.6G  98.6G     12.5G  -
nvme_pool/vm-101-disk-0      33.0G   102G     21.2G  -
nvme_pool/vm-102-disk-0      66.0G   119G     37.8G  -
nvme_pool/vm-103-disk-0      2.12M  90.5G     21.5K  -
nvme_pool/vm-103-disk-1      51.6G   103G     39.3G  -
nvme_pool/vm-103-disk-2       516G   162G      445G  -
nvme_pool/vm-104-disk-0      2.12M  90.5G       21K  -
nvme_pool/vm-104-disk-1      51.6G   107G     35.6G  -
nvme_pool/vm-105-disk-0      33.0G   102G     21.1G  -
nvme_pool/vm-106-disk-0      33.0G   109G     14.1G  -
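For reference, here is roughly how I reproduced the snapshot mount by hand with the dataset name instead of the mount point (only a sketch of the steps vzdump performs, not its actual code):
Code:
# Create the snapshot vzdump would use:
zfs snapshot nvme_pool/subvol-110-disk-0@vzdump
# Mount it read-only by dataset name - this works:
mkdir -p /mnt/vzsnap0
mount -o ro -t zfs nvme_pool/subvol-110-disk-0@vzdump /mnt/vzsnap0
# ...inspect /mnt/vzsnap0...
# Clean up:
umount /mnt/vzsnap0
zfs destroy nvme_pool/subvol-110-disk-0@vzdump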
 
Hi,
this is in fact a different issue than the one reported in the thread you linked to, although both have to do with non-default mount points. The ability to specify non-standard mount points for ZFS storage is rather new, and it seems some of the container backup code was still assuming the default mount point. I'll look into it and let you know when the patch is out. Thank you for reporting this!
 
A patch for this was applied yesterday and should be included in the next version of pve-container.
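Once the update reaches the repositories, you can check the installed version with:
Code:
apt update
pveversion -v | grep pve-container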
 
