[SOLVED] Error: error at ".zfs/shares": EOPNOTSUPP: Operation not supported on transport endpoint

timdonovan

Active Member
Feb 3, 2020
80
16
28
38
Hi,

On one of my nodes, all LXC backups fail with the error in the thread title:


Code:
INFO: starting new backup job: vzdump 302 --storage pbs-iscsi --node proxmox-2 --notes-template '{{guestname}}' --mode stop --remove 0
INFO: Starting Backup of VM 302 (lxc)
INFO: Backup started at 2023-10-08 19:45:56
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: lxc-xxx
INFO: including mount point rootfs ('/') in backup
INFO: excluding bind mount point mp0 ('/media') from backup (not a volume)
INFO: stopping virtual guest
INFO: creating Proxmox Backup Server archive 'ct/302/2023-10-08T18:45:56Z'
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp298855_302/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 302 --backup-time 1696790756 --repository root@pam@192.168.1.216:apoc-iscsi
INFO: Starting backup: ct/302/2023-10-08T18:45:56Z
INFO: Client name: proxmox-2
INFO: Starting backup protocol: Sun Oct  8 19:45:57 2023
INFO: Downloading previous manifest (Mon Oct  2 02:02:40 2023)
INFO: Upload config file '/var/tmp/vzdumptmp298855_302/etc/vzdump/pct.conf' to 'root@pam@192.168.1.216:8007:apoc-iscsi' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@192.168.1.216:8007:apoc-iscsi' as root.pxar.didx
INFO: catalog upload error - channel closed
INFO: Error: error at ".zfs/shares": EOPNOTSUPP: Operation not supported on transport endpoint
INFO: restarting vm
INFO: guest is online again after 3 seconds
ERROR: Backup of VM 302 failed - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup '--crypt-mode=none' pct.conf:/var/tmp/vzdumptmp298855_302/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' --backup-type ct --backup-id 302 --backup-time 1696790756 --repository root@pam@192.168.1.216:apoc-iscsi' failed: exit code 255
INFO: Failed at 2023-10-08 19:45:59
INFO: Backup job finished with errors
TASK ERROR: job errors

If I migrate them to a different node (all the nodes are built similarly) I do not get this error. I'm not sure what .zfs/shares refers to.
 
Hi,
yes, the internal .zfs/shares and .zfs/snapshot directories can be problematic for backup/restore. There was an RFC a while ago to exclude them by default, but it seems it never made it in. You might want to add them to the --exclude-path setting in your /etc/vzdump.conf (to change the setting node-wide) or in each backup job configuration.
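For example, a node-wide exclusion could look roughly like this (a sketch only; the exact glob and vzdump.conf syntax may vary between versions, so double-check the vzdump man page for your installation):

Code:
# /etc/vzdump.conf -- node-wide defaults for all backup jobs
# skip the internal ZFS control directory (.zfs/shares, .zfs/snapshot)
exclude-path: /.zfs/?*

A one-off equivalent for a single job would be passing --exclude-path '/.zfs/?*' on the vzdump command line.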
 
Hey @fiona - thanks! I'm still confused though. This path is inside the LXC? Or on the PVE host? I cannot find it in either location (not sure where "." is).

Why would this work fine on one PVE host and not another? They are almost identical afaik.

Also, I'm not sure what I'd be losing by excluding this path from backups. FWIW these LXCs are pretty vanilla, not doing anything exotic by Proxmox standards.

Cheers!
 
Hey @fiona - thanks! I'm still confused though. This path is inside the LXC? Or on the PVE host? I cannot find it in either location (not sure where "." is).
It's inside the LXC, in the root of the container volume.
Why would this work fine on one PVE host and not another? They are almost identical afaik.
Likely one of them has snapdir=visible set and the other does not. You can check with zfs get snapdir <name of your ZFS> (a recursive variant is sketched just below this reply).
Also not sure what I'm losing by excluding this path from backups? FWIW these LXC's are pretty vanilla, not doing anything exotic by Proxmox standards.
It's just an internal directory used by ZFS that is usually not visible, so I'd expect you don't lose anything.
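In case it is useful for comparing the two nodes: zfs get also takes -r, so you can check the whole dataset hierarchy at once rather than one dataset at a time (a small sketch, assuming the default pool name rpool):

Code:
# show snapdir for the pool and every child dataset/snapshot
zfs get -r snapdir rpool

# limit the output to filesystems, skipping the snapshot rows
zfs get -r -t filesystem snapdir rpool/data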
 
Neat! Thank you. I verified and you are correct: for some reason (afaik I never set this) snapdir is visible on the problematic node. So I guess

zfs set snapdir=hidden rpool

should solve this too! Thank you :)
 
Glad it worked :) Please mark the thread as [SOLVED] by using the Edit Thread button above the first post and selecting the prefix. This helps other users find solutions more quickly.
 
Unsolved :(

Code:
root@proxmox-2:~# zfs get snapdir rpool
NAME   PROPERTY  VALUE    SOURCE
rpool  snapdir   hidden   local

And yet:

Code:
INFO: Starting Backup of VM 202 (lxc)
INFO: Backup started at 2023-10-22 01:00:01
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: lxc-adguard-2
INFO: including mount point rootfs ('/') in backup
INFO: stopping virtual guest
INFO: creating Proxmox Backup Server archive 'ct/202/2023-10-22T00:00:01Z'
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=encrypt --keyfd=11 pct.conf:/var/tmp/vzdumptmp1499097_202/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 202 --backup-time 1697932801 --repository DB0797@pbs@td-pbs
INFO: Starting backup: ct/202/2023-10-22T00:00:01Z
INFO: Client name: proxmox-2
INFO: Starting backup protocol: Sun Oct 22 01:00:03 2023
INFO: Using encryption key from file descriptor..
INFO: Encryption key fingerprint: e1:6d:dd:b7:1e:76:6d:58
INFO: No previous manifest available.
INFO: Upload config file '/var/tmp/vzdumptmp1499097_202/etc/vzdump/pct.conf' to 'pbs' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'pbs:8007:DB0797_td-pbs' as root.pxar.didx
INFO: catalog upload error - channel closed
INFO: Error: error at ".zfs/shares": EOPNOTSUPP: Operation not supported on transport endpoint
INFO: restarting vm
INFO: guest is online again after 3 seconds
ERROR: Backup of VM 202 failed - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup '--crypt-mode=encrypt' '--keyfd=11' pct.conf:/var/tmp/vzdumptmp1499097_202/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' --backup-type ct --backup-id 202 --backup-time 1697932801 --repository pbs' failed: exit code 255
INFO: Failed at 2023-10-22 01:00:04
INFO: Backup job finished with errors
TASK ERROR: job errors

Should it be hidden on all my datasets, not just the pool root? At the moment:


Code:
root@proxmox-2:~# zfs get snapdir
NAME                                                          PROPERTY  VALUE    SOURCE
rpool                                                         snapdir   hidden   local
rpool/ROOT                                                    snapdir   hidden   inherited from rpool
rpool/ROOT/pve-1                                              snapdir   hidden   inherited from rpool
rpool/data                                                    snapdir   visible  local
rpool/data/subvol-201-disk-0                                  snapdir   visible  inherited from rpool/data
rpool/data/subvol-201-disk-0@currents                         snapdir   -        -
rpool/data/subvol-201-disk-0@__replicate_201-1_1697966103__   snapdir   -        -
rpool/data/subvol-201-disk-0@__replicate_201-0_1697967003__   snapdir   -        -
rpool/data/subvol-202-disk-0                                  snapdir   visible  inherited from rpool/data
rpool/data/subvol-202-disk-0@__replicate_202-0_1697967001__   snapdir   -        -
rpool/data/subvol-202-disk-0@__replicate_202-1_1697967004__   snapdir   -        -
rpool/data/subvol-301-disk-0                                  snapdir   visible  inherited from rpool/data
rpool/data/subvol-301-disk-0@__replicate_301-0_1697943602__   snapdir   -        -
rpool/data/subvol-301-disk-0@__replicate_301-1_1697943620__   snapdir   -        -
rpool/data/subvol-302-disk-0                                  snapdir   visible  inherited from rpool/data
rpool/data/subvol-302-disk-0@__replicate_302-1_1697857226__   snapdir   -        -
rpool/data/subvol-302-disk-0@__replicate_302-0_1697943624__   snapdir   -        -
rpool/data/subvol-303-disk-0                                  snapdir   visible  inherited from rpool/data
rpool/data/subvol-303-disk-0@__replicate_303-1_1697857248__   snapdir   -        -
rpool/data/subvol-303-disk-0@__replicate_303-0_1697943642__   snapdir   -        -
rpool/data/subvol-304-disk-0                                  snapdir   visible  inherited from rpool/data
rpool/data/subvol-304-disk-0@__replicate_304-0_1697756403__   snapdir   -        -
rpool/data/subvol-305-disk-0                                  snapdir   visible  inherited from rpool/data
rpool/data/subvol-305-disk-0@__replicate_305-0_1697943657__   snapdir   -        -
rpool/data/subvol-306-disk-0                                  snapdir   visible  inherited from rpool/data
 
Ah ok, I see. They are all inheriting the snapdir setting from rpool/data, whereas I only set it on rpool. I checked my first node and everything is hidden there, so I think I'm safe to hide it on every dataset. Solved again :)
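For anyone else landing here: one way to do that is to clear the local snapdir=visible override on rpool/data so everything below it inherits hidden from rpool again (a sketch; zfs inherit simply drops the locally set value in favour of the parent's):

Code:
# confirm where the visible value is coming from
zfs get -r -t filesystem snapdir rpool/data

# drop the local override so rpool/data (and its children) inherit hidden from rpool
zfs inherit snapdir rpool/data

# or set it explicitly on that dataset instead
zfs set snapdir=hidden rpool/data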
 
