Encrypted ZFS datasets empty after manual mount

Athlon

New Member
Dec 31, 2024
Hello!
I am struggling with a rather weird problem.
I had been running Proxmox 7.4.1 (without subscription) without any issues for a long time, until recently the SATA controller card locked up and I had to do a hard shutdown. I then connected the 4 hard drives to the internal SATA ports and booted up.
The pool "ZFS_2" is encrypted and gets mounted with zfs mount -l ZFS_2 after logging in via SSH:
Code:
zfs get mounted
NAME                          PROPERTY  VALUE    SOURCE
ZFS_2                         mounted   yes      -
ZFS_2/Backups                 mounted   yes      -
ZFS_2/Multimedia              mounted   yes      -
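For completeness, the rough explicit equivalent of that zfs mount -l one-liner (assuming the child datasets inherit their encryption from the pool root) would be something like:
Code:
# Load the encryption key for the pool root (prompts for the passphrase);
# children that share this encryption root are unlocked along with it
zfs load-key ZFS_2
# Mount every dataset whose key is loaded and which is not mounted yet
zfs mount -a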
However, df -h shows a suspiciously small amount of used space (256K) for the datasets "Backups" and "Multimedia", and the mountpoints /mnt/ZFS_2/Backups and /mnt/ZFS_2/Multimedia appear to be empty:
Code:
Filesystem                    Size  Used Avail Use% Mounted on
...
ZFS_2                          46T   23T   24T  50% /mnt/ZFS_2
ZFS_2/Backups                  24T  256K   24T   1% /mnt/ZFS_2/Backups
ZFS_2/Multimedia               24T  256K   24T   1% /mnt/ZFS_2/Multimedia
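For comparison, the per-dataset view of where that space is actually accounted can be pulled straight from ZFS (a quick check, independent of the mount state):
Code:
# Dataset-level accounting, regardless of what is currently mounted where
zfs list -r -o name,used,referenced,available,mountpoint ZFS_2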

The canmount property is set to on for the pool and both datasets (zfs get canmount):
Code:
NAME                          PROPERTY  VALUE     SOURCE
ZFS_2                         canmount  on        default
ZFS_2/Backups                 canmount  on        default
ZFS_2/Multimedia              canmount  on        default

As it turns out, after manually unmounting with zfs unmount ZFS_2/Backups and immediately listing the mountpoint's contents with ls /mnt/ZFS_2/Backups/, the directory structure including files and subdirectories is visible for a brief moment. About two seconds later, ls /mnt/ZFS_2/Backups/ returns an empty mountpoint again - apparently something has mounted the dataset on top of it once more. The same is true for the Multimedia directory. I have read up a lot on "double mounting", "empty mount directories" etc., but have not found a permanent solution yet, unless I set canmount=off for the pool "ZFS_2" and its datasets.
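One way to look at what is sitting underneath the mountpoint without racing against the automatic remount (a sketch using the paths above) could be:
Code:
# Show every filesystem currently mounted at the path; stacked mounts show up
# as multiple lines
findmnt /mnt/ZFS_2/Backups
# Peek under the active mount without unmounting it: a plain (non-recursive)
# bind mount of the parent does not carry child mounts along, so the underlying
# directory becomes visible
mkdir -p /tmp/peek
mount --bind /mnt/ZFS_2 /tmp/peek
ls -la /tmp/peek/Backups/
umount /tmp/peek
rmdir /tmp/peek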
I suspect some misconfiguration in my Proxmox setup (cache?) or on the pool "ZFS_2", since my other pool "ZFS" has never shown this behaviour. Any help would be greatly appreciated!

###################################################################
Additional information:
SMART tests using smartctl -t long for all four drives finished without error.
The command zpool status shows no errors:
Code:
pool: ZFS_2
 state: ONLINE
  scan: scrub repaired 0B in 21:13:41 with 0 errors on Mon Dec 23 16:51:42 2024
config:

        NAME                                   STATE     READ WRITE CKSUM
        ZFS_2                                  ONLINE       0     0     0
          raidz1-0                             ONLINE       0     0     0
            ata-ST18000NM000J-2TV103           ONLINE       0     0     0
            ata-ST18000NM000J-2TV103           ONLINE       0     0     0
            ata-ST18000NM000J-2TV103           ONLINE       0     0     0
            ata-ST18000NM000J-2TV103           ONLINE       0     0     0

errors: No known data errors
Code:
zpool --version
zfs-2.1.15-pve1
zfs-kmod-2.1.15-pve1
 
That is expected ZFS behaviour: an encrypted dataset does not show its data while it is still locked or not properly mounted yet. Set canmount off on the datasets and use a systemd unit to mount them after the key has been supplied.
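A minimal sketch of that suggestion (file and unit names are made up, and canmount=noauto is used instead of off because, unlike off, it still allows the dataset to be mounted explicitly):
Code:
# Keep ZFS from auto-mounting the children at boot; noauto still permits manual mounts
zfs set canmount=noauto ZFS_2/Backups
zfs set canmount=noauto ZFS_2/Multimedia

# Hypothetical oneshot unit that mounts the datasets once the key is available
cat > /etc/systemd/system/zfs2-mount.service <<'EOF'
[Unit]
Description=Mount encrypted ZFS_2 datasets (start after zfs load-key ZFS_2)
After=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zfs mount ZFS_2/Backups
ExecStart=/usr/sbin/zfs mount ZFS_2/Multimedia
EOF
systemctl daemon-reload

After a reboot one would then run zfs load-key ZFS_2 followed by systemctl start zfs2-mount.service instead of the current zfs mount -l one-liner.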
 
