A slightly strange issue of my own making that I'm not sure how to solve: stale file handles on a Gluster Distribute volume, seen through the CIFS share of it.
Console output:
Code:
root@holodeck:/mnt/pve/blackhole/Media/Downloads/Incomplete# ls -lah
ls: cannot access 'Some_File_Name': Stale file handle
ls: cannot access 'Some_File_Name': Stale file handle
ls: cannot access 'Some_File_Name': Stale file handle
ls: cannot access 'Some_File_Name': Stale file handle
total 8.0K
drwxrwxrwx 21 200000 200000 4.0K Jul 28 10:22 .
drwxrwxrwx 1 200000 200000 4.0K Jun 28 14:26 ..
d????????? ? ? ? ? ? Some_File_Name
d????????? ? ? ? ? ? Some_File_Name
d????????? ? ? ? ? ? Some_File_Name
d????????? ? ? ? ? ? Some_File_Name
Gluster setup:
Code:
(pve-manager/8.0.3/bbf3993334bfa916 (running kernel: 6.2.16-4-pve))
3-node Proxmox cluster, but only two of the nodes have Gluster storage bricks:
Code:
Volume Name: blackhole
Type: Distribute
Volume ID: c863a60a-6e96-4a6f-bc5e-0815e889fe05
Status: Started
Snapshot Count: 0
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: 10.0.0.10:/blackhole_01/nas-01
Brick2: 10.0.0.11:/blackhole_02/nas-02
Brick3: 10.0.0.11:/blackhole_04/nas
Brick4: 10.0.0.11:/blackhole_03/nas-02
Brick5: 10.0.0.10:/blackhole_05/nas
Options Reconfigured:
performance.cache-size: 1GB
performance.cache-samba-metadata: on
network.inode-lru-limit: 16384
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: off
features.cache-invalidation-timeout: 600
features.cache-invalidation: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
GlusterFS is mounted at /mnt/pve/blackhole and shared via CIFS from an LXC container with a mount point (appropriate permissions; everything "works").
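For reference, the chain looks roughly like this; the container ID, mount point index, and share name below are placeholders, not my exact config:

Code:
# Gluster volume mounted by Proxmox storage at /mnt/pve/blackhole,
# passed into the container as a bind mount point (hypothetical CT ID 101):
pct set 101 -mp0 /mnt/pve/blackhole,mp=/mnt/blackhole

# inside the container, Samba exports it; a minimal smb.conf share stanza:
[blackhole]
    path = /mnt/blackhole
    read only = no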
My screw-up:
My Ubuntu VM was complaining that my CIFS share didn't support server inodes, so I added "noserverino" to my mount options. Now I have files within the above folders that I can't remove or modify.
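Concretely, the VM's fstab entry went from something like the first line to the second (server name, share, and credentials path are placeholders):

Code:
# before: inode numbers come from the server (serverino is the cifs default)
//nas/blackhole  /mnt/blackhole  cifs  credentials=/root/.smbcred,uid=1000,gid=1000  0  0
# after: the client generates its own inode numbers
//nas/blackhole  /mnt/blackhole  cifs  credentials=/root/.smbcred,uid=1000,gid=1000,noserverino  0  0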
My understanding of the resolution path is to unmount and remount the share. But does the order in which the clients remount matter? Thoughts?
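What I was planning to try is sketched below; the mount point and share names are placeholders, and the ordering is my guess, not a confirmed procedure:

Code:
# on the Ubuntu VM (and any other client of this CIFS share):
umount /mnt/blackhole
# remount with the original options so inode numbers come from the server again
mount -t cifs //nas/blackhole /mnt/blackhole -o credentials=/root/.smbcred,uid=1000,gid=1000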