Hi,
I created a CephFS to hold ISO images for a 3-node (n1, n2, n3) Proxmox cluster.
After mounting an ISO image on two nodes at the same time, Ceph shows:
HEALTH_WARN 1 clients failing to respond to capability release
[WRN] MDS_CLIENT_LATE_RELEASE: 1 clients failing to respond to capability release
mds.n1(mds.0): Client n2: failing to respond to capability release client_id: 84379
Also, changing ISO images while a VM is running is not possible (connection timeout).
Stopping this VM after trying to change (or even completely unmount) the ISO results in:
trying to acquire lock...
TASK ERROR: can't lock file '/var/lock/qemu-server/lock-103.conf' - got timeout
Restarting the MDS on the reporting node n1 "fixes" the problem (though the first ISO mount dies).
The problem is reproducible.
Are there any known issues? The ISOs are obviously read-only, so there should not be any locks, but apparently there are.
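For reference, this is roughly what I run to inspect the stuck session and work around it. The MDS name (mds.n1) and the client id (84379) are taken from the warning above; adapt them to your cluster. The eviction step is a sketch based on the standard Ceph client-eviction commands, not something I claim is the proper fix:

```shell
# Show the health warning and the MDS client sessions
ceph health detail
ceph tell mds.n1 session ls

# Alternative to a full MDS restart: evict only the stuck session
# (note: eviction blocklists the client by default)
ceph tell mds.n1 session evict id=84379

# What I actually did: restart the reporting MDS
# (after this, the first ISO mount dies)
systemctl restart ceph-mds@n1.service
```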
Code:
pveversion --verbose
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-7 (running version: 7.1-7/df5740ad)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph: 16.2.7
ceph-fuse: 16.2.7
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
openvswitch-switch: 2.15.0+ds1-2
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3