RBD locked

R0bin

Hi,
I bought a managed Ceph cluster from OVH (Cloud Disk Array). I followed OVH's documentation to configure the storage with RBD, and in my cluster I can see the storage with the purchased space and free space shown correctly.

I can see my storage and the free space is correct, so the storage is connected, but I can't use it for a VM or CT.

In the creation wizard everything looks fine, but when I click create, I get the following error after about a minute (cepherror.png):
TASK ERROR: unable to create VM 259 - error during cfs-locked 'storage-Cloud-Disk-Array' operation: rbd error: got lock timeout - aborting command

I'm using PVE 6.3-6

Can you explain what the problem is and how to solve it?

Thank you for your help
 
Hi,
were you able to resolve the issue in the meantime? Otherwise, please share the entry for the Ceph storage in /etc/pve/storage.cfg. Can you manually do a rbd ls <poolname>?
 
The problem is that the disk (RBD image) allocation took longer than 60s. How big was the disk? Maybe allocating a smaller one and resizing it afterwards helps? Either your disk is huge, or the Ceph cluster is slow ;)
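For example, roughly like this (just a sketch; the VM ID and options are placeholders):
Code:
# create the VM with a small disk on the Ceph storage first ...
qm create 259 --name testvm --memory 2048 --scsi0 Cloud-Disk-Array:1
# ... and once that succeeds, grow the disk to the size you actually want
qm resize 259 scsi0 +31G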
 
Hi Fabian,

The problem is not solved yet. Thank you for replying :)

In /etc/pve/storage.cfg I have:
Code:
cat /etc/pve/storage.cfg
dir: local-nvme
    path /var/lib/vz/datastore-nvme
    content rootdir,images,vztmpl,backup,snippets,iso
    prune-backups keep-last=3
    shared 1

dir: local-hdd
    path /var/lib/vz/datastore-hdd
    content iso,snippets,images,vztmpl,backup,rootdir
    prune-backups keep-last=3
    shared 1

dir: local
    disable
    path /var/lib/vz
    content images
    prune-backups keep-all=1
    shared 0

rbd: Cloud-Disk-Array
    content images
    krbd 0
    monhost 10.97.67.220 10.134.35.221 10.99.103.192
    pool ACC_VMS
    username ACC

  • I'm not sure what krbd 0 really means. Should I use it? Just by changing the 0 to a 1? Nothing to install or configure?
  • 10.97.67.220, 10.134.35.221 and 10.99.103.192 are mon_hosts provided by OVH. I can ping them.
  • My private network also uses the 10.0.0.0/8 (class A) range, so I created IP routes (the mon_hosts are on OVH's network, so I can only reach them via my public interface):
Code:
ip route add 10.97.67.220/32 dev vmbr0 via 51.1x.x.x #mon_host via public interface
ip route add 10.134.35.221/32 dev vmbr0 via 51.1x.x.x #mon_host via public interface
ip route add 10.99.103.192/32 dev vmbr0 via 51.1x.x.x #mon_host via public interface
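(Side note: to make these routes survive a reboot, I suppose I could add them as post-up lines to the existing vmbr0 stanza in /etc/network/interfaces; an untested sketch:)
Code:
# sketch only: lines appended inside the existing "iface vmbr0" stanza,
# with 51.1x.x.x being the public next hop used above
    post-up ip route add 10.97.67.220/32 dev vmbr0 via 51.1x.x.x
    post-up ip route add 10.134.35.221/32 dev vmbr0 via 51.1x.x.x
    post-up ip route add 10.99.103.192/32 dev vmbr0 via 51.1x.x.x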

  • Can you manually do a rbd ls <poolname>?
Here it is:

Code:
sudo rbd ls Cloud-Disk-Array
2021-04-20 10:50:09.258136 7f824c0c30c0 -1 did not load config file, using default settings.
2021-04-20 10:50:09.259132 7f824c0c30c0 -1 Errors while parsing config file!
2021-04-20 10:50:09.259134 7f824c0c30c0 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2021-04-20 10:50:09.259135 7f824c0c30c0 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2021-04-20 10:50:09.259135 7f824c0c30c0 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2021-04-20 10:50:09.259729 7f824c0c30c0 -1 Errors while parsing config file!
2021-04-20 10:50:09.259731 7f824c0c30c0 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2021-04-20 10:50:09.259732 7f824c0c30c0 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2021-04-20 10:50:09.259732 7f824c0c30c0 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
unable to get monitor info from DNS SRV with service name: 2021-04-20 10:50:09.266249 7f824c0c30c0 -1 failed for service _ceph-mon._tcp
ceph-mon
no monitors specified to connect to.
rbd: couldn't connect to the cluster!
rbd: list: (2) No such file or directory

  • This result is strange: I configured it via the PVE GUI and via storage.cfg, but nothing told me to configure ceph.conf. Should I add something like this?
    Code:
    [global]
    mon_host = 10.97.67.220,10.134.35.221,10.99.103.192


    The problem is that the disk (RBD image) allocation took longer than 60s. How big was the disk?
    I tried a VM with an 8 GB disk (pretty sure Ceph can deal with 8 GB :D ) and CT templates (less than 500 MB). Edit: I just tried with a 600 MB disk allocation, but got the same result.
 
So this is an external cluster IIUC?

Code:
rbd: Cloud-Disk-Array
    content images
    krbd 0
    monhost 10.97.67.220 10.134.35.221 10.99.103.192
    pool ACC_VMS
    username ACC
Did you put the key for the external cluster in /etc/pve/priv/ceph/Cloud-Disk-Array.keyring (double check for typos)?

I'm not sure what krbd 0 really means. Should I use it? Just by changing the 0 to a 1? Nothing to install or configure?
It's a setting that controls whether to use the RBD kernel module or not; it should not be relevant here.

  • 10.97.67.220, 10.134.35.221 and 10.99.103.192 are mon_hosts provided by OVH. I can ping them.
  • My private network also uses the 10.0.0.0/8 (class A) range, so I created IP routes (the mon_hosts are on OVH's network, so I can only reach them via my public interface):
Code:
ip route add 10.97.67.220/32 dev vmbr0 via 51.1x.x.x #mon_host via public interface
ip route add 10.134.35.221/32 dev vmbr0 via 51.1x.x.x #mon_host via public interface
ip route add 10.99.103.192/32 dev vmbr0 via 51.1x.x.x #mon_host via public interface


Here it is:

Code:
sudo rbd ls Cloud-Disk-Array
2021-04-20 10:50:09.258136 7f824c0c30c0 -1 did not load config file, using default settings.
2021-04-20 10:50:09.259132 7f824c0c30c0 -1 Errors while parsing config file!
2021-04-20 10:50:09.259134 7f824c0c30c0 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2021-04-20 10:50:09.259135 7f824c0c30c0 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2021-04-20 10:50:09.259135 7f824c0c30c0 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2021-04-20 10:50:09.259729 7f824c0c30c0 -1 Errors while parsing config file!
2021-04-20 10:50:09.259731 7f824c0c30c0 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2021-04-20 10:50:09.259732 7f824c0c30c0 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2021-04-20 10:50:09.259732 7f824c0c30c0 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
unable to get monitor info from DNS SRV with service name: 2021-04-20 10:50:09.266249 7f824c0c30c0 -1 failed for service _ceph-mon._tcp
ceph-mon
no monitors specified to connect to.
rbd: couldn't connect to the cluster!
rbd: list: (2) No such file or directory

  • This result is strange: I configured it via the PVE GUI and via storage.cfg, but nothing told me to configure ceph.conf. Should I add something like this?
    Code:
    [global]
    mon_host = 10.97.67.220,10.134.35.221,10.99.103.192



    I tried a VM with an 8 GB disk (pretty sure Ceph can deal with 8 GB :D ) and CT templates (less than 500 MB). Edit: I just tried with a 600 MB disk allocation, but got the same result.
For an external cluster, the command needs some additional options; can you try the following?
Code:
rbd ls -p Cloud-Disk-Array -m 10.97.67.220 --auth_supported cephx -n client.ACC --keyring /etc/pve/priv/ceph/Cloud-Disk-Array.keyring
 
So this is an external cluster IIUC?
I don't understand "IIUC". I have a Proxmox cluster (a private network for hosts and VMs + a public network on all nodes). The Cloud Disk Array is a managed Ceph-as-a-service offering.

Did you put the key for the external cluster in /etc/pve/priv/ceph/Cloud-Disk-Array.keyring (double check for typos)?
In /etc/pve/priv/ceph/Cloud-Disk-Array.keyring I have something like this (scrambled):
Code:
[client.ACC]
        key =  ChieL1vu+tieshain4aengodaeCheeX3ooshuB==

It's a setting that controls whether to use the RBD kernel module or not; it should not be relevant here.
Thank you.

For an external cluster, the command needs some additional options; can you try the following?
Code:
rbd ls -p Cloud-Disk-Array -m 10.97.67.220 --auth_supported cephx -n client.ACC --keyring /etc/pve/priv/ceph/Cloud-Disk-Array.keyring

Here is the output:

Code:
rbd ls -p Cloud-Disk-Array -m 10.97.67.220 --auth_supported cephx -n client.ACC --keyring /etc/pve/priv/ceph/Cloud-Disk-Array.keyring
2021-04-20 14:45:21.622280 7fb2db35e0c0 -1 did not load config file, using default settings.
2021-04-20 14:45:21.623575 7fb2db35e0c0 -1 Errors while parsing config file!
2021-04-20 14:45:21.623577 7fb2db35e0c0 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2021-04-20 14:45:21.623577 7fb2db35e0c0 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2021-04-20 14:45:21.623577 7fb2db35e0c0 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2021-04-20 14:45:21.624101 7fb2db35e0c0 -1 Errors while parsing config file!
2021-04-20 14:45:21.624103 7fb2db35e0c0 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2021-04-20 14:45:21.624104 7fb2db35e0c0 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2021-04-20 14:45:21.624104 7fb2db35e0c0 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
rbd: error opening pool 'Cloud-Disk-Array': (2) No such file or directory
rbd: list: (2) No such file or directory
 
I don't understand "IIUC". I have a Proxmox cluster (a private network for hosts and VMs + a public network on all nodes). The Cloud Disk Array is a managed Ceph-as-a-service offering.
IIUC is an abbreviation of "if I understand correctly".

Code:
rbd ls -p Cloud-Disk-Array -m 10.97.67.220 --auth_supported cephx -n client.ACC --keyring /etc/pve/priv/ceph/Cloud-Disk-Array.keyring
2021-04-20 14:45:21.622280 7fb2db35e0c0 -1 did not load config file, using default settings.
I forgot to mention that usually the default settings are fine, because PVE adds the relevant ones as command line parameters (similar to the command above).
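If you want plain rbd/ceph commands to work without passing all those options, a minimal /etc/ceph/ceph.conf built from the values in your storage.cfg should be enough (a sketch, not something PVE itself needs):
Code:
[global]
    mon_host = 10.97.67.220,10.134.35.221,10.99.103.192

[client.ACC]
    keyring = /etc/pve/priv/ceph/Cloud-Disk-Array.keyring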

Code:
rbd: error opening pool 'Cloud-Disk-Array': (2) No such file or directory
rbd: list: (2) No such file or directory
Sorry, the pool name is ACC_VMS. But the connection and authentication seem to work, as otherwise I'd expect an error earlier.
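For completeness, that would be the same command as before, only with the pool name changed:
Code:
rbd ls -p ACC_VMS -m 10.97.67.220 --auth_supported cephx -n client.ACC --keyring /etc/pve/priv/ceph/Cloud-Disk-Array.keyring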

Can you do a pvesm list Cloud-Disk-Array? What about
Code:
pvesm alloc rbdkvm 1234 vm-1234-deleteme 1G
pvesm free rbdkvm:vm-1234-deleteme
to allocate and remove a disk.

Please also post your pveversion -v.
 
IIUC is an abbreviation of "if I understand correctly".
Thank you for the translation :rolleyes:

Sorry, the pool name is ACC_VMS. But the connection and authentication seem to work, as otherwise I'd expect an error earlier.
-> Yes, as explained at the top: in the datastore view I can see the 2 TB I bought, so authentication seems to work.
Screenshot from 2021-04-21 14-27-47.png
Can you do a pvesm list Cloud-Disk-Array? What about
Code:
pvesm alloc rbdkvm 1234 vm-1234-deleteme 1G
pvesm free rbdkvm:vm-1234-deleteme
to allocate and remove a disk.
I replaced rbdkvm in your command with the CDA storage name:
Code:
$ time pvesm alloc Cloud-Disk-Array 1234 vm-1234-deleteme 1G
trying to acquire cfs lock 'storage-Cloud-Disk-Array' ...
trying to acquire cfs lock 'storage-Cloud-Disk-Array' ...
trying to acquire cfs lock 'storage-Cloud-Disk-Array' ...
trying to acquire cfs lock 'storage-Cloud-Disk-Array' ...
trying to acquire cfs lock 'storage-Cloud-Disk-Array' ...
trying to acquire cfs lock 'storage-Cloud-Disk-Array' ...
trying to acquire cfs lock 'storage-Cloud-Disk-Array' ...
trying to acquire cfs lock 'storage-Cloud-Disk-Array' ...
trying to acquire cfs lock 'storage-Cloud-Disk-Array' ...
error during cfs-locked 'storage-Cloud-Disk-Array' operation: got lock request timeout

real    0m9.284s
user    0m0.233s
sys    0m0.033s
same result when trying "free".

Please also post your pveversion -v.
Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
ceph: 12.2.11+dfsg1-2.1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-9
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
pve-zsync: 2.0-4
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

The Proxmox nodes are up to date.
 
Seems like the lock directory was not correctly released. Please try removing it manually and try again.
Code:
rmdir /etc/pve/priv/lock/storage-Cloud-Disk-Array

Is the node part of a cluster? If yes, please share the output of the following:
Code:
pvecm status
journalctl -u pve-cluster.service
 
There is no lock directory for this storage:
Code:
ls -lha /etc/pve/priv/lock/
total 0
drwx------ 2 root www-data 0 Mar  3  2020 .
drwx------ 2 root www-data 0 Mar  3  2020 ..
drwx------ 2 root www-data 0 Feb 11 15:30 ha_agent_acc-host-0001_lock
drwx------ 2 root www-data 0 Feb 11 15:31 ha_agent_acc-host-0002_lock
drwx------ 2 root www-data 0 Jan 22 04:43 ha_agent_acc-host-0003_lock
drwx------ 2 root www-data 0 Feb 11 15:31 ha_agent_acc-host-0004_lock
drwx------ 2 root www-data 0 Feb 11 15:31 ha_agent_acc-host-0005_lock
drwx------ 2 root www-data 0 Oct 19  2020 ha_agent_acc-host-0006_lock
drwx------ 2 root www-data 0 Feb 11 15:31 ha_manager_lock

So rmdir /etc/pve/priv/lock/storage-Cloud-Disk-Array doesn't remove anything.

And here is pvecm status:
Code:
Cluster information
-------------------
Name:             ACC-cluster
Config Version:   11
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Thu Apr 22 09:27:13 2021
Quorum provider:  corosync_votequorum
Nodes:            6
Node ID:          0x00000001
Ring ID:          1.2d527
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   6
Highest expected: 6
Total votes:      6
Quorum:           4 
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.80.0.0 (local)
0x00000002          1 10.80.0.1
0x00000003          1 10.80.0.2
0x00000004          1 10.80.0.3
0x00000005          1 10.80.0.5
0x00000006          1 10.80.0.6

And the latest logs:
Code:
Apr 20 09:28:08 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: all data is up to date
Apr 20 09:28:08 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received all states
Apr 20 09:28:08 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: all data is up to date
Apr 20 09:40:07 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 20 09:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 10:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 11:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 12:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 13:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 14:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 15:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 16:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 17:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 18:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 19:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 20:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 21:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 22:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 20 23:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 00:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 01:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 02:26:07 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 02:26:10 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 02:35:46 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 02:35:49 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 02:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 03:01:03 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 03:01:06 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 03:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 04:06:48 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 04:06:51 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 04:28:50 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 04:28:57 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 04:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 05:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 06:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 07:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 08:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 09:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 10:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 11:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 12:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 13:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 14:29:48 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 14:29:53 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 14:31:42 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 14:31:51 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 21 14:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 15:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 16:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 17:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 18:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 19:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 20:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 21:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 22:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 21 23:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 22 00:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 22 01:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 22 02:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 22 03:01:04 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 22 03:01:07 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 22 03:39:47 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 22 03:39:53 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 22 03:44:42 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 22 03:44:45 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 22 03:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 22 04:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 22 05:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 22 05:51:35 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 22 05:51:42 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 22 05:54:45 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 22 05:54:47 acc-host-0001.**FQDN** pmxcfs[28793]: [status] notice: received log
Apr 22 06:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 22 07:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 22 08:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 22 09:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
Apr 22 10:50:10 acc-host-0001.**FQDN** pmxcfs[28793]: [dcdb] notice: data verification successful
 
There is no lock directory for this storage:
Code:
ls -lha /etc/pve/priv/lock/
total 0
drwx------ 2 root www-data 0 Mar  3  2020 .
drwx------ 2 root www-data 0 Mar  3  2020 ..
drwx------ 2 root www-data 0 Feb 11 15:30 ha_agent_acc-host-0001_lock
drwx------ 2 root www-data 0 Feb 11 15:31 ha_agent_acc-host-0002_lock
drwx------ 2 root www-data 0 Jan 22 04:43 ha_agent_acc-host-0003_lock
drwx------ 2 root www-data 0 Feb 11 15:31 ha_agent_acc-host-0004_lock
drwx------ 2 root www-data 0 Feb 11 15:31 ha_agent_acc-host-0005_lock
drwx------ 2 root www-data 0 Oct 19  2020 ha_agent_acc-host-0006_lock
drwx------ 2 root www-data 0 Feb 11 15:31 ha_manager_lock

So rmdir /etc/pve/priv/lock/storage-Cloud-Disk-Array doesn't remove anything.

There was a lock when you tried the operations (did you interact with the storage before issuing the alloc command?), but it most likely was released by the time you issued the rmdir command.

Please check that there is no lock present and then try issuing the pvesm alloc Cloud-Disk-Array 1234 vm-1234-deleteme 1G command again.
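Something along these lines (a rough sketch combining the commands from above):
Code:
# verify that no stale lock directory exists for the storage ...
ls -ld /etc/pve/priv/lock/storage-Cloud-Disk-Array 2>/dev/null || echo "no lock present"
# ... and then retry the allocation
pvesm alloc Cloud-Disk-Array 1234 vm-1234-deleteme 1G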

Did the pvesm list Cloud-Disk-Array command work?
 
