Moving VM disks to Ceph failing

tubatodd

Active Member
Feb 3, 2020
I've provisioned one Proxmox host with Ceph, using 2 SSDs as OSDs. Ceph is configured and running, and I've created a Ceph pool called "ceph-dev". When I attempt to move a VM disk from local storage to Ceph, I get a lock error.

Code:
storage migration failed: error with cfs lock 'storage-ceph-dev': rbd error: got lock timeout - aborting command
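A quick sanity check of cluster health and the pool's replication settings (assuming the "ceph-dev" pool above) would look something like this:

Code:
ceph -s
ceph health detail
ceph osd pool get ceph-dev size
ceph osd pool get ceph-dev min_size

If the placement groups show up as inactive or undersized, rbd writes to the pool will block, which would match the lock timeout above.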
 
Ceph needs at least three nodes. If you want local RAID-like storage, you're better off with ZFS.
 
Ceph needs at least three nodes. If you want local RAID-like storage, you're better off with ZFS.
That is understood, and that is what we are doing. But in previous deployments we'd spin up the first node with Ceph and import a VM, then bring up a second Ceph node and import its VM, and finally spin up the third node and the third VM. I don't recall ever having Ceph refuse to move a volume to the pool.
 
I don't recall ever having Ceph refuse to move a volume to the pool.
Not without adjusting settings that are vital for data safety. The safest and easiest approach is to set up all the nodes beforehand. That way you can also test the storage for performance and, to some extent, reliability.
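For the performance side, a simple way to exercise a pool before trusting it with VM disks is rados bench; a minimal run against the "ceph-dev" pool mentioned above might look like this:

Code:
rados bench -p ceph-dev 30 write --no-cleanup
rados bench -p ceph-dev 30 seq
rados -p ceph-dev cleanup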
 
So I found out why this was failing. The pool had size: 3 and min_size: 2. Those values are the number of replicas Ceph keeps of each object and the minimum number of replicas that must be available before the pool will serve I/O, not the number of cluster members. With only one node the pool could never satisfy min_size: 2, so the rbd command hung until the lock timed out. Setting it to size: 2 and min_size: 1 allowed me to do what I needed to do; apparently those were the numbers I had used successfully before.
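For anyone else who runs into this, the pool settings can be adjusted with something like the following (using the "ceph-dev" pool from above; note that size 2 / min_size 1 sacrifices redundancy, so the values should be raised back to 3 / 2 once the other nodes have joined):

Code:
ceph osd pool set ceph-dev size 2
ceph osd pool set ceph-dev min_size 1
ceph osd pool get ceph-dev size
ceph osd pool get ceph-dev min_size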