Moving VM disks to Ceph failing

tubatodd

Member
Feb 3, 2020
I've provisioned one Proxmox host with Ceph that has 2 SSDs as OSDs. Ceph is configured and running, and I've created a Ceph pool called "ceph-dev". When I attempt to move a VM's disk from local storage to Ceph, I get a lock error.

Code:
storage migration failed: error with cfs lock 'storage-ceph-dev': rbd error: got lock timeout - aborting command
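
For reference, the pool and placement-group state can be inspected on the node with something like the following (the pool name "ceph-dev" is the one from above). On a single-node setup the PGs will typically show as undersized or inactive if the pool cannot place enough replicas:

Code:
# overall cluster health and PG summary
ceph -s
ceph pg stat
# replication settings of the pool
ceph osd pool get ceph-dev size
ceph osd pool get ceph-dev min_size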
 
Ceph needs at least three nodes. If you want local RAID-like storage, ZFS is the better choice.
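
As a rough sketch, a mirrored ZFS pool could be created and added as Proxmox VE storage along these lines (the pool name "tank", the storage ID "zfs-local", and the device names are only placeholders):

Code:
# create a mirrored (RAID1-like) pool from two disks
zpool create -o ashift=12 tank mirror /dev/sdX /dev/sdY
# register it as a Proxmox VE storage for VM disks
pvesm add zfspool zfs-local --pool tank --content images,rootdir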
 
Ceph needs at least three nodes. If you want local RAID-like storage, ZFS is the better choice.
That is understood, and that is what we are doing. But in previous deployments we'd spin up the first node with Ceph and import a VM, then bring up another Ceph node and import its VM, and finally spin up the third node and the third VM. I don't recall ever having Ceph refuse to move a volume to the pool.
 
I don't recall ever having Ceph refuse to move a volume to the pool.
Not without adjusting settings that are vital for data safety. The safest and easiest approach is to set up all the nodes beforehand. That way you can also test the storage for performance and, to some extent, reliability.
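
For the performance part, a quick baseline of the Ceph pool can be taken with the built-in benchmark, for example (again assuming the pool "ceph-dev"):

Code:
# 10-second write benchmark, keeping the objects for a follow-up read test
rados bench -p ceph-dev 10 write --no-cleanup
# sequential read benchmark against the objects just written
rados bench -p ceph-dev 10 seq
# remove the benchmark objects afterwards
rados -p ceph-dev cleanup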
 
So I found out why this was failing. The Ceph pool had size: 3 and min_size: 2; those values are the number of replicas Ceph keeps of each object and the minimum that must be available before it will serve I/O, so with a single node the pool could never satisfy them and the move timed out. Setting size: 2 and min_size: 1 allowed me to do what I needed to do. Apparently those were the values I had used successfully before.
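
For anyone hitting the same thing, the pool values can be changed from the command line roughly like this (pool name "ceph-dev" as above). Keep in mind that min_size: 1 means a single copy is enough to accept writes, which is exactly the reduced data safety mentioned earlier:

Code:
ceph osd pool set ceph-dev size 2
ceph osd pool set ceph-dev min_size 1
# verify the new settings
ceph osd pool get ceph-dev size
ceph osd pool get ceph-dev min_size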
 
