Adding Ceph RBD Storage on Proxmox 6.0 from 8.3

stevensedory

Oct 26, 2019
We love migrating VMs from one host or cluster to another via RBD. It's very easy, with downtime limited to a quick reboot of the VM.

We have a current Proxmox "cluster" of two nodes that use local ZFS to host their VMs, running Proxmox 6.0-11. I know, I know. It was a set-and-forget setup, and it's been extremely reliable. The Ceph client version is now 15.2.17 (after some upgrading while trying to get this working, to no avail).

We've now built a new five-node cluster, which is hyperconverged (Proxmox and Ceph). That cluster runs Proxmox 8.3.0 and Ceph 18.2.

What we typically do is: on the old node/cluster, we add a storage of type RBD, and boom, the Ceph volume that actually lives on the new cluster is available on the old one. We then live-move the VM's storage to that location, shut the VM down, copy its config file to one of the new hosts, make a few necessary edits, save, and voilà, the VM is fully on the new cluster and we turn it on.
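For reference, the "Storage of Type RBD" step described above corresponds to an entry in /etc/pve/storage.cfg on the old cluster. A minimal sketch, where the monitor addresses and pool name are placeholders you'd replace with your new cluster's values (the storage ID here matches the keyring filename mentioned at the end of this post):

```
rbd: lax-hci01-VMs
        content images
        krbd 0
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool VMs
        username admin
```

The same entry can be created through the GUI (Datacenter > Storage > Add > RBD) instead of editing the file by hand.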

However, this old Proxmox cluster will not add the RBD storage from the new cluster. I've ruled out a config issue on the new cluster by connecting to the new RBD storage from another, newer standalone host with no problem. So the Ceph RBD side works.

My question is: does anyone know what the magic sauce might be, version-wise (I'm assuming it's a Ceph thing rather than a Proxmox thing?), to get my older cluster talking to the newer one? For example, if I installed 17.2 on the new cluster instead of 18.2, would the old nodes on 15.2 be able to talk to it?

I could of course follow an upgrade path to bring these two clustered Proxmox 6 hosts to version 8, but there are production VMs running, and that would cause unacceptable interruptions.

Thanks in advance prox/ceph wizards.
 
So if this helps anyone in the future, I got it working and here's how:

echo "deb http://download.proxmox.com/debian/ceph-nautilus buster main" | tee /etc/apt/sources.list.d/ceph.list
apt update
apt install --reinstall ceph-common librados2 librbd1
ceph -v   # check: should now report Ceph 14.2 (Nautilus)

echo "deb http://download.proxmox.com/debian/ceph-octopus buster main" > /etc/apt/sources.list.d/ceph.list
apt update
apt install --only-upgrade ceph-common librados2 librbd1
ceph -v   # check: should now report Ceph 15.2 (Octopus)
systemctl restart pvedaemon pvestatd   # for good measure, so Proxmox picks up the Ceph upgrade

Copy over the ceph.conf from the receiving cluster, and ensure the client.admin keyring is present in the following locations:

/etc/pve/priv/ceph.client.admin.keyring
/etc/pve/priv/ceph/lax-hci01-VMs.keyring   # replace lax-hci01-VMs with your storage ID
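With the storage attached, the cutover itself looks roughly like the following on the old cluster. This is a sketch, not a recipe: the VMID 100, disk name scsi0, and hostname new-node are all placeholders, and the remaining config edits (network bridge, storage names) depend on your environment:

```shell
# Live-move the VM's disk onto the shared RBD storage (old cluster)
qm move_disk 100 scsi0 lax-hci01-VMs

# Brief downtime starts here
qm shutdown 100

# Hand the VM config over to a node in the new cluster
scp /etc/pve/qemu-server/100.conf root@new-node:/etc/pve/qemu-server/

# On the new cluster: edit 100.conf as needed, then start the VM
# qm start 100
```

Note that on Proxmox 7+ the subcommand is spelled `qm move-disk` (or `qm disk move`); the underscore form shown above matches the Proxmox 6 hosts in this post.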