ceph upgrade

walter.egosson

Active Member
Sep 4, 2019
Hi!
I have Proxmox 4 on three nodes with Ceph Hammer on each.
I want to upgrade Ceph from Hammer to Jewel and then from Jewel to Hammer. Since the upgrade is done node by node, is there a risk during the process while some nodes run Ceph Hammer and the others (those already upgraded) run Ceph Jewel?
thx
 
Thanks a lot.
The upgrade notes seem incomplete to me.

Yes, I meant Jewel to Luminous. About the Ceph Hammer to Jewel step, I have some worries you could maybe help me with:
- When upgrading Ceph node by node, do I have to shut down the VMs/CTs on the to-be-upgraded Proxmox node before upgrading it, or can I live-migrate them to the two other nodes, let them run there, and finally migrate them back once the node is upgraded? (see the sketch after this list)
- Since the upgrade is progressive, there will be a moment when two Ceph Jewel nodes and one Ceph Hammer node coexist => will my data be lost?
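For the first point, I imagine doing something like this per VM before touching its node (the VM ID and node names are just examples from my head):

# live-migrate VM 101 to node pve2 while it keeps running
qm migrate 101 pve2 --online
# once pve1 is upgraded, run the same from pve2 to move it back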

Thanks!

i hope you mean luminous?

also did you see these sites:
https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel

https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous

note that luminous is only supported on pve 5.x (but you must do the hammer to jewel step while still on 4.x, before upgrading to 5; this is also written in the upgrade notes)
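the gist per node is roughly this (just a sketch; the repo file path is the usual one from a pve-managed ceph setup, double-check everything against the wiki before running it):

# switch the ceph apt repo from hammer to jewel and pull the new packages
sed -i 's/hammer/jewel/' /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get dist-upgrade
# then restart the mon on this node first, followed by its osds (see the wiki)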

just follow the upgrade guide node by node (this is also mentioned there)
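the rolling pattern per node looks roughly like this (a sketch, not the full guide; the daemon restart commands differ between hammer (sysvinit) and jewel (systemd), so take those from the wiki):

ceph osd set noout         # keep crush from rebalancing while a node's osds are down
# upgrade one node, restart its mon and then its osds, then:
ceph -s                    # wait for HEALTH_OK before moving to the next node
ceph osd unset noout       # only once the last node is done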

if you are unsure, maybe set up a non-production system where you can test the upgrade

as a general rule: always have backups (and verify they work)
 
We ended up adding a 6TB NFS storage and moving all VM disks there, leaving Ceph unused during the upgrade, with NO DOWNTIME.
Everything went straight through, absolutely no issues.
If it can help others:
Our NFS storage has a 1Gb interface (125MB/s theoretical maximum) and handled almost 50 VM hard drives (averaging 25MB/s writes and 10MB/s reads in aggregate), leaving a comfortable bandwidth margin (the absolute peak was ~110MB/s).

1/ connect the NFS storage (or any non-Ceph storage you have)
2/ live-migrate the VM disks to that storage (no downtime, but slightly slower than an offline migration); see the sketch below
3/ upgrade your cluster, so that if your Ceph cluster messes up, your VMs are not affected
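For steps 1/ and 2/, the CLI equivalent is roughly this (storage name, server, export and VM/disk IDs are examples, adjust to your setup):

# 1/ add the NFS storage cluster-wide
pvesm add nfs nfs-tmp --server 192.168.1.50 --export /srv/pve --content images
# 2/ move one VM disk off ceph while the VM keeps running (repeat per disk)
qm move_disk 101 virtio0 nfs-tmp --delete 1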
 
