What about Ceph Luminous 12.2.7?

casparsmit

Active Member
Feb 24, 2015
38
1
28
+1 for this

I am using EC pools with 12.2.5, so I'm fearing the worst, but hopefully my workload didn't trigger any corruption.
 

Alwin

Proxmox Staff Member
Staff member
Aug 1, 2017
4,617
443
88
Do you have an ETA for the important bug-fix update to 12.2.7?
We are currently testing v12.2.7 and the upgrade to it. The packages will be released soon.

I am using EC pools with 12.2.5, so I'm fearing the worst, but hopefully my workload didn't trigger any corruption.
From my understanding, the Ceph health status would indicate if you had corrupted objects. Issues ranging from failing filestore OSDs to PGs that are unable to peer were reported to Ceph. The upgrade needs some extra precautions and steps.
https://ceph.com/releases/12-2-7-luminous-released/

EDIT: finished my unfinished sentence ;)
 

Alwin

Proxmox Staff Member
Staff member
Aug 1, 2017
4,617
443
88
The ceph luminous v12.2.7-pve1 update has been released on the pvetest repository.
https://pve.proxmox.com/wiki/Package_Repositories#_proxmox_ve_test_repository

Note on the update:
If you used Ceph v12.2.5 in combination with erasure-coded (EC) pools, or v12.2.6 with BlueStore, there is a small risk of corruption under certain workloads. For example, OSD service restarts may be a trigger. The aforementioned operation cases are unsupported by Proxmox.

See the link below for further upgrade instructions.
http://docs.ceph.com/docs/master/releases/luminous/#v12-2-7-luminous
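A quick way to check whether a cluster falls into one of the affected cases is to look at the running daemon versions, the pool types, and the OSD backends (a sketch; the exact output format differs between releases):
Code:
## which versions the running daemons report
ceph versions
## any erasure-coded pools? (replicated pools won't show 'erasure')
ceph osd pool ls detail | grep erasure
## objectstore backend (bluestore/filestore) per OSD
ceph osd metadata | grep osd_objectstore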
 

udo

Famous Member
Apr 22, 2009
5,918
180
83
Ahrensburg; Germany
Hi Alwin,
thanks for the info.

I have updated the first cluster without issues (but I don't use EC and have no heavy writes on this cluster).

Since I don't want to switch the repo, I use wget and do a manual update:
Code:
## on all nodes
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/ceph-base_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/ceph-common_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/ceph-fuse_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/ceph-mds_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/ceph-mgr_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/ceph-mon_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/ceph-osd_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/ceph-resource-agents_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/ceph_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/libcephfs2_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/librados2_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/libradosstriper1_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/librbd1_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/librgw2_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/python-ceph_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/python-cephfs_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/python-rados_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/python-rbd_12.2.7-pve1_amd64.deb
wget http://download.proxmox.com/debian/ceph-luminous/dists/stretch/test/binary-amd64/python-rgw_12.2.7-pve1_amd64.deb

dpkg -l | grep '12.2.*-pve1'

dpkg -i librados2_12.2.7-pve1_amd64.deb libradosstriper1_12.2.7-pve1_amd64.deb librbd1_12.2.7-pve1_amd64.deb librgw2_12.2.7-pve1_amd64.deb libcephfs2_12.2.7-pve1_amd64.deb
dpkg -i python-rgw_12.2.7-pve1_amd64.deb python-rbd_12.2.7-pve1_amd64.deb python-rados_12.2.7-pve1_amd64.deb python-cephfs_12.2.7-pve1_amd64.deb python-ceph_12.2.7-pve1_amd64.deb
dpkg -i ceph-base_12.2.7-pve1_amd64.deb ceph-common_12.2.7-pve1_amd64.deb
dpkg -i ceph-fuse_12.2.7-pve1_amd64.deb ceph-mds_12.2.7-pve1_amd64.deb ceph-mgr_12.2.7-pve1_amd64.deb ceph-mon_12.2.7-pve1_amd64.deb ceph-osd_12.2.7-pve1_amd64.deb
dpkg -i ceph_12.2.7-pve1_amd64.deb

dpkg -l | grep '12.2.*-pve1'

ceph versions

## restart services - wait for the next one until the cluster is healthy again
1.
/etc/init.d/ceph restart   ## restart mon + mds - on all nodes
ceph versions

2.
systemctl restart ceph-mgr@pve01.service ## name differs on each node
3.
systemctl restart ceph-osd@0 ## and so on
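## Optional sketch for step 3: restart this node's OSDs one at a time and
## wait for HEALTH_OK in between (assumptions: the OSD ids local to this
## node, and a 10s poll interval - adjust both)
for id in 0 1 2; do
    systemctl restart ceph-osd@$id
    until ceph health | grep -q HEALTH_OK; do
        sleep 10
    done
done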
Udo
 

Patrick Zippenfenig

Active Member
Mar 27, 2014
44
6
28
Regarding release on production repositories

Edit: I switched to the 'test' release channel, but it would be nice to have it on the release channel soon
 

RobFantini

Renowned Member
May 24, 2012
1,950
80
68
Boston,Mass
quick question,

we have Ceph version 12.2.5-pve1.

do I need to take special precautions for the 12.2.7-pve1 upgrade?

Also:
I know we use BlueStore;
how can I tell if 'erasure-coded pools' are in use?
 

Alwin

Proxmox Staff Member
Staff member
Aug 1, 2017
4,617
443
88
Note on the update:
If you used Ceph v12.2.5 in combination with erasure-coded (EC) pools, or v12.2.6 with BlueStore, there is a small risk of corruption under certain workloads. For example, OSD service restarts may be a trigger. The aforementioned operation cases are unsupported by Proxmox.

See the link below for further upgrade instructions.
http://docs.ceph.com/docs/master/releases/luminous/#v12-2-7-luminous

do I need to take special precautions for the 12.2.7-pve1 upgrade?
This is a broad question, but I hope the link to the Ceph release notes can answer it.

how can I tell if 'erasure-coded pools' are in use?
EC pools need some extra configuration, so you probably aren't using them. If you are unsure, run the following command to see if your pool is running an EC profile.
Code:
ceph osd pool get <pool> erasure_code_profile
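To check all pools at once, that command can be wrapped in a loop (a sketch; replicated pools return an error for this key, which is mapped to 'replicated' here):
Code:
for pool in $(ceph osd pool ls); do
    printf '%s: ' "$pool"
    ceph osd pool get "$pool" erasure_code_profile 2>/dev/null || echo replicated
done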
 

hahosting

Member
Aug 20, 2018
10
2
8
Sheffield
www.hahosting.com
Hi,

We're about to upgrade a 6-node Proxmox/Ceph cluster to 12.2.7. Do I need to pause IO to the Ceph cluster even if we're _not_ using EC pools? The official Ceph docs are unclear on this IMHO.

Thanks,
Stu.
 

RobFantini

Renowned Member
May 24, 2012
1,950
80
68
Boston,Mass
In our case, using PVE defaults, the 5 nodes had no issue. As far as I know, PVE never used the Ceph release with the bug; at least in our case we did not see that release.
 

hahosting

Member
Aug 20, 2018
10
2
8
Sheffield
www.hahosting.com
Hi Rob, PVE did use the release with the bug (12.2.5); 5 of our 6 nodes are on 12.2.5, as they were last updated in July. We added the 6th node this month, which has 12.2.7 installed, hence wanting to bring all the nodes to the same level.

@udo - when you upgraded your clusters, did you pause the IO first or just run an "apt-get dist-upgrade" on each node?

Thanks,
Stu.
 

udo

Famous Member
Apr 22, 2009
5,918
180
83
Ahrensburg; Germany
Hi Rob, PVE did use the release with the bug (12.2.5); 5 of our 6 nodes are on 12.2.5, as they were last updated in July. We added the 6th node this month, which has 12.2.7 installed, hence wanting to bring all the nodes to the same level.

@udo - when you upgraded your clusters, did you pause the IO first or just run an "apt-get dist-upgrade" on each node?

Thanks,
Stu.
Hi Stu,
no, I didn't pause the IO.
Upgrade all mon nodes and restart all mon processes; after that, the mgrs and OSDs, and lastly all nodes without mons/OSDs.

Udo
 

hahosting

Member
Aug 20, 2018
10
2
8
Sheffield
www.hahosting.com
Hi all, I meant to report back after doing the upgrade at the end of August... The upgrade went OK with no loss of service.

We had VMs on an EC pool with a 3/2 replication cache pool. Just to be on the safe side, we created a new 2/1 temp pool, migrated the VMs to it, and left some test VMs on the EC pool to see what would happen.

So before the upgrade, we had:
- 6 nodes, 5 @ 12.2.5 and 1 @ 12.2.7.
- several replication pools and 2 EC pools with cache tier.
- a mixture of SSD and WD Gold disks.
- 3 mon hosts (all 12.2.5).

Pre-upgrade we did the following:
- created a new 2/1 replication pool and migrated production VMs to it.
- created some test VMs on the EC pool to monitor uptime during the upgrade.

During the upgrade we did the following:
- set noout, noscrub, and nodeep-scrub to stop unnecessary IO and rebalancing.
- ran apt-get update && apt-get dist-upgrade on the 3 mon hosts (one at a time) and rebooted them.
- this upgraded both Proxmox and Ceph.
- ran apt-get update && apt-get dist-upgrade on the remaining 3 nodes (these ran the EC pools), again one at a time, and rebooted.
- confirmed all VMs were still online, including those on the EC pools.
- unset noout, noscrub, and nodeep-scrub.
- Ceph showed HEALTH_OK as soon as the pools re-synced after noout was removed.
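For reference, the flag handling above maps to these commands (the flag names as Ceph spells them are noout, noscrub, and nodeep-scrub):
Code:
## before the upgrade
ceph osd set noout
ceph osd set noscrub
ceph osd set nodeep-scrub
## ... upgrade and reboot the nodes one at a time ...
## after the upgrade
ceph osd unset noout
ceph osd unset noscrub
ceph osd unset nodeep-scrub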

Post-upgrade we did the following:
- moved the VMs from the temp pool back to the EC pools with cache tier.
- removed the temp 2/1 replication pool.

All in all, it was much less painful than I thought (and than the Ceph documentation implied). No data loss and no loss of service.

Thanks to all on here for your comments and pointers,
Stu.
 
