Ceph 19.2 Squid Available as Technology Preview and Ceph 17.2 Quincy soon to be EOL

Thanks for the report. A change to the label will prompt all users to confirm whether they want to accept it, though naturally only on systems that have that repo configured and have run apt update, or similar, at least once before the change.

But as this is, technically speaking, still a preview and not yet available on the enterprise repo, it's certainly better to change it now rather than later, and it might confuse some users, so it's probably worth it.

Anyhow, thanks for the report, we will look into it.
 
Hi Community!

The recently released Ceph 19.2 Squid is now available on the Proxmox Ceph test and no-subscription repositories to install or upgrade.

Upgrades from Reef to Squid:
You can find the upgrade how-to here: https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid
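Before following the how-to, it can help to confirm the cluster is healthy; these are standard Ceph commands and only a suggested sanity check, not part of the official guide:
Bash:
# the cluster should report HEALTH_OK before starting the upgrade
ceph -s
# show any current warnings or errors in detail
ceph health detail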

New Installation of Squid:
Use the updated Ceph installation wizard available with the recently released pve-manager version 8.2.8 (available on the pvetest repository at the time of writing).
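To confirm that a node already ships the updated wizard, you can check the installed pve-manager version; pveversion is the standard Proxmox VE CLI tool, shown here only as a quick check:
Bash:
# should print pve-manager/8.2.8 (or newer) once the update is installed
pveversion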

Current State:
We have been running and testing the release internally for a few weeks and found no major issues.
Ceph 18.2 Reef will stay supported until mid-2025 for the time being.

Road to Enterprise Stability:
Our further plan is to lift the preview state and provide Squid as a fully supported Ceph release once we have gathered more test time and feedback from QA, and naturally we would be happy to hear about observations from our great community! Once we deem the Ceph 19.2 Squid release, and its integration into Proxmox VE, fully production ready, we will also populate the Ceph Squid enterprise repository.

Reminder: Old Ceph 17.2 Quincy Going EOL Soon:
Please also remember that Ceph 17.2 Quincy is going to be end of life (EOL) soon and has not received any updates for a while already. So you should upgrade any existing Ceph Quincy setups to Ceph Reef, or soon also Ceph Squid, sooner rather than later. See the upgrade how-to: https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef.
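To spot daemons that are still on Quincy, you can query the cluster's version overview; ceph versions is a standard command, and grepping for "quincy" is just one quick way to filter its output:
Bash:
# lists how many daemons of each type run which Ceph release
ceph versions
# any line mentioning quincy (17.2.x) indicates a daemon that still needs the upgrade
ceph versions | grep -i quincy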

We welcome your feedback!
I have updated my node with no problem.
 
Tiny oversight in the label (18 -> 19):
Bash:
http://download.proxmox.com/debian/ceph-squid bookworm/no-subscription amd64 Packages
release o=Proxmox,a=stable,n=bookworm,l=Proxmox Ceph 18 Squid Debian Repository,c=no-subscription,b=amd64
origin download.proxmox.com
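For reference, that repository metadata can be inspected with apt-cache policy; the grep below is just one way to narrow the output down to the ceph-squid repo:
Bash:
# show the repositories known to apt, limited to the ceph-squid entry and its release/origin lines
apt-cache policy | grep -A 2 ceph-squid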

Fixed now; all of you that already added that repo will get a prompt on the next repo index refresh, e.g. when executing the apt update (or a similar) command.

It will look like this:


E: Repository 'http://download.proxmox.com/debian/ceph-squid bookworm InRelease' changed its 'Label' value from 'Proxmox Ceph 18 Squid Debian Repository' to 'Proxmox Ceph 19 Squid Debian Repository'
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.
Do you want to accept these changes and continue updating from this repository? [y/N] y



This change was made by us, and if nothing else is listed, it can be safely accepted by entering y and pressing Enter.
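On systems where apt update runs unattended, the changed release info can also be accepted explicitly on the command line; --allow-releaseinfo-change is a standard apt option and is only needed once after such a change:
Bash:
# accept changed release information (such as the Label) without the interactive prompt
apt update --allow-releaseinfo-change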
Sorry for any inconvenience caused.
 
Just a heads up if anyone is using IPv6 for the Ceph network: it seems one of the recent merges for health checking only checks IPv4. The result is a fully functioning Ceph cluster whose state is nevertheless flagged as HEALTH_ERR.

https://tracker.ceph.com/issues/67517

root@pve01:~# ceph health detail
HEALTH_ERR 13 osds(s) are not reachable
[ERR] OSD_UNREACHABLE: 13 osds(s) are not reachable
osd.0's public address is not in 'fc00::81/125' subnet
osd.1's public address is not in 'fc00::81/125' subnet
osd.2's public address is not in 'fc00::81/125' subnet
osd.3's public address is not in 'fc00::81/125' subnet
osd.4's public address is not in 'fc00::81/125' subnet
osd.5's public address is not in 'fc00::81/125' subnet
osd.6's public address is not in 'fc00::81/125' subnet
osd.7's public address is not in 'fc00::81/125' subnet
osd.8's public address is not in 'fc00::81/125' subnet
osd.9's public address is not in 'fc00::81/125' subnet
osd.10's public address is not in 'fc00::81/125' subnet
osd.11's public address is not in 'fc00::81/125' subnet
osd.12's public address is not in 'fc00::81/125' subnet


root@pve01:~# ceph osd tree
ID   CLASS  WEIGHT    TYPE NAME       STATUS  REWEIGHT  PRI-AFF
 -1         20.12212  root default
 -3          7.32106      host pve01
  5    hdd   1.81940          osd.5       up   1.00000  1.00000
  0   nvme   0.93149          osd.0       up   1.00000  1.00000
  1   nvme   1.81940          osd.1       up   1.00000  1.00000
  2   nvme   0.93149          osd.2       up   1.00000  1.00000
  6    ssd   1.81929          osd.6       up   1.00000  1.00000
 -7          6.43317      host pve02
  3    hdd   1.81940          osd.3       up   1.00000  1.00000
  7   nvme   1.86299          osd.7       up   1.00000  1.00000
  8   nvme   0.93149          osd.8       up   1.00000  1.00000
  9    ssd   1.81929          osd.9       up   1.00000  1.00000
-10          6.36789      host pve03
  4    hdd   1.81940          osd.4       up   1.00000  1.00000
 12    hdd   1.81940          osd.12      up   1.00000  1.00000
 10   nvme   0.90970          osd.10      up   1.00000  1.00000
 11   nvme   1.81940          osd.11      up   1.00000  1.00000
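For anyone wanting to confirm they are hitting the same cosmetic check rather than a real connectivity problem, the configured public network can be compared with the addresses the OSDs actually registered. These are standard commands; /etc/pve/ceph.conf is the usual location on Proxmox VE, adjust if your setup differs:
Bash:
# show the configured public network (on Proxmox VE usually kept in /etc/pve/ceph.conf)
grep public_network /etc/pve/ceph.conf
# list the addresses each OSD registered in the OSD map
ceph osd dump | grep '^osd\.'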
 
Updated as well, 0 issues during the process. Finally, RGW IAM is present and S3 bucket policies are working.

However, I'm randomly facing a `1 MDSs behind on trimming` warning. I tried some workarounds, and now it persistently grows to 700+/128 segments. It might be due to the fact that I have only 1 slow HDD OSD per node and nothing else, but I never faced this issue before.
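In case it helps anyone comparing notes: the limit the warning refers to can be inspected, and with care raised, via the MDS options. mds_log_max_segments defaults to 128, and raising it is only a commonly mentioned workaround for slow metadata trimming, not a root-cause fix:
Bash:
# show the limit the "behind on trimming" check compares against (default 128)
ceph config get mds mds_log_max_segments
# overall CephFS / MDS state
ceph fs status
# optionally raise the limit, e.g. to 256; revisit once the slow HDD situation is addressed
# ceph config set mds mds_log_max_segments 256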
 
