Problems experienced during Proxmox Ceph upgrade

huseyinsunar

Hi,

Ceph Pacific (16.2.13) to Quincy (17.2.6)

During the upgrade from Ceph Pacific to Quincy we upgraded the monitor, manager and metadata services to 17.2.6, but we could not upgrade our OSDs.

Since the OSDs could not be upgraded, their version remained at 16.2.13.

Has anyone had this problem and can you share the solution?

Help me!
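For context, which daemons are still on the old release can be checked directly from the cluster; a minimal sketch, assuming the monitors are already reachable:

Code:
# summary of which Ceph release each daemon type reports
ceph versions
# per-OSD detail, useful to spot OSDs still on 16.2.x
ceph tell osd.* version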


Attached screenshot: Screenshot from 2023-08-17 23-12-27.png
 
Hello, how did you upgrade the nodes? Did you restart the services after upgrading? Not restarting the services is a common cause for the issue you describe. You can read more, specifically which services to restart, in our wiki [1].

[1] https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
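Roughly, the per-node restart sequence looks like the following. This is only a sketch assuming the standard systemd targets used on Proxmox VE; see the wiki for the exact order and the cluster-wide steps:

Code:
# on each node, after installing the Quincy packages
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target
systemctl restart ceph-mds.target
# once every daemon reports 17.2.x
ceph osd require-osd-release quincy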
 
Hi,

I restarted all monitor, manager and metadata services, but when I add a new disk it shows up as filestore and the OSD stays out. The newly added osd.2 is listed as filestore.

Attached screenshot: 1693076983967.png
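Whether the new OSD really uses filestore or bluestore can be checked from its metadata; a quick sketch, using osd.2 as the example ID:

Code:
# objectstore type and version of a specific OSD
ceph osd metadata 2 | grep -E 'osd_objectstore|ceph_version'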
 
Code:
root@d10v11vmmppve09:~# pveceph osd create /dev/sda
create OSD on /dev/sda (bluestore)
wiping block device /dev/sda
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.13786 s, 184 MB/s
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new fa15640d-940a-4037-bf73-11209dffcbc5
Running command: vgcreate --force --yes ceph-6371c6b8-9d0b-421f-a87d-23f090b5e11a /dev/sda
stdout: Physical volume "/dev/sda" successfully created.
stdout: Volume group "ceph-6371c6b8-9d0b-421f-a87d-23f090b5e11a" successfully created
Running command: lvcreate --yes -l 1875289 -n osd-block-fa15640d-940a-4037-bf73-11209dffcbc5 ceph-6371c6b8-9d0b-421f-a87d-23f090b5e11a
stdout: Logical volume "osd-block-fa15640d-940a-4037-bf73-11209dffcbc5" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
--> Executable selinuxenabled not in PATH: /sbin:/bin:/usr/sbin:/usr/bin
Running command: /bin/chown -h ceph:ceph /dev/ceph-6371c6b8-9d0b-421f-a87d-23f090b5e11a/osd-block-fa15640d-940a-4037-bf73-11209dffcbc5
Running command: /bin/chown -R ceph:ceph /dev/dm-5
Running command: /bin/ln -s /dev/ceph-6371c6b8-9d0b-421f-a87d-23f090b5e11a/osd-block-fa15640d-940a-4037-bf73-11209dffcbc5 /var/lib/ceph/osd/ceph-2/block
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
stderr: 2023-08-26T22:03:15.519+0300 7f5742a74700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
2023-08-26T22:03:15.519+0300 7f5742a74700 -1 AuthRegistry(0x7f573c060b40) no keyring found at /etc/pve/priv/ceph.client.bootstrap-osd.keyring, disabling cephx
stderr: got monmap epoch 18
--> Creating keyring file for osd.2
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid fa15640d-940a-4037-bf73-11209dffcbc5 --setuser ceph --setgroup ceph
stderr: 2023-08-26T22:03:15.987+0300 7f633f42b3c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
--> ceph-volume lvm prepare successful for: /dev/sda
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6371c6b8-9d0b-421f-a87d-23f090b5e11a/osd-block-fa15640d-940a-4037-bf73-11209dffcbc5 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Running command: /bin/ln -snf /dev/ceph-6371c6b8-9d0b-421f-a87d-23f090b5e11a/osd-block-fa15640d-940a-4037-bf73-11209dffcbc5 /var/lib/ceph/osd/ceph-2/block
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Running command: /bin/chown -R ceph:ceph /dev/dm-5
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: /bin/systemctl enable ceph-volume@lvm-2-fa15640d-940a-4037-bf73-11209dffcbc5
stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-fa15640d-940a-4037-bf73-11209dffcbc5.service -> /lib/systemd/system/ceph-volume@.service.
Running command: /bin/systemctl enable --runtime ceph-osd@2
Running command: /bin/systemctl start ceph-osd@2
--> ceph-volume lvm activate successful for osd ID: 2
--> ceph-volume lvm create successful for: /dev/sda
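One thing worth noting in that output: the keyring warning about /etc/pve/priv/ceph.client.bootstrap-osd.keyring did not stop the command (the monmap was still fetched, "got monmap epoch 18"), since ceph-volume was pointed at /var/lib/ceph/bootstrap-osd/ceph.keyring explicitly. If in doubt, the bootstrap key can be sanity-checked with something like:

Code:
# does the local bootstrap keyring exist?
ls -l /var/lib/ceph/bootstrap-osd/ceph.keyring
# does it match what the monitors have stored?
ceph auth get client.bootstrap-osd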
 
Did you try adding the OSD from the UI? Did you follow the upgrade guide at [2]?

The line

Code:
stderr: 2023-08-26T22:03:15.519+0300 7f5742a74700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory

is of interest. Could you please share the contents of /etc/pve/ceph.conf? See [1] for a similar issue.

[1] https://forum.proxmox.com/threads/m...on-var-lib-ceph-mds-ceph-admin-keyring.57582/
[2] https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
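For comparison, the keyring path that a given client name resolves to via the [client] section of ceph.conf can be checked with ceph-conf; a sketch, using the bootstrap-osd client as an example:

Code:
# which keyring file ceph.conf resolves for this client name
ceph-conf --name client.bootstrap-osd --show-config-value keyring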
 
Hi,

I am sharing the ceph.conf file:
-----------------------------------------------------------

root@d10v11vmmppve01:~# cat /etc/pve/ceph.conf
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.10.11.0/24
fsid = c49d20f5-ea9c-41ba-b733-d4dcf746382a
mon_allow_pool_delete = true
mon_host = 10.10.11.28 10.10.11.41 10.10.11.51 10.10.11.26
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 10.10.11.0/24

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.d10v11vmmppve01]
host = d10v11vmmppve01
mds_standby_for_name = pve

[mds.d10v11vmmppve02]
host = d10v11vmmppve02
mds_standby_for_name = pve

[mds.d10v11vmmppve03]
host = d10v11vmmppve03
mds_standby_for_name = pve

[mds.d10v11vmmppve04]
host = d10v11vmmppve04
mds_standby_for_name = pve

[mds.d10v11vmmppve05]
host = d10v11vmmppve05
mds_standby_for_name = pve

[mon.d10v11vmmppve01]
public_addr = 10.10.11.26

[mon.d10v11vmmppve03]
public_addr = 10.10.11.28

[mon.d10v11vmmppve04]
public_addr = 10.10.11.41
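A quick way to confirm which of these values a running daemon actually picks up is to query it through the manager; a sketch, using osd.2 and assuming the mgr is up:

Code:
# effective configuration of a running daemon, including its source
ceph config show osd.2 | grep -E 'cluster_network|public_network'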
 
Hi,

I am upgrading from Proxmox 7.4.16 and Ceph 16.2.13 to Ceph 17.2.6, but when I restart the OSDs or add a new disk I get the following errors. Has anyone encountered this kind of problem?

Code:
2023-09-14T22:32:06.208+0300 7f7bb4db2700 -1 osd.2 0 waiting for initial osdmap
2023-09-14T22:32:06.208+0300 7f7baa611700 -1 osd.2 0 failed to load OSD map for epoch 2245926, got 0 bytes
2023-09-14T22:32:06.240+0300 7f7ba6b45700 0 osd.2 2245927 crush map has features 288514051259236352, adjusting msgr requires for clients
2023-09-14T22:32:06.240+0300 7f7ba6b45700 0 osd.2 2245927 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
2023-09-14T22:32:06.240+0300 7f7ba6b45700 0 osd.2 2245927 crush map has features 3314933000852226048, adjusting msgr requires for osds
2023-09-14T22:32:06.240+0300 7f7ba6b45700 1 osd.2 2245927 check_osdmap_features require_osd_release unknown -> nautilus
2023-09-14T22:32:07.780+0300 7f7baa611700 1 osd.2 2246602 start_boot
2023-09-14T22:32:07.780+0300 7f7baf365700 -1 osd.2 2246602 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2023-09-14T22:32:07.780+0300 7f7baf365700 1 osd.2 2246602 set_numa_affinity not setting numa affinity
 
2023-09-14T22:32:06.240+0300 7f7ba6b45700 1 osd.2 2245927 check_osdmap_features require_osd_release unknown -> nautilus
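Regarding that require_osd_release line: the upgrade guide has a final step that is only supposed to be run once every OSD is on Quincy, so it may be worth checking what the cluster currently requires. A sketch, not specific to this cluster:

Code:
# what release the cluster currently requires from OSDs
ceph osd dump | grep require_osd_release
# confirm every daemon reports 17.2.x before raising it
ceph versions
# only after ALL OSDs run Quincy:
ceph osd require-osd-release quincy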
 
