Ceph Pacific to Quincy upgrade

chris.23lo

Hi there.

Proxmox with 10 nodes was upgraded from 7.0 (Pacific) to 7.4 (Pacific); it completed without problems and the VMs are fine.
So the next step was to upgrade Ceph from Pacific to Quincy.

Yet now one of my VMs has a problem: the device_health_metrics pool was renamed to .mgr during the upgrade.
Is there a fix for it?

I also noticed that /etc/pve/storage.cfg still has the entry below.

Should it be renamed or deleted? Thank you in advance.

rbd: ceph-vm
        content images
        krbd 0
        pool device_health_metrics


# qm start 302
kvm: -drive file=rbd:device_health_metrics/vm-302-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/ceph-vm.keyring,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on: error opening pool device_health_metrics: No such file or directory
start failed: QEMU exited with code 1


# ceph -s
cluster:
id: 1fe1245d-4934-439f-bb8c-5cc650efe185
health: HEALTH_OK

services:
mon: 4 daemons, quorum pve026,pve025,pve027,pve029 (age 39m)
mgr: pve025(active, since 79m), standbys: pve026, pve029, pve027
osd: 120 osds: 120 up (since 92s), 120 in (since 4d)

data:
pools: 1 pools, 128 pgs
objects: 2.75M objects, 10 TiB
usage: 29 TiB used, 844 TiB / 873 TiB avail
pgs: 128 active+clean

io:
client: 255 KiB/s wr, 0 op/s rd, 14 op/s wr

# ceph versions
{
"mon": {
"ceph version 17.2.6 (995dec2cdae920da21db2d455e55efbc339bde24) quincy (stable)": 4
},
"mgr": {
"ceph version 17.2.6 (995dec2cdae920da21db2d455e55efbc339bde24) quincy (stable)": 4
},
"osd": {
"ceph version 17.2.6 (995dec2cdae920da21db2d455e55efbc339bde24) quincy (stable)": 120
},
"mds": {
"ceph version 17.2.6 (995dec2cdae920da21db2d455e55efbc339bde24) quincy (stable)": 2
},
"overall": {
"ceph version 17.2.6 (995dec2cdae920da21db2d455e55efbc339bde24) quincy (stable)": 130
}
}

# ceph mon dump|grep min
dumped monmap epoch 13
min_mon_release 17 (quincy)

# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-13 (running version: 7.4-13/46c37d9c)
pve-kernel-5.15: 7.4-3
pve-kernel-5.11: 7.0-10
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph: 17.2.6-pve1
ceph-fuse: 17.2.6-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-4
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-4
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1
 
I think I hit an issue with the upgrade.

# pvesm list ceph-vm
rbd error: rbd: listing images failed: (2) No such file or directory

My storage was named ceph-vm and is of type rbd,
but it is now unlisted, even though when I edit a disk and simulate a move-disk, the name is still shown.


 
Here I also tried running rbd after changing device_health_metrics to .mgr in storage.cfg.

# cat storage.cfg
dir: local
        path /var/lib/vz
        content rootdir,iso
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

rbd: ceph-vm
        content images
        krbd 0
        pool .mgr

nfs: truenas228129
        export /mnt/data
        path /mnt/pve/truenas228129
        server 192.168.228.129
        content iso,backup
        prune-backups keep-all=1

nfs: truenas01
        export /mnt/data
        path /mnt/pve/truenas01
        server 192.168.228.61
        content backup,iso
        prune-backups keep-all=1

cephfs: cephfs
        path /mnt/pve/cephfs
        content backup,vztmpl,iso
        fs-name cephfs



The error output is below:

# rbd -p ceph-vm -m 192.168.228.67,192.168.228.68,192.168.228.69 -n client.ceph-vm_ceph --keyring /etc/pve/priv/ceph/ceph-vm.keyring ls -l
2023-06-17T14:40:18.607+0800 7f1cc9821700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
rbd: couldn't connect to the cluster!
rbd: listing images failed: (1) Operation not permitted
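
(The handle_auth_bad_method error above usually points to the client name given with -n not matching any key the keyring or cluster actually holds; this is a guess, not confirmed here. A quick check, assuming admin access, could be:

Code:
# list the [client.*] sections the keyring file actually contains
cat /etc/pve/priv/ceph/ceph-vm.keyring
# list the auth entities the cluster knows about
ceph auth ls | grep '^client\.'
)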
 
Hi team,

Help is needed, much appreciated!!

# /usr/bin/rbd -p .mgr -m 192.168.228.67,192.168.228.68,192.168.228.69 --auth_supported cephx -n client.admin --keyring /etc/pve/priv/ceph/ceph-vm.keyring ls -l --format json
rbd: error opening base-1000-disk-0: (2) No such file or directory
rbd: error opening base-500-disk-0: (2) No such file or directory
rbd: error opening vm-100-disk-0: (2) No such file or directory
rbd: error opening vm-1001-disk-0: (2) No such file or directory
rbd: error opening vm-101-disk-0: (2) No such file or directory
rbd: error opening vm-102-disk-0: (2) No such file or directory
rbd: error opening vm-102-disk-1: (2) No such file or directory
rbd: error opening vm-103-disk-1: (2) No such file or directory
rbd: error opening vm-104-disk-0: (2) No such file or directory
rbd: error opening vm-108-disk-0: (2) No such file or directory
rbd: error opening vm-110-disk-0: (2) No such file or directory
rbd: error opening vm-201-disk-0: (2) No such file or directory
rbd: error opening vm-201-disk-1: (2) No such file or directory
rbd: error opening vm-302-disk-0: (2) No such file or directory
rbd: error opening vm-303-disk-0: (2) No such file or directory
rbd: error opening vm-501-disk-0: (2) No such file or directory
rbd: error opening vm-501-disk-3: (2) No such file or directory
rbd: error opening vm-501-disk-4: (2) No such file or directory
rbd: error opening vm-502-disk-0: (2) No such file or directory
rbd: error opening vm-502-state-c31122021: (2) No such file or directory
rbd: error opening vm-503-disk-0: (2) No such file or directory
rbd: error opening vm-504-disk-0: (2) No such file or directory
rbd: error opening vm-504-state-a07Jan22: (2) No such file or directory
rbd: error opening vm-601-disk-1: (2) No such file or directory
rbd: error opening vm-666-disk-0: (2) No such file or directory
rbd: error opening vm-701-disk-1: (2) No such file or directory
rbd: error opening vm-702-disk-0: (2) No such file or directory
rbd: error opening vm-703-disk-1: (2) No such file or directory
rbd: error opening vm-704-disk-0: (2) No such file or directory
rbd: error opening vm-705-disk-0: (2) No such file or directory
rbd: error opening vm-705-disk-1: (2) No such file or directory
rbd: error opening vm-706-disk-0: (2) No such file or directory
rbd: error opening vm-707-disk-0: (2) No such file or directory
rbd: error opening vm-708-disk-0: (2) No such file or directory
rbd: error opening vm-709-disk-0: (2) No such file or directory
rbd: error opening vm-712-disk-0: (2) No such file or directory
rbd: error opening vm-712-state-cacti20220722: (2) No such file or directory
rbd: error opening vm-712-state-cacti20220801: (2) No such file or directory
rbd: error opening vm-712-state-test: (2) No such file or directory
rbd: error opening vm-713-disk-0: (2) No such file or directory
rbd: error opening vm-714-disk-0: (2) No such file or directory
rbd: error opening vm-714-state-rancid20220722: (2) No such file or directory
rbd: error opening vm-715-disk-0: (2) No such file or directory
rbd: error opening vm-715-state-test1: (2) No such file or directory
rbd: error opening vm-715-state-test3: (2) No such file or directory
rbd: error opening vm-715-state-test4: (2) No such file or directory
rbd: error opening vm-716-disk-0: (2) No such file or directory
rbd: error opening vm-801-disk-0: (2) No such file or directory
rbd: error opening vm-103-disk-0: (2) No such file or directory
[]
rbd: listing images failed: (2) No such file or directory
 
You should not have RBDs on the device_health_metrics pool, as it is not intended for RBD usage; it's for metrics, and usually you should only have 1 PG on that pool! Are you sure this is a legit config? It doesn't seem so:


Code:
rbd: ceph-vm
        content images
        krbd 0
        pool device_health_metrics

That's how it usually should look:

Code:
rbd: ceph-vm
        content rootdir,images
        krbd 1
        pool ceph-vm
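
To see which pools exist and how many PGs each one has (to verify the point above), one possible check:

Code:
# pool list with PG counts and application tags
ceph osd pool ls detail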
 
Hi, thanks for your note.

I checked /etc/pve/storage.cfg (backed up before the upgrade); it is below:
[screenshot: storage.cfg from before the upgrade]

Another cluster looks like this at this very moment, running 7.0 & Ceph Pacific.

Pic from another 7.0 cluster running Pacific:
[screenshot]

Pic from another 7.0 cluster running Pacific:
[screenshot]
 
That looks wrong, if you had mapped your Proxmox RBD storage to the device_health_metrics pool... (technically it's possible, but this pool is for the internal Ceph manager daemon only).

You can try to edit storage.cfg with:

Code:
rbd: ceph-vm
        content images
        krbd 0
        pool .mgr

If it works, I would advise creating a new separate pool and migrating the VM disks to this pool.
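
A minimal sketch of that advice (the pool name here is just an example; --add_storages also creates the matching storage entry):

Code:
# create a dedicated RBD pool plus a PVE storage definition for it
pveceph pool create ceph-vm-new --add_storages
# then move each disk off the broken storage, e.g. for VM 302:
qm move-disk 302 scsi0 ceph-vm-new --delete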
 
I had a similar problem in the paid support a few days ago, where the device_health_metrics pool was used for RBD.

In my tests to recreate the situation with a simple disk image (no snapshots or RBD namespaces), I saw that the "rbd_id.vm-xxx-disk-y" and "rbd_object_map.XXXXXX" objects were gone after the upgrade and the rename to ".mgr".

While the "rbd_id" object would be easy to recreate, the object map is harder. In the end, restoring from the last backups, taken a couple of hours earlier, was the easier path to get the VMs back into a working state.
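
For reference, a single-guest restore could look like this (archive path, VM ID, and target storage are placeholders, not taken from this thread):

Code:
# restore the guest from a vzdump archive, overwriting the broken VM,
# and place its disks on a still-intact storage
qmrestore /mnt/pve/truenas01/dump/vzdump-qemu-302-2023_06_17-00_00_00.vma.zst 302 --storage local-lvm --force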
 
Thank you for your feedback. I tried pointing the storage entry at .mgr, but without success.
 

I'd try anything to look deeper. Can the steps from your tests be shared, so I can try them? Thanks.
 
If you have a good pool with working images, you can compare what objects there are.

rados -p {pool} ls, for example.
rados -p {pool} get {object} - will print the contents to stdout.
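
For example, a side-by-side comparison of the object listings (pool names are placeholders):

Code:
rados -p goodpool ls | sort > good.txt
rados -p .mgr ls | sort > broken.txt
# rbd_id.*, rbd_header.* and rbd_object_map.* entries present in the good
# pool but absent from the broken one are the candidates to recreate
diff good.txt broken.txt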

The object map is a bitmap that records which parts of the disk image have actual objects. For everything else, the client can return zeros on a read request without first checking for an object in Ceph.

But how hard it is to recreate, I don't know. I am also not sure if that is everything that needs to be fixed; it is just what I saw immediately.
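
One untested idea for the object-map side: rbd ships an object-map rebuild subcommand, normally used when a map is flagged invalid. Whether it copes with the map object being gone entirely was not verified here:

Code:
# attempt to rebuild the object map for one image
rbd object-map rebuild .mgr/vm-302-disk-0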
 
Hi guys. We encountered the same problem yesterday, or at least somewhat the same.
Our configuration was done by an ex-coworker who used the device_health_metrics pool for the VM disks, and after the Pacific to Quincy upgrade our VMs simply wouldn't start. The error messages were the same as above, but I'll write everything down again so future adventurers can validate their progress towards fixing this issue.

So right after the Ceph upgrade from Pacific to Quincy, the device_health_metrics pool is renamed to .mgr, as the upgrade guide foreshadowed. If you had any data there, then when you try to start a VM you'll be greeted by a similar message:
Code:
kvm: -drive file=rbd:device_health_metrics/vm-130-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/rbd.keyring,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on: error opening pool device_health_metrics: No such file or directory
TASK ERROR: start failed: QEMU exited with code 1

Notice that it tries to open something inside the device_health_metrics pool. But that pool was renamed, so something is not right.
On the Proxmox UI -> VM -> Hardware page we saw that the VM disk was in a pool called "rbd" (deduced from "rbd:vm-130-disk-0,size=32G").
After checking /etc/pve/storage.cfg on a Ceph storage node, we found that this "rbd" storage referred to the device_health_metrics pool. The relevant part of the file:
Code:
rbd: rbd
    content images
    krbd 0
    pool device_health_metrics

After changing device_health_metrics to .mgr here, the VM start error message changed to:
Code:
kvm: -drive file=rbd:.mgr/vm-130-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/rbd.keyring,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on: error reading header from vm-130-disk-0: No such file or directory
TASK ERROR: start failed: QEMU exited with code 1

At this point, issuing rbd ls -p .mgr --long returned every one of our VM disks as No such file or directory.
Checking the Proxmox UI -> Ceph -> Pools page, the .mgr pool held 10TB+ of data, so we were more than sure that our data was still there somewhere.
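
(The same usage number can be cross-checked from the CLI; the .mgr pool should show the data that used to live in device_health_metrics:)

Code:
# per-pool capacity and object counts
ceph df detail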

Next, we ran the rados -p .mgr ls command. The output was huge, so we tried to make sense of it. Initially it looked good, but after a while I realized that all of the rbd_id and rbd_object_map objects were missing. To check for everything but rbd_data, run rados -p .mgr ls | grep -v data.

Our goal was to recreate all of the rbd_id objects first. Running rados -p .mgr listomapvals rbd_directory returned all of our VM disk names (I believe this is called the image name) and the corresponding IDs for them. One record looks something like this:
Code:
id_0ab143ebfab2e5
value (17 bytes) :
00000000  0d 00 00 00 76 6d 2d 31  33 30 2d 64 69 73 6b 2d  |....vm-130-disk-|
00000010  30                                                |0|
00000011

An rbd_id object contains an ID just like that. I tried to simply put it into rados: I copied the ID into a file, let's call it vmid.file, and ran rados -p .mgr put rbd_id.vm-130-disk-0 vmid.file. After this, I ran rbd info -p .mgr vm-130-disk-0 but got the following error:
Code:
librbd::image::OpenRequest: failed to retrieve image id: (5) Input/output error
librbd::ImageState: 0x557e8977a7b0 failed to open image: (5) Input/output error
rbd: error opening vm-130-disk-0: (5) Input/output error

Simply putting the ID inside a file doesn't work; running hexdump -C vmid.file shows the file's content in just one line (sort of).
But we had another pool in Ceph which worked perfectly, so I grabbed an rbd_id object from there. Command: rados -p workingpoolname get rbd_id.vm-155-disk-0 outfile
Running hexdump on this outfile shows the last 2 characters of the ID on another line. I'm no expert on this topic, so I can't explain how this works, but editing the outfile in mcedit, changing only the ID to the one I had tried to write into vmid.file, and putting this new file into rados worked.

For people who are debugging their Ceph right now, the hexdump looks like this:
Code:
00000000  0e 00 00 00 66 35 36 32  62 38 36 62 38 62 34 35  |....0ab143ebfab2|
00000010  36 37                                             |e5|
00000012
(I changed the ID manually, so if you are an expert hexreader you can see I cheated.)
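
In hindsight, the leading bytes in these dumps look like a 4-byte little-endian length prefix followed by the raw ID string, which would explain why a plain text file (no prefix, plus a trailing newline from most editors) didn't work. Assuming that really is the whole format (not verified against the Ceph source), the payload could also be built directly with printf instead of editing a copied object:

Code:
# sketch: build a length-prefixed rbd_id payload for image ID 0ab143ebfab2e5
ID="0ab143ebfab2e5"
LEN=$(printf '%08x' "${#ID}")   # 14 -> 0000000e
# emit the four length bytes in little-endian order, then the ID, no newline
printf "\x${LEN:6:2}\x${LEN:4:2}\x${LEN:2:2}\x${LEN:0:2}%s" "$ID" > vmid.file
rados -p .mgr put rbd_id.vm-130-disk-0 vmid.file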


We never recreated the rbd_object_map objects, because creating all the rbd_id objects this way was enough to let us start our VMs. We then moved all our VM disk data to the workingpoolname pool and emptied the .mgr pool, as it was never intended to be used by us users.
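
For anyone following along, a per-image verification and migration pass might look like this (the storage name is a placeholder and must exist as an entry in storage.cfg):

Code:
# confirm the image opens again once its rbd_id object is back
rbd info .mgr/vm-130-disk-0
# then move the disk to a proper pool and delete the source copy
qm move-disk 130 scsi0 workingpoolname --delete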
 
