Proxmox Ceph with VM block device using LVM on entire vdb

Deadpan110 · Sep 7, 2023
Hello all,
I have encountered a problem today that I had not encountered before.

I have Ceph installed on 3 Proxmox nodes and created a Debian 12 virtual machine. I then added a virtual disk from Ceph and used it within the VM. I am not sure whether the following occurred during a backup, where the host interacts with /dev/rbd devices; however, I figured I would post it here in case others run into such a problem.

*Note on rbd devices on the Proxmox host: I assume the correct behaviour is that they are mapped and then unmapped during a backup process? So what I am about to post is likely an edge case (or maybe due to PEBKAC).

Steps to reproduce
  • With a working Ceph cluster, create a VM that uses LVM (Debian 12 was my install)
  • After VM installation, add another hard disk to the VM from the Ceph pool
  • Within the VM, run pvcreate /dev/vdb and then add it to the volume group with vgextend {vgname} /dev/vdb
  • On the host machine, check with lvs
Bash:
root@pve-08:~# lvs
  WARNING: Couldn't find device with uuid SgMrg4-IQ1F-3sLc-R020-NZqi-FgTJ-D2jUFP.
  WARNING: VG lvm is missing PV SgMrg4-IQ1F-3sLc-R020-NZqi-FgTJ-D2jUFP (last written to /dev/vdb).
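For reference, the in-guest side of the third step looks roughly like this. This is only a sketch: the VG name "lvm" is taken from the host warning above, and the LV name "data" is a hypothetical placeholder for whatever LV you want to grow.

```shell
# Inside the Debian 12 guest, as root.
# VG name "lvm" matches the host warning above; LV name "data" is hypothetical.
pvcreate /dev/vdb                       # initialise the new virtio disk as an LVM PV
vgextend lvm /dev/vdb                   # add the PV to the existing volume group
lvextend -l +100%FREE /dev/lvm/data     # grow a logical volume into the new space
resize2fs /dev/lvm/data                 # grow the filesystem to match (ext4 assumed)
```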

Mistakes were made

As I am a hobbyist in a home-lab environment, I took it upon myself to force-remove the missing devices on the host. Without fully understanding the error, I then discovered the VM was broken and attempted to fix that as well.
Needless data loss ensued (don't worry, it was only a big lancache and I still have an older version somewhere).

Fixing the problem

The problem occurred because LVM on the host scanned a device with LVM metadata on it: /dev/rbd0 was detected on the host as an LVM PV, which produced the warning (as I stated above, I am fairly sure this should not happen, and I had not encountered it before).

Rather than repeating the mistakes I made, simply getting the PVE host to ignore these devices is enough.

Change the following in /etc/lvm/lvm.conf on all Proxmox nodes (at the very bottom):

Code:
devices {
     # added by pve-manager to avoid scanning ZFS zvols
     global_filter=["r|/dev/zd.*|"]
 }

to the following:

Code:
devices {
     # added by pve-manager to avoid scanning ZFS zvols (and RBD vols)
     global_filter=["r|/dev/zd.*|","r|/dev/rbd.*|"]
 }

Conclusion

I hope this helps anyone who gets caught by this in the future, and I look forward to any comments correcting what I have written here (I will try to update this post if anything is corrected in the comments).

Regards
 
Hi,

Would you mind detailing what exactly you mean by "After vm installation, add another hard disk to the vm from the ceph pool"?
I'm trying to figure out what could have caused the behavior you've seen. I'm intrigued :)
 
I had been messing around with this particular VM base install, adding and configuring little bits and pieces and making backups in between stages. I then decided the VM needed more storage within its own LVM VG, so I added a disk using the web UI, choosing my Ceph storage pool (likely with more backups before and after). I did not notice the LVM error on the host right away and made the mistake of running vgreduce lvm --removemissing --force on the PVE host.
Somehow the host managed to scan /dev/rbd0 (Proxmox avoids scanning ZFS zvols, but not RBD devices, by default), so I edited lvm.conf to prevent this from accidentally happening in the future.
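For anyone who has already run vgreduce --removemissing on the host: LVM keeps automatic metadata backups under /etc/lvm/archive, so restoring the VG metadata is usually a safer recovery path than forcing removal. A sketch, assuming the VG name "lvm" from this thread; the archive file name below is hypothetical, so list the archives first and pick the right one:

```shell
# On the affected host: list archived metadata for the VG, then restore one.
# VG name "lvm" is from this thread; the archive file name is hypothetical,
# taken from the vgcfgrestore --list output on your own system.
vgcfgrestore --list lvm
vgcfgrestore -f /etc/lvm/archive/lvm_00001-123456789.vg lvm
```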

I hope this helps :)

Edit: My Proxmox host shows this (I assume something, maybe LVM, held the mapping open so it was not unmapped after use?)
Code:
root@pve-08:~# ls /dev/rbd*
/dev/rbd0

/dev/rbd:
compute_pool

/dev/rbd-pve:
754e9505-f121-49e5-b9d9-c0c5d2621b9e
 
