How to mount a guest's LVM volume on a ZFS raw disk image from the PVE host?

Bill Church

I have a guest which uses a local-zfs volume. On that guest's disk there are 3 partitions; one of them is an LVM partition/volume group. That volume group contains several logical volumes, one of which I want to modify OUTSIDE of the virtual machine. I basically need to inject some files before the first boot of that system.

On any other system, a vgscan or vgdisplay would bring back the volume group and you could then operate on those volume groups/LVM volumes. I can't quite figure out the magic to get this to work with the Proxmox config. This is probably less about Proxmox than it is about LVM on top of ZFS... but my Internet searches have turned up nothing, so I'm hoping this is something someone here has run across.
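
For reference, on an ordinary host the discovery would be nothing more than the usual LVM scan (a sketch of what I'd normally run, nothing Proxmox-specific):

Code:
pvscan     # find physical volumes
vgscan     # find volume groups
vgdisplay  # show the volume group on the guest disk
lvdisplay  # show the logical volumes inside it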

I can easily mount the "dos" (FAT32) partition of this image. What I believe to be the relevant config is below:

vm config:
Code:
balloon: 0
boot: c
bootdisk: scsi0
cores: 2
ide2: none,media=cdrom
memory: 4096
name: <redacted>
net0: virtio=<redacted>,bridge=vmbr0,tag=30
net1: virtio=<redacted>,bridge=vmbr0,tag=10
net2: virtio=<redacted>,bridge=vmbr0,tag=20
net3: virtio=<redacted>,bridge=vmbr0,link_down=1
numa: 0
ostype: l26
scsi0: local-zfs:vm-920-disk-0,discard=on,size=54G,ssd=1
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=<redacted>
sockets: 1
vmgenid: <redacted>

relevant zfs targets:
Code:
root@pve:~# ls -l /dev/zvol/rpool/data/vm-920*
lrwxrwxrwx 1 root root 14 Jun 13 15:40 /dev/zvol/rpool/data/vm-920-disk-0 -> ../../../zd896
lrwxrwxrwx 1 root root 16 Jun 13 15:40 /dev/zvol/rpool/data/vm-920-disk-0-part1 -> ../../../zd896p1
lrwxrwxrwx 1 root root 16 Jun 13 15:40 /dev/zvol/rpool/data/vm-920-disk-0-part2 -> ../../../zd896p2
lrwxrwxrwx 1 root root 16 Jun 13 15:40 /dev/zvol/rpool/data/vm-920-disk-0-part3 -> ../../../zd896p3

extract of fdisk -l /dev/zvol/rpool/data/vm-920-disk-0

Code:
root@pve:~# fdisk -l /dev/zvol/rpool/data/vm-920-disk-0
Disk /dev/zvol/rpool/data/vm-920-disk-0: 54 GiB, 57982058496 bytes, 113246208 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device                               Boot  Start       End   Sectors  Size Id Type
/dev/zvol/rpool/data/vm-920-disk-0p1 *         1    409599    409599  200M  b W95 FAT32
/dev/zvol/rpool/data/vm-920-disk-0p2      409600    442367     32768   16M 82 Linux swap / Solaris
/dev/zvol/rpool/data/vm-920-disk-0p3      442368 113245199 112802832 53.8G 8e Linux LVM

Partition 1 does not start on physical sector boundary.
root@pve:~#
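
Mounting that FAT32 partition from the host is straightforward, since the zvol's partitions are already exposed as devices; something like this (the mountpoint is just an example):

Code:
root@pve:~# mkdir -p /mnt/dos
root@pve:~# mount /dev/zvol/rpool/data/vm-920-disk-0-part1 /mnt/dos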
 
For starters, the ZFS zvol devices are excluded from LVM (which makes sense; who would normally do this?).

/etc/lvm/lvm.conf:
Code:
devices {
         # added by pve-manager to avoid scanning ZFS zvols
         global_filter=["r|/dev/zd.*|"]
}

And there's no reason to change that. I ended up attaching /dev/zvol/rpool/data/vm-920-disk-0 as a loop device and following the instructions on using LVM on a loopback device: https://ops.tips/blog/lvm-on-loopback-devices/

Specifically:

Code:
losetup /dev/loop0 /dev/zvol/rpool/data/vm-920-disk-0
partx --update /dev/loop0
lvmdiskscan -l

After that, these all worked:

Code:
vgdisplay
lvdisplay
pvdisplay

Finally, I believe I needed to run this for the /dev entries to show up:

Code:
vgchange -aay
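
Once the VG is active, injecting the files is just a normal mount of the target logical volume. A rough sketch (the LV name and mountpoint below are placeholders, not from my actual config):

Code:
lvs vg-db-vda                                  # list the LVs in the group
mkdir -p /mnt/guest-root
mount /dev/vg-db-vda/<some-lv> /mnt/guest-root
cp <files-to-inject> /mnt/guest-root/...
umount /mnt/guest-root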
 
And then to gracefully unmount and remove... Looks something like:

Code:
lvchange -an /dev/vg-db-vda/*

vgchange -an vg-db-vda

dmsetup remove vm-921-disk-0p3

I tried doing just vgchange -an vg-db-vda on its own, but that didn't seem to work.

The device name used in dmsetup remove was found by running dmsetup ls
 
Well... It works until you delete the vm... then the zfs volume won't delete no matter what you do...

Code:
root@pve:/tmp# pvesm free local-zfs:vm-921-disk-0
zfs error: cannot destroy 'rpool/data/vm-921-disk-0': dataset is busy

root@pve:/tmp# grep vm-921-disk-0 /proc/*/mounts

root@pve:/tmp# zfs list rpool/data/vm-921-disk-0
NAME                       USED  AVAIL     REFER  MOUNTPOINT
rpool/data/vm-921-disk-0  4.81G   243G     4.81G  -

root@pve:/# zfs get all rpool/data/vm-921-disk-0
NAME                      PROPERTY              VALUE                  SOURCE
rpool/data/vm-921-disk-0  type                  volume                 -
rpool/data/vm-921-disk-0  creation              Tue Jun 14 15:50 2022  -
rpool/data/vm-921-disk-0  used                  4.81G                  -
rpool/data/vm-921-disk-0  available             243G                   -
rpool/data/vm-921-disk-0  referenced            4.81G                  -
rpool/data/vm-921-disk-0  compressratio         1.34x                  -
rpool/data/vm-921-disk-0  reservation           none                   default
rpool/data/vm-921-disk-0  volsize               54G                    local
rpool/data/vm-921-disk-0  volblocksize          8K                     default
rpool/data/vm-921-disk-0  checksum              on                     default
rpool/data/vm-921-disk-0  compression           on                     inherited from rpool
rpool/data/vm-921-disk-0  readonly              off                    default
rpool/data/vm-921-disk-0  createtxg             19647041               -
rpool/data/vm-921-disk-0  copies                1                      default
rpool/data/vm-921-disk-0  refreservation        none                   default
rpool/data/vm-921-disk-0  guid                  15718549786695383507   -
rpool/data/vm-921-disk-0  primarycache          all                    default
rpool/data/vm-921-disk-0  secondarycache        all                    default
rpool/data/vm-921-disk-0  usedbysnapshots       0B                     -
rpool/data/vm-921-disk-0  usedbydataset         4.81G                  -
rpool/data/vm-921-disk-0  usedbychildren        0B                     -
rpool/data/vm-921-disk-0  usedbyrefreservation  0B                     -
rpool/data/vm-921-disk-0  logbias               latency                default
rpool/data/vm-921-disk-0  objsetid              316                    -
rpool/data/vm-921-disk-0  dedup                 off                    default
rpool/data/vm-921-disk-0  mlslabel              none                   default
rpool/data/vm-921-disk-0  sync                  standard               inherited from rpool
rpool/data/vm-921-disk-0  refcompressratio      1.34x                  -
rpool/data/vm-921-disk-0  written               4.81G                  -
rpool/data/vm-921-disk-0  logicalused           6.44G                  -
rpool/data/vm-921-disk-0  logicalreferenced     6.44G                  -
rpool/data/vm-921-disk-0  volmode               default                default
rpool/data/vm-921-disk-0  snapshot_limit        none                   default
rpool/data/vm-921-disk-0  snapshot_count        none                   default
rpool/data/vm-921-disk-0  snapdev               hidden                 default
rpool/data/vm-921-disk-0  context               none                   default
rpool/data/vm-921-disk-0  fscontext             none                   default
rpool/data/vm-921-disk-0  defcontext            none                   default
rpool/data/vm-921-disk-0  rootcontext           none                   default
rpool/data/vm-921-disk-0  redundant_metadata    all                    default
rpool/data/vm-921-disk-0  encryption            off                    default
rpool/data/vm-921-disk-0  keylocation           none                   default
rpool/data/vm-921-disk-0  keyformat             none                   default
rpool/data/vm-921-disk-0  pbkdf2iters           0                      default
 
Can't figure out what's holding it open. Rebooting released it, but obviously I'd like to avoid that in the future. I'm sure it has something to do with LVM but I couldn't see how it was being held busy.
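
If it happens again, a few things I'd check to find the holder before rebooting (a sketch; zd896 was the device node for vm-920 in the earlier listing, so substitute the right zdNNN for the disk in question):

Code:
losetup -a                    # is a loop device still attached to the zvol?
dmsetup ls                    # any leftover device-mapper entries?
ls /sys/block/zd896/holders/  # which kernel devices hold the zvol open
fuser -v /dev/zd896           # any processes with it open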
 
Must have been a fluke, did it again and it seems to have cleaned up this time...

This seems to be the trick to take care of the volumes and volume groups I was mounting (it will be specific to your LVM config, so don't copy it verbatim). This also assumes you don't use LVM on the host normally; otherwise this will most likely hose up a lot of stuff...

Code:
lvchange -an /dev/vg-db-vda/*

vgchange -an vg-db-vda

for disk in $(dmsetup ls | awk '{print $1}'); do dmsetup remove $disk; done
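
One thing not shown above is detaching the loop device itself; since the disk was attached with losetup, a still-attached /dev/loop0 could plausibly be what kept the dataset busy the first time around. Something like this (loop0 assumed from the setup step):

Code:
losetup -a             # confirm which loop device points at the zvol
losetup -d /dev/loop0  # detach it so the zvol is no longer held open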
 
