PVE root on ZFS - no such logical volume pve/data

abourges

Hi,

...we're currently running a PoC setup with three nodes. Two of the nodes use ZFS for the root OS (two single disks in a ZFS RAID-1 mirror), and the third node (with a hardware RAID-1) uses ext4.

The logs show *a lot* of the following messages:

Code:
2025-05-13T11:17:32-04:00 tick pvestatd[2778]: no such logical volume pve/data
2025-05-13T11:17:32-04:00 tick pvestatd[2778]: no such logical volume pve/data
2025-05-13T11:17:33-04:00 punk pvestatd[2783]: no such logical volume pve/data
2025-05-13T11:17:33-04:00 punk pvestatd[2783]: no such logical volume pve/data
2025-05-13T11:17:42-04:00 tick pvestatd[2778]: no such logical volume pve/data
2025-05-13T11:17:42-04:00 tick pvestatd[2778]: no such logical volume pve/data
2025-05-13T11:17:43-04:00 punk pvestatd[2783]: no such logical volume pve/data
2025-05-13T11:17:43-04:00 punk pvestatd[2783]: no such logical volume pve/data

lsblk output from one of the nodes that throws the errors:

Code:
NAME                                                                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                                                                                     8:0    0 465.8G  0 disk
├─sda1                                                                                                  8:1    0  1007K  0 part
├─sda2                                                                                                  8:2    0     1G  0 part
└─sda3                                                                                                  8:3    0 464.8G  0 part
sdb                                                                                                     8:16   0 465.8G  0 disk
├─sdb1                                                                                                  8:17   0  1007K  0 part
├─sdb2                                                                                                  8:18   0     1G  0 part
└─sdb3                                                                                                  8:19   0 464.8G  0 part
sdc                                                                                                     8:32   0   3.6T  0 disk
└─ceph--87c4bf7b--e95c--43da--99ad--c4edd744dba0-osd--block--2df553b3--9350--47bd--89fb--ffcb7a9f21ba 252:0    0   3.6T  0 lvm
sdd                                                                                                     8:48   0   3.6T  0 disk
└─ceph--54405b2a--5160--44e0--9293--8dfbdad80b9a-osd--block--1f55e666--95bc--4021--b925--8c891bc2f1a4 252:2    0   3.6T  0 lvm
sde                                                                                                     8:64   0   3.6T  0 disk
└─ceph--a7dae876--5dfa--4d0f--bfcc--e64c2d6fc16d-osd--block--3c61f76b--0d18--48cd--9610--588d9fc651db 252:1    0   3.6T  0 lvm
sdf                                                                                                     8:80   1     0B  0 disk
sdg                                                                                                     8:96   0  14.9G  0 disk
├─sdg1                                                                                                  8:97   0   100M  0 part
├─sdg5                                                                                                  8:101  0     1G  0 part
├─sdg6                                                                                                  8:102  0     1G  0 part
└─sdg7                                                                                                  8:103  0  12.8G  0 part
sr0                                                                                                    11:0    1   1.3G  0 rom
rbd0                                                                                                  251:0    0     4M  0 disk


vgdisplay only shows the Ceph VGs:

Code:
root@tick:/var/log# vgdisplay
  --- Volume group ---
  VG Name               ceph-a7dae876-5dfa-4d0f-bfcc-e64c2d6fc16d
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <3.64 TiB
  PE Size               4.00 MiB
  Total PE              953861
  Alloc PE / Size       953861 / <3.64 TiB
  Free  PE / Size       0 / 0
  VG UUID               1qCeNg-lrrt-pPNn-foc0-t5bU-sNi2-7rXgMs

  --- Volume group ---
  VG Name               ceph-54405b2a-5160-44e0-9293-8dfbdad80b9a
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <3.64 TiB
  PE Size               4.00 MiB
  Total PE              953861
  Alloc PE / Size       953861 / <3.64 TiB
  Free  PE / Size       0 / 0
  VG UUID               IRb0Pz-m2Ny-L2dI-rIIv-RVNt-3PKB-8fXcgg

  --- Volume group ---
  VG Name               ceph-87c4bf7b-e95c-43da-99ad-c4edd744dba0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <3.64 TiB
  PE Size               4.00 MiB
  Total PE              953861
  Alloc PE / Size       953861 / <3.64 TiB
  Free  PE / Size       0 / 0
  VG UUID               BrCBkH-KHz6-k2Lb-o28d-QD6J-fJAs-kK2PZN

root@tick:/var/log#


Searching through the forum didn't turn up a solution. It looks like PVE is expecting an LVM VG/LV that is not present when ZFS is used for the root filesystem...
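
A quick way to double-check this on the ZFS-root nodes (just a sanity-check sketch using the standard LVM tools; on these nodes only the Ceph OSD VGs should show up, with no pve VG and no data thin pool):

Code:
# list all volume groups and logical volumes known to LVM
vgs
lvs
# explicitly check for the thin pool that pvestatd is complaining about
lvdisplay pve/data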

Any hints on how to get rid of these messages?

Thanks,

Andreas
 
Hi,

please provide your complete storage configuration, i.e. the output of cat /etc/pve/storage.cfg.
My first hunch would be that the storage is slightly misconfigured.
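
For reference, something like the following should collect the relevant information in one go (a small sketch assuming the standard PVE and LVM tooling; pvesm status additionally shows how this particular node resolves each configured storage):

Code:
# cluster-wide storage definitions
cat /etc/pve/storage.cfg
# per-node view of which storages are active/available
pvesm status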
 
Hi,

...here we go:


Code:
root@tick:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

rbd: TEST_POOL
        content images,rootdir
        krbd 0
        pool TEST_POOL

dir: Software
        path /mnt/ISO
        content iso
        prune-backups keep-all=1
        shared 0

nfs: Software-test
        export /export/software
        path /mnt/pve/Software-test
        server files.xxx.lab
        content iso
        prune-backups keep-all=1

root@tick:~#

Thanks,

Andreas
 
Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
The local-lvm storage, which is created by default for ext4 and xfs installations, is not properly constrained to the one node that is actually set up like this.

You can either set this in the web UI by going to Datacenter > Storage > local-lvm and selecting only the third, ext4-based node under Nodes, or add the nodes setting manually to /etc/pve/storage.cfg like this:

Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes <hostname-of-the-third-node>
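
Alternatively, the same restriction can be applied from the command line with pvesm (a sketch assuming the standard pvesm tool; substitute the real hostname for the placeholder):

Code:
# restrict the local-lvm storage to the node that actually has the pve/data thin pool
pvesm set local-lvm --nodes <hostname-of-the-third-node>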
 
Hi,

...yeah - that was a quick fix! Thanks a lot! So the problem was the mixture of different storage options on the nodes in a single cluster?

Thanks,

Andreas
 
So the problem was the mixture of different storage options on the nodes in a single cluster?
Yes, basically. Although, as you saw, it really isn't a problem and isn't discouraged in practice; the storage subsystem just needs correct information about which storage is available on which nodes.

Of course, having a cluster that is as homogeneous as possible can make things simpler, but each storage type has its pros and cons, so it can also make sense to use different underlying storage types.

Finally, please mark the thread as SOLVED by editing the first post; there should be a dropdown near the title field to set the status. This helps others find this thread more easily in the future :)