Hi,
...we're currently running a PoC setup with three nodes. Two of the nodes use ZFS for the root OS (two single disks as a ZFS RAID-1 mirror), and the third node (hardware RAID-1) uses ext4.
The logs show *a lot* of the following messages:
Code:
2025-05-13T11:17:32-04:00 tick pvestatd[2778]: no such logical volume pve/data
2025-05-13T11:17:32-04:00 tick pvestatd[2778]: no such logical volume pve/data
2025-05-13T11:17:33-04:00 punk pvestatd[2783]: no such logical volume pve/data
2025-05-13T11:17:33-04:00 punk pvestatd[2783]: no such logical volume pve/data
2025-05-13T11:17:42-04:00 tick pvestatd[2778]: no such logical volume pve/data
2025-05-13T11:17:42-04:00 tick pvestatd[2778]: no such logical volume pve/data
2025-05-13T11:17:43-04:00 punk pvestatd[2783]: no such logical volume pve/data
2025-05-13T11:17:43-04:00 punk pvestatd[2783]: no such logical volume pve/data
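To give an idea of the volume, the warnings can be counted per node with plain journalctl and grep (standard tools, nothing PVE-specific):
Code:
# count today's occurrences of the warning on the local node
journalctl -u pvestatd --since today | grep -c 'no such logical volume pve/data'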
lsblk output from one of the nodes that throws the errors:
Code:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 1G 0 part
└─sda3 8:3 0 464.8G 0 part
sdb 8:16 0 465.8G 0 disk
├─sdb1 8:17 0 1007K 0 part
├─sdb2 8:18 0 1G 0 part
└─sdb3 8:19 0 464.8G 0 part
sdc 8:32 0 3.6T 0 disk
└─ceph--87c4bf7b--e95c--43da--99ad--c4edd744dba0-osd--block--2df553b3--9350--47bd--89fb--ffcb7a9f21ba 252:0 0 3.6T 0 lvm
sdd 8:48 0 3.6T 0 disk
└─ceph--54405b2a--5160--44e0--9293--8dfbdad80b9a-osd--block--1f55e666--95bc--4021--b925--8c891bc2f1a4 252:2 0 3.6T 0 lvm
sde 8:64 0 3.6T 0 disk
└─ceph--a7dae876--5dfa--4d0f--bfcc--e64c2d6fc16d-osd--block--3c61f76b--0d18--48cd--9610--588d9fc651db 252:1 0 3.6T 0 lvm
sdf 8:80 1 0B 0 disk
sdg 8:96 0 14.9G 0 disk
├─sdg1 8:97 0 100M 0 part
├─sdg5 8:101 0 1G 0 part
├─sdg6 8:102 0 1G 0 part
└─sdg7 8:103 0 12.8G 0 part
sr0 11:0 1 1.3G 0 rom
rbd0 251:0 0 4M 0 disk
vgdisplay only shows the Ceph VGs:
Code:
root@tick:/var/log# vgdisplay
  --- Volume group ---
  VG Name               ceph-a7dae876-5dfa-4d0f-bfcc-e64c2d6fc16d
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <3.64 TiB
  PE Size               4.00 MiB
  Total PE              953861
  Alloc PE / Size       953861 / <3.64 TiB
  Free PE / Size        0 / 0
  VG UUID               1qCeNg-lrrt-pPNn-foc0-t5bU-sNi2-7rXgMs

  --- Volume group ---
  VG Name               ceph-54405b2a-5160-44e0-9293-8dfbdad80b9a
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <3.64 TiB
  PE Size               4.00 MiB
  Total PE              953861
  Alloc PE / Size       953861 / <3.64 TiB
  Free PE / Size        0 / 0
  VG UUID               IRb0Pz-m2Ny-L2dI-rIIv-RVNt-3PKB-8fXcgg

  --- Volume group ---
  VG Name               ceph-87c4bf7b-e95c-43da-99ad-c4edd744dba0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <3.64 TiB
  PE Size               4.00 MiB
  Total PE              953861
  Alloc PE / Size       953861 / <3.64 TiB
  Free PE / Size        0 / 0
  VG UUID               BrCBkH-KHz6-k2Lb-o28d-QD6J-fJAs-kK2PZN
root@tick:/var/log#
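For completeness, the shorter LVM commands give the same picture you'd expect from the vgdisplay output above (plain lvm2 tools, no PVE involved):
Code:
root@tick:~# vgs           # lists only the three ceph-* VGs, no "pve"
root@tick:~# lvs pve/data  # fails here, since neither the VG nor the LV exists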
Searching through the forum didn't lead me to a solution. It looks like PVE expects an LVM VG/LV (pve/data) that simply isn't there when ZFS is used for the root filesystem...
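My assumption is that the culprit is the default local-lvm storage entry in /etc/pve/storage.cfg, which on a plain LVM/ext4 install points at the pve/data thin pool and, without a nodes restriction, gets polled by pvestatd on every node, including the ZFS-root ones. The sketch below shows what such an entry with a node restriction would look like; it is not copied from our cluster, and <ext4-node> is a placeholder, not the real hostname:
Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes <ext4-node>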
Any hints on how to get rid of these messages?
Thanks,
Andreas