Bootdisk problem after PVE 9.0.10 update

jo_strasser

I updated one of my PVE servers from 9.0.x to 9.0.10. Since then, I see the following errors when I select "Disks":

[screenshots: Disks view showing the errors]

Output of pvdisplay:

Code:
root@pve02:/# pvdisplay
  --- Physical volume ---
  PV Name               /dev/nvme0n1p3
  VG Name               pve
  PV Size               930.51 GiB / not usable 4.69 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238210
  Free PE               0
  Allocated PE          238210
  PV UUID               7CrkuN-EER0-p3LT-dbEQ-dSq4-Hv47-5bbcif
 
  WARNING: Unrecognised segment type tier-thin-pool
  WARNING: Unrecognised segment type thick
  WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
  LV vg1/tp1, segment 1 invalid: does not support flag ERROR_WHEN_FULL. for tier-thin-pool segment.
  Internal error: LV segments corrupted in tp1.
  Cannot process volume group vg1
root@pve02:/#


Output of lvdisplay:

Code:
root@pve02:/# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                9Zd21e-qaKz-1Y3W-lvFW-pQRk-9gXN-tsQKb8
  LV Write Access        read/write
  LV Creation host, time proxmox, 2025-09-06 09:10:08 +0200
  LV Status              available
  # open                 1
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0
 
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                NpqTlX-O9IZ-eIUJ-ducf-JlfD-nIso-aUlhDa
  LV Write Access        read/write
  LV Creation host, time proxmox, 2025-09-06 09:10:08 +0200
  LV Status              available
  # open                 1
  LV Size                <922.51 GiB
  Current LE             236162
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1
 
  WARNING: Unrecognised segment type tier-thin-pool
  WARNING: Unrecognised segment type thick
  WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
  LV vg1/tp1, segment 1 invalid: does not support flag ERROR_WHEN_FULL. for tier-thin-pool segment.
  Internal error: LV segments corrupted in tp1.
  Cannot process volume group vg1
root@pve02:/#

I compared the /dev directory with another node that I installed and patched at the same time. On the affected node I also see "md*" devices, which are missing on the working one.
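
To trace where these md devices come from, the following commands should show any assembled arrays and their member devices (just a sketch, output omitted here):

Code:
# show software RAID arrays the kernel has assembled
cat /proc/mdstat
# show the block device tree, including what sits below the md devices
lsblk -o NAME,TYPE,SIZE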

Any idea how to fix this?

Thanks!
 
After running the following, I am wondering where this RAID array is coming from:

Code:
root@pve02:/# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.0
     Creation Time : Tue Jul 15 09:07:59 2025
        Raid Level : raid1
        Array Size : 1063784256 (1014.50 GiB 1089.32 GB)
     Used Dev Size : 1063784256 (1014.50 GiB 1089.32 GB)
      Raid Devices : 1
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Sat Oct 11 12:53:45 2025
             State : clean
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : 1
              UUID : 6536a223:31b57f49:c8fda89d:0f39a815
            Events : 153

    Number   Major   Minor   RaidDevice State
       0     230       67        0      active sync   /dev/zd64p3
root@pve02:/#

Compared to the other node, which I installed identically, there is no /dev/md* device at all.
I never created one manually.
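
The array member /dev/zd64p3 looks like a partition on a ZFS zvol, i.e. a guest disk of a VM on this host. Something like this should confirm which dataset (and therefore which VM) the device belongs to (a sketch):

Code:
# map the zdNN kernel names to zvol dataset names
ls -lR /dev/zvol
# list all zvols known to ZFS
zfs list -t volume
# show partitions and holders of the zvol in question
lsblk /dev/zd64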

Could this be related to my issue?
Is it possible to remove this RAID array without data loss?

What can I do to get a configuration identical to my first node?
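
Regarding removal without data loss: I assume that simply stopping the auto-assembled array on the host, without zeroing its superblock, would be non-destructive, roughly:

Code:
# stop the array on the host; this does not modify the data or the
# md superblock on the member device (do NOT run --zero-superblock,
# that would alter the guest's disk)
mdadm --stop /dev/md1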

PVE01 (good one)
[screenshot: Disks view on PVE01]


PVE02 (bad one)
[screenshot: Disks view on PVE02]


Code:
root@pve01:~# pvesm scan lvm
pve
root@pve01:~#

Code:
root@pve02:~# pvesm scan lvm
  WARNING: Unrecognised segment type tier-thin-pool
  WARNING: Unrecognised segment type thick
  WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
  LV vg1/tp1, segment 1 invalid: does not support flag ERROR_WHEN_FULL. for tier-thin-pool segment.
  Internal error: LV segments corrupted in tp1.
  Cannot process volume group vg1
command '/sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count' failed: exit code 5
pve
root@pve02:~#
 
I found the root cause of this issue.

I migrated a virtual NAS to this host, and the disks of this NAS are visible to LVM on the host, which causes these errors.

I excluded the virtual NAS's devices in lvm.conf, which solved it.
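
For anyone running into the same problem: the exclusion in /etc/lvm/lvm.conf looks roughly like this (the exact global_filter patterns are only an example; adjust them to the devices your guest exposes):

Code:
# /etc/lvm/lvm.conf (excerpt)
devices {
    # reject ZFS zvols (the virtual NAS disks) and any md array assembled
    # on top of them, accept everything else
    global_filter = [ "r|/dev/zd.*|", "r|/dev/md.*|", "a|.*|" ]
}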