[SOLVED] VG pve is missing PV at boot

migthy_warrior

New Member
Jan 25, 2025
Hi Community,
I installed Proxmox on a SATA SSD connected to the internal USB port of my Gen10 MicroServer through a USB-to-SATA adapter.
Once the installation was complete, I deleted the data LV, extended the root LV over the whole disk, and grew the XFS filesystem accordingly.
I then created a new PV, added it to the VG pve, and recreated the data LV so that it resides only on the RAID.
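For reference, the reshuffle was done with commands along these lines (a rough reconstruction, not a verbatim shell history; the thin-pool recreation in particular is my best recollection, and the device names match the lsblk/blkid output below):

Code:
# Remove the default thin pool "data" and grow root over the rest of the SSD
lvremove pve/data
lvextend -l +100%FREE pve/root
xfs_growfs /

# Create a PV on the RAID array and add it to VG pve
pvcreate /dev/sda1
vgextend pve /dev/sda1

# Recreate the thin pool "data" so that it sits only on the RAID PV
lvcreate -l 100%FREE --thinpool data pve /dev/sda1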
Everything appears to work correctly, but at every startup I see the warnings attached.
What do you think could be causing this? My guess is that /dev/sdb3, the PV holding root and swap, is not detected early in the boot, even though it works correctly once the system is up.

[Attachment: 1737795935014.png (screenshot of the boot warnings)]

Code:
root@pve:~# lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                    8:0    0   5.5T  0 disk
└─sda1                 8:1    0   5.5T  0 part
  ├─pve-data_tmeta   252:0    0    88M  0 lvm
  │ └─pve-data-tpool 252:2    0   5.5T  0 lvm
  │   ├─pve-data     252:3    0   5.5T  1 lvm
  │   └─pve-vz       252:4    0   200G  0 lvm  /vz
  └─pve-data_tdata   252:1    0   5.5T  0 lvm
    └─pve-data-tpool 252:2    0   5.5T  0 lvm
      ├─pve-data     252:3    0   5.5T  1 lvm
      └─pve-vz       252:4    0   200G  0 lvm  /vz
sdb                    8:16   0 223.6G  0 disk
├─sdb1                 8:17   0  1007K  0 part
├─sdb2                 8:18   0     1G  0 part /boot/efi
└─sdb3                 8:19   0 222.6G  0 part
  ├─pve-swap         252:5    0     8G  0 lvm  [SWAP]
  └─pve-root         252:6    0 214.6G  0 lvm  /
sdc                    8:32   1  14.6G  0 disk
└─sdc1                 8:33   1  14.6G  0 part /backup_pve


Code:
root@pve:~# blkid
/dev/sdb2: UUID="01F3-92A4" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="7bdd3b4f-3b48-4636-bf2f-23f0c3e210bc"
/dev/sdb3: UUID="xeBhlM-XCAT-tPxw-zqSk-y15n-3iGq-dp2d3U" TYPE="LVM2_member" PARTUUID="c1930308-6df6-47b7-8bdb-e1972874f33a"
/dev/mapper/pve-vz: UUID="995e33f3-521d-4c6c-9040-69cd0c50c811" BLOCK_SIZE="512" TYPE="xfs"
/dev/sda1: UUID="qUlOLp-nu1c-8Isk-gNHA-X7sa-x1k3-blgcKk" TYPE="LVM2_member" PARTLABEL="Linux LVM" PARTUUID="3807b283-d201-400f-a20e-19908e2a040c"
/dev/sdb1: PARTUUID="62687e52-ddb9-4208-8b9b-a67deaa80038"
/dev/mapper/pve-root: UUID="c969768a-71a2-467d-8fa2-8db973938d26" BLOCK_SIZE="4096" TYPE="xfs"
/dev/sdc1: UUID="ab9ba017-af58-4a73-9fa5-d0337edc7657" BLOCK_SIZE="512" TYPE="xfs" PARTLABEL="Linux filesystem" PARTUUID="062bb37b-9aad-4d1c-ad93-e10bcacd5bf3"
/dev/mapper/pve-swap: UUID="c2771290-d4a3-413a-be27-df122c61d850" TYPE="swap"

Code:
root@pve:~# pvs
  PV         VG  Fmt  Attr PSize    PFree
  /dev/sda1  pve lvm2 a--    <5.46t    0
  /dev/sdb3  pve lvm2 a--  <222.57g    0
root@pve:~# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   2   4   0 wz--n- <5.68t    0
root@pve:~# lvs
  LV   VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-aotz--   <5.46t             0.11   10.46
  root pve -wi-ao---- <214.57g
  swap pve -wi-ao----    8.00g
  vz   pve Vwi-aotz--  200.00g data        3.09

As you can see, the PV reported as missing at boot is in fact present, yet the warnings still appear at every startup.
This is the PV that originally held the root, swap, and data LVs and that now, as intended, holds only root and swap.
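If anyone wants to verify the same thing on their side, the PV-to-LV mapping can be checked like this (generic LVM commands, not output captured from my system):

Code:
# Show which physical devices each LV is allocated on
lvs -o +devices pve

# Show the segment mapping of the SSD PV (should list only root and swap)
pvdisplay -m /dev/sdb3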

Thank you in advance for your valuable support,
MC
 
Hi Community,
I continued troubleshooting the error shown on system boot.

In recent days, I noticed that in some cases the error did not appear.
When the error does not appear, the PV shows up as /dev/sda; when it does appear (the majority of cases), the device is named /dev/sdb. Unfortunately it does not seem possible to assign a static name to the device.
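For anyone hitting the same thing, this is how to check which kernel name the PVs received on a given boot (generic LVM/udev commands, not output from my system):

Code:
# The kernel name can swap between /dev/sda and /dev/sdb across reboots,
# but the PV UUID stays the same
pvs -o pv_name,pv_uuid,vg_name

# Stable identifiers for the underlying disks
ls -l /dev/disk/by-id/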

I found this thread useful:

https://forum.proxmox.com/threads/uuid-lvm-problem.43777

The thread contains an interesting link to the following document:

https://www.suse.com/support/kb/doc/?id=000018730

At this point I will have to resign myself to seeing the error every time I restart my Proxmox server. I don't like it, but I don't see any alternative.

I hope my experience will be useful to someone, see you soon,
MC
 
Hi Community,
I didn't give up and kept trying to resolve that annoying cosmetic bug, and I finally found the solution.

Recently I had to reinstall Proxmox on the SSD, and I noticed that after the installation the message was gone.
As soon as the system started, I performed a vgmerge to join the old Volume Group pve-old to the new Volume Group pve that was created during installation.

Code:
# Deactivate both VGs
vgchange -a n pve
vgchange -a n pve-old

# Merge VG pve-old into VG pve
vgmerge -v pve pve-old
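A quick sanity check after the merge would look like this (a sketch of what to run, not output pasted from my console):

Code:
# Reactivate the merged VG and confirm that both PVs now belong to pve
vgchange -a y pve
pvs
vgs pve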

After this, the message started appearing again.
And here's the solution: I simply split the volume groups again.

Code:
# Boot the system from the Proxmox ISO in debug mode

# Deactivate the VG
vgchange -a n pve

# Split the VG: move /dev/sda1 out of pve into a new VG pve-raid
vgsplit pve pve-raid /dev/sda1

# /dev/sda1 is the LVM partition on my RAID 10 disks that holds my VM data
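Before rebooting back into the installed system, the result of the split can be verified from the same rescue environment (again a sketch, not captured output):

Code:
# Reactivate both VGs (or simply reboot into the installed system)
vgchange -a y pve
vgchange -a y pve-raid

# pve should now contain only /dev/sdb3, pve-raid only /dev/sda1
pvs
vgs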

I hope my solution will be useful to someone.
MC
 