Adding second hard drive: vgdisplay cannot show volume group

rordonez

Aug 4, 2010
Hi,
I recently added a second SATA hard drive (the first is also SATA) to our Proxmox server running kernel 2.6.18-2-pve #1 SMP.
version:
pve-manager/1.5/4674


I created new partitions on sdb and tried to add an LVM group; however, vgdisplay now shows a problem:

vgdisplay
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
Volume group "pve" not found

The server is still operational, but we suspect it will not survive a reboot.
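
As a precaution before experimenting further, it is probably worth snapshotting whatever LVM state is still readable; the automatic text backups under /etc/lvm are what any later repair will depend on. A minimal sketch, assuming the default Debian paths:

Code:
vgcfgbackup                         # text-dump every volume group that is still readable
cp -a /etc/lvm /root/lvm-etc-copy   # preserve the automatic backups and archives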

Below are the commands issued. Does anyone have pointers on where we went wrong?

Transcript below:
+++++++++++++++++++++++++

lorena:~# df
Filesystem            1K-blocks      Used Available Use% Mounted on
/dev/pve/root          99083868    930072  93120632   1% /
tmpfs                  12317048         0  12317048   0% /lib/init/rw
udev                      10240      2780      7460  28% /dev
tmpfs                  12317048         4  12317044   1% /dev/shm
/dev/mapper/pve-data  833970008  92105324 741864684  12% /var/lib/vz
/dev/sda1                516040     31828    458000   7% /boot

lorena:~# du
4 ./.debtags
24 ./.gnupg
4 ./.aptitude
12 ./.ssh
68 .

lorena:~# vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 931.41 GB
PE Size 32.00 MB
Total PE 29805
Alloc PE / Size 29805 / 931.41 GB
Free PE / Size 0 / 0
VG UUID ywM8ut-ah26-V0dW-H3KH-QRNR-HCdO-vJODH2
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 2
Act PV 2
VG Size 931.11 GB
PE Size 4.00 MB
Total PE 238363
Alloc PE / Size 237316 / 927.02 GB
Free PE / Size 1047 / 4.09 GB
VG UUID XYWDXZ-Of3H-2lUu-Q1q2-pvlE-667u-QJTK7s
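
Worth noting in the output above: pve reports "Cur PV 2" and "Metadata Areas 2", i.e. the group already spans two physical volumes, and the UUID reported missing later presumably belongs to one of them. A quick way to see which devices back each group (a sketch; pvs ships with the same LVM2 tools):

Code:
pvs -o pv_name,pv_uuid,vg_name   # list each PV with its UUID and owning volume group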

lorena:~# df
Filesystem            1K-blocks      Used Available Use% Mounted on
/dev/pve/root          99083868    930076  93120628   1% /
tmpfs                  12317048         0  12317048   0% /lib/init/rw
udev                      10240      2780      7460  28% /dev
tmpfs                  12317048         4  12317044   1% /dev/shm
/dev/mapper/pve-data  833970008  92109496 741860512  12% /var/lib/vz
/dev/sda1                516040     31828    458000   7% /boot

lorena:~# fdisk /dev/sdb
The number of cylinders for this disk is set to 121601.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)
Command (m for help): p
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00069351
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   83  Linux
/dev/sdb2              14      121601   976655610   8e  Linux LVM
Command (m for help): q

lorena:~# mkfs.ext3 /dev/sdb1
mke2fs 1.41.3 (12-Oct-2008)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
26104 inodes, 104388 blocks
5219 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
13 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

lorena:~# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created

lorena:~# mkfs.ext3 /dev/sdb2
mke2fs 1.41.3 (12-Oct-2008)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
61046784 inodes, 244163902 blocks
12208195 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7452 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

lorena:~# pvcreate /dev/sdb2
Physical volume "/dev/sdb2" successfully created

lorena:/# cd home
lorena:/home# mkdir segundo
lorena:/home# mount /dev/sdb2 /home/segundo
mount: unknown filesystem type 'lvm2pv'
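
The mount error is consistent with the pvcreate above: the ext3 superblock on /dev/sdb2 has been overwritten by an LVM PV label, so mount sees an LVM member (which older libblkid calls "lvm2pv") rather than a mountable filesystem. Something like blkid should confirm what signature is actually on the device (assuming blkid is installed, as it normally is on Debian):

Code:
blkid /dev/sdb2   # expected to report an LVM2 member signature at this point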

lorena:/home# pvremove /dev/sdb2
Labels on physical volume "/dev/sdb2" successfully wiped

lorena:/home# pvremove /dev/sdb1
Labels on physical volume "/dev/sdb1" successfully wiped

lorena:/home# vgdisplay
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
Volume group "pve" not found

++++++++++++++++++++++++++++++++


Thank you in advance

Rodrigo O
 
Thanks for your prompt response. I was able to add a new group (by reading the wiki) and added the second hard drive's storage as another LVM group.

However, when I go to the UI and click on "New LVM group" I get an ugly Perl error:
+++++++
[9069]ERR: 24: Error in Perl code: command '/sbin/vgs --separator : --noheadings --units k --unbuffered --nosuffix --options vg_name,vg_size,vg_free' failed with exit code 5
+++++++

I have a hunch this means the pve group is damaged:

lorena:~# vgdisplay
--- Volume group ---
VG Name secondhd
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 931.41 GB
PE Size 4.00 MB
Total PE 238441
Alloc PE / Size 0 / 0
Free PE / Size 238441 / 931.41 GB
VG UUID lFksJ5-S13h-dspc-KHVK-cIjn-LVkk-Rn05BK
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
Volume group "pve" not found

+++++++++++++++++

1 - Is the pvscan command able to restore the pve group and remove any invalid entries?

2 - Is there a file I can edit with vi/pico to check for corrupted entries and restore the pve group to a normal/valid state?
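
On both questions: pvscan only rescans devices for existing PV labels; it cannot recreate a label that pvremove has wiped. The editable text the second question asks about does exist: LVM keeps a human-readable description of each group in /etc/lvm/backup/<vgname>, with older automatic snapshots in /etc/lvm/archive/. Hand-editing is risky, though; vgcfgrestore is the supported way to apply one of those files. A sketch, assuming default paths:

Code:
less /etc/lvm/backup/pve   # current text description of the pve group
ls -lt /etc/lvm/archive/   # automatic snapshots taken before each metadata change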

+++++++++++++++++
lorena:~# df
Filesystem            1K-blocks      Used Available Use% Mounted on
/dev/pve/root          99083868    932380  93118324   1% /
tmpfs                  12317048         0  12317048   0% /lib/init/rw
udev                      10240      2780      7460  28% /dev
tmpfs                  12317048         4  12317044   1% /dev/shm
/dev/mapper/pve-data  833970008  92297828 741672180  12% /var/lib/vz
/dev/sda1                516040     31828    458000   7% /boot
 
What output do you get when you run the command manually:

Code:
/sbin/vgs --separator : --noheadings --units k --unbuffered --nosuffix --options 'vg_name,vg_size,vg_free'
 
Hi, thanks for following up. I've been reading the forum searching for a similar issue.

I think the device with UUID Mpzpul- was not correctly destroyed or removed (I had mixed up two concepts while trying to format the device).

It was somehow added incorrectly to the pve group (see first post) and corrupted the group; however, the server is still standing. It hosts a few containers and a KVM guest, and I'm getting nervous, as I don't have the space to migrate, reformat, and reinstall.

lorena:~# /sbin/vgs --separator : --noheadings --units k --unbuffered --nosuffix --options 'vg_name,vg_size,vg_free'
secondhd:976654336.00:976654336.00
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
Volume group "pve" not found


lorena:~# lvs
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
Volume group "pve" not found
lorena:~#
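
One reassuring detail: df still shows /dev/pve/root and /dev/mapper/pve-data mounted, which means the kernel's device-mapper tables for those LVs are still loaded and the data remains reachable; only LVM's view of the group metadata is broken. The loaded tables can be inspected directly (a sketch; the pve-data name is taken from the df output above):

Code:
dmsetup ls               # list the device-mapper devices the kernel knows about
dmsetup table pve-data   # show the extent mapping still in use for one of them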
 
Hi,
I found the following tools to try to fix this issue:
vgcfgbackup vgck vgdisplay vgimport vgmknodes vgrename vgsplit
vgcfgrestore vgconvert vgexport vgimportclone vgreduce vgs
vgchange vgcreate vgextend vgmerge vgremove vgscan

Is it safe to run vgscan to try to fix the pve group?
Maybe vgcfgbackup to analyze the backup, fix it, and then vgimport?

Does that make any sense?
Regards

Rodrigo O
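
For the record: vgscan only rescans devices and rebuilds LVM's cache, so it is safe, but it will not repair anything here. vgcfgbackup would merely snapshot the current (broken) state, and vgimport applies only to groups previously exported with vgexport. The tool that matches this situation is vgcfgrestore (a sketch):

Code:
vgscan                    # harmless: rescan devices and rebuild the cache
vgcfgrestore --list pve   # list the archived metadata versions available for restore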
 
Trying to restore the pve group (QUESTION)

I found out that the main disk still exists but is marked as a NEW physical volume.
If I recreate the pve group and add the volume to it:

--- WHAT WILL happen to the data on /dev/sda2? Is it safe to do so?


lorena:~# pvdisplay -v /dev/sda2
Using physical volume(s) on command line
Wiping cache of LVM-capable devices
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
get_pv_from_vg_by_id: vg_read failed to read VG pve
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
Couldn't find device with uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD'.
Couldn't find all physical volumes for volume group pve.
get_pv_from_vg_by_id: vg_read failed to read VG pve
"/dev/sda2" is a new physical volume of "931.01 GB"
--- NEW Physical volume ---
PV Name /dev/sda2
VG Name
PV Size 931.01 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID FFWvL2-jxEd-OKTx-HFsQ-3Csm-2fhi-wLlEWz
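
A caution on the question above: the pvdisplay output shows /dev/sda2 carrying a brand-new PV UUID (FFWvL2-), not one that pve's metadata knows, so its old label has evidently been replaced at some point. Recreating the group with vgcreate would write fresh metadata with new UUIDs and discard the existing logical-volume layout, leaving the data on /dev/sda2 unreachable as pve/root and pve/data. The safer direction is to put the old UUIDs back and restore the saved metadata, as sketched at the end of the thread. Comparing disk state against the saved description shows the mismatch (a sketch, assuming default paths):

Code:
pvs -o pv_name,pv_uuid            # UUIDs currently on disk
grep 'id =' /etc/lvm/backup/pve   # UUIDs the saved pve metadata expects (VG, PVs, and LVs)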
 
Re: Trying to restore the pve group (QUESTION)

Hi, I can now see the light.

In the directory /etc/lvm/archive I found files that seem to be backups of the LVM configuration taken before each change was made.
There is also /etc/lvm/backup, where I can see previous configurations.

Now I have to figure out how to restore them without destroying data.
Any ideas? vgimport maybe?

Regards

Rodrigo O
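
For completeness: vgimport is meant for groups previously removed with vgexport, so it does not fit here. The standard recovery for wiped PV labels plus intact /etc/lvm archives is to re-create each missing PV with its original UUID via pvcreate --restorefile, then restore the group metadata with vgcfgrestore. A heavily hedged sketch; the archive filename, the device, and the UUIDs below are placeholders that must be taken from the real files, and everything should be checked against the LVM man pages before touching a production box:

Code:
ls -lt /etc/lvm/archive/   # 1. pick the newest archive file that predates the incident
# 2. re-stamp each missing PV with the UUID the archive records for it
pvcreate --uuid 'Mpzpul-0KN3-SLEg-Vfl9-YfBM-KlFV-4D1IUD' \
         --restorefile /etc/lvm/archive/pve_XXXXX.vg /dev/sdXY
vgcfgrestore -f /etc/lvm/archive/pve_XXXXX.vg pve   # 3. restore the group metadata
vgchange -ay pve                                    # 4. reactivate the logical volumes
vgs && lvs                                          # 5. verify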
 
