additional disk is "question mark"

Loulou91

Hi all, as you can see below I can no longer access my samba server because my LVM storage jlm_photos is shown with a question mark. I have no idea where the problem comes from, except that I removed the system disk to try another one and then went back to the original system disk.
Is there a way to recover without wiping the content of sda3? Thanks in advance.

[screenshot: 1686690738397.png]
 

Some additional elements:
Code:
root@pve:~# pvesm status
  WARNING: VG name pve is used by VGs XG1Rmd-igde-YbLL-6suc-xhNR-Yybk-xff91a and euxadV-EWEM-70mD-noy9-992e-ccNG-hOX8rK.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name pve is used by VGs XG1Rmd-igde-YbLL-6suc-xhNR-Yybk-xff91a and euxadV-EWEM-70mD-noy9-992e-ccNG-hOX8rK.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name pve is used by VGs XG1Rmd-igde-YbLL-6suc-xhNR-Yybk-xff91a and euxadV-EWEM-70mD-noy9-992e-ccNG-hOX8rK.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
Name              Type     Status           Total            Used       Available        %
jlm_photos         lvm   inactive               0               0               0    0.00%
local              dir     active       236774440       179673264        46710656   75.88%
Any idea about the duplicate VG names?
 
No raid.
Code:
root@pve:~# vgdisplay
  WARNING: VG name pve is used by VGs XG1Rmd-igde-YbLL-6suc-xhNR-Yybk-xff91a and euxadV-EWEM-70mD-noy9-992e-ccNG-hOX8rK.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               237.97 GiB
  PE Size               4.00 MiB
  Total PE              60921
  Alloc PE / Size       60921 / 237.97 GiB
  Free  PE / Size       0 / 0
  VG UUID               XG1Rmd-igde-YbLL-6suc-xhNR-Yybk-xff91a

  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <499.50 GiB
  PE Size               4.00 MiB
  Total PE              127871
  Alloc PE / Size       123775 / <483.50 GiB
  Free  PE / Size       4096 / 16.00 GiB
  VG UUID               euxadV-EWEM-70mD-noy9-992e-ccNG-hOX8rK
 
Post your fstab.

The issue is, as the warning says, that you have multiple VGs with the same name.
Rename one "pve" to "pve1" or whatever name you prefer.
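For reference, a quick way to see the two same-named VGs side by side before touching anything (a sketch; vgs and its -o option are standard LVM2):
Code:
# list both VGs with their UUIDs and the physical volume each one sits on
vgs -o vg_name,vg_uuid,pv_name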
 
Thanks for your answer.
I renamed pve to pve1 as you proposed, using the following set of commands:
Code:
# replace "pve" with "pve1" in the hostname and hosts files
sed -i 's/pve/pve1/g' /etc/hostname
sed -i 's/pve/pve1/g' /etc/hosts
hostnamectl set-hostname pve1
# restart the Proxmox web UI and API services
systemctl restart pveproxy
systemctl restart pvedaemon
# move the guest configs to the new node directory and drop the old one
mv /etc/pve/nodes/pve/lxc/* /etc/pve/nodes/pve1/lxc
mv /etc/pve/nodes/pve/qemu-server/* /etc/pve/nodes/pve1/qemu-server
rm -r /etc/pve/nodes/pve
reboot
and here is the final result...
[screenshot: 1686737459347.png]
and vgdisplay command still refers to pve...
Code:
root@pve1:~# vgdisplay
  WARNING: VG name pve is used by VGs XG1Rmd-igde-YbLL-6suc-xhNR-Yybk-xff91a and euxadV-EWEM-70mD-noy9-992e-ccNG-hOX8rK.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               237.97 GiB
  PE Size               4.00 MiB
  Total PE              60921
  Alloc PE / Size       60921 / 237.97 GiB
  Free  PE / Size       0 / 0
  VG UUID               XG1Rmd-igde-YbLL-6suc-xhNR-Yybk-xff91a

  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <499.50 GiB
  PE Size               4.00 MiB
  Total PE              127871
  Alloc PE / Size       123775 / <483.50 GiB
  Free  PE / Size       4096 / 16.00 GiB
  VG UUID               euxadV-EWEM-70mD-noy9-992e-ccNG-hOX8rK
 
My /etc/fstab

Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=CC37-175B /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
 
There's the command vgrename with an example:

Rename the VG with the specified UUID to "myvg".
vgrename Zvlifi-Ep3t-e0Ng-U42h-o0ye-KHu1-nl7Ns4 myvg

You have duplicate VG names in the metadata, so I doubt the commands you applied will solve your issue. It's probably "better" to revert your changes from 'pve1' back to 'pve', since that's the default name, and afterwards change the VG name of your non-system disk to something different with the mentioned vgrename command.
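Applied here, a rough sketch of that rename could look like the following, assuming (from the vgdisplay output above) that the 499.5 GiB VG with no open LVs is the one on the data disk rather than the system disk, and with "datavg" as a purely illustrative target name:
Code:
# rename the duplicate VG by its UUID so the two names no longer clash
# ("datavg" is only an example name)
vgrename euxadV-EWEM-70mD-noy9-992e-ccNG-hOX8rK datavg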
 
and vgdisplay command still refers to pve...

Name of VGs are unrelated to the name of the node - you need to rename one of the VGs.


You need to be careful to not rename the one that contains your root fs - otherwise your node will not be able to reboot.

Can you post the output of lvdisplay ?
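A quick way to confirm which VG currently holds the root filesystem before renaming anything (a sketch; both commands are read-only):
Code:
# the device backing / is typically /dev/mapper/<vgname>-root on a default install
findmnt -n -o SOURCE /
# cross-check LV names against VG UUIDs to see which "pve" is actually in use
lvs -o lv_name,vg_name,vg_uuid,lv_active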
 
Code:
root@pve1:~# lvdisplay
  WARNING: VG name pve is used by VGs XG1Rmd-igde-YbLL-6suc-xhNR-Yybk-xff91a and euxadV-EWEM-70mD-noy9-992e-ccNG-hOX8rK.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                ywEFw1-mFeH-hXH3-Znna-dE6q-3HEm-sDHO96
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-02-14 08:48:05 +0100
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0


  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                5dQN4h-4BMB-v29G-LBcl-mVAX-FsbA-ZtXBc7
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-02-14 08:48:05 +0100
  LV Status              available
  # open                 1
  LV Size                229.97 GiB
  Current LE             58873
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1


  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                20fNf5-S36v-evr2-923B-L0zA-HWZ5-FyfLWD
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-06-13 18:22:27 +0200
  LV Status              NOT available
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto


  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                Y8cqVD-jbzV-000B-3tlO-vRkK-rOfQ-mMp6k9
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-06-13 18:22:27 +0200
  LV Status              NOT available
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto


  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                jo8Ksk-lIed-bIab-OxKn-D1w1-kR85-dDsjjG
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-06-13 18:22:44 +0200
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                371.90 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.46%
  Current LE             95207
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
 
What puzzles me is why you have swap and root twice but data only once (that would only make sense with a software RAID 1 configured). Strange.
 
Can you also post the output of lsblk?
After renaming the data VG from "pve" to "pvedata":
Code:
root@pve:~# lsblk
NAME                   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0                    7:0    0    32G  0 loop
sda                      8:0    0 931.5G  0 disk
├─sda1                   8:1    0  1007K  0 part
├─sda2                   8:2    0   512M  0 part
└─sda3                   8:3    0 499.5G  0 part
  ├─pvedata-swap       253:2    0     8G  0 lvm
  ├─pvedata-root       253:3    0    96G  0 lvm
  ├─pvedata-data_tmeta 253:4    0   3.8G  0 lvm
  │ └─pvedata-data     253:6    0 371.9G  0 lvm
  └─pvedata-data_tdata 253:5    0 371.9G  0 lvm
    └─pvedata-data     253:6    0 371.9G  0 lvm
sdb                      8:16   0 238.5G  0 disk
├─sdb1                   8:17   0  1007K  0 part
├─sdb2                   8:18   0   512M  0 part /boot/efi
└─sdb3                   8:19   0   238G  0 part
  ├─pve-swap           253:0    0     8G  0 lvm  [SWAP]
  └─pve-root           253:1    0   230G  0 lvm  /
root@pve:~#
 
I went back to the original node name "pve", then renamed the data VG with:
Code:
vgrename euxadV-EWEM-70mD-noy9-992e-ccNG-hOX8rK pvedata
No more warnings about duplicate VG names, but
Code:
root@pve:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               237.97 GiB
  PE Size               4.00 MiB
  Total PE              60921
  Alloc PE / Size       60921 / 237.97 GiB
  Free  PE / Size       0 / 0
  VG UUID               XG1Rmd-igde-YbLL-6suc-xhNR-Yybk-xff91a

  --- Volume group ---
  VG Name               pvedata
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <499.50 GiB
  PE Size               4.00 MiB
  Total PE              127871
  Alloc PE / Size       123775 / <483.50 GiB
  Free  PE / Size       4096 / 16.00 GiB
  VG UUID               euxadV-EWEM-70mD-noy9-992e-ccNG-hOX8rK
Still a question mark... [screenshot: 1686747275588.png]
 
Can you post the jlm_photos section from /etc/pve/storage.cfg
 
Code:
dir: local
    path /var/lib/vz
    content snippets,images,iso,backup,rootdir,vztmpl
    shared 0

lvm: jlm_photos
    vgname jlm_photos
    content rootdir,images
    shared 1
 
The vgname in your storage config doesn't match the actual VG name.

Code:
vgrename pvedata jlm_photos

should solve the issue.
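After the rename, a quick sanity check (assuming the storage ID in storage.cfg stays jlm_photos):
Code:
vgs            # the data VG should now be listed as jlm_photos
pvesm status   # the jlm_photos storage should report "active" again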
 
Great progress, the question mark disappeared!
But I'm still not able to start my samba server (LXC):
Code:
 TASK ERROR: can't activate LV '/dev/jlm_photos/vm-101-disk-0':   Failed to find logical volume "jlm_photos/vm-101-disk-0"
/dev/jlm_photos exists (it was created when pvedata was renamed) but does not contain vm-101....
Code:
root@pve:~# cd /dev/jlm_photos/
root@pve:/dev/jlm_photos# ls
data  root  swap
The file 101.conf:
Code:
arch: amd64
cores: 1
features: nesting=1
hostname: server-samba-63
memory: 512
mp0: jlm_photos:vm-101-disk-0,mp=/srv/samba/data,size=475G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.254,hwaddr=D2:DA:75:5C:22:A8,ip=192.168.1.63/24,type=veth
onboot: 1
ostype: debian
rootfs: local:101/vm-101-disk-0.raw,size=8G
startup: order=1
swap: 512
unprivileged: 1

lvdisplay
Code:
 --- Logical volume ---
  LV Name                data
  VG Name                jlm_photos
  LV UUID                jo8Ksk-lIed-bIab-OxKn-D1w1-kR85-dDsjjG
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-06-13 18:22:44 +0200
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                371.90 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.46%
  Current LE             95207
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6
 
Strange that there's no trace of the logical volume 'vm-101-disk-0'.

Code:
lsblk
lvs
lvscan

should list it if it still exists. Maybe you accidentally attached the wrong disk or overwrote your original setup?
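If the volume was a thin LV inside the data pool, it might also just be inactive; a couple of extra read-only checks (a sketch, the lvchange step only applies if the LV actually shows up):
Code:
lvs -a jlm_photos     # -a also lists hidden and inactive volumes in the pool
lvscan                # marks each LV as ACTIVE or inactive
# if vm-101-disk-0 appears but is inactive, it could be activated with:
# lvchange -ay jlm_photos/vm-101-disk-0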
 
