No such volume group after clustering 2 servers.

Troplo

Hello Proxmox Community,
I'm new to Proxmox and the QEMU virtualization software, so please excuse anything that may seem a bit stupid. I recently clustered my two servers for easier management, and the host node (the owner of the cluster) is having issues when creating virtual disks: I'm getting the error "no such volume group 'main1-pve' (500)". The partition is the default one created by Proxmox upon installation, and it appears to exist in the GUI, as shown below:
[screenshot: the storage as shown in the GUI]
However, in the disk selector there is only one option, and it says 0 B is available. I've noticed that the two drives are called something different, data and local-lvm, which could be the problem, although I did check the config file: the lvmthin storage is named local-lvm and it includes the data thinpool, so that might not be the case here.
[screenshot: the disk selector showing 0 B available]
I feel like the two nodes are confusing the disks, as the unused local-lvm on my second server has an unknown status.
Not too sure about this one. In case you need it, I've pasted the contents of /etc/pve/storage.cfg below, taken from the cluster master, MAIN1-PVE:
Code:
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

lvmthin: local-lvm
    thinpool data
    vgname main1-pve
    content rootdir,images

zfspool: MAIN-1TB
    pool MAIN-1TB
    content images,rootdir
    nodes pve
    sparse 0
The output of the lvs command is as follows:
Code:
root@main1-pve:~# lvs
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <794.79g             7.11   0.56
  root          pve -wi-ao----   96.00g
  swap          pve -wi-ao----    8.00g
  vm-100-disk-0 pve Vwi-aotz--  250.00g data        3.98
  vm-101-disk-0 pve Vwi-a-tz--   55.00g data        63.34
  vm-102-disk-0 pve Vwi-aotz--   80.00g data        3.80
  vm-103-disk-0 pve Vwi-a-tz--    1.00g data        0.00
  vm-104-disk-0 pve Vwi-aotz--   32.00g data        10.60
  vm-105-disk-0 pve Vwi-aotz--   50.00g data        10.58
 
Did you have a 'main1-pve' volume group on the second node, and was the storage there also named 'local-lvm'?
If yes, this is expected, since all info from the node that joins is lost (thus it is best to only join an empty node).
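(As a side note: after joining, /etc/pve/storage.cfg is shared by the whole cluster, so every node sees every storage entry; that is also why a storage can show an unknown status on a node that cannot activate it. To check which storages a given node can actually use, something like the following could be run on each node; this is just a suggestion using the standard pvesm tool, not something from this thread.)
Code:
# shows the storages visible to this node and whether they are active,
# along with their reported capacity
pvesm status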

Can you post the 'lsblk' output from both nodes?
 
The volume group on the second node is just named 'pve'; however, yes, there is a local-lvm on the second node as well. It's very small, I just use it for ISOs.
MAIN1-PVE:
Code:
root@main1-pve:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 931.5G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part /boot/efi
└─sda3                         8:3    0   931G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   8.1G  0 lvm 
  │ └─pve-data-tpool         253:4    0 794.8G  0 lvm 
  │   ├─pve-data             253:5    0 794.8G  0 lvm 
  │   ├─pve-vm--100--disk--0 253:6    0   250G  0 lvm 
  │   ├─pve-vm--101--disk--0 253:7    0    55G  0 lvm 
  │   ├─pve-vm--102--disk--0 253:8    0    80G  0 lvm 
  │   ├─pve-vm--103--disk--0 253:9    0     1G  0 lvm 
  │   ├─pve-vm--104--disk--0 253:10   0    32G  0 lvm 
  │   └─pve-vm--105--disk--0 253:11   0    50G  0 lvm 
  └─pve-data_tdata           253:3    0 794.8G  0 lvm 
    └─pve-data-tpool         253:4    0 794.8G  0 lvm 
      ├─pve-data             253:5    0 794.8G  0 lvm 
      ├─pve-vm--100--disk--0 253:6    0   250G  0 lvm 
      ├─pve-vm--101--disk--0 253:7    0    55G  0 lvm 
      ├─pve-vm--102--disk--0 253:8    0    80G  0 lvm 
      ├─pve-vm--103--disk--0 253:9    0     1G  0 lvm 
      ├─pve-vm--104--disk--0 253:10   0    32G  0 lvm 
      └─pve-vm--105--disk--0 253:11   0    50G  0 lvm
PVE (Second Node):
Code:
root@pve:~# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 931.5G  0 disk
├─sda1     8:1    0 931.5G  0 part
└─sda9     8:9    0     8M  0 part
sdb        8:16   0  74.5G  0 disk
├─sdb1     8:17   0  70.5G  0 part /
├─sdb2     8:18   0     1K  0 part
└─sdb5     8:21   0     4G  0 part [SWAP]
sr0       11:0    1  1024M  0 rom 
zd16     230:16   0    10G  0 disk
├─zd16p1 230:17   0   9.3G  0 part
├─zd16p2 230:18   0     1K  0 part
└─zd16p5 230:21   0   765M  0 part
zd32     230:32   0    40G  0 disk
├─zd32p1 230:33   0    36G  0 part
├─zd32p2 230:34   0     1K  0 part
└─zd32p5 230:37   0     4G  0 part
zd48     230:48   0     2M  0 disk
zd64     230:64   0    16G  0 disk
├─zd64p1 230:65   0    15G  0 part
├─zd64p2 230:66   0     1K  0 part
└─zd64p5 230:69   0  1022M  0 part
zd80     230:80   0    90G  0 disk
├─zd80p1 230:81   0    86G  0 part
├─zd80p2 230:82   0     1K  0 part
└─zd80p5 230:85   0     4G  0 part
zd96     230:96   0    45G  0 disk
├─zd96p1 230:97   0    42G  0 part
├─zd96p2 230:98   0     1K  0 part
└─zd96p5 230:101  0     3G  0 part
zd112    230:112  0    50G  0 disk
 
Yeah, OK, so it seems there is no VG named 'main1-pve'.
Are you sure that ever existed? What does 'pvs' say?
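(If it helps, the physical volumes and volume groups can be listed directly with the standard LVM tools. Given the lvs output above, I'd only expect a 'pve' VG to show up here, but that's an assumption until the commands are actually run.)
Code:
# list physical volumes and which volume group each belongs to
pvs
# list volume groups with their size and free space
vgs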

If it never existed, just rename the vgname in the storage config from 'main1-pve' to 'pve'.
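(In other words, the local-lvm entry in /etc/pve/storage.cfg would point at the 'pve' volume group that, per the lvs output above, actually holds the 'data' thin pool. A sketch of the corrected stanza:)
Code:
lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images
Since /etc/pve/storage.cfg is shared across the cluster, saving the file applies the change to all nodes.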
 
