Hello Proxmox Community,
I'm new to Proxmox and the QEMU virtualization software, so please excuse anything that seems a bit obvious. I recently clustered my two servers for easier management, and the host node (the owner of the cluster) is having issues creating virtual disks. I'm getting the error "no such volume group 'main1-pve' (500)". The partition in question is the default one created by Proxmox during installation, and it appears to exist in the GUI, as shown below:
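From what I've read, the vgname in storage.cfg has to match a volume group that actually exists on the node, so I assume a mismatch would show up with something like this (standard LVM commands plus grep, nothing Proxmox-specific):
Code:
# list the volume groups this node actually has
vgs
# list the physical volumes and which volume group each belongs to
pvs
# show which volume group the local-lvm storage expects
grep -A3 'lvmthin: local-lvm' /etc/pve/storage.cfg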
However, in the disk selector there is only one option, and it shows 0 B available. I've noticed the two storages have different names, data and local-lvm, which I thought could be the problem, but I checked the config file and the lvmthin storage is named local-lvm and points at the data thinpool, so that may not be the cause here.
I feel like the two nodes are confusing the disks, as the unused local-lvm on my second server shows an unknown status.
I'm not too sure about this one. In case you need it, I've pasted the contents of /etc/pve/storage.cfg below, taken from the cluster master, MAIN1-PVE:
Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname main1-pve
        content rootdir,images

zfspool: MAIN-1TB
        pool MAIN-1TB
        content images,rootdir
        nodes pve
        sparse 0
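Since the unused local-lvm shows as unknown on my second server, I'm also wondering whether I'm supposed to restrict it to whichever node actually has that volume group, the same way the zfspool entry is restricted with a nodes line. Something like this is my guess (the node name here is just a placeholder, I haven't applied it):
Code:
lvmthin: local-lvm
        thinpool data
        vgname main1-pve
        content rootdir,images
        nodes main1-pve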
Running the lvs command, the result is as follows:
Code:
root@main1-pve:~# lvs
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <794.79g             7.11   0.56
  root          pve -wi-ao----   96.00g
  swap          pve -wi-ao----    8.00g
  vm-100-disk-0 pve Vwi-aotz--  250.00g data        3.98
  vm-101-disk-0 pve Vwi-a-tz--   55.00g data        63.34
  vm-102-disk-0 pve Vwi-aotz--   80.00g data        3.80
  vm-103-disk-0 pve Vwi-a-tz--    1.00g data        0.00
  vm-104-disk-0 pve Vwi-aotz--   32.00g data        10.60
  vm-105-disk-0 pve Vwi-aotz--   50.00g data        10.58
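One thing I notice comparing this with the config above: lvs reports the volume group as pve rather than main1-pve. Is the lvmthin entry perhaps supposed to look more like this (just a guess on my part, I haven't changed anything yet)?
Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images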