The information is a little scarce. What PVE version are you running (pveversion -v)? How is your storage configured? Does a volume group exist (vgdisplay)?
1. pveversion -v:
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.2-pve1~bpo90
2. vgdisplay:
--- Volume group ---
VG Name               pve
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  17
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                6
Open LV               3
Max PV                0
Cur PV                1
Act PV                1
VG Size               931.26 GiB
PE Size               4.00 MiB
Total PE              238402
Alloc PE / Size       234359 / 915.46 GiB
Free  PE / Size       4043 / 15.79 GiB
VG UUID               zc76mX-ZI7I-A3Gz-03WD-t6y9-nNty-U8JV29
I have 2 nodes: MAIN is configured with LVM, and the 2nd node is configured with RAIDZ. The 2nd node is the one producing that error.
NOTE: when I run 'pvdisplay' on the second node, it outputs nothing, yet the GUI clearly shows 2 local-lvm storages as available.
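For what it's worth, a quick sketch of how to check which storage stack a node is actually using (assuming the standard LVM2 and ZFS command-line tools are installed; this is a generic diagnostic, not specific to this setup). Empty pvdisplay output plus a populated zpool status would mean the node is ZFS-only, in which case the GUI's local-lvm entries are inherited from the cluster-wide storage.cfg rather than backed by real LVM on that node:

```shell
# Check for LVM physical volumes on this node (empty output = no LVM here)
if command -v pvs >/dev/null 2>&1 && pvs --noheadings 2>/dev/null | grep -q .; then
    echo "LVM physical volumes present:"
    pvs
else
    echo "No LVM physical volumes on this node"
fi

# Check for ZFS pools (e.g. the RAIDZ pool on the 2nd node)
if command -v zpool >/dev/null 2>&1; then
    zpool status 2>/dev/null || echo "No ZFS pools on this node"
fi
```

If the LVM check is empty on the second node, the fix is usually to restrict each storage in /etc/pve/storage.cfg to the nodes that actually have it (the "nodes" option), so LVM storages are not offered on the ZFS-only node.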
If you need more information, just tell me; I'd be happy to provide any output that's needed.
Thank you!