[SOLVED] after adding new proxmox 8.1 to a cluster: no such logical volume pve/data (500)

hi,

i built myself a new home lab server and wanted to create a cluster with my existing home lab server.

so, i created a cluster on my existing proxmox, copied the join info, and let the new proxmox join the cluster. this added the new server to the cluster: it showed up green and there was a success message in the log. however, it still had this "loading" overlay with some error like (i can't remember 100%) pve/data (500).

when i try to create a new vm, i get the error no such logical volume pve/data (500) on saving it.

when i open the lvm storage in the web frontend, i get this:

[attached screenshot: LVM storage view in the web UI showing the error]

the lvm was shown in green before i joined the cluster.

after googling and finding some topics in here, i realized you guys need the output from a couple of commands, so i prepared everything that might matter (i hope):

Code:
root@ai:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             24.9G  7.11T   128K  /rpool
rpool/ROOT        1.80G  7.11T   128K  /rpool/ROOT
rpool/ROOT/pve-1  1.80G  7.11T  1.80G  /
rpool/data         128K  7.11T   128K  /rpool/data
rpool/var-lib-vz  23.1G  7.11T  23.1G  /var/lib/vz


Code:
root@ai:~# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.11-7-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.3
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.4
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1
root@ai:~#


Code:
root@ai:~# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
nvme1n1     259:0    0  3.6T  0 disk
├─nvme1n1p1 259:1    0 1007K  0 part
├─nvme1n1p2 259:2    0    1G  0 part
└─nvme1n1p3 259:3    0  3.6T  0 part
nvme0n1     259:4    0  3.6T  0 disk
├─nvme0n1p1 259:5    0 1007K  0 part
├─nvme0n1p2 259:6    0    1G  0 part
└─nvme0n1p3 259:7    0  3.6T  0 part
nvme2n1     259:8    0  3.6T  0 disk
├─nvme2n1p1 259:9    0 1007K  0 part
├─nvme2n1p2 259:10   0    1G  0 part
└─nvme2n1p3 259:11   0  3.6T  0 part


Code:
root@ai:~# lvs
root@ai:~# pvs
root@ai:~# vgs
root@ai:~#


i'd appreciate any help!

thank you!
 
i think i found the solution.

i was googling forever and didn't find anything helpful, but - as usual - you have to post a help thread in a forum or chatroom before the google gods reveal the solution.

so, if i understood it correctly, once you join a cluster, the joining node's local storage config is "removed" (the cluster-wide storage config takes over), and you need to create a new storage entry for the new node.

this new storage can also be restricted to that node only.

gotcha: local-lvm only exists on the "initial cluster node" and cannot be shared with the other nodes. so, it makes sense to change its storage config to be "node 1 exclusive".
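
in case someone else runs into this, here is a minimal sketch of the pvesm commands. to be clear about my assumptions: i'm calling the first node "pve1" (yours will differ), "ai" is the new node, and rpool/data is the empty ZFS dataset from the zfs list output above.

Code:
# restrict the existing LVM-thin storage to the first node (name "pve1" assumed)
pvesm set local-lvm --nodes pve1

# add a ZFS-backed storage for the new node "ai", restricted to it,
# using the rpool/data dataset shown by "zfs list" above
pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir --nodes ai --sparse 1

afterwards, pvesm status on each node should only list the storages that actually exist there.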


please let me know if there is a mistake in my thinking; otherwise i'd consider this "issue" solved.
 
Could it be that the other server was set up with LVM? The “ai” is definitely set up with ZFS.

Basically, the storages on all servers should have the same name and, ideally, identical properties such as IOPS and capacity. Otherwise you will run into problems, especially if you want to use replication. The same applies if you want to do a live migration.
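
To illustrate with a sketch (the node names "pve1" and "ai" are assumptions, not taken from this thread), the per-node restriction discussed above would look like this in /etc/pve/storage.cfg:

Code:
# /etc/pve/storage.cfg (sketch)

# LVM-thin pool, only present on the first node
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes pve1

# ZFS pool, only present on the new node
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes ai
        sparse 1

For replication, however, both nodes would need a ZFS storage with the same name, i.e. a single zfspool entry without a nodes restriction that exists on every node.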
 
hey @sb-jw , yes, the 2nd server (3x 4TB NVMe, planned 768GB RAM (480GB right now due to quite a shit show with the ASUS WS Pro W790E Sage), 6x GeForce RTX 4090, XEON 3465) is considerably larger than my first one (1x 2TB NVMe, 128GB RAM, no dedicated GPU, Ryzen 9 7950X3D) and was meant to be an AI workstation.

However, it was not yet in proper use (actually i was just in the middle of setting it up), but i had already realized that it might be better to have some sort of separation.
coincidentally, i crashed the OS (a stupid mistake of mine), so i had to re-install anyway and went with proxmox.

i never even considered building a "cluster". in fact, i only became aware of it after refreshing my proxmox-fu with a youtube video and thought it would be more convenient to have it all in one admin frontend. and that's basically the only thing i'll use it for.

with that said, you might have just sparked a hardware upgrade (3x 4TB NVMe) for the first server :D. it's running out of space anyway, and a bit of redundancy is never a bad thing.
 
