I had a cluster with the classical disk layout: one disk for the Proxmox OS, another for VMs. It worked. Then I wanted to add a small, simple node with a single disk for DNS and Nginx. I installed Proxmox there, choosing ZFS as the disk format. The installation finished and created the "local" and "local-lvm" storages. Then I joined the node to the cluster, and immediately afterwards "local-lvm" on the fresh node got a question mark and became inactive. Trying to access it throws this error:
no such logical volume pve/data (500)
On the older nodes "local-lvm" is active. I suppose this is because the new node received the /etc/pve/storage.cfg file from the cluster, and the other nodes have a different local-lvm configuration. How can I restore local-lvm here? Is it possible without reinstalling this node? `vgs` returns nothing.
Bash:
~# pvesm status
no such logical volume pve/data
Name       Type     Status     Total       Used        Available   %
h.stor     cifs     active     3300184880  3098254896  201929984   93.88%
local      dir      active     705113088   128         705112960   0.00%
local-lvm  lvmthin  inactive   0           0           0           0.00%
zpool1     zfspool  disabled   0           0           0           N/A
zpool2     zfspool  disabled   0           0           0           N/A
zpool3     zfspool  disabled   0           0           0           N/A
/etc/pve/storage.cfg:
Code:
...
lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images
...
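If my guess about the cluster-wide storage.cfg is right, would the correct fix be to restrict "local-lvm" to the nodes that actually have the LVM thin pool, using the `nodes` option? A sketch of what I have in mind (node names here are placeholders for my older nodes, not my actual hostnames):

Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes oldnode1,oldnode2

Then the new ZFS-only node would simply not expect a pve/data thin pool to exist. Is that the intended way to handle mixed-layout clusters, or is there a better approach?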