[SOLVED] local-lvm is not working after joining the cluster

Morphushka

Well-Known Member
Jun 25, 2019
Syberia
Hello. I created and started 3 Proxmox virtual machines with VMware Workstation Player. All went well. Then I began connecting them into a cluster, and the cluster was created. But the local-lvm storage icon on the 2nd and 3rd nodes now shows a question mark. I can't do anything with that storage on those nodes; it appears inactive or disabled. Before joining the cluster everything was normal.
So, what does it mean, and how do I fix it?
Thanks.
local-lvm.png
 
Is local-lvm configured on all nodes? Since it is a local storage, it can only work if the other nodes have the same configuration. If not, edit the local-lvm storage and restrict it to the nodes where it exists.
 
Hi Alwin

I'm a little confused; can you please go into more detail with your response?
What is the restriction with local storage when in a cluster?

If a node was running VMs and is then joined to a cluster, what happens to the VMs?

I haven’t tested this in our setup yet so interested to understand the issue better.

Cheers,
G
 
In general, I warmly recommend reading our documentation or searching through it. ;)
https://pve.proxmox.com/pve-docs/pve-admin-guide.html

What is the restriction with local storage when in a cluster?
As the name suggests, it is local to a node and is created when installing Proxmox VE. Depending on the type of storage (directory, LVM, ZFS), the naming differs slightly. If the other nodes do not have that exact same storage layout, the storage needs to be restricted to the host where it exists.
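For example, that restriction can be set in the GUI under Datacenter → Storage → Edit → Nodes, or directly in /etc/pve/storage.cfg. A sketch of the relevant entry, assuming a default lvmthin setup and a hypothetical node name pve1:

```
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes pve1
```

The `nodes` line is what limits the storage to the node(s) where it actually exists.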

If a node was running VMs and is then joined to a cluster, what happens to the VMs?
The /etc/pve directory is emptied (the DB is created anew) and filled with the content of the cluster. If you find yourself in such a situation, you need to recreate the configs by hand (not via the GUI) or recover them from backup.
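For reference, guest configs are plain text files under /etc/pve/qemu-server/ (VMs) and /etc/pve/lxc/ (containers), so recreating them by hand mostly means putting those files back. A sketch, assuming a copy of the old configs still exists under a hypothetical /root/pve-backup and using example VMIDs 100 and 101:

```
# Restore a VM config and a container config from a saved copy (sketch)
cp /root/pve-backup/qemu-server/100.conf /etc/pve/qemu-server/100.conf
cp /root/pve-backup/lxc/101.conf /etc/pve/lxc/101.conf
```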
 
I will, thank you.

Appreciate the heads up on the cluster add.

Is there a way to run VMs both locally and on shared storage?

I just assumed it worked similarly to VMware, where both local and shared storage can be used at the same time, and no config is lost when adding a host to a cluster.

Cheers,
G
 
Is there a way to run VMs both locally and on shared storage?
Yes. But HA only works with VMs on shared storage.

I just assumed it worked similarly to VMware, where both local and shared storage can be used at the same time, and no config is lost when adding a host to a cluster.
A VMware cluster works quite differently from a Proxmox VE cluster. On cluster join the config DB is backed up, but extracting the configs from the DB afterwards is an advanced topic. It is easier to back up the configs beforehand.
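A simple way to do that, sketched here with an example target path, is to copy /etc/pve to a location outside the cluster filesystem before the join:

```
# Save all guest and storage configs before joining the cluster (sketch)
cp -a /etc/pve /root/pve-config-backup
```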
 

That's OK, I totally understand that only HA allows VMs to be migrated live and to fail over.

Can you please point me to the relevant documentation for running VMs both locally and on shared storage on the same host?

Obviously only the VMs running on shared storage will be able to migrate live and fail over.

Cheers,
G
 
OK, but does this just mean that the cluster should be created before adding shared storage? Otherwise, if the cluster is created after VMs already exist on a node, the VM configs need to be backed up and restored on that node.
For ease of use, only add an empty Proxmox VE node to a cluster.
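A quick way to check that a node is empty before joining is to list its guests; both commands should print no entries:

```
qm list    # QEMU/KVM virtual machines
pct list   # LXC containers
```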
 
Is local-lvm configured on all nodes? Since it is a local storage, it can only work if the other nodes have the same configuration. If not, edit the local-lvm storage and restrict it to the nodes where it exists.
Hi all, yes they all have it, but on the first node it is a simple device with btrfs, and the other two have zraid1 with LVM. Can that be the reason?
 
Yes, the storage names are different then, and each storage needs to be restricted to the node it exists on.
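On the command line this restriction can be applied with pvesm; the node name below is a placeholder for the one node that actually has the storage:

```
pvesm set local-lvm --nodes <node-with-that-storage>
```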
 
Hello! I have the same problem. In the end I tried to separate the cluster following the guide (5.5.1. Separate a Node Without Reinstalling), but local-lvm on the second node is still not fixed. Can I fix it? Also, the main node of the cluster has not disappeared from the GUI on the second node; only on the first node did the GUI start working fine again.

How can I fix local-lvm on the second node and remove the first node from the second node's GUI?
Maybe I have more than one problem?
 

Attachments

  • Снимок экрана от 2025-01-29 02-16-07.png (screenshot)
  • Снимок экрана от 2025-01-29 02-16-31.png (screenshot)