2 local-lvm entries in storage.cfg, would like to prevent data loss

m1rko

New Member
Hi all,
I messed up my storage configuration, I guess: I now have two lvmthin: local-lvm entries in my storage.cfg, and I would like not to lose the installation on my test cluster to data loss.
So what did I do?

First I set up the node pveNUC with the standard installation options and installed a few CTs and VMs; no trouble so far.

Then I set up the node pveXMG with an encrypted volume to serve as local-lvm.
I installed Proxmox with the standard options, but used only ~80GB of the 250GB SSD.
Then I set up an encrypted PV with cryptsetup (it shows up as /dev/mapper/sda4_crypt), created a volume group also called sda4_crypt on it, and created a thin pool called data.
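Writing from memory, the commands were along these lines (a sketch; only the names sda4_crypt and data are certain, /dev/sda4, the pool size and the storage entry name are reconstructed):

    # encrypt the spare partition and open it as /dev/mapper/sda4_crypt
    cryptsetup luksFormat /dev/sda4
    cryptsetup open /dev/sda4 sda4_crypt
    # put LVM on top: a PV, a VG also named sda4_crypt, and a thin pool "data"
    pvcreate /dev/mapper/sda4_crypt
    vgcreate sda4_crypt /dev/mapper/sda4_crypt
    lvcreate -L 150G --thinpool data sda4_crypt
    # register it as an lvmthin storage restricted to this node
    pvesm add lvmthin local-lvm-crypt --vgname sda4_crypt --thinpool data --content rootdir,images --nodes pveXMG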
So what I got was
[screenshot: the resulting disk/LVM layout on pveXMG]

Fine so far.
Now migration does not seem to work via the GUI if the local-lvm storage configuration is not identical on both nodes.

So I thought, "OK, let's rename it", and renamed the storage entry in storage.cfg to
[screenshot: the renamed storage entry in storage.cfg]

But that made the lvm-thin entry on the node pveNUC disappear, which I didn't realize immediately. I tried a migration, which actually moved a test CT between the nodes.
Then I realized I was missing the local-lvm on my pveNUC node.
Fortunately, renaming the encrypted storage on pveXMG back restored peace: the local-lvm on pveNUC is back and works, as is the local-lvm-crypt on my pveXMG node.

But here I am stuck.
So the big question is: how do I reconfigure the storage on pveXMG so that I can migrate, back up and so on with the GUI?

Grateful for any help!
THX
 
Hi,
But here I am stuck.
So the big question is: how do I reconfigure the storage on pveXMG so that I can migrate, back up and so on with the GUI?
Yes, when migrating containers (or VMs offline), the target storage option is currently only usable on the CLI. But what is your issue with backup?
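For example, something along these lines (the CT/VM IDs and storage names are placeholders; check pct help migrate and qm help migrate for the exact option spelling on your version):

    # container migration to a different target storage (add --restart if the CT is running)
    pct migrate 100 pveXMG --target-storage local-lvm-crypt
    # offline VM migration to a different target storage
    qm migrate 101 pveXMG --targetstorage local-lvm-crypt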

To be able to have a storage with the same name on both nodes, the configuration also has to be the same. So you'd need to rename the volume group on one of the nodes. Once you have done that, you can remove one of the entries in the storage configuration and remove the nodes restriction (or add the second node to it) for the other entry.
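For example, assuming pveNUC still uses the default volume group pve and the extra entry on pveXMG is the one called local-lvm-crypt, a sketch could look like this (double-check the names against your setup and stop the guests on that storage first):

    # on pveXMG: rename the VG so it matches the one local-lvm refers to
    vgrename sda4_crypt pve
    # remove the duplicate storage entry and lift the node restriction on the other
    pvesm remove local-lvm-crypt
    pvesm set local-lvm --delete nodes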
 
Hi fiona,
thank you for the advice. I'll try to solve the issue that way as soon as I get back to my test server.

Cheers!