Starting VMs on first node after second node failure on 2-node Proxmox cluster (DRBD)

Mannekino
Hello,

I have two servers configured as a Proxmox 2-node cluster with LVM on top of DRBD as storage for my VMs. See the diagram below (crude Paint drawing :p)

[Attachment: pve_cluster.png]

My second node has failed after an upgrade to Proxmox 3.4, and I cannot resolve this issue immediately.

I would like to start the VMs that were located on the second node on my first node. When the second node is back online I can re-sync the data using the first node as the source.

I cannot migrate them in the Proxmox web interface because the second node is currently offline (see the screenshot below). I need to find a way to do this manually, but I can't find anything about it in the documentation.

[Attachment: pve_screenshot_1.png]

Here is the lvdisplay output for my logical volumes. As you can see, the VMs with IDs 101, 102, 103, and 105 were located on the second node.

Code:
 --- Logical volume ---
  LV Path                /dev/vg0/vm-100-disk-1
  LV Name                vm-100-disk-1
  VG Name                vg0
  LV UUID                6sJdZu-qwqj-dmMd-Imak-NGk8-V0z8-DR2u9J
  LV Write Access        read/write
  LV Creation host, time pve3, 2014-09-04 18:49:22 +0200
  LV Status              available
  # open                 1
  LV Size                40.00 GiB
  Current LE             10240
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3


  --- Logical volume ---
  LV Path                /dev/vg0/vm-101-disk-1
  LV Name                vm-101-disk-1
  VG Name                vg0
  LV UUID                BZFIbz-rgyL-oc6B-h3DW-VmtJ-My5e-ow33WH
  LV Write Access        read/write
  LV Creation host, time pve4, 2014-09-05 17:29:25 +0200
  LV Status              NOT available
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto


  --- Logical volume ---
  LV Path                /dev/vg0/vm-102-disk-1
  LV Name                vm-102-disk-1
  VG Name                vg0
  LV UUID                seM9aZ-k70s-Ge8u-v8ux-NPJ4-bYDe-0nwrUk
  LV Write Access        read/write
  LV Creation host, time pve4, 2014-10-17 13:06:39 +0200
  LV Status              NOT available
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto


  --- Logical volume ---
  LV Path                /dev/vg0/vm-103-disk-1
  LV Name                vm-103-disk-1
  VG Name                vg0
  LV UUID                288VtP-yXug-NQ7M-xdLQ-qT3z-5CwL-e1kzI2
  LV Write Access        read/write
  LV Creation host, time pve3, 2015-01-03 15:54:54 +0100
  LV Status              NOT available
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto


  --- Logical volume ---
  LV Path                /dev/vg0/vm-104-disk-1
  LV Name                vm-104-disk-1
  VG Name                vg0
  LV UUID                qraFwh-e8vu-dUVp-Cphc-TP0l-ALZd-bkU5XK
  LV Write Access        read/write
  LV Creation host, time pve3, 2015-03-15 14:01:06 +0100
  LV Status              available
  # open                 1
  LV Size                100.00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4


  --- Logical volume ---
  LV Path                /dev/vg0/vm-105-disk-1
  LV Name                vm-105-disk-1
  VG Name                vg0
  LV UUID                ikPpEU-n3il-aX7z-LCLV-ko7g-x8E8-lX1FkK
  LV Write Access        read/write
  LV Creation host, time pve4, 2015-03-16 15:49:38 +0100
  LV Status              NOT available
  LV Size                100.00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

So the data is there on the first node but the LVs are listed as "NOT available".

Does anybody know how I can start these VMs on the first node?

I assume it involves marking the LVs as available and then starting or migrating the VMs manually, but I can't find this in the documentation. I've looked here:

https://pve.proxmox.com/wiki/DRBD
https://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster

I don't want to break anything. Is it as simple as doing the following steps?


  1. Moving the configuration files from /etc/pve/nodes/pve4/qemu-server/ to /etc/pve/nodes/pve3/qemu-server/
  2. Using lvchange -aey on the logical volumes I want to make available
  3. Restarting Proxmox
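
Roughly, I picture something like this (just a sketch with VM 101 as an example; pve3 is my surviving node, pve4 the failed one):

Code:
# 1. move the config of e.g. VM 101 from the failed node (pve4) to the surviving node (pve3)
mv /etc/pve/nodes/pve4/qemu-server/101.conf /etc/pve/nodes/pve3/qemu-server/
# 2. activate its logical volume on pve3
lvchange -aey /dev/vg0/vm-101-disk-1
# 3. restart the Proxmox services (or reboot pve3?)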

Thanks for your reply.
 
Re: Starting VMs on first node after second node failure on 2-node Proxmox cluster (DRBD)

Hi Richard,

Yes, activating the logical volume is the easy part; I already did that.

After that I cannot start the VMs that were on the second node. I cannot simply copy the configuration files as I mentioned above because of permission restrictions, and I feel like those were put in place to prevent doing it that way. There must be some command for this that I haven't found yet.

I have managed to activate the logical volume of VM 101 by typing

Code:
lvchange -aey /dev/vg0/vm-101-disk-1

But where to go from there I have no idea.
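
At least the activation itself looks right; if I read the lvs man page correctly, the fifth character of the attribute field should now be an "a" for that volume:

Code:
lvs -o lv_name,lv_attr vg0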
 
Re: Starting VMs on first node after second node failure on 2-node Proxmox cluster (DRBD)

Yes, the permission restrictions are in place to avoid exactly that.

In a two-node cluster you get problems because the vote majority is 2 votes, so if one node fails all cluster operations must be blocked, because a single node cannot possibly know whether it is the working one (this is called split brain, if you're interested). With a three-node cluster (or any cluster with an odd number of nodes) you don't have this split-brain problem; in general, i nodes can fail in a cluster with n = 2i + 1 members.

If you know what you are doing, use the
Code:
pvecm expected 1
command. This temporarily (until a reboot or the next pvecm expected command) lets you move the config files manually.
The pmxcfs (proxmox cluster file system) merges the changes after the other node comes up again.
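
In other words, roughly like this on the surviving node (the node names and VMID are only placeholders, adapt them to your setup):

Code:
pvecm status      # check the current quorum / expected votes
pvecm expected 1  # temporarily lower the expected votes
mv /etc/pve/nodes/pve4/qemu-server/101.conf /etc/pve/nodes/pve3/qemu-server/
qm start 101      # or start the VM from the web interface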
 
Re: Starting VMs on first node after second node failure on 2-node Proxmox cluster (DRBD)

Hi Thomas,

Thanks for your reply, it seems to have worked for the first VM I tested.

So what I did was activate the LV for VM 101 with the command

Code:
lvchange -aey /dev/vg0/vm-101-disk-1

Then I used

Code:
pvecm expected 1

This allowed me to move the configuration file for VM 101

Code:
mv /etc/pve/nodes/pve4/qemu-server/101.conf /etc/pve/nodes/pve3/qemu-server/101.conf

Then I opened up the Proxmox web interface and the VM showed up under my first node (PVE3) and I started the VM.

Everything seems to work fine, the VM is online now and I can access it again.
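
(I assume I could also have started and checked it from the console instead of the GUI, something like:)

Code:
qm start 101
qm status 101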

I will also move the remaining 3 VMs to my first node. Before I rebooted the second node I did shut down all the VMs on that node. When I fix the second node and bring it online again, I assume I have to change the expected votes back to 2, or will that happen automatically?

After that I will have to re-sync the DRBD nodes using the first node as the source. Is there anything else I need to do or will pmxcfs handle everything else as you mentioned?

Will there be any issues with LVM because I manually activated the LV on the first node?

I have recovered from a DRBD split-brain situation before, but that was a bit different from the problem I'm facing now.
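
For reference, I expect the resync itself to be the usual manual DRBD split-brain recovery, discarding the data on pve4 (assuming the resource is named r0 here; mine may differ):

Code:
# on pve4, the node whose changes will be thrown away
drbdadm secondary r0
drbdadm disconnect r0
drbdadm connect --discard-my-data r0

# on pve3, the sync source (only needed if it is in StandAlone state)
drbdadm connect r0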

Thank you for your time.
 
Re: Starting VMs on first node after second node failure on 2-node Proxmox cluster (DRBD)

Seems good to me.

I will also move the remaining 3 VMs to my first node. Before I rebooted the second node I did shut down all the VMs on that node. When I fix the second node and bring it online again, I assume I have to change the expected votes back to 2, or will that happen automatically?

Yes, simply set it back to the original value with
Code:
pvecm expected 2
Nothing bad happens if the cluster has more votes than it expects, but it's safer in case of another failure (again, split brain).

To see the current cluster state, use:
Code:
pvecm status

After that I will have to re-sync the DRBD nodes using the first node as the source. Is there anything else I need to do or will pmxcfs handle everything else as you mentioned?

No, normally not. The pmxcfs handles such situations quite well without the need for manual intervention.

Will there be any issues with LVM because I manually activated the LV on the first node?
I hope not, but that's not my area, so maybe someone else can help you with that. :)
 
