Hello,
I have two servers configured in a Proxmox 2-node cluster with LVM on top of DRBD as storage for my VMs. See the diagram below (crude paint drawing).

My second node has failed after an upgrade to Proxmox 3.4 and I cannot resolve this issue immediately.
I would like to start the VMs that were located on the second node on my first node. When the second node is back online I can re-sync the data using the first node as the source.
I cannot migrate them in the Proxmox web interface because the second node is currently offline (see screenshot below), so I need to find a way to do this manually, but I can't find anything about it in the documentation.
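
For reference, before touching anything I can at least check the DRBD state on the first node with something like this (r0 is just the resource name from the DRBD wiki example; mine may be named differently):
Code:
cat /proc/drbd      # overall DRBD status on the surviving node
drbdadm role r0     # local/peer role, e.g. Primary/Unknown while the peer is down
drbdadm cstate r0   # connection state, e.g. WFConnection or StandAlone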

Here is the output of lvdisplay on the first node. As you can see, the VMs with IDs 101, 102, 103, and 105 were located on the second node.
Code:
  --- Logical volume ---
  LV Path                /dev/vg0/vm-100-disk-1
  LV Name                vm-100-disk-1
  VG Name                vg0
  LV UUID                6sJdZu-qwqj-dmMd-Imak-NGk8-V0z8-DR2u9J
  LV Write Access        read/write
  LV Creation host, time pve3, 2014-09-04 18:49:22 +0200
  LV Status              available
  # open                 1
  LV Size                40.00 GiB
  Current LE             10240
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
  --- Logical volume ---
  LV Path                /dev/vg0/vm-101-disk-1
  LV Name                vm-101-disk-1
  VG Name                vg0
  LV UUID                BZFIbz-rgyL-oc6B-h3DW-VmtJ-My5e-ow33WH
  LV Write Access        read/write
  LV Creation host, time pve4, 2014-09-05 17:29:25 +0200
  LV Status              NOT available
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  --- Logical volume ---
  LV Path                /dev/vg0/vm-102-disk-1
  LV Name                vm-102-disk-1
  VG Name                vg0
  LV UUID                seM9aZ-k70s-Ge8u-v8ux-NPJ4-bYDe-0nwrUk
  LV Write Access        read/write
  LV Creation host, time pve4, 2014-10-17 13:06:39 +0200
  LV Status              NOT available
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  --- Logical volume ---
  LV Path                /dev/vg0/vm-103-disk-1
  LV Name                vm-103-disk-1
  VG Name                vg0
  LV UUID                288VtP-yXug-NQ7M-xdLQ-qT3z-5CwL-e1kzI2
  LV Write Access        read/write
  LV Creation host, time pve3, 2015-01-03 15:54:54 +0100
  LV Status              NOT available
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  --- Logical volume ---
  LV Path                /dev/vg0/vm-104-disk-1
  LV Name                vm-104-disk-1
  VG Name                vg0
  LV UUID                qraFwh-e8vu-dUVp-Cphc-TP0l-ALZd-bkU5XK
  LV Write Access        read/write
  LV Creation host, time pve3, 2015-03-15 14:01:06 +0100
  LV Status              available
  # open                 1
  LV Size                100.00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
  --- Logical volume ---
  LV Path                /dev/vg0/vm-105-disk-1
  LV Name                vm-105-disk-1
  VG Name                vg0
  LV UUID                ikPpEU-n3il-aX7z-LCLV-ko7g-x8E8-lX1FkK
  LV Write Access        read/write
  LV Creation host, time pve4, 2015-03-16 15:49:38 +0100
  LV Status              NOT available
  LV Size                100.00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
So the data is there on the first node, but the LVs are listed as "NOT available".
Does anybody know how I can start these VMs on the first node?
I assume it involves marking the LVs as available and then starting the VMs manually, or migrating them manually, but I can't find this in the documentation. I've looked here:
https://pve.proxmox.com/wiki/DRBD
https://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster
I don't want to break anything. Is it as simple as doing the following steps (sketched out as commands below)?
- Moving the configuration files from /etc/pve/nodes/pve4/qemu-server/ to /etc/pve/nodes/pve3/qemu-server/
- Using lvchange -aey on the logical volumes I want to make available
- Restarting Proxmox
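
This is roughly what I have in mind, using VM 101 as an example (I haven't run any of it yet, so please correct me if it's wrong):
Code:
# move the VM config from the dead node's directory to the first node
mv /etc/pve/nodes/pve4/qemu-server/101.conf /etc/pve/nodes/pve3/qemu-server/101.conf

# activate the logical volume on the first node
lvchange -aey /dev/vg0/vm-101-disk-1

# start the VM
qm start 101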
 
Thanks for your reply.