Master crash, VMs on running slave - Need to recover

z33k3r

Mar 24, 2011
Hello,

We have a two node master/slave Proxmox cluster setup utilizing DRBD for VM storage spanning the two nodes.

We've recently had a server crash (we think it was the RAID controller). I'll probably get that replaced, but here's the issue: we had to reinstall Proxmox on the OS partition due to corruption. The DRBD should still be intact. Is it possible to re-configure Proxmox to see the DRBD and re-configure the node as the master again? Can I at least get to the point of pulling off any VMs on the DRBD that weren't transferred before the crash?

And can I do this with 1.8, or does it have to match the 1.7 version on the slave?
 
Yes, a re-installation should be possible in this case.
 
Use either 1.7 or 1.8 on both nodes, don't mix versions. Mixing could work, but it is not tested.
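
(A quick sanity check before re-joining the nodes: run pveversion on both and make sure the reported release matches.)

pveversion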
 
Ok, so I just follow standard setup procedures including the wiki entries for cluster and DRBD?
 
Hi,
Make the running node the new master.
Have you tried booting a live CD on the damaged node to get at the configs (/etc/qemu-server/*.conf)? If you copy them to the running node, you can start the VMs there (as long as the disks are on the DRBD).

AFAIK, after a new installation you don't need to create the DRBD devices again - you only need to copy the configs from the running node (and, of course, install the DRBD packages).

Udo
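
A rough sketch of that copy, run from the live CD on the damaged node (the device /dev/sda1 and the running node's address 192.168.1.11 are placeholders - adjust both, and if the live CD has no scp, copy the files via a USB stick instead):

mkdir /mnt/oldroot
mount /dev/sda1 /mnt/oldroot    # old Proxmox root partition (assumed device)
scp /mnt/oldroot/etc/qemu-server/*.conf root@192.168.1.11:/etc/qemu-server/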
 
Can somebody point me to the wiki or manual that shows how to make the slave node the master node?
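
(For reference: on Proxmox VE 1.x the cluster CLI is pveca, and forcing the surviving node to become the master should be roughly the following - double-check against the wiki for your exact version:)

pveca -m    # force the local node to become the cluster master
pveca -l    # list the cluster nodes and verify the new role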
 
DRBD VM Extraction

Great, thanks for the CLI to change to master! Good stuff. Now I have one VM remaining on the DRBD that isn't visible to the current node (it wasn't migrated before the crash). It was a low-priority VM, but nonetheless I would like to have it back while we wait for hardware for the previous master node.

Is there a way to re-enable that VM on our slave-now-master node? Can I just add a configuration file on the node that points to the DRBD volume like the rest of the VMs? ...or do I have to finish repairing the first node?
 
Re: DRBD VM Extraction

Hi,
Yes, you can create a new VM and put the existing disk file in its config (edit /etc/qemu-server/VMID.conf).
You may also have to change something for the network card inside the guest (because of the new MAC address), e.g. on Linux remove the entry in /etc/udev/rules.d/70-persistent-net.rules - or put the old MAC address into the VMID.conf file.

Udo
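
As a rough illustration only (VMID 105, the storage name drbdstore and the volume name vm-105-disk-1 are invented - use the names the old VM actually had on your DRBD-backed storage), /etc/qemu-server/105.conf could look something like:

name: recovered-vm
memory: 1024
ostype: l26
bootdisk: virtio0
virtio0: drbdstore:vm-105-disk-1
vlan0: virtio=DE:AD:BE:EF:10:05

Reusing the old MAC address in the vlan0 line avoids the udev rename; otherwise the fix inside a Linux guest is the one Udo describes:

rm /etc/udev/rules.d/70-persistent-net.rules    # then reboot the guest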
 
