Do you have to specify just the "auto-promote" option on the resource, or is it also necessary to specify "allow-two-primaries" to use the auto-promote feature?
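For what it's worth, my reading of the DRBD 9 docs is that the two settings live in different sections: "auto-promote" is a resource-level option, while "allow-two-primaries" is a net option and is only relevant for dual-primary setups. A minimal resource sketch (the resource name and layout are placeholders, not a tested config):

```
resource r0 {
    options {
        auto-promote yes;
    }
    # allow-two-primaries would go in a net {} section, but as far as
    # I can tell it is only needed if you actually want dual-primary:
    # net {
    #     allow-two-primaries yes;
    # }
}
```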
Here is my hook script code to mount/dismount the LUKS encrypted disk:
#!/bin/bash
if [ "$1" == "job-start" ]
then
    #echo "INFO: Calling cryptsetup"
    cryptsetup luksOpen /dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0-part1 backup --key-file=/path/to/backup.key
    mount...
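The quoted script is truncated; a hedged sketch of what the complete hook might look like follows. The mount point is a placeholder, the key path is the same placeholder as above, and handling "job-abort" the same way as "job-end" is my assumption, not something from the original post:

```shell
#!/bin/bash
# Hypothetical complete vzdump hook script. Device, key file and
# mount point below are placeholders -- adjust for your setup.
DEV=/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0-part1
KEY=/path/to/backup.key
MNT=/mnt/backup

case "$1" in
    job-start)
        # Open the LUKS container, then mount it for the backup job
        cryptsetup luksOpen "$DEV" backup --key-file="$KEY"
        mount /dev/mapper/backup "$MNT"
        ;;
    job-end|job-abort)
        # Unmount and close the container when the job finishes or aborts
        umount "$MNT"
        cryptsetup luksClose backup
        ;;
esac
exit 0
```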
I am in the same boat as you; it looks like this is not supported directly in Proxmox 4: http://forum.proxmox.com/threads/23968-DRBD9-with-Proxmox-4-and-diverse-storage
WARNING: I've not actually used DRBD9 yet, only read some of the documentation, so the following could be completely useless...
Most of my servers were built in identical pairs over time because we used DRBD.
But no two pairs of servers are identical; they were all built to serve specific needs.
For example, Servers A and B are identical and have small SSD disks, whereas Servers C and D are identical and have giant slow...
I have a few Ubuntu 14.04 VMs running 3.16 and they have never had this problem.
That seems to support the idea that the problem might be in the guest kernel.
It is used less than other filesystems, but it is very stable and maintained; I saw JFS-related commits in mainline just last week.
I have several VMs using JFS, mostly because it can be formatted case-insensitive, which is really handy when moving files from Windows to Linux.
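If memory serves, the case-insensitive behavior is chosen at format time via mkfs.jfs's -O flag; the device name below is a placeholder:

```shell
# Format a partition as case-insensitive JFS.
# -O enables OS/2-compatible case-insensitive lookups.
# /dev/sdb1 is a placeholder -- pick your real device carefully.
mkfs.jfs -O /dev/sdb1
```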
The roadmap contradicts Wolfgang's statement:
http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_3.4
I think the most important thing to point out to future readers of this thread is that this ability was only added in 3.4.
If you don't have >= 3.4, it's not going to work.
I think you did not understand my suggestion.
In your guest, run qga (the QEMU guest agent).
In your guest, add a qga hook script so that when fsfreeze-freeze is requested, the hook performs whatever db actions you need to freeze the db.
In your guest, add a qga hook script so that when fsfreeze-thaw is requested, the...
You can create hook scripts for qga; that is where you would add the code that issues your db commands.
You will need to read the qga documentation to figure out how to do this; I only know it's possible and have never done it myself.
I believe qga writes logs in the guest, did you look there...
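To sketch the idea: on many distros, qemu-ga's fsfreeze hook runs scripts (e.g. from /etc/qemu/fsfreeze-hook.d/) with the argument "freeze" before the filesystem is frozen and "thaw" after it is thawed. The path and the echo placeholders below are assumptions; you would replace the echoes with your real db quiesce/resume commands:

```shell
#!/bin/sh
# Hypothetical qga fsfreeze hook, e.g. /etc/qemu/fsfreeze-hook.d/10-db.
# qemu-ga invokes it with "freeze" before fsfreeze and "thaw" afterwards.
db_hook() {
    case "$1" in
        freeze)
            # Placeholder: run your db's flush/lock command here
            echo "db: quiesced"
            ;;
        thaw)
            # Placeholder: release the lock / resume writes here
            echo "db: resumed"
            ;;
        *)
            echo "usage: freeze|thaw" >&2
            return 1
            ;;
    esac
}

# In the real hook file this would be: db_hook "$1"
# The default argument is only so the sketch runs standalone.
db_hook "${1:-freeze}"
```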
Is this problem limited to Debian 7 guests, or has it been observed with other guest OSes?
I've only seen it with Debian guests, and the only fix I found is to use IDE instead of virtio.
I've not tried SATA....
I find it most strange that I only see this on VMs with very little disk IO. VMs running nothing but memcached have the problem, whereas a busy web server constantly writing logs never had it.
I wish I had a way to trigger the problem on demand; that would help...
Writethrough seems to be the best option when using DRBD.
All other cache modes are known to cause out-of-sync (OOS) blocks in DRBD.
Oddly, on some nodes DRBD split-brains under heavy IO load (e.g. during backups) even when the guests use writethrough.
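For anyone wondering how to set that per disk: the cache mode is an option on the disk definition, settable with qm. The VMID, storage name and volume below are placeholders:

```shell
# Set writethrough caching on an existing virtio disk.
# 100, drbdstore and vm-100-disk-1 are placeholders for your setup.
qm set 100 --virtio0 drbdstore:vm-100-disk-1,cache=writethrough
```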
I looked into this a couple of years ago, and my conclusion was that it's not possible.
The only way to get IB into a guest that I can think of is to pass the IB card directly through to the guest, but that's not much of a solution.
Maybe something like this could solve the problem: http://vde.sourceforge.net/...
They are on top of this:
http://pve.proxmox.com/pipermail/pve-devel/2015-May/015123.html
I suspect that once an update is out you will be able to live migrate into the updated QEMU/KVM on another server or you will need to stop/start your VMs.
I have posted about this problem long ago and never found a solution.
Very annoying as it causes the reboot to take many minutes longer than necessary.