Hi,
I have been using Proxmox on a standalone server for a couple of years without any trouble. I love Proxmox and would like to start recommending it to my customers. Lately I have been trying to implement a cluster with DRBD, but I have run into a number of problems, explained below.
I have 2 servers, each with 1 logical volume subdivided into 2 partitions (by the Proxmox installer). I am running 1.8 with kernel 2.6.32; more specifically:
pve-manager: 1.8-18 (pve-manager/1.8/6070)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.8-33
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.24-8-pve: 2.6.24-16
qemu-server: 1.1-30
pve-firmware: 1.0-11
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.28-1pve1
vzdump: 1.2-14
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.14.1-1
ksm-control-daemon: 1.0-6
I have a number of VMs running, and I managed to set up these two servers as a cluster using pveca. I would now like to enable DRBD on the second partition, /dev/sda2. To do so, I followed the instructions at http://pve.proxmox.com/wiki/DRBD
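For reference, my r0 resource in /etc/drbd.conf follows the wiki's example along these lines (the second hostname, the shared secret, and both IP addresses below are placeholders, not my real values):

```
resource r0 {
    protocol C;
    startup {
        wfc-timeout 15;         # matches the timeouts in the startup output below
        degr-wfc-timeout 60;
        become-primary-on both;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret "my-secret";      # placeholder
        allow-two-primaries;
    }
    on proxmox-01 {
        device /dev/drbd0;
        disk /dev/sda2;
        address 192.168.0.1:7788;       # placeholder IP/port
        meta-disk internal;
    }
    on proxmox-02 {                     # placeholder hostname
        device /dev/drbd0;
        disk /dev/sda2;
        address 192.168.0.2:7788;       # placeholder IP/port
        meta-disk internal;
    }
}
```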
However, when I try to start the drbd service I get the following:
proxmox-01:~# /etc/init.d/drbd start
Starting DRBD resources:[ d(r0) 0: Failure: (114) Lower device is already claimed. This usually means it is mounted.
[r0] cmd /sbin/drbdsetup 0 disk /dev/sda2 /dev/sda2 internal --set-defaults --create-device failed - continuing!
n(r0) ]..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
- In case this node was already a degraded cluster before the
reboot the timeout is 60 seconds. [degr-wfc-timeout]
- If the peer was available before the reboot the timeout will
expire after 15 seconds. [wfc-timeout]
(These values are for resource 'r0'; 0 sec -> wait forever)
To abort waiting enter 'yes' [ 14]:
0: State change failed: (-2) Refusing to be Primary without at least one UpToDate disk
Command '/sbin/drbdsetup 0 primary' terminated with exit code 17
0: State change failed: (-2) Refusing to be Primary without at least one UpToDate disk
Command '/sbin/drbdsetup 0 primary' terminated with exit code 17
0: State change failed: (-2) Refusing to be Primary without at least one UpToDate disk
Command '/sbin/drbdsetup 0 primary' terminated with exit code 17
0: State change failed: (-2) Refusing to be Primary without at least one UpToDate disk
Command '/sbin/drbdsetup 0 primary' terminated with exit code 17
0: State change failed: (-2) Refusing to be Primary without at least one UpToDate disk
Command '/sbin/drbdsetup 0 primary' terminated with exit code 17
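From what I understand, "Lower device is already claimed" means some other subsystem already holds /dev/sda2. This is a quick sketch of how one might check what claims it (assuming the device is /dev/sda2, as in my setup):

```shell
#!/bin/sh
# Sketch: find out what already claims a partition (mine is /dev/sda2).
DEV=/dev/sda2

# A mounted filesystem shows up in /proc/mounts.
if grep -q "^$DEV " /proc/mounts; then
    echo "$DEV is mounted"
fi

# Device-mapper/LVM holders of the partition appear under sysfs.
HOLDERS="/sys/class/block/$(basename "$DEV")/holders"
if [ -d "$HOLDERS" ]; then
    ls "$HOLDERS"
fi

# List LVM physical volumes, if the LVM tools are installed; a PV on
# $DEV would mean LVM has claimed the partition.
if command -v pvs >/dev/null 2>&1; then
    pvs 2>/dev/null
fi
```

Given that create-md (below) reports an LVM2 physical volume signature on /dev/sda2, I suspect LVM is what has claimed it.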
Someone told me it may be because the service is already running, so I moved on to the next step, which was not successful either:
proxmox-01:~# drbdadm create-md r0
md_offset 999114268672
al_offset 999114235904
bm_offset 999083745280
Found LVM2 physical volume signature
975695872 kB data area apparently used
975667720 kB left usable by current configuration
Device size would be truncated, which
would corrupt data and result in
'access beyond end of device' errors.
You need to either
* use external meta data (recommended)
* shrink that filesystem first
* zero out the device (destroy the filesystem)
Operation refused.
Command 'drbdmeta 0 v08 /dev/sda2 internal create-md' terminated with exit code 40
drbdadm create-md r0: exited with code 40
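The numbers in that output do add up: internal metadata lives at the end of the device, and the existing LVM physical volume already occupies that space, so create-md refuses rather than truncate it. Checking the arithmetic with the values copied from the output above:

```shell
#!/bin/sh
# Values copied from the drbdadm create-md output above.
md_offset=999114268672      # bytes: where internal metadata would start
bm_offset=999083745280      # bytes: where the bitmap would start
used_kb=975695872           # kB: data area apparently used (the LVM PV)
usable_kb=975667720         # kB: what would remain usable with internal metadata

# Size of the region internal metadata would occupy at the device's tail:
echo "metadata region: $(( (md_offset - bm_offset) / 1024 )) kB"
# How much the existing PV would lose, hence the refusal:
echo "shortfall: $(( used_kb - usable_kb )) kB"
```

This prints a metadata region of 29808 kB and a shortfall of 28152 kB, i.e. roughly 30 MB at the end of /dev/sda2 that the PV is already using.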
I am not sure what I am doing wrong... could someone please help me?
Thanks