Hello,
I have two Proxmox nodes, version 3.2.
I installed Proxmox on top of a fresh Debian install in order to have software RAID.
On each node, I would like to use:
- sda & sdb for the OS RAID volume, in software RAID (md0 & md1).
- sdc & sdd for the VM RAID volume, in software RAID (md2).
- DRBD to sync the VM RAID volume between the two nodes, over a dedicated link.
At the moment, 2 VMs run fine on node 1.
My problem is that I can't create the DRBD LVM volume:
Code:
root@proxmox02:~# pvcreate /dev/drbdr0
Device /dev/drbdr0 not found (or ignored by filtering).
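One thing worth ruling out first (my assumption, based on the resource file below defining `device /dev/drbd0`): the failing command targets `/dev/drbdr0`, a node DRBD never creates, so the error may be a simple typo rather than a filter problem. A minimal sketch of the check:

```shell
# Sketch only -- names taken from this post's r0 resource file.
# The resource defines "device /dev/drbd0", but the failing pvcreate
# targeted /dev/drbdr0, a device node DRBD does not create.
resource_device="/dev/drbd0"   # from "device /dev/drbd0;" in r0
failing_target="/dev/drbdr0"   # from the pvcreate command above
if [ "$resource_device" != "$failing_target" ]; then
  echo "device name mismatch: try pvcreate $resource_device"
fi
# On the node itself, one would then try:
#   ls -l /dev/drbd0 && pvcreate /dev/drbd0
```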
DRBD itself is OK:
Code:
root@proxmox02:~# cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by root@sighted, 2012-10-09 12:47:51
0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
ns:0 nr:0 dw:0 dr:664 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
DRBD resource file:
Code:
resource r0 {
  protocol C;
  startup {
    wfc-timeout 15; # non-zero wfc-timeout can be dangerous (http://forum.proxmox.com/threads/3465-Is-it-safe-to-use-wfc-timeout-in-DRBD-configuration)
    degr-wfc-timeout 60;
    become-primary-on both;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret "my-secret";
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  on proxmox02 {
    device /dev/drbd0;
    disk /dev/md2;
    address 192.168.192.1:7788;
    meta-disk internal;
  }
  on proxmox03 {
    device /dev/drbd0;
    disk /dev/md2;
    address 192.168.192.2:7788;
    meta-disk internal;
  }
}
lvm.conf filter:
I have tested with:
Code:
filter = [ "a/.*/" ]
or
Code:
filter = [ "a|drbd.*|", "r|.*|" ]
No success with either.
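For what it's worth, LVM filter patterns are plain unanchored regexes tested against the full device path, and the first matching accept/reject rule wins. A quick way to sanity-check which paths a pattern would accept, emulated with grep (this only mimics the matching, it is not LVM itself):

```shell
# Emulate LVM's regex filter: "a|drbd.*|" accepts any path containing "drbd";
# "r|.*|" then rejects everything else (including the /dev/md* devices).
accepted=""; rejected=""
for dev in /dev/drbd0 /dev/md1 /dev/md2; do
  if printf '%s\n' "$dev" | grep -qE 'drbd.*'; then
    accepted="$accepted $dev"
  else
    rejected="$rejected $dev"
  fi
done
echo "accepted:$accepted"   # /dev/drbd0
echo "rejected:$rejected"   # /dev/md1 /dev/md2
```

Note that with `[ "a|drbd.*|", "r|.*|" ]` the md devices fall through to the reject rule, which would also hide the PV backing the existing pve VG (presumably /dev/md1 here, judging from the mdstat and df output below).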
Here is some more information:
Code:
root@proxmox02:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdc1[0] sdd1[1]
1943227200 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
1855337280 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
487104 blocks super 1.2 [2/2] [UU]
unused devices: <none>
Code:
root@proxmox02:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 1,6G 412K 1,6G 1% /run
/dev/mapper/pve-root 74G 1,4G 69G 2% /
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 3,2G 47M 3,1G 2% /run/shm
/dev/md0 461M 58M 380M 14% /boot
/dev/mapper/pve-data 1,6T 66G 1,5T 5% /var/lib/vz
/dev/fuse 30M 20K 30M 1% /etc/pve
192.168.20.10:/volume3/vmproxmox 913G 310G 603G 34% /mnt/pve/synology
192.168.20.10:/volume1/proxmox 2,7T 1,5T 1,3T 54% /mnt/pve/synology-backup
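In case it helps anyone comparing notes later: if the filter does turn out to be the culprit, a variant that anchors the full /dev path and keeps the md device backing the pve VG visible might look like this (my guess only, assuming pve sits on /dev/md1; adjust to the actual layout, and re-run pvscan after editing):

```
# /etc/lvm/lvm.conf -- hypothetical filter, not from the original post
filter = [ "a|^/dev/drbd.*|", "a|^/dev/md1$|", "r|.*|" ]
```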
Do you have any ideas? Thank you!