Proxmox VE 4 and DRBD 9

SEGELBERT

Member
Sep 21, 2015
Hello,

can it be controlled on which nodes DRBD replicates if there are four nodes
and the replica count is two?
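
If I read the drbdmanage documentation correctly, you can skip the automatic node selection and assign a resource to the nodes yourself; a sketch, where the resource and node names are just examples:

Code:
# create the resource definition without letting drbdmanage deploy it
drbdmanage add-volume r0 100GB
# then assign it explicitly to the two nodes that should hold the replicas
drbdmanage assign-resource r0 nodeA
drbdmanage assign-resource r0 nodeB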

best regards,

SEGELBERT
 
push.

And another question: how do I set up a DRBD client?

drbdmanage add-node -s -q IP-ADDR ?
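
From what I understand, a DRBD9 client is a diskless node, so my guess is an assignment with the client flag rather than add-node; a sketch with placeholder names:

Code:
# assign resource r0 to node nodeC without local storage (diskless client):
drbdmanage assign-resource r0 nodeC --client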

best regards
 
Hello, I hope this is the right place; this is my first post on this forum.

I've built a two-node cluster with DRBD.

pve-manager/4.1-1/2f9650d4 (running kernel: 4.2.3-2-pve)
DRBD version: 9.0.0 (api:2/proto:86-110)

I joined the two nodes with a dedicated 2x1Gb bonded link (Intel NICs) for the synchronization.

Everything is running, but I can't get more than 500Mb/s of sync traffic.
I tested the bond with an scp between the two nodes and can reach a bandwidth of up to 1.5Gb/s.
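
Since scp adds encryption overhead, it might be worth measuring what a single raw TCP stream gets over the bond; if I understand iperf correctly, something like this would show it (using the replication addresses from my config below):

Code:
# on digi02040:
iperf3 -s
# on digi02001, one TCP stream for 30 seconds over the replication link:
iperf3 -c 192.168.1.2 -t 30

As far as I know, most bonding modes put a single TCP connection onto only one slave link, so this single-stream figure is the relevant one for a DRBD connection.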

I found this doc https://drbd.linbit.com/users-guide/ch-throughput.html, but nothing changed.

My goal is to use the maximum bandwidth for the replication.
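
From that chapter, if I understand it correctly, the network buffer options are the first candidates to try in the common net section; a sketch with the example values from the guide:

Code:
net {
    max-buffers     8000;   # example value from the tuning guide
    max-epoch-size  8000;   # the guide suggests keeping this equal to max-buffers
    sndbuf-size     0;      # 0 lets the send buffer auto-tune
}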

This is my DRBD global_common.conf:

Code:
global {
    usage-count yes;
    # minor-count dialog-refresh disable-ip-verification
    # cmd-timeout-short 5; cmd-timeout-medium 121; cmd-timeout-long 600;
}

common {
    handlers {
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when choosing your poison.

        # pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        # pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        # local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }

    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }

    options {
        # cpu-mask on-no-data-accessible
    }

    disk {
        # size on-io-error fencing disk-barrier disk-flushes
        # disk-drain md-flushes resync-rate resync-after al-extents
        # c-plan-ahead c-delay-target c-fill-target c-max-rate
        # c-min-rate disk-timeout
        c-plan-ahead  0;
        c-fill-target 220M;
        resync-rate   220M;
    }

    net {
        # protocol timeout max-epoch-size max-buffers unplug-watermark
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
    }
}
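
One thing I am not sure about in my own disk section: if I read the manual correctly, c-plan-ahead 0 disables the dynamic resync controller, so only the fixed resync-rate applies and c-fill-target has no effect. The alternative would be to enable the dynamic controller instead; a sketch with example values only:

Code:
disk {
    c-plan-ahead  20;     # enable the dynamic resync controller (tenths of a second)
    c-fill-target 1M;     # amount of in-flight resync data to aim for
    c-max-rate    220M;   # upper bound for the resync rate
    c-min-rate    10M;    # rate kept for resync while application I/O competes
}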

And my drbdctrl.res:

Code:
resource .drbdctrl {
    net {
        cram-hmac-alg sha256;
        shared-secret "N8XM3xyav5fFoK344hu/";
        allow-two-primaries no;
    }
    volume 0 {
        device minor 0;
        disk /dev/drbdpool/.drbdctrl_0;
        meta-disk internal;
    }
    volume 1 {
        device minor 1;
        disk /dev/drbdpool/.drbdctrl_1;
        meta-disk internal;
    }
    on digi02001 {
        node-id 0;
        address 192.168.1.1:6999;
    }
    on digi02040 {
        node-id 1;
        address 192.168.1.2:6999;
    }
    connection-mesh {
        hosts digi02001 digi02040;
        net {
            protocol C;
        }
    }
}
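
To watch the sync rate actually achieved, I use these commands (as far as I know this is the DRBD9 way to do it):

Code:
# short overview of all resources and their peers:
drbdadm status
# detailed statistics, including sync progress and speed:
drbdsetup status --verbose --statistics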

I hope my English is sufficiently clear, and I hope you can help me.

Best regards.
 
Thank you for your response, Udo.

I put your configuration in my global_common.conf:

Code:
disk {
        c-plan-ahead 0;
        c-max-rate 220M;
        c-fill-target 120M;
    }

But it does not change my replication speed...
Is there a maximum replication rate in DRBD?
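
One thing I still want to rule out is that the changed options never reached the running resources; if I understand drbdadm correctly, this should re-apply the configuration (r0 is a placeholder resource name):

Code:
# re-read the configuration files and apply changed options to running resources
drbdadm adjust all
# show the options currently in effect for one resource
drbdsetup show r0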
 