Migration faster?

cyberbootje
Member
Nov 29, 2009
Hi,

I was wondering if it is possible to tell Proxmox to make the migration between servers go faster than 11 MB/s?

I have a second gigabit port on both servers and a dedicated gigabit switch, but I assume I now have to alter some config files? Where?

thx
 
What exactly are you talking about? (I am not aware of any speed limit.)

Well, if there isn't any speed limit, is it possible to assign the migration traffic to another network?

Or do I have to delete the cluster and recreate it with the other network's IP?
 
Yes, if the other network is faster.
And how do I do this? Let's say I reach both nodes via eth0 (10.0.0.x/24 net) and use eth1 (11.0.0.x/24 net) for DRBD...

eth0 is the interface configured during installation, so I use its IP to access the web interface. When I make one node the master with pveca -c, I usually join the other node with pveca -a -h ip_address_of_master.

So, could I use the eth1 IP to join the cluster? I see that [-h ip] is written between [ and ], so it is apparently optional...

I think clustering via the same DRBD interface is better, since I usually use a dedicated switch or a direct gigabit cable to link the 2 nodes, so no external component can fail...

Hope this is clear, sorry if not...
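To be more concrete, this is roughly what I run today (a sketch, the master address shown is just an example):

Code:
# on the first node: create the cluster with this node as master
pveca -c
# on the second node: join, giving the master's eth0 address
pveca -a -h 10.0.0.1
# my question: can I give the master's eth1 (DRBD) address here instead?
# pveca -a -h 11.0.0.1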
 
The cluster communication uses the IP assigned to eth0 (or vmbr0 if eth0 does not have an IP).
 
Code:
cat /etc/drbd.conf
global { usage-count no; }
common { syncer { rate 30M; } }
resource r0 {
        protocol C;
        startup {
                wfc-timeout  15;
                degr-wfc-timeout 60;
                become-primary-on both;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "my-secret";
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }
        on clu1 {
                device /dev/drbd0;
                disk /dev/sda3;
                address 192.168.27.101:7788;
                meta-disk internal;
        }
        on clu2 {
                device /dev/drbd0;
                disk /dev/sda3;
                address 192.168.27.102:7788;
                meta-disk internal;
        }
}
Code:
eth1      Link encap:Ethernet  HWaddr 00:22:15:05:f4:45
          inet addr:192.168.27.101
vmbr0     Link encap:Ethernet  HWaddr 00:22:15:04:e4:1b
          inet addr:10.0.27.101
Code:
cat /etc/hosts
127.0.0.1       localhost
10.0.27.101 clu1.calionet.it clu1 pvelocalhost
10.0.27.102 clu2.calionet.it clu2

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
eth1 is linked with a direct cable between the 2 nodes, while vmbr0 (bridged on eth0) is connected to a 100 Mb switch...
I got 2 split brains in 2 days, because for some reason my switch was reset, and I had to recover from the split brain manually...
Is this because I use clu1 and clu2 as node identifiers in drbd.conf, so they are resolved via the hosts file to their switched interface?
Do I need to add 2 more entries to the hosts file, identifying the machines by their internal DRBD IPs, so DRBD does not rely on the public net?
Is my drbd.conf right, and does it auto-recover from split brain? I copied it from the wiki...
Finally, how can I make this work better, to eliminate the downtime while DRBD resyncs?
Right now I have to disable LVM autostart (DRBD cannot be made secondary while LVM is active on top of it), reboot, recover from the split brain, re-enable LVM and reboot again...
 
Finally, how can I make this work better, to eliminate the downtime while DRBD resyncs?
Right now I have to disable LVM autostart (DRBD cannot be made secondary while LVM is active on top of it), reboot, recover from the split brain, re-enable LVM and reboot again...

When both nodes are in the primary role, the 'after-sb-2pri' policy configures the split-brain recovery - in your case 'disconnect'.
To resolve it, you have to choose the node whose modifications will be discarded and make it secondary. You have to shut down all VMs which reside on the volume group.
Here is an example; execute it on the node which will become secondary:

Code:
vgchange -an <vgname>
drbdadm secondary r0
drbdadm connect r0

Where <vgname> is the name of the volume group on top of drbd.

You may also have to issue 'drbdadm connect r0' on the other node. This should make drbd sync without need for reboot.
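If DRBD then refuses to reconnect because the victim node still reports changes of its own, the manual recovery procedure from the DRBD guide is to reconnect that node while discarding its data (a sketch, using r0 as above):

Code:
# on the split-brain victim (already secondary, see above):
drbdadm -- --discard-my-data connect r0
# on the surviving node, only if it is in StandAlone state:
drbdadm connect r0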
 
Thanks, this is useful to recover without a reboot... but how do I make it auto-recover? Do I need to configure DRBD with names or IPs linked to the internal net?
 
Thanks, this is useful to recover without a reboot... but how do I make it auto-recover? Do I need to configure DRBD with names or IPs linked to the internal net?

Autorecovery has nothing to do with DRBD names or IPs. Recovery policies are specified in drbd.conf, and for primary/primary mode you define the policy using 'after-sb-2pri'. See 'Automatic split brain recovery policies' in the DRBD user's guide.
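For reference, the relevant section looks roughly like this (a sketch based on my reading of the DRBD 8.3 guide; the alternative values in the comments are not exhaustive and are examples, not recommendations):

Code:
net {
        # split brain detected, neither node was primary:
        # disconnect | discard-younger-primary | discard-older-primary |
        # discard-zero-changes | discard-least-changes
        after-sb-0pri discard-zero-changes;
        # split brain detected, one node was primary:
        # disconnect | consensus | discard-secondary | call-pri-lost-after-sb
        after-sb-1pri discard-secondary;
        # split brain detected, both nodes were primary (your case):
        # disconnect | violently-as0p | call-pri-lost-after-sb
        # 'call-pri-lost-after-sb' needs a pri-lost-after-sb handler; both
        # alternatives can destroy data, so manual recovery ('disconnect')
        # is the usual choice for dual-primary setups.
        after-sb-2pri disconnect;
}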
 
I know this is a pretty old post, but I feel that part of it was not answered, and it made me curious, so I tested it.

@fluke:
In response to your comment that "Autorecovery has nothing to do with drbd names or ip": I think he meant those as two separate questions - how to make it auto-recover when it fails, but also whether his network config was causing his split-brain issues in the first place.

I've seen the question of auto-recovery answered in several posts, so I'm skipping it.

As to the other question, I think his concern was unfounded.

Considering the DRBD config currently on the proxmox wiki:
Code:
on proxmox-105 {
   device /dev/drbd0;
   disk /dev/sdb1;
   address 10.0.7.105:7788;
   meta-disk internal;
}

on proxmox-106 {
   device /dev/drbd0;
   disk /dev/sdb1;
   address 10.0.7.106:7788;
   meta-disk internal;
}
it is clear that "on proxmox-105" and "on proxmox-106" refer to the hostnames of the nodes. Does DRBD need that hostname in a network capacity? Does it need to look up that hostname? If eth0 is down, will this prevent those lookups and keep the DRBD resource from coming up, even if a NIC has been dedicated to DRBD?

This may be obvious to more experienced users, but not so to newer ones.

So I took a standard PVE 1.6 two-node cluster and added DRBD as described on the wiki. I then unplugged the eth0 cable from both nodes and rebooted them. The DRBD resource came up just fine. I also stopped DRBD on both nodes and restarted it on both within the wfc-timeout; the resource came back online without issue.

The original poster seemed concerned that he might be running into split-brain scenarios because his switch was failing and dropping eth0, despite having eth1 dedicated to DRBD. As best as I can tell from this test, that was likely not the case: the hostname in the config file is only checked locally on each machine, to determine which section of the config file applies.

I do not believe the hostname needs to be looked up, or that there needs to be a way to look up the IP of the dedicated DRBD interface (unless a hostname is used there too instead of a hard-coded IP... and I wouldn't recommend that!).
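If anyone wants to double-check this on their own nodes: as far as I can tell, DRBD simply compares each "on <name>" section against the local node name, so something like this is enough to see which section applies (a sketch, assuming the resource is called r0):

Code:
# DRBD matches the "on <name>" sections against the local node name:
uname -n
# show the configuration as drbdadm parsed it:
drbdadm dump r0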
 
