Live migration of KVM VMs from slave to master unreliable

Re: Live migration of KVM VMs from slave to master still not working...

I will try to reproduce your issue here; please tell me in detail what I need to set up so that I get exactly the same configuration.
OK, here is the layout:

Cluster members: two Dell PE1950s.
Shared storage: Dell PE2950, PERC 6/i RAID 10. Storage is exported over NFS.
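The export itself is nothing exotic; a minimal sketch of what's in /etc/exports on the storage box (the path and subnet here are placeholders, not the real values):

Code:
# /etc/exports on the PE2950 (placeholder path and subnet)
/srv/vmstore  192.168.1.0/24(rw,sync,no_subtree_check)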

Master

  • Pair of quad-core Xeons (E5410 @ 2.33GHz, Model 23 Stepping 10)
  • 8 gigs Multibit ECC RAM
  • BIOS 2.7.0
  • PERC 6/i Integrated (FW 6.1.1-0047)
  • Pair of Broadcom NetXtreme II BCM5708 1000Base-T (B2) PCI-X 64-bit 133MHz network interfaces
Slave

  • Single dual-core Xeon (5110 @ 1.60GHz, Model 15 Stepping 6)
  • 4 gigs Multibit ECC RAM
  • BIOS 2.6.1
  • PERC 5/i Integrated (FW 5.2.2-0072)
  • Pair of Broadcom NetXtreme II BCM5708 1000Base-T (B1) PCI-X 64-bit 133MHz network interfaces
Both cluster members boot from a RAID-1 pair with no other local storage, and each has its two onboard network interfaces configured as an active-backup bond0.
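In /etc/network/interfaces that looks roughly like this (a sketch only; eth0/eth1 and the addresses are placeholders rather than the values from these hosts):

Code:
# active-backup bond with the Proxmox bridge on top (sketch)
auto bond0
iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode active-backup
        bond-miimon 100

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0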

Network connectivity is 100 Mbit/s.

KVM guests are fresh 64-bit installs of Debian 6 (Squeeze) using the VMDK disk image format. I tried a 32-bit guest install with a raw disk image, with no change in results.

Anything else you need?
 
Re: Live migration of KVM VMs from slave to master still not working...

Greetings.

Anything new on this little issue I'm having? (One of my bosses asked today, so I'm checking in :) )
 
Re: Live migration of KVM VMs from slave to master still not working...

Why do you have only "...Network connectivity is 100 Mbit/s"? Why not 1000 Mbit/s?
 
Re: Live migration of KVM VMs from slave to master still not working...

Unfortunately, that's all I have to work with at this time.

(My personal theory is the company is too cheap to spring for a gig switch, but that's neither here nor there ;)).
 
Re: Live migration of KVM VMs from slave to master still not working...

Unfortunately, that's all I have to work with at this time.

(My personal theory is the company is too cheap to spring for a gig switch, but that's neither here nor there ;)).
Hi,
Bad luck. I once tried a live migration over a 100 Mbit/s connection. It only succeeds if the VM is very quiet; otherwise the memory contents change faster than the migration can copy them, so it never finishes.
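To put rough numbers on it: 100 Mbit/s moves at most about 12 MB/s of payload, so a single pass over even 1 GB of guest RAM takes roughly 90 seconds, and if the guest dirties pages faster than ~12 MB/s, each pass ends with more dirty memory than it started with and the transfer never converges.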

Udo
 
Re: Live migration of KVM VMs from slave to master still not working...

Unfortunately, that's all I have to work with at this time.

(My personal theory is the company is too cheap to spring for a gig switch, but that's neither here nor there ;)).

If possible, you could try connecting the two nodes directly with a cable on the other interface.
 
Re: Live migration of KVM VMs from slave to master still not working...

Hi,
Bad luck. I once tried a live migration over a 100 Mbit/s connection. It only succeeds if the VM is very quiet; otherwise the memory contents change faster than the migration can copy them, so it never finishes.
I would have an easier time believing this if migration from the master to the slave also failed, but that direction works fine.

Plus, when I have attempted this, the only thing running in the VM was an SSH session with top going, and the guest that was the test subject for the migration was the only guest on the host.
 
Re: Live migration of KVM VMs from slave to master still not working...

If possible, you could try connecting the two nodes directly with a cable on the other interface.
This might be possible. The machines are in separate racks with about three racks in between, plus I'd have to reconfigure the networking so it's no longer using both interfaces for failover.

I will have to think about that one to see if it's even possible.
 
Re: Live migration of KVM VMs from slave to master still not working...

OK, the servers have finally been moved to the same physical rack. The shared storage machine is on one switch, with the hosts spread across two others.

Layout:

Host 1: primary interface- switch 1
Host 1: secondary interface- switch 2

Host 2: primary interface- switch 1
Host 2: secondary interface- switch 2

Storage: switch 3

Code:
# pveversion -v
pve-manager: 1.8-18 (pve-manager/1.8/6070)
running kernel: 2.6.35-1-pve
proxmox-ve-2.6.35: 1.8-11
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.35-1-pve: 2.6.35-11
qemu-server: 1.1-30
pve-firmware: 1.0-11
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.27-1pve1
vzdump: 1.2-13
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.14.1-1
ksm-control-daemon: 1.0-6

Still 100 Mbit/s at the moment.

Tried a live migration, with the following results:

Host 1 to host 2 -> quick and flawless
Host 2 to host 1 -> same situation as before: runs very slowly and never finishes, because the amount of memory remaining to transfer keeps climbing back up.

Any ideas? I'd think it's the network, IF the migration from #1 to #2 weren't so flawless (#1, while being the larger/newer of the two machines, is also the one currently running all five of the guests, and the test guest is doing nothing but running top).
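For the record, the migrations were kicked off from the command line, roughly like this (a sketch; the VM ID and node name are placeholders, and the exact option spelling may differ on this 1.8 install):

Code:
# live-migrate guest 101 to the node named host1
qm migrate 101 host1 -online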
 
Re: Live migration of KVM VMs from slave to master still not working...

OK, the servers have finally been moved to the same physical rack. The shared storage machine is on one switch, with the hosts spread across two others.

Layout:

Host 1: primary interface- switch 1
Host 1: secondary interface- switch 2

Host 2: primary interface- switch 1
Host 2: secondary interface- switch 2

Storage: switch 3

Code:
...

Any ideas? I'd think it's the network, IF the migration from #1 to #2 weren't so flawless (#1, while being the larger/newer of the two machines, is also the one currently running all five of the guests, and the test guest is doing nothing but running top).
Hi,
please test the network speed in both directions between the nodes with iperf, for both interfaces (perhaps one direction is slow; problems with autonegotiation or the like)!
I assume the cluster communication is done over the primary interface (vmbr0); perhaps it's better to change the network layout to use the secondary interface for this.
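For example (hostname is a placeholder; repeat the test against each interface's address):

Code:
# on the receiving node
iperf -s

# on the sending node: tests this direction, then the reverse (-r)
iperf -c node1 -r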

Udo
 
[SOLVED] Re: Live migration of KVM VMs from slave to master working!

Hi,
please test the network speed in both directions between the nodes with iperf, for both interfaces (perhaps one direction is slow; problems with autonegotiation or the like)!
I assume the cluster communication is done over the primary interface (vmbr0); perhaps it's better to change the network layout to use the secondary interface for this.

Udo
Thank you for the reply, Udo. I feel like a maroon for missing this.

For some reason, the secondary machine is auto-negotiating at 100 Mbit half-duplex, with the primary at 100 Mbit full-duplex. I would have imagined that there would still be a problem from one to the other with this setup, but stranger things have happened.

I was told by the network group that all the machines are supposed to be FD, not HD, and since the two machines are attached to the same switch, one would think that they would match.

*sigh* Apparently not. I really should know better (and do, usually) than to assume things like this.

So I forced it with mii-tool. ethtool still shows HD, but iperf now shows similar performance in both directions (93-94 Mbit/s), and live migration between the two works. #2 to #1 was, perhaps, slightly slower, but I'm blaming the lesser amount of 'machine grunt' for that.
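For anyone hitting the same thing, the forcing was along these lines (assuming the interface is eth0, which is a placeholder):

Code:
# force 100 Mbit full-duplex with mii-tool (what was used here)
mii-tool -F 100baseTx-FD eth0

# the ethtool equivalent, which also turns off autonegotiation; note
# that a switch port left to autonegotiate may fall back to half-duplex
# against a forced link partner
ethtool -s eth0 speed 100 duplex full autoneg off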

So now I get to pitch a case to the controller(s) to get a few Intel-based network cards and a GigE switch, so all the machines can talk to each other on a separate network, one that I have some control over and can configure the way that will work best for these machines.

Thank you all again for the help!
 
