migration slow, disable encryption?

q16marvin

Hi,

when I migrate a KVM guest I get no more than 45 MB/s, on a 1 Gbit network. On both nodes the sshd process is running at 100% CPU.

Is it possible to disable the encryption, or to migrate without using SSH?

Thanks!
 
Yes, sure:

edit /etc/pve/datacenter.cfg and add

Code:
migration_unsecure: 1
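
As a minimal sketch (assuming the file does not already contain a migration setting), you can also append the line from a shell on any node; /etc/pve/datacenter.cfg lives on the clustered pmxcfs, so the change replicates to all nodes automatically:

Code:
# append the option on any cluster node; pmxcfs syncs it everywhere
echo "migration_unsecure: 1" >> /etc/pve/datacenter.cfg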


Hi,

that does not seem to work. I added the line to datacenter.cfg, it was automatically replicated to the other cluster nodes, and I rebooted all nodes. But when I migrate I still get no more than 45 MB/s, and the ssh or sshd process is still at 100%:

[screenshot: Unbenannt.png]

Thanks!
 

Hmm, my QEMU is 1.4, so it should work, right?


proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6)
pve-kernel-2.6.32-19-pve: 2.6.32-93
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-9
libpve-access-control: 3.0-8
libpve-storage-perl: 3.0-18
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1


Dec 13 10:58:32 starting migration of VM 142 to node 'proxmox2' (192.168.xxx.xxx)
Dec 13 10:58:32 copying disk images
vm-142-disk-1.raw

rsync status: 32768 0% 0.00kB/s 0:00:00
rsync status: 252215296 1% 42.55MB/s 0:08:07
rsync status: 454066176 2% 37.48MB/s 0:09:07
[... similar rsync status lines trimmed; throughput stays between roughly 33 and 61 MB/s ...]
rsync status: 21078114304 98% 59.71MB/s 0:00:06
rsync status: 21263220736 99% 59.11MB/s 0:00:03
rsync status: 21474836480 100% 50.43MB/s 0:06:46 (xfer#1, to-check=0/1)

sent 21477457999 bytes received 31 bytes 52835075.10 bytes/sec
total size is 21474836480 speedup is 1.00
Dec 13 11:05:27 migration finished successfuly (duration 00:06:55)
TASK OK
 
I think that this option only applies to live migration, when it copies the RAM.
It does not apply to the rsync operation that copies a VM image file.

I know it works perfectly fine for speeding up the RAM copy during live migrations; I regularly get over 500 MB/s on my InfiniBand cluster.
 

Hmm okay, so there is no way to (offline-)migrate from one node to the other a bit faster? We have no HA running; when I have to update node 1, I just want to move all KVM guests to the other node.

The problem is not so much the speed as the high CPU load during that time: other running KVM guests on both nodes become extremely slow, up to the point of being unusable.

Thanks!

Erik
 
Hmm, no one has an idea? I have to migrate a 200 GB KVM guest and it will take about 90 minutes :(

Yep, migration is done over SSH, so the bottleneck is going to be a single CPU core. Still, I highly doubt it is all CPU causing the bottleneck; if you are moving a 200 GB image, my guess is that you are seeing slowdowns due to lack of I/O. Your best bet is to move to central storage or some kind of DRBD setup.
 
But I don't want to buy separate storage just for one migration! Why is it not possible to copy data between nodes without SSH?
 
As far as I know, rsync needs SSH. Even without SSH you won't see much improvement because of the 1 Gbit network. With shared storage this issue can be addressed easily. If the budget is tight or you do not have enough spare parts lying around, all you need is a basic dual-core PC with at least 4 GB RAM and some HDDs. Set up a FreeNAS system with NFS or iSCSI, then move all VMs there (a sample storage entry follows below). This eliminates the long migration waits: within a few minutes all VMs can be moved to a different node.
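
For example, once the NFS export exists on the FreeNAS box, attaching it to Proxmox is just a storage entry (the storage name, server address, and export path below are made-up examples):

Code:
# /etc/pve/storage.cfg -- an NFS storage the whole cluster can use
nfs: freenas
    server 192.168.200.50
    export /mnt/tank/vmstore
    path /mnt/pve/freenas
    content images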
 
Someone describes the problem here, in German:

http://geekparadise.de/2010/07/schlechte-performance-mit-ssh-rsync-daemon-beheben/

But I don't really understand it :(

I have also read about changing the encryption method, like this:

Code:
/usr/bin/rsync --progress --sparse --whole-file /var/lib/vz/images/125/vm-125-disk-1.qcow2 root@192.168.200.32:/var/lib/vz/images/125/vm-125-disk-1.qcow2 -e "ssh -c arcfour"

When I run this manually it works much better.

Where do I have to change this?
 
I believe that Proxmox no longer forces a particular cipher, so you should be able to simply edit the SSH client config to set the preferred cipher.

Edit /etc/ssh/ssh_config and add something like this:
Code:
Host *
    Ciphers arcfour,blowfish-cbc

The Ciphers line should contain all the ciphers you want to use, in your order of preference (note that the protocol-2 name for blowfish is blowfish-cbc). arcfour is not as secure as other ciphers, so you might not want to use Host *; instead you can restrict the entry to the specific hosts it applies to.
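
For instance, a host-restricted variant might look like this (the hostnames below are placeholders for your actual cluster nodes):

Code:
# only use the fast but weaker ciphers towards the cluster peers
Host proxmox1 proxmox2
    Ciphers arcfour,blowfish-cbc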
 
All the German article mentions is using rsync in a way where the receiving end also runs an rsync daemon that accepts connections. The problem mainly boils down to network basics: if you want to connect from host A to host B to transfer a file, you need a listening service on host B that accepts connections. On a "normal" setup this is almost always sshd. So if you want to skip SSH, you need a different daemon on host B, like the aforementioned rsync daemon; a rough sketch follows below.

Here's a randomly selected $searchengine hit describing the setup: http://www.jveweb.net/en/archives/2011/01/running-rsync-as-a-daemon.html
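
As a minimal sketch (the module name and paths are illustrative, not tested here), the daemon setup would look roughly like this:

Code:
# /etc/rsyncd.conf on the receiving node
[images]
    path = /var/lib/vz/images
    uid = root
    gid = root
    read only = no

# start the daemon on the receiving node
rsync --daemon

# on the sending node: the double colon selects the rsync daemon
# protocol, so no ssh (and no encryption) is involved
rsync --progress --sparse --whole-file \
    /var/lib/vz/images/125/vm-125-disk-1.qcow2 \
    192.168.200.32::images/125/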
 
