VM with large amounts of RAM

adamb

Famous Member
Mar 1, 2012
Hey all. Looking to get some input on running a VM with a large amount of RAM. I'm looking at running roughly 700GB inside the VM. Is it safe to say that live migration is pretty much pointless? I am thinking that even with a 10GbE backend and AES-NI support, live migration is still going to take a long time. Is anyone running VMs with large amounts of RAM?

Or is it possible to run live migration without SSH, as that is most definitely the bottleneck? An option to disable SSH for live migrations would be great.
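As a rough back-of-the-envelope (assuming the 700GB is mostly static and not being re-dirtied during the copy, and assuming roughly 9.4 Gbit/s usable on 10GbE): the bulk copy alone is on the order of ten minutes at wire speed, while a single SSH/AES stream on this class of hardware often tops out at a few Gbit/s, stretching that to 30-60 minutes or more:

echo "scale=0; 700*8/9.4" | bc    # ~595 seconds at ~9.4 Gbit/s (close to 10GbE wire speed)
echo "scale=0; 700*8/2" | bc      # ~2800 seconds (~47 min) if ssh only manages ~2 Gbit/s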
 
Yes, I would also guess that live migration will take a very long time (if it ever finishes, because of load on the VM - normally such big VMs are also doing a lot).
My biggest VM has 24GB of RAM and takes some time to migrate (approx. 5 min.) - even with 10GbE.

I don't know if the balloon driver can help reduce the used memory before live migration?

Udo
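For what it's worth, Udo's balloon idea would look roughly like this on Proxmox, assuming the guest has the virtio balloon driver loaded (VMID 100 and the sizes below are just placeholders):

qm monitor 100
# in the QEMU monitor:
#   info balloon        # show the current balloon size
#   balloon 32768       # ask the guest to shrink to 32GB before migrating
#   balloon 716800      # grow it back to ~700GB after the migration

Whether the guest can actually hand that much memory back depends on how much of the 700GB is in active use.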
 

I had a feeling. It's just a bummer that it's so limited by SSH. Even with AES-NI support, SSH is still going to hit a bottleneck; if we could utilize the entire 10GbE connection, live migration would rip.

There isn't going to be a ton of activity in the RAM; it's essentially just a large high-performance read cache.
 
It would be great if I could just use something like the TCP example below to avoid SSH; I have no issue doing this from the command line if I could get it working.

TCP example:

1. Start the VM on B with the exact same parameters as the VM on A, in migration-listen mode:
B: <qemu-command-line> -incoming tcp:0:4444 (or another port)
2. Start the migration (always on the source host):
A: migrate -d tcp:B:4444 (same port)
3. Check the status (on A only):
A: (qemu) info migrate
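To make that concrete for a Proxmox-managed guest (hostnames, VMID and port are placeholders, and note the stream is unencrypted, so it should only run over the dedicated cluster link):

# on the destination host nodeB: start the VM with identical options, plus
#   -incoming tcp:0:4444
# on the source host nodeA: enter the QEMU monitor of the running VM
qm monitor 100
#   migrate -d tcp:nodeB:4444
#   info migrate            # poll until the status reports "completed"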
 
You can file a request on the Proxmox Bugzilla.

This has already been discussed on the Proxmox dev mailing list, but Dietmar doesn't want to do migration without an encrypted tunnel.
But yes, SSH is the bottleneck.

Maybe some kind of secure network definition for unencrypted migration inside Proxmox could help?
 

That is a bummer. I'm not sure of the reasoning, as security options should be left for us to decide, not the devs. They are concerned about forcing SSH but not about a firewall; it just doesn't make much sense. It's really too bad, as we have over 3000 clients we would look to switch over to Proxmox, but due to this limitation it's just not an option.
 


You can try this patch:
qemu-server: add support for unsecure migration (setting in datacenter.cfg)

http://permalink.gmane.org/gmane.linux.pve.devel/1636
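If that patch is applied, the switch ends up in /etc/pve/datacenter.cfg; going by the patch title it should look something like this (the exact key name is my assumption - check the patch itself):

# /etc/pve/datacenter.cfg
migration_unsecure: 1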


Or you can try to build hpn-ssh, which doesn't have this bandwidth limitation:

http://blog.admiralakber.com/?p=248

http://www.psc.edu/index.php/hpn-ssh
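If you do go the hpn-ssh route, the gain mainly comes from its larger TCP buffers and, optionally, the "None" cipher switch; as far as I remember the knobs look like this, but verify against the HPN-SSH documentation:

# sshd_config on the receiving node (HPN-SSH build):
NoneEnabled yes

# client side:
ssh -o NoneSwitch=yes -o NoneEnabled=yes nodeB

With the None cipher only the bulk data goes unencrypted; the authentication itself stays encrypted.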
 

I appreciate the tip. I will give this a try; definitely worth a shot!

I've messed with hpn-ssh quite a bit. I am a bit leery of it; it definitely works, but I have some security concerns with it.
 
Nobody suggested running without a firewall (you probably misunderstood that)!

I understand that nobody suggests running without a firewall, but the fact is you leave it up to the administrator to make that decision. The same should be done with SSH; we are all capable of determining whether SSH is needed or not. The devs of KVM didn't feel the need to force SSH on us, which is why they left the option of plain old TCP. My job is to make those types of security decisions, not yours.

SSH should be in the same boat: not advised to run without it, but we should have the option.

And SSH is pretty much a waste of CPU cycles on a 2-node cluster with a dedicated cluster backend and no switch in the middle.
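If you want to put numbers on that before deciding, comparing the raw link with a single SSH stream between the two nodes is quick (nodeB is a placeholder):

# raw 10GbE throughput: run "iperf -s" on nodeB, then on nodeA:
iperf -c nodeB

# throughput of a single ssh stream (roughly what live migration gets today):
dd if=/dev/zero bs=1M count=10000 | ssh nodeB 'cat > /dev/null'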
 
Out of curiosity, may I ask what kind of use such a VM has? 700GB is not "large", it is "immensely huge"...

Marco

Our software is very dependent on random reads. Obviously the first logical choice is SSDs. The industry we are in requires that we use self-encrypting drives (SED), which are far from mature in the SSD world. So we decided to load our servers up with lots of RAM and do a large read cache. This is working out great for us, and we simply use a tool called vmtouch to pre-cache specified files.

We are now looking to move to a virtual environment and would love to keep the ability to do live migrations!
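For anyone curious, the vmtouch part is straightforward (the path is just an example):

vmtouch -t /data/hot-files/     # fault the files into the page cache
vmtouch -dl /data/hot-files/    # optionally daemonize and lock them so they can't be evicted
vmtouch -v /data/hot-files/     # show how much of the data set is currently resident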
 
