MMU problem for memory sharing and ballooning.

makton

Member
Dec 7, 2009
What a way to start my first post. I'm testing the possibility of using Proxmox for a production environment and have the following test system.

AMD Phenom X4, 2.3 GHz
2 GB of RAM
320 GB SATA drive

I used "proxmox-ve_1.4-4390.iso" for the install and currently have 2 VMs with 512 MB of RAM apiece. One VM is Windows Server 2003 and the other is Debian Linux "Lenny". Both are KVM guests.

I'm very pleased with the system except for one thing: I am unable to get the unused memory from the guest VMs back to the host. I read in the forums that I can use hugepages and ballooning to solve this problem. However, I keep running into an error whenever I use either one.

Example:
Code:
qm> info balloon
Using KVM without synchronous MMU, ballooning disabled

When I tried to get hugepages started, it stated that I do not have MMU notification and disabled my -mem-path.

From what I can see in the kernel config, I have them enabled:

Code:
proxmox:/boot# cat config-2.6.24-8-pve | grep MMU
CONFIG_MMU=y
CONFIG_GART_IOMMU=y
CONFIG_CALGARY_IOMMU=y
CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT=y
# CONFIG_IOMMU_DEBUG is not set

What exactly is the problem with this MMU? I have no logs on any of this except the part where it won't let me use hugepages or the balloon. Any ideas?
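For what it's worth, the grep above would also have matched CONFIG_MMU_NOTIFIER if it were set; its absence from the output is the telltale. Ballooning and -mem-path hugepages depend on MMU notifiers, which is a separate option from CONFIG_MMU. A quick direct check (config file name taken from the grep above):

```shell
# Check whether the running kernel was built with MMU notifier support,
# which ballooning and hugepage-backed guests require:
grep MMU_NOTIFIER /boot/config-2.6.24-8-pve \
    && echo "MMU notifiers present" \
    || echo "MMU notifiers missing - ballooning will stay disabled"
```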

Also, my server is updated from your repository, and running kernel 2.6.24-9.

Thanks in advance.
Mike
 
OK, I've been doing a ton of googling regarding the MMU issue. I would like to use hugepages for my KVM guests, and it seems there was a problem with the kernel writing to parts of memory that might belong to other guests. I found posts of kernel patches dated from the beginning of this year.

So, my question is: do we have the MMU sync disabled in our kernels? And what is the status of this patch for our kernels? This is the only hurdle I need to overcome before I can show my results to my boss and possibly use this technology for my work. TY...

Also, this is what I'm talking about

http://patchwork.kernel.org/patch/2212/
 
Balloon works here, but you need a reasonably new guest kernel. What guest kernel do you use?
 
Balloon works here, but you need a reasonably new guest kernel. What guest kernel do you use?

Sorry, you are right - it does not work with the current kernel (I tested with the new kernel). I will try to find out what's wrong.
 
Seems the MMU notifiers patch is missing in 2.6.24. But we will try to release a kernel with support for that next month.
 
ty. I wasn't really sure if it was a kernel problem or possibly an AMD problem, as it uses an IOMMU and not really a normal MMU. I'm taking this work to testing on two HP 1U rackmounts:

2.8 GHz dual Xeons (with VT)
and a NetApp SAN

One server has 6 GB of RAM and the other has 4 GB. This is phase 2, but I won't be able to continue until we have a solution for the ballooning. I'm having to go up against VMware ESXi servers, which use a form of memory sharing.

You would laugh at the testing I've done at home: a VM host inside a VM host, and a SAN inside a VM host. All that to test migration and clustering. Absolutely amazing!!!

Thanks again, and I await the new kernel.
 
Wanted to report that the 2.6.32 kernel has ballooning and hugepages working again. Sadly, the result is not what I expected; I was hoping hugepages would allow the unused guest memory to return to the host for other guests. This isn't the case. I just moved my problem from main memory to the hugepage allocation, which is really no different and gives no accurate account of how much memory the guest is using.

Any other ideas?
 
Wanted to report that the 2.6.32 kernel has ballooning and hugepages working again. Sadly, the result is not what I expected; I was hoping hugepages would allow the unused guest memory to return to the host for other guests. This isn't the case.

AFAIK ballooning is not automatic. Instead, you can use the 'balloon' command to set guest memory (or a management daemon needs to do that).
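That manual path looks like this from the qm monitor (the VM ID and target size below are examples only, and the guest needs the virtio balloon driver loaded for the target to take effect):

Code:
proxmox:~# qm monitor 101
qm> balloon 256
qm> info balloon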

Anyway, ballooning only helps you when the VMs do not use the memory, so it has zero advantage when the VMs really use it.

Please try KSM instead.
 
It looks as if KSM is automatic, and I verified that it is on. Guess I need matching operating systems to really test KSM. KSM doesn't give back the memory, but instead shares memory between like systems that are not going to write to it.

E.g., the main OS memory of Windows is usually read and not written, so if I have multiple matching Windows systems, the read-only portion of their memory will be merged (shared) and will only be separated again if a piece needs to be written to.

This doesn't provide the dynamic memory sharing I'm used to seeing from VMware. I'm guessing we don't have that yet, and I do await it. Still going to test out KSM.
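For anyone following along, a quick way to watch whether KSM is actually merging pages is its sysfs interface (standard kernel paths on 2.6.32+; a sketch, not Proxmox-specific):

```shell
# KSM exposes its state and statistics under /sys/kernel/mm/ksm.
if [ -d /sys/kernel/mm/ksm ]; then
    echo "run:           $(cat /sys/kernel/mm/ksm/run)"            # 1 = merge daemon active
    echo "pages_sharing: $(cat /sys/kernel/mm/ksm/pages_sharing)"  # guest pages deduplicated
    echo "pages_shared:  $(cat /sys/kernel/mm/ksm/pages_shared)"   # unique pages backing them
else
    echo "KSM not available on this kernel"
fi
```

A rising pages_sharing count with two matching Windows guests running is the sign that the sharing described above is happening.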
 
This doesn't provide the dynamic memory sharing I'm used to seeing from VMware. I'm guessing we don't have that yet, and I do await it. Still going to test out KSM.

Overcommitting memory is a bad idea IMHO, and RAM is cheap these days.
 
