Memory leaks Tomcat / Java & Can convert to KVM?

mikeborschow

I'm having problems with memory leaks when running the Scalix mail server; they seem to be related to Tomcat/Java. Before diving headfirst into explaining my headache, I was wondering if anybody has experience running Java/Tomcat in an OpenVZ container.

Also, I was curious: is it possible to convert my OpenVZ container (VE) to a KVM machine?
 

How much memory/swap do you assign to your Scalix container? Which template do you use, and which Scalix version?

I personally have no experience running Scalix in a container, but I run Zimbra (with 2 GB memory and 2 GB swap assigned) without problems.

Migration from container to KVM: I have never tried it, as it makes no sense to me. It should work, but there is no tool available for it.
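
Since there is no conversion tool, the usual route is a manual rebuild: install the same OS in a fresh KVM guest, then copy the application data across. A rough sketch, assuming container 101 and an illustrative data path (both are placeholders; adjust them to your setup):

    # On the Proxmox host: stop the container so its files are consistent,
    # then copy the application data from the private area into the new
    # KVM guest over SSH. /var/opt/scalix is illustrative only.
    vzctl stop 101
    rsync -av /var/lib/vz/private/101/var/opt/scalix/ root@new-kvm-guest:/var/opt/scalix/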
 
1 GB of memory. I don't know how to set swap for the container. I'm using the CentOS 4 template and Scalix version 11.4.0.
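
For reference, a quick way to see whether the container is actually hitting its memory limits is the beancounters table; a minimal check, assuming standard OpenVZ (page-based values such as privvmpages are counted in 4 KB pages):

    # Run inside the container (or on the host to see all containers).
    # Skips the two header lines and prints only resources whose failure
    # counter (last column) is non-zero.
    awk 'NR > 2 && $NF > 0' /proc/user_beancounters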
 

See the attached screenshot of my Zimbra server. This memory usage is just after starting the machine; Tomcat is quite hungry here.
 

Attachments

  • zimbra-container-on-proxmox-ve.png (28.7 KB)
I did not upgrade to the latest version, as my machine is in production and I have not taken on that task yet; I am assuming that's why I don't have a 'swap' field. I did read a post on the OpenVZ site about adjusting any of the memory parameters that show a failcnt in cat /proc/user_beancounters. In my case it was privvmpages, so I increased the limit by 10 units (= 40,960 bytes of memory, at 4 KB per page) and the fail counts went away.

This is container #101, so I edited /etc/vz/conf/101.conf with vim. The original value was PRIVVMPAGES="262144:274644", and I changed the limit value (to the right of the colon) to 274654. That made all the fail count errors go away, but I hate editing this file by hand! I guess I need to take the plunge and migrate to the latest version of Proxmox VE. Is anybody using it in production?
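
For what it's worth, the same change can be made without hand-editing the file: vzctl can set the value and persist it, writing it back to /etc/vz/conf/101.conf itself. A minimal sketch using the limits quoted above:

    # Raise the privvmpages barrier:limit (in 4 KB pages) and persist it;
    # --save writes the new values into /etc/vz/conf/101.conf.
    vzctl set 101 --privvmpages 262144:274654 --save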
 

Add pve-auto to the file and click save once in the web interface;
see http://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE,
in particular the section "Move OpenVZ containers to Proxmox VE".

And yes, we use beta2 in production environments, accepting the shortcomings of the beta stage.
 
I don't understand "add pve-auto" ... to what file? I also don't know where to click save; is this in the new version? And although it is probably covered somewhere, can you help me find the info on migrating to the updated version that takes care of the bugs that have been reported here? Remember, we're online with this system. Sorry if I'm asking awkward questions.
 

pve-auto is explained in the migration howto;
see http://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE
 
Tomcat/Java in KVM

I finally did it: I recreated the server running Scalix with Tomcat/Java in a KVM machine. Now that it is out of the VE container, it is working much, much better.

From what I can tell, based on what I've read elsewhere, Tomcat/Java somehow senses all of the machine's memory, not just the container's limit; it therefore believes it has lots of room to play with and immediately starts eating up memory resources.
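
If that is what happens, one common mitigation is to cap the JVM heap explicitly instead of letting it size itself from the memory it thinks it sees. A minimal sketch, assuming a stock Tomcat layout; the sizes are illustrative, and a bundled Scalix Tomcat may keep its JVM options elsewhere:

    # $CATALINA_HOME/bin/setenv.sh -- sourced by catalina.sh if present.
    # Pin the heap so the JVM cannot size itself from the host's memory.
    CATALINA_OPTS="-Xms256m -Xmx512m"
    export CATALINA_OPTS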

Maybe all of that is true, maybe not, but I know this for a fact: the performance in KVM is much better, the memory leak is gone, and it is very stable.

This is not to say that I did everything as I should have in the VE container, but I have set up other VE machines with no problem; this was the first problem machine, and it was the only one running Java or Tomcat.

Just FYI, for whatever it's worth. -m
 
