OpenVZ to Proxmox.

risturiz

Feb 19, 2009
Hi, I'm doing some tests to migrate all my servers to Proxmox... I have OpenVZ from git and the OpenVZ stable version on my servers... One of the problems is that when I back up a VE with vzdump and restore it in Proxmox, I get memory allocation failures on kmemsize.
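For reference, the commands I'm using are roughly these (the container IDs and the dump path are just examples, not my exact setup):

# on the OpenVZ host: dump CT 101 into a compressed archive
vzdump --compress --dumpdir /var/tmp 101

# copy the archive to the Proxmox host, then restore it as CT 300
vzrestore /var/tmp/vzdump-101.tgz 300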

OpenVZ:

uid   resource   held     maxheld   barrier   limit     failcnt
101:  kmemsize   389176   1074473   2752512   2936012   0

Proxmox:

uid   resource   held      maxheld   barrier   limit     failcnt
300:  kmemsize   2112373   2770723   2752512   2936012   62

Why does this happen? The VE is the same (the only change is ORIGIN_SAMPLE="pve.auto")... and it is the same even without changing anything.

The other problem is that the network in KVM is "very" slow... I don't really care about that, because I know KVM is under heavy development, so maybe it will be fixed in the future... But on the other side I have bonding, and sometimes I feel that the Proxmox network is a little slow (maybe the vmbr0?... I don't use a bridge on my servers with bonding).
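For comparison, a minimal /etc/network/interfaces sketch of vmbr0 on top of a bond (the NIC names, bond mode, and addresses here are assumptions, not my exact config):

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode active-backup
        bond_miimon 100

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0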

Coming back to the memory problem in migration... I have etch, and step by step I'm migrating the VEs to lenny. If I dist-upgrade in OpenVZ, no problem at all, but if I first move the VE to Proxmox and then try to dist-upgrade, again many memory problems... The changes suggested on the Proxmox website don't help at all.

All my servers are HP and IBM with VT support... gigabit LAN... latest version of Proxmox... I hope I can figure out the memory problem in the VE, because then I can migrate all my VEs without problems.

Thanks for any help.
 
One of the problems is that when I back up a VE with vzdump and restore it in Proxmox, I get memory allocation failures on kmemsize... Why does this happen? The VE is the same...

after migrating your containers, do not forget to adapt the memory setting in the web interface and click save (to check if it is writable).
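if you prefer the command line, something like this should also work (the CT ID and values are only an example; kmemsize is barrier:limit in bytes):

# raise kmemsize for CT 300 by hand: 12 MB barrier, 14 MB limit
vzctl set 300 --kmemsize 12582912:14680064 --save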

The other problem is that the network in KVM is "very" slow... sometimes I feel that the Proxmox network is a little slow (maybe the vmbr0?).

Test without bonding. Is it better?

 
uid   resource   held      maxheld   barrier   limit     failcnt
101:  kmemsize   389176    1074473   2752512   2936012   0    (OpenVZ)
300:  kmemsize   2112373   2770723   2752512   2936012   62   (Proxmox)

Why does this happen?

Your VE simply uses too much kernel memory? The failcnt of 62 means the kernel refused 62 allocations because kmemsize hit its limit (compare held: 2112373 on Proxmox vs. 389176 on OpenVZ).
 
I'm actually running some tests.

after migrating your containers, do not forget to adapt the memory setting in the web interface and click save (to check if it is writable).

Yeah, no problem with that... It is writable once you change the variable ORIGIN_SAMPLE="pve.auto".


Test without bonding. Is it better?
I tested without bonding when I installed Proxmox the first time... Again, no problem at all (but that was version 1.0)... I upgraded to 1.1 and have used bonding since then... Maybe the performance is not really slow in the VE containers, but in KVM the network has problems, drops packets, etc... And it is the same with multiple types of network cards.
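To put numbers on it, something like this should show the difference (the address is an example):

# on another machine on the LAN
iperf -s
# inside the KVM guest: 30-second throughput test
iperf -c 192.168.1.20 -t 30
# check for dropped packets on the guest interface
ifconfig eth0 | grep -i drop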
 
Please explain.

Your VE simply uses too much kernel memory?

When I install OpenVZ on servers, I basically configure the VEs with vzctl, etc... Only if a container fails do I modify the user_beancounters, looking at /proc/user_beancounters for failcnt (this is manual every time).
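The manual check is basically this rough one-liner:

# print only beancounter rows whose failcnt (last column) is non-zero
awk '$1 !~ /(Version|uid)/ && $NF > 0' /proc/user_beancounters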

My question is why, if the container works great in OpenVZ, I get memory problems when I migrate the same container to Proxmox... Does the container config change in any way?

Doing some tests, I can't modify kmemsize in Proxmox (web interface)... I'm using the script at https://wiki.openvz.org/Human_readable_user_beancounters to get a better view of the beancounters... I don't have any VE created fresh on Proxmox (all tests are migrations, live and with vzdump)... I'm going to create some VEs to see what values they start with.

Thanks for your fast response.
 
My question is why, if the container works great in OpenVZ, I get memory problems when I migrate the same container to Proxmox... Does the container config change in any way?

As you know, we also use OpenVZ. So the only reason can be that we use a different kernel version (2.6.24).

Doing some tests, I can't modify kmemsize in Proxmox (web interface)...

We use a much simpler resource model. You can only set memory/swap. The other values are computed to give reasonable defaults.

- Dietmar
 
Ok

As you know, we also use OpenVZ. So the only reason can be that we use a different kernel version (2.6.24).

Mmmm, ok... So can I manage my VEs with vzctl without problems?


We use a much simpler resource model. You can only set memory/swap. The other values are computed to give reasonable defaults.

- Dietmar

Ok, I really don't have a problem with that... I'm doing some tests :)

Thanks.
 
We use a much simpler resource model. You can only set memory/swap. The other values are computed to give reasonable defaults.

- Dietmar

I forgot to ask how Proxmox computes the other values.

Thanks.
 
Ok... I will

If you need detailed info, take a look at the source code.

I don't know if these values are normal for any installation of Proxmox... All my VEs have barrier and limit fixed to 9223372036854775807... It is the same with any memory config.
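If I'm reading it right, that value is simply LONG_MAX (2^63 - 1), which OpenVZ uses to mean "unlimited":

# 0x7FFFFFFFFFFFFFFF == 2^63 - 1 == LONG_MAX on a 64-bit system
echo $(( 0x7FFFFFFFFFFFFFFF ))    # prints 9223372036854775807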
Could anyone else who has installed Proxmox please check this? Inside a container, what values does "cat /proc/user_beancounters" show?... Thanks for the help.
Maybe I want to control memory too tightly, but that is how I do it in OpenVZ.

Thanks.
 
Hi Dietmar...

I'm sorry to ask, but I'm not very good at reading source code.

I don't understand this statement:

We use a much simpler resource model. You can only set memory/swap. The other values are computed to give reasonable defaults.

- Dietmar


Does this mean that the reasonable defaults are calculated when a VPS is migrated in?

What happens to a VPS, whether originally created on Proxmox or migrated in from OpenVZ, when it needs more resources, for example if the user installs more software? Are the settings dynamically adjusted?

I have been using OpenVZ for years and run mission-critical services on it. I run it on Debian using 2.6.18-fza-5-amd64. It runs very well, but I am often frustrated by the endless tweaking of all the complex settings, followed by the need to run that little app that checks to make sure they are consistent.

Has this problem been eliminated in Proxmox?

Thanks...

Jim
 
Does this mean that the reasonable defaults are calculated when a VPS is migrated in?

No, we do not change anything when a VPS is migrated in.

Instead, create a dummy VPS with the settings you want, and copy the generated config.
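Something like this, assuming the standard OpenVZ config location (the IDs are only an example):

# create a dummy CT 999 with the desired memory/swap in the web interface, then:
cp /etc/vz/conf/999.conf /etc/vz/conf/300.conf
# re-apply container-specific settings (HOSTNAME, IP_ADDRESS, ...) before restarting
vzctl restart 300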

What happens to a VPS, whether originally created on Proxmox or migrated in from OpenVZ, when it needs more resources, for example if the user installs more software? Are the settings dynamically adjusted?

If a VPS runs out of RAM, it fails, exactly the same as on any physical hardware. What do you mean by 'dynamically adjusted'? The whole purpose of resource limits is to limit resources, so it makes no sense to dynamically adjust the limits ;-)

I have been using OpenVZ for years and run mission-critical services on it. I run it on Debian using 2.6.18-fza-5-amd64. It runs very well, but I am often frustrated by the endless tweaking of all the complex settings, followed by the need to run that little app that checks to make sure they are consistent.

Has this problem been eliminated in Proxmox?

Yes.
 
No, we do not change anything when a VPS is migrated in.

Instead, create a dummy VPS with the settings you want, and copy the generated config.

So my conf is not migrated (if using vzmigrate) or restored (if using vzdump)?

If a VPS runs out of RAM, it fails, exactly the same as on any physical hardware. What do you mean by 'dynamically adjusted'? The whole purpose of resource limits is to limit resources, so it makes no sense to dynamically adjust the limits ;-)

Yeah, I can see I was not making too much sense there ;)

What I was thinking about is that in OpenVZ, I often run out of one resource that is related to another. For example, if I allocate 50GB of disk space, I run out of inodes long before I hit 50GB. This is a real annoyance with my mail servers. I use Cyrus, which keeps one file per mail message, so I have a bunch of little files. I just have to keep increasing the inode parameter.
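What I end up doing is roughly this (the CT ID and the numbers are just an example):

# check inode usage from inside the container
df -i
# then raise the inode barrier:limit for that VPS from the host
vzctl set 101 --diskinodes 2200000:2420000 --save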

The same is true of all the RAM parameters. For example, I run out of kernel RAM and then cannot use the RAM I allocated.

So I am forever watching the user beancounters to see which resource a VPS ran out of, and tweaking it. I even wrote scripts to monitor it. It's a pain in the neck.
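They are nothing fancy, roughly this kind of thing run from cron (the mail address is a placeholder):

#!/bin/sh
# mail me if the set of non-zero failcnt rows changed since the last run
CUR=/tmp/ubc.now
PREV=/tmp/ubc.prev
awk '$1 !~ /(Version|uid)/ && $NF > 0' /proc/user_beancounters > "$CUR"
if [ -f "$PREV" ] && ! cmp -s "$CUR" "$PREV"; then
    mail -s "UBC failcnt changed on $(hostname)" admin@example.com < "$CUR"
fi
mv "$CUR" "$PREV"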

I like the ability to just tell it how much RAM, how much disk space and be done with it!
 
So my conf is not migrated (if using vzmigrate) or restored (if using vzdump)?

It is migrated and also restored (why do you think it is not?). It is just not automatically converted if you migrate from a non-Proxmox OpenVZ host.

What I was thinking about is that in OpenVZ, I often run out of one resource that is related to another. For example, if I allocate 50GB of disk space, I run out of inodes long before I hit 50GB. This is a real annoyance with my mail servers.

We currently use a large limit for inodes (220000 per GB). That is about 4096 bytes per inode, which is basically unlimited.
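So for your 50GB example:

# 50 GB at 220000 inodes per GB
echo $(( 50 * 220000 ))    # 11000000 inodes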

I like the ability to just tell it how much RAM and how much disk space, and be done with it!

That's what we are trying to achieve.
 
