Proxmox VE 1.9 released!

Our packages already include those fixes - which patch are you missing exactly?

Sorry, I had strange problems with vzctl start/stop --wait. I think these go away with the new vzctl, because I see vzctl: 3.0.28-1pve5 in pveversion. Thanks for the great work!
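
For reference, this is roughly what I was checking and running (CTID 101 just stands in for my container; as far as I know --wait applies to start):

Code:
# check the installed vzctl version
pveversion -v | grep vzctl
# stop the container, then start it; --wait makes vzctl block until the CT is fully up
vzctl stop 101
vzctl start 101 --wait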

Small question: I used the word "unlimited" in my VE configs, but the Proxmox GUI cannot work with these settings. The same goes for SWAPPAGES and PHYSPAGES - the GUI doesn't seem to support them. Do you think you can update the 1.9 GUI for the stable 2.6.32 kernel before 2.0? I can send you configs and screenshots.
 
Small question: I used the word "unlimited" in my VE configs, but the Proxmox GUI cannot work with these settings.

unlimited should work. If not, then it is a bug. Just post the config that does not work.

The same goes for SWAPPAGES and PHYSPAGES - the GUI doesn't seem to support them. Do you think you can update the 1.9 GUI for the stable 2.6.32 kernel before 2.0? I can send you configs and screenshots.

We can't support SWAPPAGES in 1.X (because kernel 2.6.18 does not support it)
 
Code:
ONBOOT="no"
PHYSPAGES="0:524288"
SWAPPAGES="0"
KMEMSIZE="512M:640M"
LOCKEDPAGES="512M"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
OOMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
NUMSIGINFO="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
DCACHESIZE="unlimited"
NUMFILE="unlimited"
NUMIPTENT="unlimited"

DISKSPACE="52428800:52428800"
DISKINODES="5120000:5120000"
QUOTATIME="0"
QUOTAUGIDLIMIT="0"

IOPRIO="4"

# CPU fair scheduler parameter
CPUUNITS="100"
CPUS="1"
VE_ROOT="/var/lib/vz/root/$VEID"
VE_PRIVATE="/var/lib/vz/private/$VEID"
OSTEMPLATE="debian-6.0-standard_6.0_amd64"
ORIGIN_SAMPLE="pve.auto"
IP_ADDRESS="192.168.0.40"
HOSTNAME="hostname"
NAMESERVER="8.8.8.8"
SEARCHDOMAIN="hostname"
DISABLED="no"
CPULIMIT="100"

The RAM limit is displayed wrong - but the container works correctly and the RAM inside the CT is right too. The config is based on an OpenVZ sample. I hope this helps you.
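
In case it helps: the same "unlimited" values can also be set from the CLI, which should keep the config in a form the GUI can parse (CTID 101 is just an example):

Code:
# set a UBC parameter to unlimited and persist it to the config
vzctl set 101 --privvmpages unlimited --save
# inspect what actually landed in the config file
grep PRIVVMPAGES /etc/vz/conf/101.conf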
 
We just released Proxmox VE 1.9 - including a lot of fixes and updates - a big thanks to all beta testers. This release includes the long-awaited new stable OpenVZ kernel (2.6.32) and also the latest KVM 0.15 with KSM support.

Release notes: Roadmap
How to get the latest version: Downloads

Best regards,
Martin Maurer

Wonderful work. You deserve appreciation and thanks for the hard work.

But I encountered a few issues after upgrading from 1.8 to 1.9, listed below for your perusal:

1) The machine sometimes reboots without any errors (nothing in dmesg output). This never happened before in more than a year of running pve testing.

2) The machine got much slower (despite having adequate memory and processing power) compared to before. Now even a single OpenVZ container running very optimized apache2, varnish, postfix and mysql, without any aggressive transactions, consumes more than a GB. I had to assign 1.5 GB for a single container (VE) to run, else there were a lot of failcnt hits in user_beancounters (a quick check is sketched after this list).

3) The default pve kernel does not seem to be compiled with CONFIG_UBC_KEEP_UNUSED=n to reset the resource utilization counters, as described here: http://wiki.openvz.org/UBC_failcnt_reset#How_to_clear_failcnt.3F

4) I am thinking of upgrading it to Debian Squeeze by recompiling the source packages, but could not find debian/rules in the sources. Is there a specific way of compiling in pve? Thanks again!
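
Regarding point 2: a quick way to spot which counters are failing - failcnt is the last column of /proc/user_beancounters (needs root):

Code:
# print only beancounter lines with a non-zero failcnt (last column)
awk 'NF >= 6 && $NF ~ /^[0-9]+$/ && $NF > 0' /proc/user_beancounters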
 
The RAM limit is displayed wrong - but the container works correctly and the RAM inside the CT is right too. The config is based on an OpenVZ sample. I hope this helps you.

You can't simply set ORIGIN_SAMPLE="pve.auto".

We use that to indicate that the configuration is generated by pve, which is obviously not the case.
If you want to manage the CT with pve you should go to the VM configuration page, correct the values, then press save.
After that you should have correct values in the configuration.

Does that work for you?
 
3) The default pve kernel does not seem to be compiled with CONFIG_UBC_KEEP_UNUSED=n to reset the resource utilization counters, as described here: http://wiki.openvz.org/UBC_failcnt_reset#How_to_clear_failcnt.3F

Because that is not recommended.

4) I am thinking of upgrading it to Debian Squeeze by recompiling the source packages, but could not find debian/rules in the sources. Is there a specific way of compiling in pve? Thanks again!

Take a look at the Makefiles.
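
A rough sketch of what that looks like - the tarball name here is hypothetical, and the actual targets may differ, so check the real Makefile:

Code:
# unpack the source (file name is an example)
tar xzf pve-kernel-2.6.32.tar.gz
cd pve-kernel-2.6.32
# the build logic lives in the Makefile rather than in debian/rules
less Makefile
# typically the default target builds the .deb packages directly
make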
 
You can't simply set ORIGIN_SAMPLE="pve.auto".
...
Does that work for you?
Sorry, I didn't know that.

I followed your instructions and got this config after saving the same value for RAM:

Code:
ONBOOT="no"
PHYSPAGES="0:524288"
SWAPPAGES="0"
KMEMSIZE="512M:640M"
LOCKEDPAGES="unlimited"
PRIVVMPAGES="524288:536788"
SHMPAGES="unlimited"
NUMPROC="unlimited"
VMGUARPAGES="524288:unlimited"
OOMGUARPAGES="524288:unlimited"
NUMTCPSOCK="unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
NUMSIGINFO="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
DCACHESIZE="unlimited"
NUMFILE="unlimited"
NUMIPTENT="unlimited"

But after the save - when I open the CT again - the value "2048" for RAM now appears under SWAP, and RAM is 0.

Sorry if this makes work for you - it's just an experiment - I can only use the Proxmox GUI to manage my CT. I'm just trying to configure the CT the new way under 2.6.32 - and I think this is a small bug. Thanks for your interest.
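
If the GUI keeps swapping the values, a possible workaround is to set them from the CLI - on the new vswap-capable 2.6.32 kernel vzctl should accept --ram/--swap directly (CTID 101 and the sizes are examples):

Code:
# set 2 GB RAM and no swap, and persist it to the config
vzctl set 101 --ram 2G --swap 0 --save
# check what the GUI will read back
grep -E 'PHYSPAGES|SWAPPAGES' /etc/vz/conf/101.conf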
 
2) The machine got much slower (despite having adequate memory and processing power) compared to before. Now even a single OpenVZ container running very optimized apache2, varnish, postfix and mysql, without any aggressive transactions, consumes more than a GB. I had to assign 1.5 GB for a single container (VE) to run, else there were a lot of failcnt hits in user_beancounters.

Did you use a 2.6.32 kernel before? CPU limits did not work in those kernels, but they do now - so maybe that is the reason for the 'slowness'?
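
If you want to rule that out, check whether a limit is set and lift it temporarily (CTID 101 is an example; 0 means unlimited):

Code:
# is a CPU limit configured for the CT?
grep CPULIMIT /etc/vz/conf/101.conf
# remove the CPU limit for a test, then restore it afterwards
vzctl set 101 --cpulimit 0 --save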
 
Is the 2.6.35 kernel similarly up to date with upstream KVM features?
 
We are also experiencing slowness and high load on the PVE host since updating to 1.9 last night. (We had been using 2.6.32 before as well.)
CPU is at 30-40%, there is no IO going on, yet the guests perform poorly and the host reports much higher load averages than before.

[Attachment: pve-load.jpg]
 
You need to dig deeper - check all processes and load, and also look for ksmd.
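
For example, something like this shows the top consumers and whether ksmd is doing anything (using the standard KSM sysfs files):

Code:
# top CPU consumers on the host
ps aux --sort=-%cpu | head -n 15
# is ksmd running and active? (run: 1 = active, 0 = stopped)
pidof ksmd
cat /sys/kernel/mm/ksm/run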
 
Ksmd is not running, we only have OpenVZ guests at the moment.

What I'm seeing after running top and htop for a while is that the usual guest processes that use the most CPU (mainly apache and mysqld) sometimes lock up at 100% for a couple of seconds, then back down to their "real" usage, which can be anything between 10-40%.

This is a fresh thing, only since the upgrade to PVE 1.9 / proxmox 2.6.32-6-pve.
 
Please run pveperf (but only if the server is idle) and give details about your hardware. What kernel did you run before?
 
Sorry, I can't run pveperf properly - these are production servers, so I can't really shut them down.

Hardware is almost identical: both are Core2 Quad boxes with 8 GB RAM and an Adaptec 2610 PCI RAID with 6 disks.
We had been running the previous kernel, 2.6.32-4-pve, without any problems.

Since the upgrade to 1.9, every guest and the host system as well are running slow and exhibiting high load numbers.

Please advise about how to downgrade the kernel to the previous version, as this kernel is making our service unusable.
 
Just download the wanted kernel and install it with dpkg; make sure your grub points to the right kernel (see menu.lst).

But if you upgraded, the old kernel should still be there anyway.
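
A rough sketch of the downgrade (the package file name is an example - use the exact version you want):

Code:
# install the older kernel package
dpkg -i pve-kernel-2.6.32-4-pve_*_amd64.deb
# list the boot entries and make sure 'default' points to the old kernel
grep -n 'title' /boot/grub/menu.lst
nano /boot/grub/menu.lst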
 
Thanks. Couple of questions:

1. Any idea why this load might be happening?
2. Is the list of changes from 2.6.32-4 to 2.6.32-6 available somewhere?
3. Is it possible that the Adaptec RAID driver might be the cause?

Some additional info (the checks I used are sketched after this list):
- mysqld processes tend to lock up a CPU core at 100% (usually for a couple of seconds, sometimes stuck forever)
- htop shows high "Soft IRQ" load on the CPU cores (gray bars)
- disk IO seems to be affected, vzdump threw lots of errors on a VPS last night
- KSM is not active, ksm/pages_sharing shows 0
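
For reference, roughly how I checked the last two points (mpstat needs the sysstat package):

Code:
# per-CPU breakdown; the %soft column is the soft-IRQ load
mpstat -P ALL 1 3
# KSM sharing counter (0 = inactive)
cat /sys/kernel/mm/ksm/pages_sharing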
 