Proxmox VE 1.9 released!

Please run pveperf (but only while the server is idle).

Tom, we managed to run pveperf a couple of times on one of the servers, before and after going back to the previous kernel (which, by the way, completely solved the load problem).

Here are the results, first for 2.6.32-6-pve:
Code:
CPU BOGOMIPS:      24000.21
REGEX/SECOND:      775172
HD SIZE:           19.69 GB (/dev/mapper/pve-root)
BUFFERED READS:    60.17 MB/sec
AVERAGE SEEK TIME: 9.32 ms
FSYNCS/SECOND:     416.25
DNS EXT:           123.49 ms
DNS INT:           0.75 ms

And after the reboot to 2.6.32-4-pve:
Code:
CPU BOGOMIPS:      24001.33
REGEX/SECOND:      776829
HD SIZE:           19.69 GB (/dev/mapper/pve-root)
BUFFERED READS:    60.79 MB/sec
AVERAGE SEEK TIME: 9.49 ms
FSYNCS/SECOND:     406.84
DNS EXT:           127.04 ms
DNS INT:           0.90 ms

As you can see there is no significant difference when idle (but there is a huge difference in performance when under load).

Is there a list of patches we could use to see the difference between the two kernels?
We would really like to nail this problem down.
 
...

Is there a list of patches we could use to see the difference between the two kernels?
...

Not really; 2.6.32-4 is based on Squeeze, while 2.6.32-6 is based on the RHEL 6.1 kernel.
 
Not really; 2.6.32-4 is based on Squeeze, while 2.6.32-6 is based on the RHEL 6.1 kernel.

Any idea how we should investigate the cause of this huge load on 2.6.32-6?

If it didn't show up in your testing, maybe it's hardware specific.
Is the Adaptec driver that handles our 2610SA controller different between the two kernels?
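One way to compare the drivers shipped with the two installed kernels, without rebooting between them, is to diff the build configs and query each kernel's copy of the module with modinfo. This is only a sketch: the module name aacraid is an assumption (Adaptec 2610SA controllers are typically driven by aacraid); check lspci -k on your system to confirm which driver actually binds the controller.

```shell
# Diff the build configs of the two installed kernels to see
# which drivers and options changed between them
diff /boot/config-2.6.32-4-pve /boot/config-2.6.32-6-pve

# Confirm which driver binds the RAID controller on the live system
lspci -k | grep -A 3 -i raid

# Compare the module shipped with each kernel (aacraid is an assumption
# for the Adaptec 2610SA; substitute the driver reported by lspci -k)
modinfo -k 2.6.32-4-pve aacraid | grep -E '^(filename|version|srcversion)'
modinfo -k 2.6.32-6-pve aacraid | grep -E '^(filename|version|srcversion)'
```

If srcversion differs between the two kernels, the driver code itself changed, not just the surrounding kernel.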
 
Just did the update to Proxmox 1.9 on a test system, but the driver for our SCSI controller (gdth) is now missing:

Code:
xxx:/lib# dpkg -l | grep kernel
ii pve-kernel-2.6.32-4-pve 2.6.32-33 The Proxmox PVE Kernel Image
ii pve-kernel-2.6.32-6-pve 2.6.32-43 The Proxmox PVE Kernel Image
xxx:/lib# find . -name gdth.ko
./modules/2.6.32-4-pve/kernel/drivers/scsi/gdth.ko
xxx:/lib# grep -i gdth /boot/config-2.6.32-*
/boot/config-2.6.32-4-pve:CONFIG_SCSI_GDTH=m
/boot/config-2.6.32-6-pve:# CONFIG_SCSI_GDTH is not set

Please include the gdth driver again in your next kernel build, and many thanks for the great work.
 
Please can you revert that change and test again?

I've just read in a different thread that 2.6.32-6 now respects the CPU flag. If that is true under OpenVZ, it would most likely explain the slow-performance problems we've been having (all our OpenVZ VEs were set to 1 CPU).

A high load average was reported while CPU and I/O utilization were relatively low, with some processes maxing out cores, which could happen if the guests were starved of CPU resources.
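A quick way to tell CPU starvation apart from I/O wait while the load is high is to sample vmstat on the host during the problem. This is a generic diagnostic sketch, not Proxmox-specific:

```shell
# Sample system state every 5 seconds, 12 times (one minute total).
# A high 'r' (run queue) column with low 'wa' (I/O wait) points at
# CPU contention; high 'b' (blocked) or 'wa' points at I/O instead.
vmstat 5 12

# Per-container load averages under OpenVZ, to see which CTs are starved
vzlist -o ctid,laverage
```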

Will test tonight and report back.
 
Yes. That means that on a dual-socket server with 4 cores per socket (2x4), you got the full CPU power inside a container in 1.8 with 2.6.32, but ONLY if you used the unstable 2.6.32 OpenVZ branch. The recommended OpenVZ branch, 2.6.18, limited the CPU as expected. Now, 2.6.32-6 is the way to go.

So if you used 2.6.32-4 before and now run 2.6.32-6 without changing the CPU flag, you will see dramatically lower CPU power inside the container, which is expected. Martin will post some benchmark results in another thread showing the difference.
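Assuming the relevant setting is the per-container CPU count, it can be inspected and raised with vzctl; a sketch, where the container ID 101 is just an example:

```shell
# Check the per-container CPU count in the container's config file
grep CPUS /etc/vz/conf/101.conf

# Allow container 101 to use 4 CPUs again, and persist it in the config
vzctl set 101 --cpus 4 --save

# Verify the setting took effect inside the container
vzctl exec 101 grep -c ^processor /proc/cpuinfo
```

The same setting can also be changed per container from the Proxmox web interface.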
 
Hello Dietmar,

your new kernel works fine: no boot problems, and the driver seems to work. I will run a few test VMs for some days and report back. Many thanks for the quick fix.
 
I can report that too.
I upgraded to 1.9, and with the -6 kernel the load is at 100 -_-

My OpenVZ containers can't work correctly; all the Windows guests eat a lot more CPU.
With the -4 kernel everything works fine.
 
I had a problem with the KVM module just after upgrading: after rebooting the cluster I couldn't start my KVMs anymore. When I tried to create a new one in the frontend, I got "Attention: KVM module not loaded. Maybe you need to enable Intel VT / AMD-V support in the BIOS."

/etc/modules was empty, so I had to put the module there and load it manually with modprobe kvm-intel.
qm start <vmid> responded with "VM is locked (backup)", but that was maybe another issue or a side effect.
Maybe this is helpful to other people.
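For anyone hitting the same issue, the manual fix above can be made persistent by listing the module in /etc/modules so it is loaded at every boot. A sketch (use kvm-amd instead of kvm-intel on AMD hardware):

```shell
# Load the KVM module now (kvm-intel for Intel VT, kvm-amd for AMD-V)
modprobe kvm-intel

# Make it persistent across reboots: /etc/modules lists modules
# to load at boot time; only append if it isn't there already
grep -q '^kvm-intel$' /etc/modules || echo kvm-intel >> /etc/modules

# Verify the module is loaded
lsmod | grep kvm
```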

Is there a reason why it isn't loaded automatically at boot time anymore?

Code:
web2:~# pveversion -v
pve-manager: 1.9-24 (pve-manager/1.9/6542)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 1.9-47
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.32-6-pve: 2.6.32-47
qemu-server: 1.1-32
pve-firmware: 1.0-14
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.29-2pve1
vzdump: 1.2-16
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.15.0-1
ksm-control-daemon: 1.0-6

4 cores:
Code:
web2:~# cat /proc/cpuinfo
processor    : 0
vendor_id    : GenuineIntel
cpu family    : 6
model        : 42
model name    : Intel(R) Xeon(R) CPU E31220 @ 3.10GHz
stepping    : 7
cpu MHz        : 3093.417
cache size    : 8192 KB
physical id    : 0
siblings    : 4
core id        : 0
cpu cores    : 4
apicid        : 0
initial apicid    : 0
fpu        : yes
fpu_exception    : yes
cpuid level    : 13
wp        : yes
flags        : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 x2apic popcnt aes xsave avx lahf_lm ida arat xsaveopt pln pts tpr_shadow vnmi flexpriority ept vpid
bogomips    : 6186.83
clflush size    : 64
cache_alignment    : 64
address sizes    : 36 bits physical, 48 bits virtual
power management:
 
