Horrible performance after upgrade from 1.3 to 1.4

tnelson (New Member), Oct 28, 2009
Well, the title says it all. I've been using Proxmox since 1.1 (VERY good software, thanks devs!) on a box with these specs:

-Dual Opteron 280 (Dual core)
-16GB RAM
-Dual 1 TB SATA-II HDD (WD Caviar Black)
-3WARE 9650SE (RAID-1)
-Dual Broadcom BCM5721 Gigabit NICs

All of my virtual hosts are running under OpenVZ (no KVM support here...). Usage is a Zimbra mail server, a Debian Web/DB server, an Asterisk PBX, and a handful of other Debian/CentOS LAMP-type setups. Memory usage is well under 4GB for all of these containers when running.

In the past, performance has been great, no problems. However, I upgraded to 1.4 yesterday and now I'm seeing heavy system load. It is proving troublesome to track down, however. If I stop all containers and let the box 'settle' a bit, then run 'uptime', I'm still seeing loads like this:
Code:
11:45:03 up 1 day, 54 min,  2 users,  load average: 5.22, 4.85, 4.60
However, looking at top and htop, I'm not seeing any processes using any significant amount of CPU. It's as though I'm seeing 'phantom' resource usage.

So, I must ask... is there a fix for this? If not, is there a way to downgrade back to 1.3? How should I proceed?
 
The system load average (reported by top and w) is a composite figure that includes both CPU and IO utilization. If no process shows high CPU usage, you can assume that something is using the IO resources heavily.

Try installing iostat (apt-get install sysstat on the Proxmox host), and possibly other tools, to investigate this issue further.
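A minimal sketch of that investigation (assuming a Debian-based host; iostat comes from the sysstat package):

```shell
# Install sysstat, which provides iostat (run on the Proxmox host)
apt-get install sysstat

# Extended per-device statistics every 2 seconds; a high %util or
# await on the array would point at IO as the culprit
iostat -x 2

# Processes in uninterruptible sleep ("D" state) are usually waiting
# on IO; they count toward the load average while using no CPU
ps axo stat,pid,comm | awk '$1 ~ /^D/'

# The raw load figures, as also reported by uptime
cat /proc/loadavg
```

If the load stays high while iostat shows the array busy, or D-state processes keep showing up, the "phantom" load is IO wait rather than CPU.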
 
I've been using sysstat, htop, dstat, etc., all to no avail. I'm unable to find the cause. And, to make things worse, this is purely random, as mentioned in my OP. Is it possible the latest kernel is introducing issues? I just read a couple of other threads that seem to indicate the latest kernel could be at fault.

What's next? How can I troubleshoot further? Barring that, *IS* there a way to downgrade to the previous version?
 
I already have those kernels still installed from the prior upgrades. Is it as simple as changing the default entry in grub to boot a different kernel? Will any of the supporting packages fail due to the older kernel version? I'm not using KVM at all... just OpenVZ.

Code:
vms1:~# dpkg -l | grep pve
ii  libpve-storage-perl               1.0-4                    Proxmox VE storage management library
ii  pve-firmware                      1                        Binary firmware code for the pve-kernel
ii  pve-kernel                        2.6.24-16                The Proxmox VE Kernel Image
ii  pve-kernel-2.6.24-5-pve           2.6.24-6                 The Proxmox PVE Kernel Image
ii  pve-kernel-2.6.24-7-pve           2.6.24-11                The Proxmox PVE Kernel Image
ii  pve-kernel-2.6.24-8-pve           2.6.24-16                The Proxmox PVE Kernel Image
rc  pve-kvm                           86-3                     Full virtualization on x86 hardware
ii  pve-manager                       1.4-9                    The Proxmox Virtul Environment
ii  pve-qemu-kvm                      0.11.0-2                 Full virtualization on x86 hardware
ii  vzctl                             3.0.23-1pve3             OpenVZ - server virtualization solution - co
 
> I already have those kernels still installed from the prior upgrades. Is it as simple as changing the default entry in grub to boot a different kernel? Will any of the supporting packages fail due to the older kernel version? I'm not using KVM at all... just OpenVZ.

AFAIK it will not fail (if you only use OpenVZ).
 
I have not had time to try the older kernels yet... will try tonight.

lsmod output:
Code:
vms1:~# lsmod
Module                  Size  Used by
vzethdev               23552  0
vznetdev               32904  6
simfs                  14064  4
vzrst                 158248  0
vzcpt                 131256  0
tun                    23168  2 vzrst,vzcpt
vzdquota               60016  4 [permanent]
vzmon                  58008  8 vzethdev,vznetdev,vzrst,vzcpt
vzdev                  12808  4 vzethdev,vznetdev,vzdquota,vzmon
xt_tcpudp              12160  0
xt_length              10752  0
ipt_ttl                10624  0
xt_tcpmss              11008  0
xt_TCPMSS              13440  0
iptable_mangle         13824  4
iptable_filter         13568  4
xt_multiport           12160  0
xt_limit               11904  0
ipt_tos                10368  0
ipt_REJECT             13824  0
ip_tables              33384  2 iptable_mangle,iptable_filter
x_tables               34056  10 xt_tcpudp,xt_length,ipt_ttl,xt_tcpmss,xt_TCPMSS,xt_multiport,xt_limit,ipt_tos,ipt_REJECT,ip_tables
ipv6                  350848  148 vzrst,vzcpt,vzmon
bridge                 75304  0
jedec_probe            30976  0
cfi_probe              16384  0
gen_probe              12800  2 jedec_probe,cfi_probe
psmouse                54684  0
floppy                 78056  0
pcspkr                 12160  0
mtd                    27144  0
evdev                  22912  0
serio_raw              16388  0
tg3                   147588  0
thermal                26912  0
k8temp                 14976  0
chipreg                12416  2 jedec_probe,cfi_probe
map_funcs              10880  0
button                 18080  0
processor              49096  1 thermal
sg                     49560  0
scsi_wait_scan          9984  0
virtio_blk             16264  0
virtio                 14336  1 virtio_blk
dm_mod                 80248  7
usbhid                 43616  0
hid                    53312  1 usbhid
usb_storage            92608  0
libusual               30816  1 usb_storage
sd_mod                 41088  3
sr_mod                 27812  0
ide_disk               26624  0
ide_generic             9728  0 [permanent]
ide_cd                 43552  0
cdrom                  49064  2 sr_mod,ide_cd
i2c_nforce2            16000  0
i2c_core               36480  1 i2c_nforce2
ohci_hcd               40092  0
ssb                    46596  1 ohci_hcd
ehci_hcd               50188  0
usbcore               180784  6 usbhid,usb_storage,libusual,ohci_hcd,ehci_hcd
amd74xx                19224  0 [permanent]
ide_core              147608  4 ide_disk,ide_generic,ide_cd,amd74xx
pata_amd               24196  0
sata_nv                39688  0
pata_acpi              17152  0
ata_generic            17156  0
libata                189616  4 pata_amd,sata_nv,pata_acpi,ata_generic
shpchp                 46236  0
pci_hotplug            43056  1 shpchp
3w_9xxx                45956  2
scsi_mod              189752  7 sg,scsi_wait_scan,usb_storage,sd_mod,sr_mod,libata,3w_9xxx
isofs                  47784  0
msdos                  19840  0
fat                    68912  1 msdos
 
I have 4 VZ containers running all the time, sometimes up to 10 when I'm doing dev work. And, I have the same symptoms as you. It is totally random also.

Well, I just rebooted into the kernel from Proxmox 1.3 and so far, things seem to be fine again. I'll keep a close eye on it and report back with any updates.

Code:
vms1:~# uname -a
Linux vms1 2.6.24-7-pve #1 SMP PREEMPT Fri Aug 21 09:07:39 CEST 2009 x86_64 GNU/Linux
Dietmar- Is there any other information I can provide that would be of help in troubleshooting this? Did you see anything suspect in the output of lsmod from my box?

Thanks!
 
1. Could you please provide the instructions/command that you followed to reboot into 1.3?

2. In 1.3 you did not have that issue at all?

3. Is there any risk in rebooting into the 1.3 kernel from 1.4? I use the Adaptec RAID 5405Z with 512MB RAM. Is there a chance I could run into driver issues by downgrading the kernel?
 
> Dietmar- Is there any other information I can provide that would be of help in troubleshooting this? Did you see anything suspect in the output of lsmod from my box?

Please can you test with older kernels (as noted above)?
 
> 1. Could you please provide the instructions/command that you followed to reboot into 1.3?

@hisaltesse: Please can you also test with older kernels? I don't think it is related to user space tools.

ftp://pve.proxmox.com/debian/dists/l...24-6_amd64.deb
ftp://pve.proxmox.com/debian/dists/l...24-8_amd64.deb
ftp://pve.proxmox.com/debian/dists/l...4-11_amd64.deb

Does one of those kernels work?
 
Dietmar,

How do I install a previous kernel? Do I just install the deb file and reboot? It would be helpful if you can provide the exact commands to run so we are all on the same page. Thanks.
 
> Dietmar,
>
> How do I install a previous kernel? Do I just install the deb file and reboot? It would be helpful if you can provide the exact commands to run so we are all on the same page. Thanks.

I guess the answer that appears to be very difficult for you to give is the following:

1. Download the kernel using the command:
Code:
wget <kernel ftp url provided above>

2. Run the command
Code:
dpkg -i <kernel file>
to install it.

3. Edit the file /boot/grub/menu.lst and change the order of the kernel entries at the bottom of the file, placing first the one that you want to boot automatically. However, even without editing this file, you will have a chance to select which kernel to boot at boot time.
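For reference, the relevant section of /boot/grub/menu.lst (GRUB legacy) looks roughly like the sketch below. The entries are illustrative only (kernel versions and the root= device are typical values, not taken from a real box):

```
# /boot/grub/menu.lst -- illustrative entries only
# "default" counts the title entries below starting at 0,
# so "default 1" boots the second entry
default         1
timeout         5

title           Proxmox VE, kernel 2.6.24-8-pve
root            (hd0,0)
kernel          /vmlinuz-2.6.24-8-pve root=/dev/pve/root ro
initrd          /initrd.img-2.6.24-8-pve

title           Proxmox VE, kernel 2.6.24-7-pve
root            (hd0,0)
kernel          /vmlinuz-2.6.24-7-pve root=/dev/pve/root ro
initrd          /initrd.img-2.6.24-7-pve
```

Either move the entry you want to the top, or change the "default" number to point at it.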

----
If you guys would provide the steps for your requested test, it would help make sure that everyone is on the same page and that the steps are reproduced properly.
 
> @hisaltesse: Please can you also test with older kernels? I don't think it is related to user space tools.
>
> ftp://pve.proxmox.com/debian/dists/l...24-6_amd64.deb
> ftp://pve.proxmox.com/debian/dists/l...24-8_amd64.deb
> ftp://pve.proxmox.com/debian/dists/l...4-11_amd64.deb
>
> Does one of those kernels work?

@Dietmar,

I will install the previous kernel pve-kernel-2.6.24-7-pve_2.6.24-11_amd64.deb and reboot the production server in about 5 hours, then observe the data and let you know.
 
I installed pve-kernel-2.6.24-7-pve_2.6.24-11_amd64.deb and let it run for about 3 hours, and still noticed load spikes....

I just installed pve-kernel-2.6.24-7-pve_2.6.24-8_amd64.deb and will let it run for a while and report back.
 
