Odd performance issues

carlosmp

Renowned Member
Jun 2, 2010
I've installed Proxmox 2.3 on a pair of nodes in a Dell C6100. Each node is a dual L5520 with 48GB RAM and 2x500GB RE3 drives in RAID1 (mdadm) (just to boot - it's what I had lying around unused), with a FreeNAS server for storage of images, private areas, templates, etc. Performance on both servers is very "lagged" - there's no other way to describe it. To reboot a guest, I need to issue two reboot commands before the second one finally takes, and yum updates on the containers (CentOS 5.9, 6.3) crawl as well.

top shows normal load averages on the nodes:
node01 - 0.07, 0.12, 0.23
node02 - 0.20, 0.28, 0.36

iowait (wa in top) hasn't gone above 0.2% whenever I've watched.
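
For a closer look at per-device latency, extended iostat output is worth watching (a sketch; iostat comes from the sysstat package, which may need installing first):

Code:
# refresh extended device stats every 2 seconds;
# high await/%util on the md members would point at local disk,
# while clean local stats push suspicion toward the NFS path
iostat -x 2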

dd speed tests to the NFS storage show 95-105 MB/s (roughly wire speed for a single gigabit link), using dd if=/dev/zero of=/mnt/pve/fn18/test.dd bs=2M count=50k
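
One caveat with that test: writing zeros without a sync mostly measures the page cache. A variant that forces the data to disk before dd reports (a sketch, using the same test path as above):

Code:
# conv=fdatasync makes dd fsync the file before printing throughput,
# so the figure reflects NFS + FreeNAS write speed rather than RAM
dd if=/dev/zero of=/mnt/pve/fn18/test.dd bs=2M count=5k conv=fdatasync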

Storage is on a separate network, using 4 bonded gigabit interfaces in 802.3ad mode (with HP ProCurve 2848 switches detecting/building the dynamic LACP links), and the same on the FreeNAS side.
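
To confirm the bond actually negotiated LACP rather than falling back (a sketch, assuming the stock Linux bonding driver):

Code:
# shows the bonding mode, LACP aggregator info, and per-slave state;
# all four slaves should share one aggregator ID with MII status "up"
cat /proc/net/bonding/bond0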

I've updated all nodes to make sure it wasn't a kernel issue. I've also tried moving guests onto one node to see if performance improves and to isolate a "bad" guest, but there is no noticeable difference.

Any and all help is greatly appreciated.

/etc/pve/storage.cfg (the backup target is on a separate interface so backups don't saturate the primary link)
Code:
nfs: fn18stor-primary
        path /mnt/pve/fn18stor-primary
        server 172.30.31.10
        export /mnt/volume01/pvestorage
        options timeo=20,retrans=5,rsize=8192,wsize=8192,intr,vers=4,proto=tcp
        content images,iso,vztmpl,rootdir
        maxfiles 1


dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0


nfs: pvebackup
        path /mnt/pve/pvebackup
        server 172.30.30.18
        export /mnt/volume01/pvebackup
        options vers=3
        content backup
        maxfiles 7
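
The rsize/wsize of 8192 is on the small side for NFSv4 over gigabit, and the server can silently negotiate different values anyway. To see what the mounts actually ended up with (a sketch; nfsstat ships in Debian's nfs-common package):

Code:
# prints the live mount options for each NFS mount, including the
# rsize/wsize the client and server actually agreed on
nfsstat -m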

node01 - pveperf
Code:
CPU BOGOMIPS:      72527.60
REGEX/SECOND:      859928
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    108.97 MB/sec
AVERAGE SEEK TIME: 8.77 ms
FSYNCS/SECOND:     741.86
DNS EXT:           17.96 ms
DNS INT:           8.34 ms
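
Those pveperf numbers are against the local md RAID1 (pve-root), not the shared storage. pveperf also accepts a path, so the same fsync test can be pointed at the NFS mount (a sketch; path taken from storage.cfg above):

Code:
# fsyncs/second here reflect NFS + FreeNAS commit latency,
# which is closer to what guest operations actually feel
pveperf /mnt/pve/fn18stor-primary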

node01 - pveversion -v
Code:
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-93
pve-kernel-2.6.32-19-pve: 2.6.32-93
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-18
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-6
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-8
ksm-control-daemon: 1.1-1

node02 - pveperf
Code:
CPU BOGOMIPS:      72526.00
REGEX/SECOND:      877240
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    112.10 MB/sec
AVERAGE SEEK TIME: 8.66 ms
FSYNCS/SECOND:     703.91
DNS EXT:           18.11 ms
DNS INT:           6.19 ms

node02 - pveversion -v
Code:
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-93
pve-kernel-2.6.32-19-pve: 2.6.32-93
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-18
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-6
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-8
ksm-control-daemon: 1.1-1

/etc/network/interfaces (both nodes are identical except for the IPs on vmbr0 and bond0)
Code:
auto eth1.20
iface eth1.20 inet manual
        mtu 9000


auto eth1.21
iface eth1.21 inet manual
        mtu 9000


auto eth1.22
iface eth1.22 inet manual
        mtu 9000


auto eth1.23
iface eth1.23 inet manual
        mtu 9000


auto eth1.27
iface eth1.27 inet manual
        mtu 9000


auto eth1.29
iface eth1.29 inet manual
        mtu 9000


auto eth1.30
iface eth1.30 inet manual
        mtu 9000


auto lo
iface lo inet loopback


auto eth0
iface eth0 inet manual
        mtu 9000


auto eth1
iface eth1 inet manual
        mtu 9000


auto eth2
iface eth2 inet manual
        mtu 9000


auto eth3
iface eth3 inet manual
        mtu 9000


auto eth4
iface eth4 inet manual
        mtu 9000


auto eth5
iface eth5 inet manual
        mtu 9000


auto bond0
iface bond0 inet static
        address  172.30.31.20
        netmask  255.255.255.0
        slaves eth2 eth3 eth4 eth5
        bond_miimon 100
        bond_mode 802.3ad


auto vmbr0
iface vmbr0 inet static
        address  172.30.30.90
        netmask  255.255.255.0
        gateway  172.30.30.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        pre-up ifconfig bond0 mtu 9000


auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0


auto vmbr20
iface vmbr20 inet manual
        bridge_ports eth1.20
        bridge_stp off
        bridge_fd 0


auto vmbr21
iface vmbr21 inet manual
        bridge_ports eth1.21
        bridge_stp off
        bridge_fd 0


auto vmbr22
iface vmbr22 inet manual
        bridge_ports eth1.22
        bridge_stp off
        bridge_fd 0


auto vmbr23
iface vmbr23 inet manual
        bridge_ports eth1.23
        bridge_stp off
        bridge_fd 0


auto vmbr27
iface vmbr27 inet manual
        bridge_ports eth1.27
        bridge_stp off
        bridge_fd 0


auto vmbr30
iface vmbr30 inet manual
        bridge_ports eth1.30
        bridge_stp off
        bridge_fd 0


auto vmbr29
iface vmbr29 inet manual
        bridge_ports eth1.29
        bridge_stp off
        bridge_fd 0
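
Since everything above is set to MTU 9000, one more thing worth checking: a single switch port or the FreeNAS side still at 1500 causes exactly this kind of stall-and-retry behavior on NFS. A do-not-fragment ping at full jumbo size verifies the path end to end (a sketch; 8972 = 9000 minus 28 bytes of IP/ICMP headers, aimed at the storage IP):

Code:
# -M do forbids fragmentation; if any hop drops jumbo frames,
# this fails visibly instead of silently fragmenting
ping -M do -s 8972 -c 4 172.30.31.10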
 
