Problem upgrading from VE 2.1 to VE 2.3

rtapp00 (Guest)
Hi guys, new to the forum.

Recently I upgraded my Proxmox host from VE 2.1 to the most recent release, VE 2.3-13/7946f1f1. After doing so I noticed that the I/O performance on all of my VMs went south. I'm currently using an NFS share on my FreeNAS box for all VM storage needs and never had performance issues before. At first I thought maybe my gigabit switch was to blame, or a bad NIC, but I have since ruled both out. Can anyone shed some light on this issue? Maybe I missed a step during the move to the latest build.

Any help is greatly appreciated :cool:
 
Post the output of 'pveversion -v'.
 
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-93
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-19-pve: 2.6.32-93
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-18
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-6
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-8
ksm-control-daemon: 1.1-1
 
Tom -

Thanks for the advice, but all the VMs on this host are still having I/O latency issues after updating to the latest kernel. When I try to install any OS on a new VM created after the upgrade to 2.3, the VM either reboots or the virtual disk times out; this holds true for both local and NFS storage. I did notice I have two different versions of the pve kernel on the host, which I assume is normal? Is there a way I can go back to the old kernel that was working, at least for now? What else can I troubleshoot to find the root cause of this issue? Thanks again for the help!
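For reference, booting the previous kernel on a PVE 2.x host is a matter of picking the older GRUB entry; a sketch, assuming the Debian GRUB 2 setup that PVE 2.x ships with (the entry index is an assumption — check the menuentry titles in /boot/grub/grub.cfg for your host):

```shell
# See which pve kernels are installed and how GRUB lists them
dpkg -l 'pve-kernel-*'
grep ^menuentry /boot/grub/grub.cfg

# One-time boot of an older entry needs GRUB_DEFAULT=saved
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=saved/' /etc/default/grub
update-grub

# Boot entry 2 (assumed to be 2.6.32-11-pve; verify against grub.cfg)
# on the next reboot only, then fall back to the saved default
grub-reboot 2
reboot
```

This leaves both kernels installed, so you can switch back and forth while testing.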


pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-95
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-19-pve: 2.6.32-95
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1
 
What I/O hardware do you have? Show the results of pveperf, a small benchmark tool. And:


  • post the VM settings (qm config VMID)
  • do you run ext3 or ext4 on local storage?
  • which I/O scheduler do you run? the default cfq?
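Checking the active I/O scheduler is a read-only operation; a quick sketch (device names will differ per host):

```shell
# The scheduler shown in [brackets] is the active one,
# e.g. "noop deadline [cfq]"
for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] || continue
    printf '%s: %s\n' "$f" "$(cat "$f")"
done
```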
 
Tom -

I have a dedicated network to my storage server, a FreeNAS 8 box running the NFS share where I store all my VM hard disk files. That server has not had any recent updates and hosts other NFS shares with little latency.

I did run pveperf, but that utility only seems to benchmark the local drive, which I have also used to try to narrow down the latency issue (I created a vmdk file on the local disk and still hit the timeout issue). I have included the qm config for that test VM below.

As for the I/O scheduler, I assume it is the default, since I have not adjusted any settings in that area. On a side note, I tried changing the test VM's disk controller to see if that fixes the issue, but got the same result (very high latency with both Windows and Linux guests).

PVEPERF:

CPU BOGOMIPS: 39733.68
REGEX/SECOND: 1375126
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 115.63 MB/sec
AVERAGE SEEK TIME: 10.76 ms
FSYNCS/SECOND: 900.63
DNS EXT: 107.63 ms
DNS INT: 0.60 ms
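Since pveperf only exercises the root volume, a raw synchronous write against the NFS mount can help separate host-side NFS latency from a guest problem. A sketch; the NFS path below is an assumption based on Proxmox's default mount point for a storage named VMSTRG1, so adjust it to your setup:

```shell
# Defaults to /tmp so this is safe to dry-run locally;
# point TARGET at the NFS mount to test the share itself,
# e.g. TARGET=/mnt/pve/VMSTRG1 (assumed path)
TARGET="${TARGET:-/tmp}"

# 64 MB sequential write; conv=fdatasync forces the data to disk
# before dd reports its rate, so caching can't flatter the number
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=64 conv=fdatasync
rm -f "$TARGET/ddtest.bin"
```

If the rate against the NFS mount is far below what the FreeNAS box can do locally, the problem is on the host/network side rather than inside the guests.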

QM CONFIG:

root@Nitrogen:~# qm config 300
balloon: 512
bootdisk: sata0
cores: 4
ide0: VMSTRG1:300/vm-300-disk-1.qcow2,size=32G
ide2: local:iso/pbxinaflash20624-i386.iso,media=cdrom,size=685508K
memory: 1024
name: testvm
net0: rtl8139=8E:F5:50:C6:64:1D,bridge=vmbr2
ostype: win7
scsihw: virtio-scsi-pci
sockets: 1
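One thing worth trying from the config above: the disk is attached as IDE and the NIC is an emulated rtl8139, which are the slowest options QEMU offers; switching both to virtio often helps noticeably. A sketch using the qm tool, with the VM ID, volume, and MAC taken from the config above (note that Windows guests need the virtio drivers loaded before the disk will be visible):

```shell
# Re-attach the existing disk as virtio instead of IDE, and boot from it
qm set 300 --virtio0 VMSTRG1:300/vm-300-disk-1.qcow2
qm set 300 --bootdisk virtio0

# virtio NIC: Linux guests support it out of the box,
# Windows needs the virtio-win driver ISO
qm set 300 --net0 virtio=8E:F5:50:C6:64:1D,bridge=vmbr2
```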
 
