Slowness in all Proxmox VMs

nourmail

Renowned Member
Mar 6, 2013
Hello,


I am experiencing severe slowness in my Proxmox VMs, especially when accessing data and/or writing to the hard drives.


Proxmox is installed on physical IBM System x3550 M4 servers, and virtualization support is enabled in the BIOS and in the VMs.


I use 4 hard drives configured in RAID5 to store my VMs.


My configuration is as follows:

root@proxmox01:/# pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


=======================================

root@proxmox01:/# cat /etc/pve/nodes/proxmox01/qemu-server/106.conf
balloon: 512
bootdisk: ide0
cores: 4
ide0: Linux:106/vm-106-disk-1.vmdk,format=vmdk,size=100G
ide1: Linux:106/vm-106-disk-2.vmdk,format=vmdk,size=200G
ide2: Linux:iso/CentOS-7.0-1406-x86_64-DVD.iso,media=cdrom
memory: 5800
name: SRV-MAIL-CentOS7.0-IP20.26
net0: e1000=7A:92:BB:F4:2B:50,bridge=vmbr1
ostype: other
smbios1: uuid=79e0b60a-43b9-47d3-b7b5-281bb8c1b2cf
sockets: 1
root@proxmox01:/#
 
Hi,
I assume your VMs generate more IO than your storage can handle.

Check with iostat (e.g. "iostat -dm 5 /dev/sdb") to see how many IOs and how much throughput you reach. iostat is part of the sysstat package ("apt-get install sysstat").
You can also run "pveperf /mountpoint/of/storage" during idle time and during the "slowness" to compare the speed and the fsyncs.
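
A minimal sketch of those checks, assuming the RAID5 volume shows up as /dev/sdb and the VM storage is mounted at /var/lib/vz (adjust both to your setup):
Code:
# install sysstat to get iostat
apt-get install sysstat

# show IOs and MB/s on the RAID volume, refreshed every 5 seconds
iostat -dm 5 /dev/sdb

# measure buffered reads, seek time and fsyncs/second on the VM storage
pveperf /var/lib/vz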

What kind of RAID controller do you use? A fast one?

Udo
 
Hello,

Please find attached the output of the two commands (iostat and pveperf) in normal mode and in slowness mode.

The server's RAID controller is the integrated 6 Gbps hardware controller, and the storage that holds the VMs is a RAID5 volume.

Thanks for your help.
 

Attachments

  • iostat_normal_mode.png
  • iostat_slowness_mode.png
  • pveperf_normal_mode.png
  • pveperf_slowness_mode.png
Hi,

Please find attached the screenshot (pveperf.png), which shows:
- The storage used
- The output of the pveperf command in normal mode (without any activity on the VM)
- The output of the pveperf command when the slowness appears

Thank you for your help.
 

Attachments

  • pveperf.png
Hi,

I think it is simple... activate Force Write Back on your RAID controller / RAID volume (in the controller BIOS) and you should be fine. You should also consider NOT using RAID5 for virtualization; RAID10 is what brings the IO performance.
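
If the box has the LSI-based ServeRAID controller that the x3550 M4 usually ships with (an assumption; use whatever tool matches your controller, or change it in the controller BIOS), the cache policy can typically be checked and changed from the OS with MegaCli:
Code:
# show the current cache policy of every logical drive
# (assumes an LSI/ServeRAID controller and the MegaCli64 tool installed)
MegaCli64 -LDGetProp -Cache -LAll -aAll

# switch the logical drive(s) to Write Back - only do this with a working BBU,
# otherwise a power loss can cost you the data still sitting in the cache
MegaCli64 -LDSetProp WB -LAll -aAll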
 
Hi,
macday already wrote that you should enable the write cache on your RAID controller. Normally you need (or should use) a BBU to avoid data loss during a power failure / kernel panic and so on.

Some embedded RAID controllers are slow and crappy, even on servers. I had LSI crapware on Sun servers that was very slow...

Fsyncs should be more than 1000!

For comparison, these are my values during normal operation (but with 6*SAS drives in RAID10 on an Areca RAID controller):
Code:
pveperf /mnt/local_pve
CPU BOGOMIPS:      69349.36
REGEX/SECOND:      1444597
HD SIZE:           543.34 GB (/dev/mapper/pve_local-data)
BUFFERED READS:    470.46 MB/sec
AVERAGE SEEK TIME: 5.48 ms
FSYNCS/SECOND:     4462.47
Udo