I need some advice.

abdelhakeem | New Member | Aug 14, 2010 | Dubai, UAE
First of all, I'm very grateful for the work you have done on Proxmox and for the active community. I will try to contribute in the future as a gesture of appreciation.

We have been using Proxmox for almost 8 months now and have had some ups and downs with the system. Fortunately, all the downs were easy to fix with the help of comments from users in this community forum and the responses of the Proxmox support team. Overall it's working well.

We have four servers, and over the last few weeks the Windows VMs have not been performing well. So I searched the Proxmox forum and the net for similar experiences. Based on the experiences and adjustments other users reported, I took some actions to improve the servers, largely the same ones on most of them.
My question is whether the pveperf output below is acceptable. If not, what more can I do to make the setup more stable and better performing?
Thanks in advance.

> Server 1

Server Type: DL 380 G5
Processor: 2 x Quad Core 1.6 GHz
Memory: 10 GB
Raid Controller: HP Smart Array P400 rev B / 512MB BBWC (write cache is automatically enabled)
VMs:
01 - KVM - Windows Server 2003 Enterprise; Functions => Terminal Server

Actions Taken:

- Changed the RAID 5 configuration to RAID 10 (1+0)
- Updated the server's BIOS firmware
- Updated the HP Smart Array P400 firmware
- Set the Array Accelerator cache ratio to 50% read / 50% write
- Downgraded from the proxmox-ve-2.6.24 kernel to the proxmox-ve-2.6.18 kernel
- Changed the disk and network card to VirtIO
- Changed the I/O scheduler from [cfq] to [noop] on the host
- Increased the read-ahead on the host from 256 KB to 8192 KB (the last two changes are sketched below)
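
For reference, this is roughly how I applied the last two host-level changes; a minimal sketch, with /dev/sda standing in for the actual array device (adjust to your setup):
Code:
# switch the I/O scheduler for the array device from cfq to noop
echo noop > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/scheduler   # the active scheduler is shown in brackets

# raise the read-ahead; blockdev counts 512-byte sectors,
# so 8192 KB = 16384 sectors
blockdev --setra 16384 /dev/sda
blockdev --getra /dev/sda            # should now print 16384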

pveversion output:
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.18-2-pve
proxmox-ve-2.6.18: 1.5-5
pve-kernel-2.6.18-2-pve: 2.6.18-5
pve-kernel-2.6.18-1-pve: 2.6.18-4
qemu-server: 1.1-16
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-5

pveperf output:
CPU BOGOMIPS: 25601.40
REGEX/SECOND: 522164
HD SIZE: 19.69 GB (/dev/mapper/pve-root)
BUFFERED READS: 114.40 MB/sec
AVERAGE SEEK TIME: 4.61 ms
FSYNCS/SECOND: 1510.94
DNS EXT: 254.16 ms
DNS INT: 0.97 ms

> Server 2
Server Type: DL 380 G5
Processor: 1 x Quad Core 2.5 GHz
Memory: 6 GB
Raid Controller: HP Smart Array P400 rev E / 512MB BBWC (write cache is automatically enabled)
VMs:
01 - KVM - Windows Server 2003 Enterprise; Functions => Active Directory, SQL Server, Print Server
02 - OpenVZ - CentOS 5; Functions => DHCP Server, DNS Server
03 - KVM - Windows XP; Functions => Door Access Control Service

Actions Taken:
- Changed the RAID 5 configuration to RAID 10 (1+0)
- Updated the server's BIOS firmware
- Updated the HP Smart Array P400 firmware
- Set the Array Accelerator cache ratio to 50% read / 50% write (sketched below)
- Downgraded from the proxmox-ve-2.6.24 kernel to the proxmox-ve-2.6.18 kernel
- Changed the disk and network card to VirtIO
- Changed the I/O scheduler from [cfq] to [noop] on the host
- Increased the read-ahead on the host from 256 KB to 8192 KB
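
The cache-ratio change on the P400 was done with HP's hpacucli tool; a minimal sketch, assuming the controller sits in slot 0 (check the actual slot first):
Code:
# list the controllers and their slot numbers
hpacucli ctrl all show

# set the Array Accelerator (BBWC) ratio to 50% read / 50% write
hpacucli ctrl slot=0 modify cacheratio=50/50

# confirm the new ratio
hpacucli ctrl slot=0 show | grep -i cache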

pveversion output:
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.18-2-pve
proxmox-ve-2.6.18: 1.5-5
pve-kernel-2.6.18-2-pve: 2.6.18-5
pve-kernel-2.6.18-1-pve: 2.6.18-4
qemu-server: 1.1-16
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-5

pveperf output:
CPU BOGOMIPS: 20000.15
REGEX/SECOND: 805366
HD SIZE: 19.69 GB (/dev/mapper/pve-root)
BUFFERED READS: 166.84 MB/sec
AVERAGE SEEK TIME: 4.65 ms
FSYNCS/SECOND: 2051.38
DNS EXT: 2306.39 ms
DNS INT: 0.51 ms

> Server 3
Server Type: ML 350 G4
Processor: 1 x Dual Core 3.2 GHz
Memory: 5 GB
VMs:
01 - OpenVZ - CentOS 5; Functions => BackupPC Server
02 - OpenVZ - CentOS 5; Functions => Openthinclient.org Server
03 - OpenVZ - CentOS 5; Functions => Web Server (Apache, PHP, MySQL)

Actions Taken:

- Downgraded from the proxmox-ve-2.6.24 kernel to the proxmox-ve-2.6.18 kernel

pveversion output:
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.18-2-pve
proxmox-ve-2.6.18: 1.5-5
pve-kernel-2.6.18-2-pve: 2.6.18-5
pve-kernel-2.6.18-1-pve: 2.6.18-4
qemu-server: 1.1-16
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-5

pveperf output:
CPU BOGOMIPS: 12799.86
REGEX/SECOND: 563932
HD SIZE: 4.92 GB (/dev/mapper/pve-root)
BUFFERED READS: 37.89 MB/sec
AVERAGE SEEK TIME: 5.15 ms
FSYNCS/SECOND: 153.57
DNS EXT: 774.27 ms
DNS INT: 0.94 ms

> Server 4
Server Type: ML 350 G5
Raid Controller: HP Smart Array E200 / 128MB BBWC (write cache is automatically enabled)
Processor: 1 x Quad Core 1.6 GHz
Memory: 6 GB
VMs:
01 - KVM - CentOS (rPath); Functions => OpenFiler

Actions Taken:

- Set the Array Accelerator cache ratio to 50% read / 50% write
- Changed the I/O scheduler from [cfq] to [noop] on the host
- Increased the read-ahead on the host from 256 KB to 8192 KB (see below for keeping these settings across reboots)
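
The scheduler and read-ahead changes do not survive a reboot by themselves. One way to make them persistent; a sketch, assuming GRUB legacy and an executable /etc/rc.local, as on these Debian-based PVE 1.x hosts:
Code:
# /boot/grub/menu.lst: append elevator=noop to the kernel line, e.g.
#   kernel /vmlinuz-2.6.24-11-pve root=/dev/mapper/pve-root ro elevator=noop
# so noop becomes the default scheduler at boot

# /etc/rc.local: re-apply the read-ahead at every boot
# (16384 x 512-byte sectors = 8192 KB; adjust the device name)
blockdev --setra 16384 /dev/sda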

pveversion output:
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.24-11-pve
proxmox-ve-2.6.24: 1.5-23
pve-kernel-2.6.24-11-pve: 2.6.24-23
pve-kernel-2.6.18-1-pve: 2.6.18-4
qemu-server: 1.1-16
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.4-1

pveperf output:
CPU BOGOMIPS: 12800.56
REGEX/SECOND: 533661
HD SIZE: 4.92 GB (/dev/mapper/pve-root)
BUFFERED READS: 109.71 MB/sec
AVERAGE SEEK TIME: 5.00 ms
FSYNCS/SECOND: 364.65
DNS EXT: 781.43 ms
DNS INT: 0.82 ms

I must say it is quite stable at the moment, but I have a feeling I haven't yet gotten the best out of it in terms of both stability and performance.

Your response and advice would be much appreciated,
Abdelhakeem
 
Hi,
you wrote that you have a RAID controller with RAID 10, so at least four drives. The seek times of 4.x ms show that you use SAS drives.
The read performance is very low for this configuration (114 MB/s). With four SAS drives in RAID 10 I get roughly four times better values:
Code:
proxmox1:~# pveperf /var/lib/vz
CPU BOGOMIPS:      27291.73
REGEX/SECOND:      1091555
HD SIZE:           543.34 GB (/dev/mapper/pve-data)
BUFFERED READS:    487.82 MB/sec
AVERAGE SEEK TIME: 5.50 ms
FSYNCS/SECOND:     5851.18
DNS EXT:           63.39 ms
DNS INT:           0.50 ms

Perhaps you can try a faster RAID controller? I/O matters a lot for virtualisation.

Udo
 
Hi Udo,

Thanks for your advice. I agree with you; I also think the RAID controller is not that good. I've read in several posts on the net that the HP Smart Array P400 is not really that fantastic. I will consider changing the RAID controller as well. Are there any RAID controllers you have had good experience with and can recommend?

There is also one thing I've read on the net: besides the automatically enabled write cache of the RAID controller, it is possible to enable the write cache of the drives themselves. It is not advised, but it boosts performance; at least that's what some say based on their experience. The only condition for making sure all data is written to the drives in time is to have a UPS in case of a power failure.
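
From what I've read, this drive write cache can be toggled on the P400 with hpacucli; a sketch, assuming slot 0, and only to illustrate the (unsafe) setting being discussed:
Code:
# enable the write cache on the physical drives themselves
# (risks data loss on power failure - the BBWC does not protect this cache)
hpacucli ctrl slot=0 modify drivewritecache=enable

# and to turn it back off
hpacucli ctrl slot=0 modify drivewritecache=disable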

If I enable more write cache, will that also affect read performance, or does this caching only apply to writes?

Furthermore, there are some other possible actions, like changing the Array Accelerator ratio from 50% read / 50% write to 100% read / 0% write (at least for the host running only the Terminal Server). I could also adjust the read-ahead again, to more or even to less, since the default was 256 KB; I remember seeing buffered reads of around 300 MB/s before I changed to RAID 10 and made all the adjustments mentioned above. A simple way to compare such settings is sketched below.
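
One way I could compare read-ahead settings; a sketch, assuming the data volume is on /dev/sda (blockdev counts 512-byte sectors, so 16384 = 8192 KB):
Code:
# try several read-ahead values and note the buffered-read rate for each
for ra in 512 2048 8192 16384; do
    blockdev --setra $ra /dev/sda
    echo "read-ahead: $ra sectors"
    pveperf /var/lib/vz | grep 'BUFFERED READS'
done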

What do you think?
 
Hi Abdelhakeem,

> Are there any RAID controllers you have had good experience with and can recommend?

I have had good experiences with Areca RAID controllers like this one: http://www.areca.com.tw/products/pcietosas01.htm - the Proxmox team prefers Adaptec.

> ...it is possible to enable the write cache of the drives themselves... The only condition for making sure all data is written to the drives in time is to have a UPS in case of a power failure.

Whether it is possible to enable the (unsafe) disk cache depends on the RAID controller. A UPS is not enough, because if another part breaks (power supply, motherboard, ...) you still get data loss. And the disk caches are small in comparison to the RAID cache (e.g. 256 MB versus 16 MB), so the gain will not be much (I guess).

> ...changing the Array Accelerator ratio from 50% read / 50% write to 100% read / 0% write... What do you think?

I think that depends on your usage, but I can't believe that 100% read cache is a good configuration. I have not tested what the cache usage looks like on my production RAID controller, though (I am not sure I can even access the controller's cache-usage statistics...).

Udo
 
Thanks Udo for your prompt feedback!

I think your advice to acquire new and better RAID controllers is the way to go, because I don't see any other action at this moment that would improve the system. Using the VirtIO disk and VirtIO network card together with the proxmox-ve-2.6.18 kernel has already given it good stability.

Thanks again.

Abdelhakeem
 
