First of all, I'm very grateful for the work you have done on Proxmox and also for the active community. I will try to contribute in the future as a gesture of appreciation.
We have been using Proxmox for almost 8 months now and have had some ups and downs with the system. Fortunately, all the downs were easy to fix with the help of many comments from users on this community forum and the responses of the Proxmox support team. Overall it's working well.
We have four servers, and over the last few weeks the Windows VMs have not been working that well. So I started searching the Proxmox forum and the net for similar experiences. Based on the experiences and adjustments other users reported, I took some actions to improve the servers, almost the same ones for most of them.
My question is whether you can advise if the pveperf output below is acceptable or not. If not, what more can I do to make the setup more stable and better performing?
Thanks in advance.
> Server 1
Server Type: DL 380 G5
Processor: 2 x Quad Core 1.6 GHz
Memory: 10 GB
RAID Controller: HP Smart Array P400 rev B / 512 MB BBWC (write cache is automatically enabled)
VMs:
01 - KVM - Windows Server 2003 Enterprise; Functions => Terminal Server
Actions Taken:
- Changed the RAID 5 configuration to RAID 10 (1+0)
- Updated the server's BIOS firmware
- Updated the HP Smart Array P400 firmware
- Set the Array Acceleration to 50% read / 50% write
- Downgraded from the proxmox-ve-2.6.24 kernel to the proxmox-ve-2.6.18 kernel
- Changed the disk and network card to VirtIO
- Changed the I/O scheduler from cfq to noop on the host
- Increased the read-ahead cache on the host from 256 KB to 8192 KB
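For reference, the scheduler and read-ahead changes on the host were applied roughly as follows (a sketch, assuming the Smart Array volume shows up as /dev/cciss/c0d0, which is typical for the P400 on the 2.6.18 kernel; adjust the device name to your system):

```shell
# Switch the I/O scheduler for the Smart Array volume from cfq to noop
# (sysfs replaces '/' in the device name with '!')
echo noop > /sys/block/cciss\!c0d0/queue/scheduler

# Raise read-ahead from 256 KB to 8192 KB; blockdev counts in
# 512-byte sectors, so 8192 KB = 16384 sectors
blockdev --setra 16384 /dev/cciss/c0d0

# Verify both settings
cat /sys/block/cciss\!c0d0/queue/scheduler
blockdev --getra /dev/cciss/c0d0
```

Note that neither setting survives a reboot on its own; I re-apply them from /etc/rc.local.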
pveversion output:
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.18-2-pve
proxmox-ve-2.6.18: 1.5-5
pve-kernel-2.6.18-2-pve: 2.6.18-5
pve-kernel-2.6.18-1-pve: 2.6.18-4
qemu-server: 1.1-16
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-5
pveperf output:
CPU BOGOMIPS: 25601.40
REGEX/SECOND: 522164
HD SIZE: 19.69 GB (/dev/mapper/pve-root)
BUFFERED READS: 114.40 MB/sec
AVERAGE SEEK TIME: 4.61 ms
FSYNCS/SECOND: 1510.94
DNS EXT: 254.16 ms
DNS INT: 0.97 ms
> Server 2
Server Type: DL 380 G5
Processor: 1 x Quad Core 2.5 GHz
Memory: 6 GB
RAID Controller: HP Smart Array P400 rev E / 512 MB BBWC (write cache is automatically enabled)
VMs:
01 - KVM - Windows Server 2003 Enterprise; Functions => Active Directory, SQL Server, Print Server
02 - OpenVZ - CentOS 5; Functions => DHCP Server, DNS Server
03 - KVM - Windows XP; Functions => Door Access Control Service
Actions Taken:
- Changed the RAID 5 configuration to RAID 10 (1+0)
- Updated the server's BIOS firmware
- Updated the HP Smart Array P400 firmware
- Set the Array Acceleration to 50% read / 50% write
- Downgraded from the proxmox-ve-2.6.24 kernel to the proxmox-ve-2.6.18 kernel
- Changed the disk and network card to VirtIO
- Changed the I/O scheduler from cfq to noop on the host
- Increased the read-ahead cache on the host from 256 KB to 8192 KB
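The Array Acceleration (cache ratio) change was made with HP's hpacucli tool from the host (a sketch, assuming the controller is in slot 0; run `hpacucli ctrl all show` first to find the actual slot on your machine):

```shell
# Show the current controller configuration, including the cache ratio
hpacucli ctrl slot=0 show

# Split the 512 MB BBWC evenly between read and write cache
hpacucli ctrl slot=0 modify cacheratio=50/50
```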
pveversion output:
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.18-2-pve
proxmox-ve-2.6.18: 1.5-5
pve-kernel-2.6.18-2-pve: 2.6.18-5
pve-kernel-2.6.18-1-pve: 2.6.18-4
qemu-server: 1.1-16
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-5
pveperf output:
CPU BOGOMIPS: 20000.15
REGEX/SECOND: 805366
HD SIZE: 19.69 GB (/dev/mapper/pve-root)
BUFFERED READS: 166.84 MB/sec
AVERAGE SEEK TIME: 4.65 ms
FSYNCS/SECOND: 2051.38
DNS EXT: 2306.39 ms
DNS INT: 0.51 ms
> Server 3
Server Type: ML 350 G4
Processor: 1 x Dual Core 3.2 GHz
Memory: 5 GB
VMs:
01 - OpenVZ - CentOS 5; Functions => BackupPC Server
02 - OpenVZ - CentOS 5; Functions => Openthinclient.org Server
03 - OpenVZ - CentOS 5; Functions => Webserver (Apache, PHP, MySQL)
Actions Taken:
- Downgraded from proxmox-ve-2.6.24 kernel to proxmox-ve-2.6.18 kernel
pveversion output:
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.18-2-pve
proxmox-ve-2.6.18: 1.5-5
pve-kernel-2.6.18-2-pve: 2.6.18-5
pve-kernel-2.6.18-1-pve: 2.6.18-4
qemu-server: 1.1-16
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-5
pveperf output:
CPU BOGOMIPS: 12799.86
REGEX/SECOND: 563932
HD SIZE: 4.92 GB (/dev/mapper/pve-root)
BUFFERED READS: 37.89 MB/sec
AVERAGE SEEK TIME: 5.15 ms
FSYNCS/SECOND: 153.57
DNS EXT: 774.27 ms
DNS INT: 0.94 ms
> Server 4
Server Type: ML 350 G5
RAID Controller: HP Smart Array E200 / 128 MB BBWC (write cache is automatically enabled)
Processor: 1 x Quad Core 1.6 GHz
Memory: 6 GB
VMs:
01 - KVM - CentOS (rPath); Functions => OpenFiler
Actions Taken:
- Set the Array Acceleration to 50% read / 50% write
- Changed the I/O scheduler from cfq to noop on the host
- Increased the read-ahead cache on the host from 256 KB to 8192 KB
pveversion output:
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.24-11-pve
proxmox-ve-2.6.24: 1.5-23
pve-kernel-2.6.24-11-pve: 2.6.24-23
pve-kernel-2.6.18-1-pve: 2.6.18-4
qemu-server: 1.1-16
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.4-1
pveperf output:
CPU BOGOMIPS: 12800.56
REGEX/SECOND: 533661
HD SIZE: 4.92 GB (/dev/mapper/pve-root)
BUFFERED READS: 109.71 MB/sec
AVERAGE SEEK TIME: 5.00 ms
FSYNCS/SECOND: 364.65
DNS EXT: 781.43 ms
DNS INT: 0.82 ms
I must say the setup is quite stable at the moment, but I have the feeling that I haven't yet gotten the best out of it, in terms of both stability and performance.
Your response and advice would be much appreciated,
Abdelhakeem