Hi @all.
I have I/O problems with my Proxmox server. Whenever a single VM has an I/O peak, the I/O wait of the whole system increases dramatically.
At the beginning there was only one VM, so I cannot say whether things got slower with the Proxmox upgrades (starting from 3.0) or with more VMs being added.
Now the system is extremely slow most of the time, so I dug a bit deeper and found:
#1 Software RAID is not supported - damn - I did not realize that at the beginning.
#2 ext4 sometimes has poor performance.
#3 My fsyncs/second values are abysmal.
#1 is bad, but I cannot change it right away (it is an externally hosted server), so I hope it is not the main reason for the bad I/O performance.
Regarding #2, ext4 seems slower than ext3, but not by as much as I am seeing here, right?
The main reason I found for #3 was wrong disk alignment, so I checked it and got conflicting results:
parted says the alignment should be OK:
Code:
root@abba:/# parted -s /dev/sda unit s print
Model: ATA TOSHIBA DT01ACA3 (scsi)
Disk /dev/sda: 5860533168s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Number  Start    End          Size         File system  Name  Flags
 3      2048s    4095s        2048s                            bios_grub
 1      4096s    528383s      524288s                          raid
 2      528384s  5860533134s  5860004751s                      raid
The built-in alignment check was also successful. But with fdisk the alignment seems wrong:
Code:
root@abba:/# fdisk -c -u -l /dev/sda
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
256 heads, 63 sectors/track, 363376 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.
Maybe that is because fdisk does not support GPT? Which of the two results is correct?
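For what it is worth, the alignment can also be checked independently of both tools. This is just a sketch; it assumes 512-byte logical / 4096-byte physical sectors as shown above, so a start sector divisible by 8 means the partition is 4k-aligned:
Code:
# read the start sector and the kernel-reported alignment offset of each partition from sysfs;
# remainder 0 (start % 8) and alignment_offset 0 both indicate proper 4k alignment
for p in /sys/block/sda/sda[0-9]*; do
    start=$(cat "$p/start")
    echo "$(basename "$p"): start=$start remainder=$((start % 8)) alignment_offset=$(cat "$p/alignment_offset")"
done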
Here is my pveperf output:
With no VM running:
Code:
root@abba:/# pveperf
CPU BOGOMIPS: 54397.28
REGEX/SECOND: 1734493
HD SIZE: 4.96 GB (/dev/mapper/vg0-abba_root)
BUFFERED READS: 179.83 MB/sec
AVERAGE SEEK TIME: 6.55 ms
FSYNCS/SECOND: 531.51
DNS EXT: 64.11 ms
With some VMs running, but no traffic or workload on them:
Code:
root@abba:/# pveperf
CPU BOGOMIPS: 54402.00
REGEX/SECOND: 1581485
HD SIZE: 4.96 GB (/dev/mapper/vg0-abba_root)
BUFFERED READS: 174.91 MB/sec
AVERAGE SEEK TIME: 9.88 ms
FSYNCS/SECOND: 9.86
DNS EXT: 57.10 ms
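For comparison, a rough synchronous-write probe like the one below should show the same collapse outside of pveperf. This is only a sketch, not the test pveperf itself runs, and the file path is just an example:
Code:
# write 500 blocks of 4k, flushing each one to disk before the next (oflag=dsync);
# the reported transfer rate roughly tracks how many synced writes per second the disk sustains
dd if=/dev/zero of=/root/sync-test.img bs=4k count=500 oflag=dsync
rm -f /root/sync-test.img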
Here is my version dump:
Code:
root@abba:/# pveversion -v
proxmox-ve-2.6.32: 3.3-138 (running kernel: 2.6.32-20-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Here are my running VMs (only one is really used; the others see very little traffic):
Code:
root@abba:/# qm list|grep running
VMID  NAME  STATUS   MEM(MB)  BOOTDISK(GB)  PID
 100  XXXX  running     6144          4.00  8241
 101  XXXX  running      512          1.00  8409
 103  XXXX  running     1024          5.00  8548
 104  XXXX  running      768          1.00  8646
 107  XXXX  running     3072          1.00  8742
 108  XXXX  running      512          1.00  10803
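In case the VM disk settings matter, a loop like the one below should dump the disk lines (including any cache= option) of all running VMs. This is only a sketch; it assumes the usual ide/sata/scsi/virtio disk keys in the qm config output:
Code:
# list the disk definitions (and cache mode, if set) of every running VM
for id in $(qm list | awk '$3 == "running" {print $1}'); do
    echo "== VM $id =="
    qm config "$id" | grep -E '^(ide|sata|scsi|virtio)[0-9]+:'
done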
Can someone help me to find a possible reason for this?
:thorsten