Storage-related performance issues

sourceminer

Hey guys, first post here. I have been using Proxmox for about 4 months now, switched from VMware.
Pretty sweet system with one exception... the performance data is kind of lacking.

I am presently experiencing high IO wait times, 50% plus.
Current config:
3 hard drives:
VM 101 on one hard drive (separate spindle for the VM)
VM 102 on an SSD (separate drive for the VM)
System on a separate drive.

My system grinds to a halt specifically when I am logging into my FreePBX VM (on the SSD).
I hear the heads churning away and I have no real tool to figure out why.

If my FreePBX is running on the SSD, I should not hear any racket... but I do.
Why? How can I troubleshoot the high IO?
pveperf doesn't seem to tell me anything.

The dashboards are lacking when it comes to showing which hard drive is causing the high IO. (Feature request??)
 
My system grinds to a halt specifically when I am logging into my FreePBX VM (on the SSD).
I hear the heads churning away and I have no real tool to figure out why.
Not sure how you can hear it churning, since SSDs don't have any moving parts. Did you mean a hybrid HDD?

Why? How can I troubleshoot the high IO?
pveperf doesn't seem to tell me anything.
Without details of the Proxmox node specs and the resources allocated to the VM, it is hard to say why the I/O is high. Maybe too much swapping?
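A quick way to check both, assuming the sysstat package is installed on the node (apt-get install sysstat; vmstat itself ships with procps):
Code:
# watch the si/so columns; sustained swap-in/swap-out means the host is thrashing
vmstat 1 5
# extended per-device stats; high %util and await point at the saturated disk
iostat -x 1
iostat also answers the per-drive question from the first post, since it reports each physical device separately.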
 
Not sure how you can hear it churning, since SSDs don't have any moving parts. Did you mean a hybrid HDD?


Without details of the Proxmox node specs and the resources allocated to the VM, it is hard to say why the I/O is high. Maybe too much swapping?

Here is my version:
pve-manager/3.3-1/a06c9f73 (running kernel: 2.6.32-32-pve)

Is there a command I can run to give you these details?

Here is a vztop snapshot:


13:38:51 up 43 days, 23:31, 1 user, load average: 0.35, 0.34, 0.20
142 processes: 139 sleeping, 3 running, 0 zombie, 0 stopped
CPU0 states: 6.4% user 6.3% system 0.0% nice 0.0% iowait 86.1% idle
CPU1 states: 8.1% user 4.3% system 0.0% nice 0.0% iowait 87.0% idle
Mem: 3891376k av, 3746192k used, 145184k free, 0k shrd, 2316k buff
1033204k active, 2504488k inactive
Swap: 4194296k av, 2299324k used, 1894972k free 40084k cached


PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND
2847 root 20 0 3336M 1.4G 1528 R 16.3 38.4 12047m 1 kvm
2754 root 20 0 1358M 748M 1440 S 6.1 19.6 7935m 0 kvm
42 root 25 5 0 0 0 SWN 2.3 0.0 1410m 0 ksmd
676186 root 20 0 1373M 771M 1460 S 2.3 20.2 488:35 1 kvm
195533 root 20 0 283M 26M 4436 S 1.3 0.6 0:03 1 pvedaemon worke
194642 www-data 20 0 281M 23M 3820 S 0.9 0.6 0:04 0 pveproxy worker
199206 www-data 20 0 281M 20M 3780 S 0.3 0.5 0:00 1 pveproxy worker
199225 www-data 20 0 273M 10M 3100 S 0.3 0.2 0:00 0 pveproxy worker
1 root 20 0 10604 636 604 S 0.0 0.0 0:20 0 init
2 root 20 0 0 0 0 SW 0.0 0.0 0:00 0 kthreadd
3 root RT 0 0 0 0 SW 0.0 0.0 0:10 0 migration/0
4 root 20 0 0 0 0 SW 0.0 0.0 4:02 0 ksoftirqd/0
5 root RT 0 0 0 0 SW 0.0 0.0 0:00

pveperf output (run while 3 VMs are running but mostly idle):

CPU BOGOMIPS: 10775.36
REGEX/SECOND: 1186454
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 51.87 MB/sec
AVERAGE SEEK TIME: 19.01 ms
FSYNCS/SECOND: 830.31
DNS EXT: 42.52 ms

qm config output for the FreePBX VM:

root@server:~# qm config 102
bootdisk: ide0
cores: 1
ide0: SSD:vm-102-disk-1,size=10G
ide2: backups:iso/FreePBX-64bit-6.12.65.iso,media=cdrom
memory: 1000
net0: e1000=6E:2A:B6:7A:99:EE,bridge=vmbr0
ostype: other
smbios1: uuid=f9adb5ca-8c51-4cd1-9a5a-3f9c8cd72fe2
sockets: 1

Firewall:
root@server:~# qm config 100
bootdisk: sata0
cores: 1
ide2: backups:iso/kerio-control-installer-8.4.0-2650.iso,media=cdrom
memory: 1024
name: Control
net0: e1000=BE:5E:29:96:88:43,bridge=vmbr0
net1: e1000=EA:CE:C0:08:C1:F7,bridge=vmbr1
onboot: 1
ostype: other
sata0: SSD:vm-100-disk-2,size=8G
smbios1: uuid=85c14f6a-4acd-4e1e-8ad5-226536614983
sockets: 1
startup: order=1

Windows Machine:

root@server:~# qm config 101
bootdisk: ide0
cores: 1
ide0: Second_Drive:vm-101-disk-1,size=45G
ide1: Second_Drive:vm-101-disk-2,size=60G
ide2: none,media=cdrom
memory: 3000
name: Connect
net0: e1000=B2:B3:21:FA:03:AB,bridge=vmbr0
onboot: 1
ostype: w2k8
sockets: 1
startup: order=3
unused0: SSD:vm-101-disk-1

Output of all disks and partitions:
root@server:~# fdisk -l


WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.




Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Device Boot Start End Blocks Id System
/dev/sda1 1 1953525167 976762583+ ee GPT


WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.




Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Device Boot Start End Blocks Id System
/dev/sdb1 2048 1953525167 976761560 83 Linux


WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.




Disk /dev/sdc: 120.0 GB, 120034123776 bytes
81 heads, 63 sectors/track, 45941 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Device Boot Start End Blocks Id System
/dev/sdc1 2048 234441647 117219800 83 Linux


Disk /dev/mapper/pve-root: 103.1 GB, 103079215104 bytes
255 heads, 63 sectors/track, 12532 cylinders, total 201326592 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/pve-root doesn't contain a valid partition table


Disk /dev/mapper/SSD--Data-vm--101--disk--1: 34.4 GB, 34359738368 bytes
255 heads, 63 sectors/track, 4177 cylinders, total 67108864 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb5ada736


Device Boot Start End Blocks Id System
/dev/mapper/SSD--Data-vm--101--disk--1p1 * 2048 53247 25600 83 Linux
/dev/mapper/SSD--Data-vm--101--disk--1p2 53248 2101247 1024000 83 Linux
/dev/mapper/SSD--Data-vm--101--disk--1p3 2101248 4149247 1024000 83 Linux
/dev/mapper/SSD--Data-vm--101--disk--1p4 4149248 67108863 31479808 83 Linux


Disk /dev/mapper/SSD--Data-vm--100--disk--1: 34.4 GB, 34359738368 bytes
255 heads, 63 sectors/track, 4177 cylinders, total 67108864 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x30bcf902


Device Boot Start End Blocks Id System
/dev/mapper/SSD--Data-vm--100--disk--1p1 * 16065 112454 48195 83 Linux
/dev/mapper/SSD--Data-vm--100--disk--1p2 112455 1140614 514080 83 Linux
/dev/mapper/SSD--Data-vm--100--disk--1p3 1140615 2168774 514080 83 Linux
/dev/mapper/SSD--Data-vm--100--disk--1p4 2168775 67108863 32470044+ 83 Linux


Disk /dev/mapper/SSD--Data-vm--100--disk--2: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9ad47157


Device Boot Start End Blocks Id System
/dev/mapper/SSD--Data-vm--100--disk--2p1 * 2048 53247 25600 83 Linux
/dev/mapper/SSD--Data-vm--100--disk--2p2 53248 1101823 524288 83 Linux
/dev/mapper/SSD--Data-vm--100--disk--2p3 1101824 2150399 524288 83 Linux
/dev/mapper/SSD--Data-vm--100--disk--2p4 2150400 16777215 7313408 83 Linux


Disk /dev/mapper/SSD--Data-vm--105--disk--1: 9663 MB, 9663676416 bytes
255 heads, 63 sectors/track, 1174 cylinders, total 18874368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xddc2d856


Device Boot Start End Blocks Id System
/dev/mapper/SSD--Data-vm--105--disk--1p1 * 2048 53247 25600 83 Linux
/dev/mapper/SSD--Data-vm--105--disk--1p2 53248 2101247 1024000 83 Linux
/dev/mapper/SSD--Data-vm--105--disk--1p3 2101248 4149247 1024000 83 Linux
/dev/mapper/SSD--Data-vm--105--disk--1p4 4149248 18874367 7362560 83 Linux


Disk /dev/mapper/Second--Drive-vm--101--disk--1: 48.3 GB, 48318382080 bytes
255 heads, 63 sectors/track, 5874 cylinders, total 94371840 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc89a9854


Device Boot Start End Blocks Id System
/dev/mapper/Second--Drive-vm--101--disk--1p1 * 2048 83888127 41943040 7 HPFS/NTFS/exFAT


Disk /dev/mapper/Second--Drive-vm--101--disk--2: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders, total 125829120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb31b642d


Device Boot Start End Blocks Id System
/dev/mapper/Second--Drive-vm--101--disk--2p1 2048 104853503 52425728 7 HPFS/NTFS/exFAT


Disk /dev/mapper/pve-swap: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/pve-swap doesn't contain a valid partition table


Disk /dev/mapper/pve-data: 875.1 GB, 875116363776 bytes
255 heads, 63 sectors/track, 106393 cylinders, total 1709211648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/pve-data doesn't contain a valid partition table


Disk /dev/mapper/SSD--Data-vm--102--disk--1: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004e1c1


Device Boot Start End Blocks Id System
/dev/mapper/SSD--Data-vm--102--disk--1p1 * 2048 616447 307200 83 Linux
/dev/mapper/SSD--Data-vm--102--disk--1p2 616448 19398655 9391104 83 Linux
/dev/mapper/SSD--Data-vm--102--disk--1p3 19398656 20971519 786432 82 Linux swap / Solaris
 
I should also add that I cannot remove some old disks from the SSD that are not being used by any VM.
Specifically, the 32 GB disk for an older version of VM 101.
 
Thank you for the detailed info. But I was more asking what kind of hardware you have in the Proxmox node, such as CPU, HDDs etc.


I should also add that I cannot remove some old disks from the SSD that are not being used by any VM.
Specifically, the 32 GB disk for an older version of VM 101.
You can get a list of disk images stored on your Ceph pool using this command:
Code:
# rbd ls <pool_name>

You can delete an unwanted virtual disk image using this command:
Code:
# rbd rm <pool_name>/<image_name>

Remove carefully! Once removed, it is gone forever.
 
You really should swap IDE for virtio. Using IDE will, in the best case, halve your drive's performance due to the lack of NCQ, and because IDE is emulated while virtio is paravirtualized, virtio will give near-raw performance from your disks.
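For example, a rough sketch against the FreePBX config posted above, assuming the guest kernel has the virtio_blk driver (CentOS-based FreePBX should): with the VM shut down, edit /etc/pve/qemu-server/102.conf and change
Code:
bootdisk: ide0
ide0: SSD:vm-102-disk-1,size=10G
to
Code:
bootdisk: virtio0
virtio0: SSD:vm-102-disk-1,size=10G
For the Windows VM you would need the virtio drivers (the virtio-win ISO) installed in the guest first, otherwise it will not find its boot disk.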
 
"You can get a list of disk images stored on your Ceph pool using this command:"
Where does Ceph come into the picture?

That was in response to his comment:
"I should also add that I cannot remove some old disks from the SSD that are not being used by any VM.
Specifically, the 32 GB disk for an older version of VM 101."

But I see what you are saying. I have been following another thread on Ceph, and for some reason my mind thought he was using Ceph.
:D

My sincere apology.
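Since the /dev/mapper/SSD--Data-* names in the fdisk output suggest plain LVM (a volume group presumably called SSD-Data) rather than Ceph, the rough equivalent would be something like this, with the same warning that removal is permanent:
Code:
# list the logical volumes in the SSD-Data volume group
lvs SSD-Data
# drop the stale 32 GB image; also remove the unused0 line from /etc/pve/qemu-server/101.conf
lvremove SSD-Data/vm-101-disk-1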


To sourceminer,
As mir pointed out, changing to virtio will increase performance in almost all cases. Also, try fixing the allocated RAM instead of ballooning to see if that makes a difference. It will certainly make a difference in a Windows VM.
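For instance, to pin VM 101 at its configured 3000 MB and turn the balloon device off entirely (assuming the balloon option is available on your 3.3 install), something like:
Code:
qm set 101 --balloon 0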
 
Thanks for the replies.

Allocated RAM? Not sure I follow. Instead of ballooning, are you suggesting allocating dynamically with a range rather than a fixed amount?
 
