Hello,
I have a 4-node Proxmox 5.2 cluster and I'm having a problem with a Linux guest VM: my disk read performance test results are unstable, and I can't really interpret them. What is the best way to test this properly, and where should I look to fix it?
Your help will be appreciated, thanks.
Guest VM (Ubuntu 14.04 LTS)
Code:
agent: 1
balloon: 49152
boot: c
bootdisk: scsi0
cores: 16
cpu: host,flags=+pcid
ide1: none,media=cdrom
memory: 81920
name: xxxxx
net0: virtio=xxxxxx,bridge=vmbr0,queues=8
net1: virtio=xxxxxx,bridge=vmbr1,queues=8
numa: 1
onboot: 1
ostype: l26
scsi0: vmstorages_vm:vm-101-disk-1,size=300G
scsihw: virtio-scsi-pci
smbios1: uuid=a6c0b706-00f0-4696-9446-c5d6b769aac5
sockets: 2
Code:
root@vm:~# dd if=/dev/zero of=/root/testfile bs=10G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 31.378 s, 68.4 MB/s
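Note that `dd` silently capped this request: a single `write()` tops out just under 2 GiB, which is why `bs=10G count=1` reports `0+1 records` and transferred only 2147479552 bytes. Splitting the total into smaller blocks measures what you intended (sizes below are scaled down for illustration; raise `bs`/`count` for a real run):

```shell
# dd issues one write() per block, and a single call is capped just
# under 2 GiB -- hence the "0+1 records" above. Repeat smaller blocks
# instead (illustrative sizes; scale up for a real benchmark):
dd if=/dev/zero of=/tmp/ddtest bs=64M count=4 oflag=dsync
ls -l /tmp/ddtest   # 268435456 bytes = 4 full 64 MiB records
rm /tmp/ddtest
```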
Code:
root@vm:~# hdparm -Tt /dev/sda1
/dev/sda1:
Timing cached reads: 14932 MB in 1.99 seconds = 7492.03 MB/sec
Timing buffered disk reads: 242 MB in 2.65 seconds = 91.33 MB/sec
Code:
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.1.3
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 3072MB)
Jobs: 1 (f=1): [m] [100.0% done] [24123KB/6162KB/0KB /s] [5706/1426/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=1512: Sat Jun 30 01:10:51 2018
Description : [Emulation of Intel IOmeter File Server Access Pattern]
read : io=2454.1MB, bw=82163KB/s, iops=13449, runt= 30596msec
slat (usec): min=4, max=8443, avg=11.71, stdev=19.82
clat (usec): min=158, max=220366, avg=2292.13, stdev=5849.43
lat (usec): min=281, max=220398, avg=2304.09, stdev=5849.52
clat percentiles (usec):
| 1.00th=[ 458], 5.00th=[ 564], 10.00th=[ 652], 20.00th=[ 788],
| 30.00th=[ 924], 40.00th=[ 1064], 50.00th=[ 1224], 60.00th=[ 1384],
| 70.00th=[ 1624], 80.00th=[ 2192], 90.00th=[ 4384], 95.00th=[ 7264],
| 99.00th=[16192], 99.50th=[23424], 99.90th=[95744], 99.95th=[134144],
| 99.99th=[201728]
bw (KB /s): min=22112, max=147151, per=100.00%, avg=83203.52, stdev=25891.59
write: io=631875KB, bw=20652KB/s, iops=3370, runt= 30596msec
slat (usec): min=4, max=7188, avg=14.05, stdev=24.65
clat (msec): min=1, max=219, avg= 9.77, stdev=11.59
lat (msec): min=1, max=219, avg= 9.78, stdev=11.59
clat percentiles (msec):
| 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 5],
| 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 9],
| 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 17], 95.00th=[ 20],
| 99.00th=[ 36], 99.50th=[ 102], 99.90th=[ 165], 99.95th=[ 186],
| 99.99th=[ 210]
bw (KB /s): min= 6249, max=36060, per=100.00%, avg=20910.00, stdev=6495.43
lat (usec) : 250=0.01%, 500=1.72%, 750=11.88%, 1000=14.71%
lat (msec) : 2=33.83%, 4=10.45%, 10=18.15%, 20=7.87%, 50=1.09%
lat (msec) : 100=0.12%, 250=0.18%
cpu : usr=6.84%, sys=26.98%, ctx=187467, majf=0, minf=6437
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=411489/w=103121/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=2454.1MB, aggrb=82162KB/s, minb=82162KB/s, maxb=82162KB/s, mint=30596msec, maxt=30596msec
WRITE: io=631874KB, aggrb=20652KB/s, minb=20652KB/s, maxb=20652KB/s, mint=30596msec, maxt=30596msec
Disk stats (read/write):
dm-0: ios=410902/102987, merge=0/0, ticks=916296/1002312, in_queue=1920508, util=99.82%, aggrios=411490/103141, aggrmerge=0/6, aggrticks=926312/1005664, aggrin_queue=1932100, aggrutil=99.71%
sda: ios=411490/103141, merge=0/6, ticks=926312/1005664, in_queue=1932100, util=99.71%
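One reason these numbers look unstable: the IOmeter emulation mixes random block sizes from 512 B to 64 KB, so run-to-run variance is inherently large. A fixed-block-size job is easier to compare across runs. A minimal sketch of such a job file, assuming libaio is available and `/tmp/fiotest` as a scratch path (adjust to your environment):

```shell
# Write a fixed-block-size random-read fio job
# (paths and sizes are illustrative assumptions):
cat > /tmp/randread.fio <<'EOF'
[randread]
filename=/tmp/fiotest
size=2G
bs=4k
rw=randread
direct=1
ioengine=libaio
iodepth=32
runtime=30
time_based
group_reporting
EOF
# then run it with: fio /tmp/randread.fio
```

Running the same job in the guest and on the host lets you see how much performance the virtualization layer costs.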
Proxmox Server 5.2:
Code:
root@pmxn4:~# hdparm -Tt /dev/nvme0n1
/dev/nvme0n1:
Timing cached reads: 15638 MB in 1.99 seconds = 7845.94 MB/sec
Timing buffered disk reads: 5372 MB in 3.00 seconds = 1790.02 MB/sec
Code:
root@pmxn4:~# dd if=/dev/zero of=/root/testfile bs=10G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 2.07058 s, 1.0 GB/s
Code:
root@pmxn4:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 111.8G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 111.8G 0 part
└─sda9 8:9 0 8M 0 part
sdb 8:16 0 111.8G 0 disk
├─sdb1 8:17 0 1007K 0 part
├─sdb2 8:18 0 111.8G 0 part
└─sdb9 8:25 0 8M 0 part
zd0 230:0 0 8G 0 disk [SWAP]
nvme0n1 259:0 0 232.9G 0 disk
├─nvme0n1p1 259:4 0 100M 0 part /var/lib/ceph/osd/ceph-14
└─nvme0n1p2 259:5 0 232.8G 0 part
nvme2n1 259:1 0 232.9G 0 disk
├─nvme2n1p1 259:6 0 100M 0 part /var/lib/ceph/osd/ceph-12
└─nvme2n1p2 259:7 0 232.8G 0 part
nvme1n1 259:2 0 232.9G 0 disk
├─nvme1n1p1 259:8 0 100M 0 part /var/lib/ceph/osd/ceph-13
└─nvme1n1p2 259:9 0 232.8G 0 part
nvme3n1 259:3 0 232.9G 0 disk
├─nvme3n1p1 259:10 0 100M 0 part /var/lib/ceph/osd/ceph-15
└─nvme3n1p2 259:11 0 232.8G 0 part
Code:
root@pmxn4:~# pveperf /var/lib/ceph/osd/ceph-14
CPU BOGOMIPS: 134140.48
REGEX/SECOND: 1902499
HD SIZE: 0.09 GB (/dev/nvme0n1p1)
BUFFERED READS: 33.33 MB/sec
AVERAGE SEEK TIME: 0.00 ms
FSYNCS/SECOND: 524.12
DNS EXT: 16.94 ms
DNS INT: 14.91 ms (localx.club)
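Also note that `pveperf` here ran against the 100 MB OSD metadata partition (`HD SIZE: 0.09 GB` on `/dev/nvme0n1p1`, per the `lsblk` output), not the NVMe data partition, so its numbers say little about the storage the VM disk actually lives on. Benchmarking the Ceph pool directly with `rados bench` is more representative; the pool name below is taken from the VM config line and may not match your actual pool, so substitute your own:

```shell
# Sketch: write/read benchmark against the Ceph pool itself
# ("vmstorages_vm" is an assumed pool name -- substitute yours):
cat > /tmp/ceph-bench.sh <<'EOF'
#!/bin/sh
pool="${1:?usage: ceph-bench.sh <pool>}"
rados bench -p "$pool" 30 write --no-cleanup
rados bench -p "$pool" 30 seq
rados -p "$pool" cleanup
EOF
chmod +x /tmp/ceph-bench.sh
# then, on a cluster node: /tmp/ceph-bench.sh vmstorages_vm
```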
Talion