[SOLVED] really low I/O speeds (<2MB/s)

FuriousRage

Renowned Member
Oct 17, 2014
Hi, I'm currently installing the latest Debian stable in a RAW-format VM with no cache setting.
It has a 1 TB hard drive, and after almost an hour the install still sits at 0%.

Install settings were Encrypted (guided) with only a / partition (= the full 1 TB at /).
Looking at the server: less than 2% CPU utilization for the VM (no other VMs running),
memory usage around 2 GB (of 8 allowed),
disk write hovers around 1.3 MB/s.

The /storage is 5x 2 TB SATA drives in a RAID 5 setup through mdadm, no encryption on /dev/md0.

root@Caelus:/# pveperf /dev/md0
CPU BOGOMIPS: 25599.92
REGEX/SECOND: 1935222
HD SIZE: 7392.73 GB (/dev/md0)
BUFFERED READS: 376.00 MB/sec
AVERAGE SEEK TIME: 19.77 ms
open failed at /usr/bin/pveperf line 83.

root@Caelus:/storage# pveperf /storage/
CPU BOGOMIPS: 25599.92
REGEX/SECOND: 1918090
HD SIZE: 7392.73 GB (/dev/md0)
BUFFERED READS: 155.16 MB/sec
AVERAGE SEEK TIME: 18.23 ms
FSYNCS/SECOND: 12.64
DNS EXT: 2115.26 ms
DNS INT: 2026.49 ms (local)
 
AVERAGE SEEK TIME: 18.23 ms
FSYNCS/SECOND: 12.64
DNS EXT: 2115.26 ms
DNS INT: 2026.49 ms (local)

This is extremely slow! The recommended FSYNCS/SECOND is a minimum of 500. There must be something terribly wrong with your setup, since your DNS numbers and average seek time are also very bad.

From one of my hosts:
CPU BOGOMIPS: 22398.24
REGEX/SECOND: 1088130
HD SIZE: 9.84 GB (/dev/disk/by-uuid/d575d85f-6c92-4b12-bdef-4f14800e753c)
BUFFERED READS: 184.57 MB/sec
AVERAGE SEEK TIME: 0.17 ms
FSYNCS/SECOND: 3260.47
DNS EXT: 57.73 ms
DNS INT: 0.69 ms (datanom.net)

And from an NFS mount to a QNAP:
CPU BOGOMIPS: 22398.24
REGEX/SECOND: 1098746
HD SIZE: 1373.87 GB (192.168.2.10:/vz)
FSYNCS/SECOND: 758.14
DNS EXT: 71.48 ms
DNS INT: 1.19 ms (datanom.net)
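If you want to cross-check that fsync number outside of pveperf, a small fio run like this (illustrative parameters, pointed at the mount point in question) measures synchronous write IOPS directly:
Code:
# 4k sequential writes, each followed by an fsync
fio --name=fsynctest --filename=/storage/fio.test --size=64M --bs=4k \
    --rw=write --fsync=1 --ioengine=sync
rm /storage/fio.test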
 

I'm thinking of redoing the RAID setup as RAID 0+1 instead. Might be better... we'll see.
 
What disks do you use for your RAID (type, brand, and model)?
What file system is on /storage (and with which mount options)?
Do you use an HBA expander or some kind of hardware RAID controller?
 
What disks do you use for your RAID (type, brand, and model)?
Do you use an HBA expander or some kind of hardware RAID controller?
I have 5x Seagate Desktop HDD 2 TB 7200 RPM ST2000DM001.
Using mdadm, as set up via the Proxmox install CD.
No idea what an HBA expander is.
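To rule out a single slow member disk, a quick read-only check per drive can help (the device names below are placeholders for the five array members):
Code:
# buffered read speed per member drive (read-only)
for d in /dev/sd[b-f]; do hdparm -t "$d"; done
# quick SMART health summary per drive
for d in /dev/sd[b-f]; do smartctl -H "$d"; done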
 
Those disks should give a lot more performance than you are seeing.

I just re-built the RAID as RAID 10.
Did a dd test with:
root@Caelus:/# dd if=/dev/urandom of=/storage/test.out bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 74.5176 s, 14.4 MB/s

pveperf
root@Caelus:/# pveperf /storage/
CPU BOGOMIPS: 25599.92
REGEX/SECOND: 1999532
HD SIZE: 4620.41 GB (/dev/md0)
BUFFERED READS: 426.01 MB/sec
AVERAGE SEEK TIME: 18.25 ms
FSYNCS/SECOND: 27.93
DNS EXT: 2105.30 ms
DNS INT: 2155.91 ms (local)

So the RAID level doesn't seem to affect the speed of the drives.
Compared to the OS SSD (Proxmox):
root@Caelus:/# pveperf /
CPU BOGOMIPS: 25599.92
REGEX/SECOND: 1959252
HD SIZE: 27.19 GB (/dev/mapper/pve-root)
BUFFERED READS: 351.55 MB/sec
AVERAGE SEEK TIME: 0.14 ms
FSYNCS/SECOND: 47.78
DNS EXT: 2043.71 ms
DNS INT: 2033.80 ms (local)

I have no idea what the problem is. I wonder if a BIOS RAID would do any better than mdadm? I doubt it.
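One thing worth checking right after re-creating the array: md normally resyncs in the background, and that alone can depress write numbers until it finishes. A quick, read-only check:
Code:
# shows array state and resync/rebuild progress
cat /proc/mdstat
mdadm --detail /dev/md0 | grep -E 'State|Status'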
 
Your OS SSD is also extremely slow. Try my mount options for all your mount points.
You should also fix your DNS resolution, since a 2-second delay is painfully slow.
 
Your OS SSD is also extremely slow. Try my mount options for all your mount points.
You should also fix your DNS resolution, since a 2-second delay is painfully slow.

I changed my Proxmox DNS server to use 8.8.8.8 first instead of my VPN's DNS servers.
That brought the time down to 124.07 ms ext and 99.48 ms int.
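(For reference, that change boils down to something like this in /etc/resolv.conf; the secondary server is just an illustration.)
Code:
# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4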

What are your mount options?
Ah, I missed that you had replied twice; trying it.
 
Tried the suggested fstab settings.
Not much of a difference in speed:
root@Caelus:/# mount -a
root@Caelus:/# dd if=/dev/urandom of=/storage/test.out bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 74.5238 s, 14.4 MB/s

root@Caelus:/# pveperf /storage/
CPU BOGOMIPS: 25599.92
REGEX/SECOND: 1977192
HD SIZE: 4620.41 GB (/dev/md0)
BUFFERED READS: 363.45 MB/sec
AVERAGE SEEK TIME: 16.81 ms
FSYNCS/SECOND: 30.76
DNS EXT: 99.00 ms
DNS INT: 121.17 ms (local)
 
This is from one of my ST2000DM001 drives, not in RAID though.
Code:
dd if=/dev/urandom of=/opt/test.out bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB) copied, 69,3853 s, 15,5 MB/s
The above means your disks are working as they should.
 
Could you paste the output from cat /proc/mounts?
root@Caelus:/# cat /proc/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=2036038,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=1630652k,mode=755 0 0
/dev/mapper/pve-root / ext3 rw,relatime,errors=remount-ro,data=ordered 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /run/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=3261300k 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
/dev/mapper/pve-data /var/lib/vz ext3 rw,relatime,data=ordered 0 0
/dev/sda2 /boot ext3 rw,relatime,data=ordered 0 0
rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
/dev/md0 /storage ext4 rw,relatime,nobarrier,stripe=640,data=ordered 0 0
 
root@Caelus:/# cat /proc/mounts
/dev/mapper/pve-root / ext3 rw,relatime,errors=remount-ro,data=ordered 0 0
/dev/mapper/pve-data /var/lib/vz ext3 rw,relatime,data=ordered 0 0
/dev/sda2 /boot ext3 rw,relatime,data=ordered 0 0
Your OS disk is an SSD, which means you should use ext4; otherwise your disk will quickly get painfully slow, since ext3 does not support TRIM for SSDs. See http://forum.proxmox.com/threads/18502-KVM-kills-SSD-performance
/dev/md0 /storage ext4 rw,relatime,nobarrier,stripe=640,data=ordered 0 0
I wonder whether stripe=640 is the issue!
Code:
stripe=n    Number of filesystem blocks that mballoc will try to use for allocation size and
            alignment. For RAID5/6 systems this should be the number of data disks * RAID
            chunk size in filesystem blocks.
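A quick way to sanity-check that value against the array geometry (the worked example assumes the common mdadm default of a 512 KiB chunk and 4 KiB ext4 blocks; adjust to what mdadm --detail actually reports):
Code:
# chunk size as seen by md
mdadm --detail /dev/md0 | grep 'Chunk Size'
# stride / stripe width recorded in the filesystem
tune2fs -l /dev/md0 | grep -i 'stride\|stripe'
# e.g. RAID5 on 5 disks = 4 data disks:
#   stride       = 512 KiB chunk / 4 KiB block = 128 blocks
#   stripe-width = 4 data disks * 128 = 512 blocks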
 
Code:
dd if=/dev/urandom of=/storage/test.out bs=1M count=1024
Tried the suggested fstab settings.
Not much of a difference in speed

Hi,
dd isn't the perfect tool, and in your case you also can't see where the bottleneck is!
urandom? The storage? What about caching?

Try something like this for big blocks (best-case performance; it doesn't have much to do with real-world performance):
Code:
dd if=/dev/zero of=/storage/bigfile bs=1M count=8192 conv=fdatasync
/dev/zero is OK for HDDs, but will give wrong results with SSDs whose controllers compress data!

And run atop during the write to see if one HDD is faulty.
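(If atop isn't at hand, iostat from the sysstat package gives a similar per-disk view; illustrative invocation below.)
Code:
# extended per-device statistics, refreshed every 2 seconds, while the dd runs
iostat -x 2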

Udo
 
OK, I'm going to reinstall Proxmox from scratch.

What suggested settings should I apply?
So far: ext4 for the SSD (don't know why I didn't choose this before).
Any other ideas? Would a BIOS RAID work better than mdadm?

On an i5-4660 CPU.
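For what it's worth, a typical ext4-on-SSD fstab line looks something like the one below. This is only an illustration (the UUID is a placeholder), not necessarily the exact options suggested earlier in the thread.
Code:
# /etc/fstab (illustrative)
UUID=xxxx-xxxx  /  ext4  defaults,noatime,discard,errors=remount-ro  0  1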