Low results with pveperf (the reason for bad HDD performance?)

mcflym

Here are my results of pveperf:

CPU BOGOMIPS: 19152.36
REGEX/SECOND: 962716
HD SIZE: 7.14 GB (/dev/mapper/pve-root)
BUFFERED READS: 15.87 MB/sec
AVERAGE SEEK TIME: 0.57 ms
FSYNCS/SECOND: 95.95
DNS EXT: 113.60 ms
DNS INT: 1.18 ms

I have some performance problems with the hard disks that I pass through to my VM via virtio. They are very slow (around 20 or 30 MB/s) and stall several times... Maybe it's caused by the bad performance (buffered reads) of my host's disk?

It's a SATA III SSD on a SATA II controller.
CPU: Xeon 3220 quad core
HDD: 5x 2TB SATA II, 1x 4TB SATA II on SATA II and SATA III controllers
RAM: 4GB
 
Here is the fstab of the host:

/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=xxxxxxx /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
/dev/sdb1 /vm_system ext4 defaults 0 0
/dev/sdi1 /vm_backup ext4 defaults 0 0

and this is the VM config:

bootdisk: virtio0
cores: 1
cpu: host
ide2: local:iso/gparted-live-0.16.1-1-i486.iso,media=cdrom,size=133M
keyboard: de
memory: 3072
name: omv
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr0
onboot: 1
ostype: l26
sockets: 1
startup: order=1,up=25
virtio0: vm_system:100/vm-100-disk-1.raw,format=raw,size=15G
virtio1: /dev/sdc,backup=no
virtio2: /dev/sdd,backup=no
virtio3: /dev/sde,backup=no
virtio4: /dev/sdf,backup=no
virtio5: /dev/sdg,backup=no
virtio6: /dev/sdh,backup=no

and the fstab of the VM (it's an OpenMediaVault VM):

# / was on /dev/vda1 during installation
UUID=ca0b8e52-0175-47f9-89e3-4edef590ba0a / ext4 errors=remount-ro 0 1
# swap was on /dev/vda5 during installation
UUID=e1823198-986b-43fb-aa14-d0cdc017af89 none swap sw 0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto 0 0
# >>> [openmediavault]
UUID=198b3802-b6e5-48b5-9916-82580aec2802 /media/198b3802-b6e5-48b5-9916-82580aec2802 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqf$
UUID=0fd13fc0-f3e2-4b22-a99c-d866b0a07be5 /media/0fd13fc0-f3e2-4b22-a99c-d866b0a07be5 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqf$
UUID=597c8dd4-c06f-42e0-bb43-705f664622c9 /media/597c8dd4-c06f-42e0-bb43-705f664622c9 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqf$
UUID=bdf16d7a-084e-4435-9d11-8e381a41bdea /media/bdf16d7a-084e-4435-9d11-8e381a41bdea ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqf$
UUID=3b398216-9709-4cb7-b1bc-d0da59cef4ed /media/3b398216-9709-4cb7-b1bc-d0da59cef4ed ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqf$
UUID=a410c641-a8bc-4939-9d20-9ab72007e9b3 /media/a410c641-a8bc-4939-9d20-9ab72007e9b3 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqf$
/media/198b3802-b6e5-48b5-9916-82580aec2802// /home/ftp/Media_4 none bind 0 0
/media/198b3802-b6e5-48b5-9916-82580aec2802// /home/ftp/Media_share_5 none bind 0 0
/media/0fd13fc0-f3e2-4b22-a99c-d866b0a07be5// /home/ftp/Media_share_4 none bind 0 0
/media/0fd13fc0-f3e2-4b22-a99c-d866b0a07be5// /home/ftp/Media_3 none bind 0 0
/media/597c8dd4-c06f-42e0-bb43-705f664622c9// /home/ftp/Media_2 none bind 0 0
/media/597c8dd4-c06f-42e0-bb43-705f664622c9// /home/ftp/Media_share_3 none bind 0 0
/media/bdf16d7a-084e-4435-9d11-8e381a41bdea// /home/ftp/Data none bind 0 0
/media/3b398216-9709-4cb7-b1bc-d0da59cef4ed// /home/ftp/Media_share_2 none bind 0 0
/media/3b398216-9709-4cb7-b1bc-d0da59cef4ed// /home/ftp/Media_1 none bind 0 0
# <<< [openmediavault]
 
You have two separate problems:
1) Bad SSD performance of your pve root file system
2) Bad performance of your pass-through disks to a VM

1) It is hard to tell why you get those slow numbers.
- What does hdparm -tT /dev/sda report?
- What do these commands report:
- dd if=/dev/zero of=/tmp/test oflag=fdatasync bs=1M count=1024
- dd if=/tmp/test of=/dev/null bs=1M
Try these mount options for /dev/pve/root and /dev/pve/data: relatime,data=ordered,barrier=0,discard or relatime,data=ordered,barrier=0
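For example, the fstab lines could then look roughly like this (just a sketch - adapt it to your own fstab and remount or reboot afterwards; barrier=0 trades safety for speed, and use discard only if your SSD and filesystem support it):

/dev/pve/root / ext3 relatime,errors=remount-ro,data=ordered,barrier=0,discard 0 1
/dev/pve/data /var/lib/vz ext3 relatime,data=ordered,barrier=0,discard 0 1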

2) Try to attach your pass-through disks as SATA or SCSI and see if that raises the performance. Also try different cache modes like writethrough or writeback.
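For example, instead of virtio1: /dev/sdc,backup=no you could try lines like these in the VM config (just a sketch using your current device names and two of the possible cache modes):

sata1: /dev/sdc,backup=no,cache=writeback
scsi1: /dev/sdd,backup=no,cache=writethrough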
 

Thanks for your help so far!

Here are my results:

1.)
hdparm -tT /dev/sda :
/dev/sda:
Timing cached reads: 7564 MB in 2.00 seconds = 3785.07 MB/sec
Timing buffered disk reads: 180 MB in 3.02 seconds = 59.61 MB/sec

dd doesn't work? "invalid output flag: `fdatasync'"
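If oflag=fdatasync isn't accepted here, maybe the conv=fdatasync form is the equivalent for this dd version (not sure):

dd if=/dev/zero of=/tmp/test bs=1M count=1024 conv=fdatasync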

2.)
I tried nearly all options for attaching the devices. They all showed nearly the same problems: stalling and slow speed :(

BIOS:
SATA AHCI is enabled. Any suggestions for other options there?

I changed my hardware, but the strange thing is that I had the same issues (not as many stalls and not as slow, but still not really good --> 60-70 MB/s) with the other hardware (Celeron Ivy Bridge 2.6 GHz, 16 GB RAM).



What's the reason for this? Under Windows Server 2012 I had nearly full speed with the other hardware, so it can only be some setting in Proxmox. Or are there maybe driver issues with the SATA controllers?
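Is there a way to check which driver the controllers use - maybe something like lspci -k, which should show the kernel driver in use for each device?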

Thanks in advance for your reply!

Edit:

Ran the same pveperf test today, now with this result:

pveperf
CPU BOGOMIPS: 19154.08
REGEX/SECOND: 934697
HD SIZE: 7.14 GB (/dev/mapper/pve-root)
BUFFERED READS: 69.20 MB/sec
AVERAGE SEEK TIME: 0.27 ms
FSYNCS/SECOND: 99.32
DNS EXT: 91.32 ms
DNS INT: 1.13 ms (fritz.box)

Are these values ok, or is it still slow?

The VM-attached HDDs are unchanged (still slow).

I changed nothing :?
 
Really crazy things here... now I got 150 MB/s with my VM... now I understand nothing :/
 
Run cat /proc/mounts and paste the output here.

Ok...

sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=502936,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=404272k,mode=755 0 0
/dev/mapper/pve-root / ext3 rw,relatime,errors=remount-ro,user_xattr,acl,barrier=0,data=ordered 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /run/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=808540k 0 0
/dev/mapper/pve-data /var/lib/vz ext3 rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered 0 0
/dev/sda1 /boot ext3 rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered 0 0
/dev/sdb1 /vm_system ext4 rw,relatime,barrier=1,data=ordered 0 0
/dev/sdi1 /vm_backup ext4 rw,relatime,barrier=1,data=ordered 0 0
rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
beancounter /proc/vz/beancounter cgroup rw,relatime,blkio,name=beancounter 0 0
container /proc/vz/container cgroup rw,relatime,freezer,devices,name=container 0 0
fairsched /proc/vz/fairsched cgroup rw,relatime,cpuacct,cpu,cpuset,name=fairsched 0 0
 
Compared to yesterday, your mount options have changed:
yesterday: /dev/pve/data ext3 defaults
today: /dev/pve/data ext3 rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered

yesterday:
/dev/pve/root ext3 errors=remount-ro
today:
/dev/pve/root ext3 rw,relatime,errors=remount-ro,user_xattr,acl,barrier=0,data=ordered

So the mount options did make a difference!
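If you want to keep them, you could set them explicitly, e.g. remount on the fly (just a sketch; the data= mode cannot be changed on a remount, so only the barrier option is passed here):

mount -o remount,barrier=0 /
mount -o remount,barrier=0 /var/lib/vz

and/or put the same options into the corresponding /etc/fstab lines so they survive a reboot.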
 
The interesting point is that I changed nothing - anyway... everything is fine and I hope it keeps working (and not only temporarily).

THANK YOU anyway!!!
 
