pveperf: very bad fsyncs/second :(

mcflym

Hi there again.

I did the pveperf test:

CPU BOGOMIPS: 10376.46
REGEX/SECOND: 1193260
HD SIZE: 7.14 GB (/dev/mapper/pve-root)
BUFFERED READS: 64.46 MB/sec
AVERAGE SEEK TIME: 0.29 ms
FSYNCS/SECOND: 102.64
DNS EXT: 90.72 ms
DNS INT: 1.19 ms (fritz.box)


Slow buffered reads and very slow fsyncs/second!

Host: Celeron G1610, 24 GB RAM, 30 GB SSD Proxmox installation drive, 128 GB SSD VM disk, 4x 2 TB and 1x 4 TB storage...

What can be the reason for the very bad fsyncs/second?

I have another machine here (an N54L MicroServer with a Turion II 2.2 GHz and 8 GB RAM) with over 1200 fsyncs/second.

Thanks
 

Thanks for your hint!

My filesystem is ext3 on the host and ext4 for the VMs. The other machine also uses an SSD (host and VMs on the same SSD).

The VMs are attached with this fstab entry:

/dev/sda1 /vm_system ext4 defaults 0 0

I think that is not relevant for the pveperf test, is it?
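
pveperf benchmarks the filesystem containing the path it is given and defaults to /, so the VM mount can be measured separately; a minimal sketch:

pveperf               # tests /, i.e. /dev/mapper/pve-root
pveperf /vm_system    # tests the ext4 filesystem mounted at /vm_system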

My fstab shows:

/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=d2f9387f-e4e5-4447-95b5-4f9bc2cc5c23 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

cat /proc/mounts:

sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=2011045,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=1610760k,mode=755 0 0
/dev/mapper/pve-root / ext3 rw,relatime,errors=remount-ro,user_xattr,acl,barrier=0,data=ordered 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /run/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=3221500k 0 0
/dev/mapper/pve-data /var/lib/vz ext3 rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered 0 0
/dev/sdb1 /boot ext3 rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered 0 0
/dev/sda1 /vm_system ext4 rw,relatime,barrier=1,data=ordered 0 0
/dev/sdg1 /vm_backup ext4 rw,relatime,barrier=1,data=ordered 0 0
rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
beancounter /proc/vz/beancounter cgroup rw,relatime,blkio,name=beancounter 0 0
container /proc/vz/container cgroup rw,relatime,freezer,devices,name=container 0 0
fairsched /proc/vz/fairsched cgroup rw,relatime,cpuacct,cpu,cpuset,name=fairsched 0 0
 
Hi,
What kind of SSD is it? Have you checked the BIOS settings for the SATA port?

What does the pveperf output look like on the other SSD and on the HDD storage?

Udo
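
A quick way to verify from the running system that the controller is really in AHCI mode (a minimal sketch; controller and device names will differ):

lspci -k | grep -A 2 -i sata    # the "Kernel driver in use" line should read ahci
dmesg | grep -i ahci            # the ahci driver also logs the ports it claims at boot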
 
Very bad numbers indeed for an SSD. Something to compare against:
Device Model: OCZ-AGILITY3
CPU BOGOMIPS: 32548.02
REGEX/SECOND: 1363135
HD SIZE: 9.18 GB (/dev/disk/by-uuid/f66cdc9c-9cf8-4c31-802d-21b9ffcf0493)
BUFFERED READS: 202.65 MB/sec
AVERAGE SEEK TIME: 0.24 ms
FSYNCS/SECOND: 1435.18
DNS EXT: 68.36 ms
DNS INT: 0.67 ms (datanom.net)

Device Model: Corsair Force GT
CPU BOGOMIPS: 23999.16
REGEX/SECOND: 1291005
HD SIZE: 9.84 GB (/dev/disk/by-uuid/d575d85f-6c92-4b12-bdef-4f14800e753c)
BUFFERED READS: 161.09 MB/sec
AVERAGE SEEK TIME: 0.26 ms
FSYNCS/SECOND: 1851.30
DNS EXT: 73.80 ms
DNS INT: 1.29 ms (datanom.net)
 
Hi,

I checked the BIOS... nothing special there.

The SSDs are a SanDisk SATA3 32 GB for the host and a Samsung 840 Pro 128 GB SATA3 for the VMs.

The pveperf results for the N54L:

CPU BOGOMIPS: 8785.34
REGEX/SECOND: 918604
HD SIZE: 13.53 GB (/dev/mapper/pve-root)
BUFFERED READS: 197.09 MB/sec
AVERAGE SEEK TIME: 0.31 ms
FSYNCS/SECOND: 1290.33
DNS EXT: 98.15 ms
DNS INT: 96.67 ms (strange?!)

The result for the VM SSD:

root@proxmox:~# pveperf /vm_system
CPU BOGOMIPS: 10376.90
REGEX/SECOND: 1260231
HD SIZE: 117.37 GB (/dev/sda1)
BUFFERED READS: 480.25 MB/sec
AVERAGE SEEK TIME: 0.11 ms
FSYNCS/SECOND: 205.60
DNS EXT: 116.05 ms
DNS INT: 1.24 ms (fritz.box)
 
Unfortunately there are no official specs on this cache SSD, but here's this: [link]. Verify the following: AHCI is enabled in the BIOS, and the partitions are aligned on 4 KB boundaries. Test the SSD in a Windows workstation with a common test suite such as CrystalDiskMark; it might be faulty.
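
Alignment can also be checked without doing the math by hand (a sketch, assuming the install SSD is /dev/sdb as in the fdisk output below):

parted /dev/sdb align-check optimal 1    # prints "1 aligned" or "1 not aligned"
parted /dev/sdb align-check optimal 2
fdisk -lu /dev/sdb                       # start sectors divisible by 2048 (1 MiB) are safe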
 
I checked the alignment:

Disk /dev/sdb: 32.0 GB, 32017047552 bytes
255 heads, 63 sectors/track, 3892 cylinders, total 62533296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000ce08


Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 1048575 523264 83 Linux
/dev/sdb2 1048576 62531583 30741504 8e Linux LVM

Is sdb1 the important partition here?

2048 / 8 = 256 ... so the alignment is OK.

If not:

1048576 / 8 = 131072 ... so the alignment is crap, right?

Maybe I'll do a fresh installation (oh hell :()

In that case, another question (hopefully it's OK to ask): can I install OpenMediaVault directly on the host without issues?

Thanks so far!!
 
The pve VG is most probably on sdb2, but it seems to be fine at that offset since it's divisible by 8. What about AHCI? Other than this, I have no more ideas, except for the obvious: buy a better SSD.
 
Yeah, that's maybe an option, BUT even the Samsung SSD isn't any better on the fsyncs/second values?
 
So guys...

I made a fresh installation with only one SSD (the Samsung 840 Pro 128 GB).

After the installation I ran the pveperf test:
CPU BOGOMIPS: 10376.72
REGEX/SECOND: 1216541
HD SIZE: 19.69 GB (/dev/mapper/pve-root)
BUFFERED READS: 393.92 MB/sec
AVERAGE SEEK TIME: 0.06 ms
FSYNCS/SECOND: 225.13
DNS EXT: 97.47 ms
DNS INT: 55.24 ms

I think 225 is still very bad, isn't it?

My /etc/fstab is now:

/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext4 defaults 0 1
UUID=68a821a5-dd5e-4037-bd11-60edc8474873 /boot ext4 discard,noatime,nodiratime 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0

I don't really know what to do?!
 
What is your BIOS setting for the disk controller: legacy, IDE, or AHCI?

Paste the output of fdisk -l

Paste the output of cat /proc/mounts
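
The write-barrier mount option is one thing worth comparing there, since it has a large effect on fsync rates; a sketch, assuming the pve LVM names used earlier in the thread:

grep -E 'pve-(root|data)' /proc/mounts    # look for barrier=0 or barrier=1 in the option list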
 

Well, for now I can say that the Samsung SSD is the problem.

I've tried a 60 GB Kingston SSDNow (with a standard ext3 Proxmox installation)

and I get 3935 fsyncs per second on the same port!

I can't believe that the Samsung is so bad?! I thought it was the best SSD on the market?
 
To compare, my fstab file has this:
# / was on /dev/mapper/pve-root during installation
UUID=d575d85f-6c92-4b12-bdef-4f14800e753c / ext4 relatime,barrier=0,errors=remount-ro 0 1
UUID=d5923032-d7c7-4bac-aea7-fcb5d9f6c0dd /var/lib/vz ext4 relatime,barrier=0 0 2
# swap was on /dev/mapper/pve-swap during installation
UUID=79152c6a-fe54-4767-bb8b-28fdd00baf15 none swap sw,barrier=0 0 0
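
The likely relevant difference is barrier=0 (ext4 mounts with barriers enabled by default). Its effect can be tested before touching fstab, since the option can normally be changed with a live remount; a sketch (disabling write barriers trades crash safety for speed on drives without power-loss protection):

# WARNING: with barrier=0 a power failure can corrupt the filesystem
mount -o remount,barrier=0 /
mount -o remount,barrier=0 /var/lib/vz
pveperf                                  # re-run and compare fsyncs/second
mount -o remount,barrier=1 /             # revert
mount -o remount,barrier=1 /var/lib/vz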
 
Hell yeah, you are a god, man!

Your settings are gold!

Look:

CPU BOGOMIPS: 10376.82
REGEX/SECOND: 1236288
HD SIZE: 19.69 GB (/dev/mapper/pve-root)
BUFFERED READS: 391.81 MB/sec
AVERAGE SEEK TIME: 0.07 ms
FSYNCS/SECOND: 4425.14
DNS EXT: 137.01 ms
DNS INT: 52.98 ms

Amazing!

THANK YOU!
 
Looks good now ;-)

When discard is disabled on the partitions, you must add this to your daily cron jobs to maintain the speed of the disk:

/etc/cron.daily/fstrim
-----------------------------------------------------------------
#!/bin/sh

PATH=/bin:/sbin:/usr/bin:/usr/sbin

# trim the root and VM filesystems at low I/O priority
ionice -n7 fstrim -v /
ionice -n7 fstrim -v /var/lib/vz
------------------------------------------------------------------

chmod a+x /etc/cron.daily/fstrim
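
Before relying on the cron job, it may be worth confirming that the drive advertises TRIM support and that fstrim runs cleanly by hand; a sketch, assuming the SSD shows up as /dev/sda:

hdparm -I /dev/sda | grep -i trim    # should report "Data Set Management TRIM supported"
fstrim -v /                          # one-off manual run; prints how many bytes were trimmed
fstrim -v /var/lib/vz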
 
So it was ext3 -> ext4 with "relatime,barrier=0" and all good? In wheezy that is the default, IIRC; I can't remember the Proxmox default CD install. In my own experience, ext3 with default settings performs about the same, although I don't (yet) use SSDs with PVE. I never use disks without redundancy in servers, and SSDs aren't always worth it for the additional speed, and they're relatively small.

@mir: do you notice any performance advantage using either discard or that script (without discard)? Newer SSDs, especially the Samsungs, handle GC so effectively that discard isn't even necessary any more.
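
A quick way to see whether discard is actually in effect anywhere (a small sketch):

grep discard /proc/mounts    # filesystems mounted with discard right now
grep discard /etc/fstab      # entries that will request discard at the next mount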
 
