pveperf fstab barrier=0

dragonslayr

Renowned Member
Mar 5, 2015
53
2
73
I've got 4 SATA RE4 WD hard drives, an SSD for L2ARC, and 96 GB of RAM.
I'm wanting to get better results on fsyncs.
I read you can add barrier=0 to fstab, but my install from the Proxmox CD has an fstab that looks a bit strange to me. Could one of you give me an example with this fstab file?

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/zvol/rpool/swap none swap sw 0 0
proc /proc proc defaults 0 0


############## pveperf results
CPU BOGOMIPS: 73415.82
REGEX/SECOND: 1327349
HD SIZE: 3495.79 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 101.58
DNS EXT: 45.66 ms
DNS INT: 66.94 ms
 
barrier is an ext/XFS mount option and is not available on ZFS. For each sync request you do, two spinning disks have to actually perform it, so 100 such operations per second is about what can be expected.
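For reference, on an ext4 install the option would go in the fourth (options) column of the fstab entry; something like the sketch below, where the device name is only a placeholder (and note that disabling barriers trades data safety for speed):

Code:
# <file system> <mount point> <type> <options>                    <dump> <pass>
/dev/pve/root   /             ext4   errors=remount-ro,barrier=0  0      1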
 
Here is an example of a pveperf result on ZFS (2 x 2 TB SATA Western Digital and one Intel DC 3710 (200 GB model) as log device):

Code:
root@pve01:~# pveperf
CPU BOGOMIPS:      55994.16
REGEX/SECOND:      2719723
HD SIZE:           1558.54 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     5387.29
DNS EXT:           4.68 ms
DNS INT:           8.15 ms (proxmox.com)
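If you want to try it, attaching an SSD (or a partition on it) as log device is a single command; a rough sketch, with /dev/sdb1 as a placeholder for your SSD partition:

Code:
# add an SSD partition as ZIL/SLOG device (replace /dev/sdb1 with your device)
zpool add rpool log /dev/sdb1
# check the resulting pool layout
zpool status rpool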
 
I will try a log device on one of the servers and let you know how it goes! Thanks a lot!
 
Here's what I got splitting up the SSD into log and cache, 60 GB each.
CPU BOGOMIPS: 73415.82
REGEX/SECOND: 1358791
HD SIZE: 3495.79 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 251.21
DNS EXT: 52.79 ms
DNS INT: 73.73 ms

That's a bit better, but not near what you are seeing. However, I believe the server is only SATA 2.
Any other hints? :)
 
A 60G log is way too much. The log is kept for 5 seconds and then it gets flushed to disk. You can monitor this with "zpool iostat -v 1".

Avoid using consumer SSDs. I recommend using an Intel Datacenter SSD for your log.
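If you want to shrink it, a log device can be removed and re-added as a smaller partition; device names below are placeholders:

Code:
# watch how much of the log actually fills up under load
zpool iostat -v 1
# remove the oversized log device and re-add a smaller partition
zpool remove rpool /dev/sdb1
zpool add rpool log /dev/sdb3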
 
From the Proxmox docs: "The maximum size of a log device should be about half the size of physical memory."
So I figured about half of the 120 GB was about right. Best I change it?
 
Ok, don't embarrass me here. :)

Samsung SSD_850_PRO_128GB
850 Pros are used in many storage pools for log, so they should be OK.

Your benchmark is quite unsatisfactory. Here is something to compare with.
Note that this is a benchmark over NFS (10 Gb InfiniBand):
# pveperf /mnt/pve/omnios_ib_nfs/
CPU BOGOMIPS: 22399.28
REGEX/SECOND: 1137116
HD SIZE: 1037.46 GB (omnios:/vMotion/nfs)
FSYNCS/SECOND: 1829.59
DNS EXT: 25.24 ms
DNS INT: 0.97 ms (foo.tld)
 
Hmm, I just ran the test on the test machine and got similar results. Curious, since the machines have little in common.
The test machine is a consumer board with 6 Gbps SATA ports and 32 GB of RAM. I just now put a Samsung 850 Pro in it, set up the log and cache, and had about the same increase.
CPU BOGOMIPS: 24000.72
REGEX/SECOND: 2346159
HD SIZE: 2390.87 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 261.87
DNS EXT: 57.02 ms
DNS INT: 91.04 ms

The only things in common are the SATA RE4 drives.
 
RAID 10 with 4 drives.

From this site, https://icesquare.com/wordpress/how-to-improve-zfs-performance/,
I got this command: dd if=/dev/zero of=./file.out bs=1M count=10k
Here are the results; I don't know if they really mean anything.

dd if=/dev/zero of=./file.out bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 3.74864 s, 2.9 GB/s
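For what it's worth, writing zeros with no sync mostly measures RAM caching (and ZFS compression, if enabled), which is why the number looks so high. A variant like this forces the data out to disk before dd reports the rate:

Code:
# flush written data to disk before reporting throughput
dd if=/dev/zero of=./file.out bs=1M count=10k conv=fdatasync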

More info..
File copies from a VM running Samba on the test machine come to me at 65 MB/second.
 
Well, here's one for you guys. I added this to my smb.conf:
strict allocate = yes

And got the copies from the file server in a VM up from 64 MB/s to 130 MB/s.
Wow!!! Big difference!
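In case anyone wants to try it, the parameter can go in the [global] section (or per share); a minimal sketch:

Code:
[global]
    # preallocate file space on disk instead of writing sparse files
    strict allocate = yes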

As for pveperf?
I've no more ideas. It's just dead slow on 3 machines.
 
Well, I was wrong about Samba. It's only the second time I move the big file that it goes fast; maybe it's coming from the L2ARC the second time.
:(
I've got a 4th machine with 4 drives I can take the RAID card out of. More results in a bit.
 
Machine number 4:
Reset the BIOS and made sure it is set to AHCI.
Onboard controller, SATA 3.
4x 1 TB SATA RE drives, RAID 10.
Newest Proxmox install disk.
No log, no L2ARC.
As soon as the machine comes up: pveperf - fsync 115 average.
I see no reason to add log and/or L2ARC, because I've done that on 2 machines and it only raised the fsync to 250/300
(unless someone here wants me to try it AGAIN).

So, there we have it. That's what you get with SATA RAID 10 on Proxmox.
Are there any other tests anyone would care to try?
 
