Very Slow IO and Low Fsyncs - Dell R720, Proxmox VE 5.3 Latest

rshakin

New Member
Feb 17, 2019
Hi guys, I am having some weird issues with my setup. Here are the specs for the system I am trying to set up:

Dell R720 with E5-2640
128 GB DDR3

1 x SATA Dell SSD
1 x SAS Dell SSD
4 x SATA Mushkin SSDs
8 x SAS Dell 7.2k 1 TB drives

I am trying to set this up as a container server. Usage is going to be light: a Plex server and an ownCloud setup with Samba integration on my network, plus maybe some small Debian VMs provisioned for testing purposes.

Here's my current drive setup

4 x Mushkin SSDs, ZFS RAID 10 (OS drive / default VM storage rpool as created by the Proxmox installer, compression on, ashift=13)

8 x SAS 7.2k drives, RAIDZ1 (ZFS pool "tank", ashift=12, compression on)
1 x SATA Dell SSD as SLOG (tank pool)
1 x SAS Dell SSD as L2ARC (tank pool)

^ I know this is not the best setup for the tank pool, since I have SATA and SAS drives doing SLOG and L2ARC, but I am not worried about that yet, since my issues are on rpool. I will have a mirrored set of 100 GB SAS drives for SLOG and L2ARC on that pool as soon as they come in.
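For reference (assuming the "async13"/"async12" notes above refer to the pools' ashift values), ashift is just the base-2 exponent of the sector size ZFS writes with:

```shell
# ashift is log2 of the logical sector size ZFS uses:
# ashift=12 -> 4096-byte sectors, ashift=13 -> 8192-byte sectors.
echo $((1 << 12))   # 4096
echo $((1 << 13))   # 8192
# To check a live pool (read-only, safe):
#   zpool get ashift rpool tank
```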


It seems to me I should be able to get more performance out of the SSD drives, especially on fsyncs. I am also getting very high IO delay even when my VMs are hosted on the SSD array. What gives?

My test results are below, including a test of the tank pool.


Code:
CPU BOGOMIPS: 120011.16
REGEX/SECOND: 1420135
HD SIZE: 443.71 GB (rpool)
FSYNCS/SECOND: 1227.75
DNS EXT: 21.08 ms
DNS INT: 160.58 ms
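To put that rpool number in context: fsyncs/second is roughly the inverse of the per-commit latency, so a quick back-of-the-envelope check is:

```shell
# ~1228 fsyncs/s on an SSD mirror works out to ~0.8 ms per synchronous
# commit -- plausible for consumer SATA SSDs without power-loss
# protection, but well below what datacenter SSDs manage.
awk 'BEGIN { printf "%.2f ms per fsync\n", 1000 / 1227.75 }'
```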


Code:
root@thor:~# pveperf /tank/
CPU BOGOMIPS: 120011.16
REGEX/SECOND: 1500545
HD SIZE: 6054.20 GB (tank)   <- 8x1TB RAIDZ1 with SLOG dev
FSYNCS/SECOND: 351.62
DNS EXT: 22.32 ms
DNS INT: 161.05 ms



Code:
root@thor:/# pveperf tank/
CPU BOGOMIPS: 120011.16
REGEX/SECOND: 1383548
HD SIZE: 6054.20 GB (tank)   <- same, with L2ARC added
FSYNCS/SECOND: 130.29
DNS EXT: 22.43 ms
DNS INT: 160.71 ms
root@thor:/#



What's a good way to test actual performance on these? Because I think such low IOPS would be low even for spinning platter drives.


Code:
root@thor:/# dd if=/dev/zero of=/rpool/file.out bs=4096 count=10000000
10000000+0 records in
10000000+0 records out
40960000000 bytes (41 GB, 38 GiB) copied, 97.7424 s, 419 MB/s



Code:
root@thor:/# dd if=/dev/zero of=/tank/file.out bs=4096 count=10000000
10000000+0 records in
10000000+0 records out
40960000000 bytes (41 GB, 38 GiB) copied, 102.618 s, 399 MB/s
 
Those Mushkin drives are pretty fast to be putting up those kinds of numbers. I will run more tests when I get back to the system today. I wonder if I should just forgo the ZFS RAID and do hardware RAID on this setup like it was from the factory, to get the most IOPS out of it.


They seem to be based on the Samsung SM825. I think they are pretty old, right? (2011-2012?)

What you need to check is whether they are fast for synchronous writes.

Test on another drive if you have one (the benchmark will destroy the ZFS log).

check this blog:

https://www.sebastien-han.fr/blog/2...-if-your-ssd-is-suitable-as-a-journal-device/
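The test described there boils down to single-threaded 4k synchronous writes, which is the access pattern a ZFS SLOG sees. A sketch of the invocation (echoed as a string here rather than run, since pointing it at the wrong device is destructive; /tmp/slog-test.bin is a hypothetical scratch file, substitute a spare device to test the real hardware):

```shell
# Single job, queue depth 1, 4k sync writes: if a drive is slow here,
# it will make a poor SLOG no matter how fast its sequential specs are.
FIO_CMD='fio --name=sync4k --filename=/tmp/slog-test.bin --size=256M --rw=write --bs=4k --direct=1 --sync=1 --numjobs=1 --iodepth=1'
echo "$FIO_CMD"
```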
 

Just forget ZFS if you don't have a datacenter SSD for SLOG (you really need fast sync writes for the ZFS journal).
 
Well, the SAS Hitachi SSD is 100 percent a datacenter SSD, and so is the Samsung-based Dell one. Am I able to mix a SAS SSD for SLOG with a SATA SSD pool?
 
Please post:
Code:
arc_summary
zpool status
zfs get all <YOUR ZPOOL>

Do you have an HBA or a RAID controller? If a RAID controller, did you put it in "HBA" mode?

This is an LSI 2080-based HBA card. I can't get to the physical server for the next couple of days, but I will update the thread as soon as I get some numbers.
 
Here you go, that's the output of all those commands.
 

Attachments

  • arcsummary.txt (10.8 KB)
  • zfs_getall.txt (3.7 KB)
  • zpool.txt (1 KB)
I just had a look at the zpool status output:
Code:
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdl3    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdm     ONLINE       0     0     0
            sdn     ONLINE       0     0     0

It looks like mirror-0 is built not from whole disks but from partitions (sda3, sdl3) of two disks. If those two disks have other partitions and host the OS, that would greatly impact zpool performance.
By the way, a SLOG needs 16 GB by default.
In my experience, if RAM is more than 100 GB the L2ARC drive can be skipped; just increase the default 50%-of-RAM cap that is used for ARC.
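Raising that ARC cap is done through the zfs_arc_max module parameter, which takes bytes. A sketch, assuming you wanted to give a 128 GB box a 96 GiB ARC (the 96 GiB figure is just an example):

```shell
# zfs_arc_max is set in bytes; compute 96 GiB and emit the modprobe
# line (it would go in /etc/modprobe.d/zfs.conf, followed by an
# initramfs rebuild and reboot to take effect).
ARC_MAX=$((96 * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=$ARC_MAX"
```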
The rpool's 4k performance looks normal to me, caused by the recordsize being set to 128K.
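On that recordsize point: a small write landing inside an existing 128K record makes ZFS read, modify, and rewrite the whole record, so the raw amplification factor for 4k IO is:

```shell
# 128 KiB record / 4 KiB write = 32x read-modify-write amplification;
# this is why 4k benchmarks on recordsize=128K datasets look slow, and
# why VM/zvol workloads often use a smaller block size.
echo $((131072 / 4096))
```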
 
