Hi all, I'm running into some odd issues with my setup. Here are the specs of the system I'm building:
Dell R720 with E5-2640
128 GB DDR3
1 x SATA Dell SSD
1 x SAS Dell SSD
4 x SATA Mushkin SSD
8 x SAS Dell 7.2k RPM 1 TB drives
I am trying to set this up as a container server. Usage will be light: a Plex server and an ownCloud setup with Samba integration on my network, plus maybe a few more Debian VMs with small-distro provisioning for testing purposes.
Here's my current drive setup:
4 x Mushkin SSDs, ZFS RAID 10 (OS drive and VM storage; the default rpool as created by the Proxmox installer, compression on, ashift=13)
8 x SAS 7.2k drives, RAIDZ1 (ZFS pool "tank", ashift=12, compression on)
1 x SATA Dell SSD as SLOG (tank pool)
1 x SAS Dell SSD as L2ARC (tank pool)
^ I know this isn't the best setup for the tank pool, since I have SATA and SAS drives serving as SLOG and L2ARC, but I'm not worried about that yet because my issues are on the rpool. I will move to a mirrored set of 100 GB SAS drives for SLOG and L2ARC on that pool as soon as they come in.
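For reference, this is roughly how the new devices would be attached once they arrive. The device paths are placeholders for whatever your disks enumerate as; note that ZFS only allows mirroring for log vdevs, not cache vdevs:

```shell
# Add a mirrored SLOG (log vdev) to tank -- /dev/sdx and /dev/sdy
# are placeholder device names
zpool add tank log mirror /dev/sdx /dev/sdy

# Add an L2ARC (cache vdev) -- cache devices cannot be mirrored;
# multiple cache devices are striped instead
zpool add tank cache /dev/sdz

# Confirm the resulting vdev layout
zpool status tank
```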
It seems to me I should be getting more performance out of the SSDs, especially on fsyncs. I am also seeing very high IO delay even when my VMs are hosted on the SSD array. What gives?
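To narrow down where the IO delay is actually coming from, watching per-vdev statistics while the VMs are under load can help. A generic sketch:

```shell
# Per-vdev throughput and operation counts, refreshed every second
zpool iostat -v 1

# On recent ZFS releases, -l adds per-vdev latency columns
zpool iostat -vl 1

# Kernel-level per-disk utilization and await times
# (iostat is in the sysstat package on Debian/Proxmox)
iostat -xz 1
```

If one disk in the SSD mirror shows much higher await/utilization than its siblings, that single slow device can drag the whole pool down.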
My test results are below, including a test of the tank pool.
CPU BOGOMIPS: 120011.16
REGEX/SECOND: 1420135
HD SIZE: 443.71 GB (rpool)
FSYNCS/SECOND: 1227.75
DNS EXT: 21.08 ms
DNS INT: 160.58 ms
root@thor:~# pveperf /tank/
CPU BOGOMIPS: 120011.16
REGEX/SECOND: 1500545
HD SIZE: 6054.20 GB (tank) 8x1TB RAIDZ1 with SLOG dev
FSYNCS/SECOND: 351.62
DNS EXT: 22.32 ms
DNS INT: 161.05 ms
root@thor:/# pveperf tank/
CPU BOGOMIPS: 120011.16
REGEX/SECOND: 1383548
HD SIZE: 6054.20 GB (tank) same, with L2ARC
FSYNCS/SECOND: 130.29
DNS EXT: 22.43 ms
DNS INT: 160.71 ms
root@thor:/#
What's a good way to test actual performance on these? Even for spinning platter drives, those fsync numbers seem very low to me.
root@thor:/# dd if=/dev/zero of=/rpool/file.out bs=4096 count=10000000
10000000+0 records in
10000000+0 records out
40960000000 bytes (41 GB, 38 GiB) copied, 97.7424 s, 419 MB/s
root@thor:/# dd if=/dev/zero of=/tank/file.out bs=4096 count=10000000
10000000+0 records in
10000000+0 records out
40960000000 bytes (41 GB, 38 GiB) copied, 102.618 s, 399 MB/s
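One caveat with the dd tests above: writing from /dev/zero to a dataset with compression enabled mostly measures how fast ZFS can compress zeros, not the disks, and dd issues async buffered writes, while pveperf's FSYNCS number measures synchronous writes. A closer comparison would be a sync random-write test with fio (package `fio` on Debian), which writes incompressible data by default. This is a sketch; the directory, size, and runtime are arbitrary choices:

```shell
# Sync 4K random writes with an fsync after every write --
# roughly the pattern pveperf's FSYNCS benchmark stresses
fio --name=syncwrite --directory=/rpool --rw=randwrite \
    --bs=4k --size=4G --ioengine=psync --fsync=1 \
    --runtime=60 --time_based --group_reporting

# Async 4K random reads for comparison
fio --name=randread --directory=/rpool --rw=randread \
    --bs=4k --size=4G --ioengine=libaio --iodepth=32 \
    --runtime=60 --time_based --group_reporting
```

Repeat with `--directory=/tank` to compare the pools, and remove the test files afterwards.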