Hi,
I am seeing a huge performance difference between SSDs in a ZFS RAID configuration:
We are using two identical servers (Thomas Krenn Server-Tower Intel Single-CPU TI106S with a
Supermicro X11SSH-F board and the onboard 8x SATA-3 (6 Gb/s) software RAID controller, C236 chipset).
The system root is installed on a ZFS RAID1 (mirror) of 2x Samsung SSD PM863a 240 GB.
VM data was on a ZFS RAID10 of 4x 4 TB enterprise-grade SATA HDDs.
After two years of operation, and with SSD prices having dropped considerably, I decided to replace the HDDs holding the VM data with SSDs.
First I bought 5x 2 TB Samsung SSD 860 Pro to build a ZFS RAIDZ1 on one server.
As prices kept dropping, I had a chance to get 4x 4 TB Samsung SSD PM883 to build a ZFS RAID10 on the other.
Comparing the two systems shows a really striking difference between them, even after rearranging the Samsung SSD 860 Pros as a ZFS RAID10:
System 1 (SSD PM863a, Samsung SSD PM883):
pveperf /rpool/ROOT/
CPU BOGOMIPS: 24000.00
REGEX/SECOND: 3155443
HD SIZE: 180.31 GB (rpool/ROOT)
FSYNCS/SECOND: 6691.77
DNS EXT: 83.49 ms
DNS INT: 1.42 ms
pveperf /tank/vmdata
CPU BOGOMIPS: 24000.00
REGEX/SECOND: 3112874
HD SIZE: 3556.48 GB (tank/vmdata)
FSYNCS/SECOND: 6525.11
DNS EXT: 87.03 ms
DNS INT: 1.49 ms
System 2 (SSD PM863a, Samsung SSD 860 Pro):
pveperf /rpool/ROOT/
CPU BOGOMIPS: 24000.00
REGEX/SECOND: 3402691
HD SIZE: 172.30 GB (rpool/ROOT)
FSYNCS/SECOND: 6979.83
DNS EXT: 100.43 ms
DNS INT: 1.86 ms
pveperf /tank/vmdata/ (zfs RAIDZ1)
CPU BOGOMIPS: 24000.00
REGEX/SECOND: 3315807
HD SIZE: 7379.57 GB (tank/vmdata)
FSYNCS/SECOND: 767.28
DNS EXT: 73.96 ms
DNS INT: 1.63 ms
pveperf /tank/vmdata/ (zfs RAID10)
CPU BOGOMIPS: 24000.00
REGEX/SECOND: 3416612
HD SIZE: 3689.00 GB (tank/vmdata)
FSYNCS/SECOND: 728.48
DNS EXT: 104.19 ms
DNS INT: 1.44 ms
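To double-check the FSYNCS/SECOND numbers outside of pveperf, a rough probe can be run directly on the dataset in question. This is just a sketch of what that metric measures (repeated small synchronous writes); the file name and iteration count are arbitrary, and pveperf's exact loop may differ:

```shell
#!/bin/sh
# Rough fsync-rate probe: write a 4K block with an fsync N times in a row,
# then report operations per second. Run it inside the dataset's mountpoint.
TARGET=./fsync_probe.dat   # arbitrary scratch file on the pool under test
N=200                      # arbitrary iteration count
START=$(date +%s%N)
i=0
while [ "$i" -lt "$N" ]; do
  # conv=fsync forces dd to fsync the file before exiting
  dd if=/dev/zero of="$TARGET" bs=4k count=1 conv=fsync status=none
  i=$((i+1))
done
END=$(date +%s%N)
ELAPSED_MS=$(( (END - START) / 1000000 ))
RATE=$(( N * 1000 / ELAPSED_MS ))
echo "fsyncs/second: $RATE"
rm -f "$TARGET"
```

This includes per-iteration process-spawn overhead, so it will read somewhat lower than pveperf, but the consumer/enterprise gap should still be clearly visible.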
Pools were created with:
zpool create -f -o ashift=9 tank raidz1 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
zpool create -f -o ashift=9 tank mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
(I also tested with ashift=12 for the Samsung SSD 860 Pro pool, but that made no difference either.)
Is the performance difference between consumer-grade SSDs (860 Pro) and enterprise SSDs (PM883) really that huge?
And why is there no performance difference between a ZFS RAIDZ1 and a ZFS RAID10?
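For reference, the sector sizes the drives report (which is what the ashift choice is based on) can be read from sysfs. A small sketch; note that many SATA SSDs report 512-byte logical sectors even when they are 4K internally, which is why ashift=12 is the usual recommendation for SSDs regardless of what is reported:

```shell
#!/bin/sh
# Print the logical and physical sector size each SATA disk reports.
# ashift=9 corresponds to 512-byte sectors, ashift=12 to 4K sectors.
for d in /sys/block/sd*; do
  [ -e "$d" ] || continue
  printf '%s: logical=%s physical=%s\n' \
    "$(basename "$d")" \
    "$(cat "$d/queue/logical_block_size")" \
    "$(cat "$d/queue/physical_block_size")"
done
```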