Hi,
Note: in the output below, drive serial numbers are obfuscated with XXXXXXXXXXXXX and my internal domain with somedomain.com.
Currently I am testing my new Proxmox 4.4 home server with the following configuration:
- 1x motherboard - ASRock C2750D4I
- 6x SSD - Samsung 850 EVO 500GB
- 4x memory - Kingston Technology 8GB 1600MHz DDR3L ECC module KVR16LE11/8
- 1x power supply - be quiet! Pure Power L8 300W (BN220)
- 1x case - Fractal Design Node 304
- tests were performed with no VMs running
- ashift is 12 (verified below, together with compression and atime)
- the whole of each SSD is used
- the ZFS pool includes the root filesystem
- compression is on (lz4)
- atime is off
- drives are connected directly to the onboard SATA3 ports
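For completeness, the ashift, compression, and atime settings can be double-checked with the commands below (the zdb call assumes the default zpool cachefile is in place):
Code:
# ashift as recorded in the cached pool configuration
zdb -C rpool | grep ashift
# compression, atime and recordsize on the root dataset
zfs get compression,atime,recordsize rpool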
With the following bonnie++ test I get this result:
Code:
root@pve1:~# bonnie++ -u root -r 1024 -s 16384 -d /rpool -f -b -n 1 -c 8
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 8 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
pve1 16G 341123 91 255745 98 1011765 99 5241 81
Latency 10201us 26746us 54778us 450ms
Version 1.97 ------Sequential Create------ --------Random Create--------
pve1 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
1 131 2 +++++ +++ 160 3 203 4 +++++ +++ 204 3
Latency 4624us 144us 22984us 4590us 12us 4595us
1.97,1.97,pve1,8,1482783936,16G,,,,341123,91,255745,98,,,1011765,99,5241,81,1,,,,,131,2,+++++,+++,160,3,203,4,+++++,+++,204,3,,10201us,26746us,,54778us,450ms,4624us,144us,22984us,4590us,12us,4595us
Summarized:
Write: 333 MByte/sec
Rewrite: 249 MByte/sec
Read: 988 MByte/sec
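The summary values are simply the bonnie++ block K/sec columns converted to MByte/sec:
Code:
# bonnie++ reports block throughput in K/sec (KiB/s); divide by 1024 for MByte/sec
echo $(( 341123 / 1024 )) $(( 255745 / 1024 )) $(( 1011765 / 1024 ))
# prints: 333 249 988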
And these are my pveperf results
Code:
root@pve1:~# pveperf /rpool
CPU BOGOMIPS: 38401.52
REGEX/SECOND: 394533
HD SIZE: 1639.99 GB (rpool)
FSYNCS/SECOND: 637.96
DNS EXT: 45.46 ms
DNS INT: 16.91 ms (somedomain.com)
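To see how much of the FSYNCS/SECOND figure is limited by sync-write handling, one diagnostic I still plan to run is to repeat pveperf with sync temporarily disabled (not safe for normal operation, so it gets reverted right away):
Code:
zfs set sync=disabled rpool   # diagnostic only: fsync no longer waits for the ZIL
pveperf /rpool                # re-run the benchmark
zfs set sync=standard rpool   # revert immediately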
I was expecting more performance from 6 SSDs in RAIDZ2.
I found a test where someone got roughly twice the performance out of 6x 256GB SSD drives:
source: https://calomel.org/zfs_raid_speed_capacity.html
6x 256GB raid6, raidz2 933 gigabytes ( w= 721MB/s , rw=530MB/s , r=1754MB/s )
Questions:
1) Is this the performance I can expect from 6x Samsung 850 EVO in a RAIDZ2 configuration?
2) If not, what should I change about my ZFS/Proxmox settings?
3) How can I set metaslab_lba_weighting_enabled to 0 and make it persistent across reboots?
4) I am only using SSDs; shouldn't that give more performance?
By default, metaslab_lba_weighting_enabled=1 on my system:
Code:
root@pve1:~# cat /sys/module/zfs/parameters/metaslab_lba_weighting_enabled
1
What I tried in order to change metaslab_lba_weighting_enabled to 0 (neither worked; my next attempt is sketched below):
1) put options zfs metaslab_lba_weighting_enabled=0 in /lib/modules-load.d/zfs.conf
2) put options zfs metaslab_lba_weighting_enabled=0 in /etc/modprobe.d/zfs.conf
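From what I understand, /lib/modules-load.d/ only lists module names and ignores option lines, and since the root filesystem is on ZFS the module is loaded from the initramfs, so the /etc/modprobe.d/zfs.conf entry only takes effect after the initramfs is rebuilt. My next attempt (not yet verified on this machine) is:
Code:
# immediate, non-persistent change via sysfs (should work if the parameter is writable at runtime)
echo 0 > /sys/module/zfs/parameters/metaslab_lba_weighting_enabled
cat /sys/module/zfs/parameters/metaslab_lba_weighting_enabled
# persistent change: keep the "options zfs metaslab_lba_weighting_enabled=0" line
# in /etc/modprobe.d/zfs.conf, rebuild the initramfs so the option is present
# when the zfs module is loaded at boot, then reboot
update-initramfs -u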
Below you can find more detailed information about my software versions and pool setup.
Code:
root@pve1:~# pveversion -v
proxmox-ve: 4.4-76 (running kernel: 4.4.35-1-pve)
pve-manager: 4.4-2 (running version: 4.4-2/80259e05)
pve-kernel-4.4.35-1-pve: 4.4.35-76
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-102
pve-firmware: 1.1-10
libpve-common-perl: 4.0-84
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-70
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.0-9
pve-container: 1.0-89
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.6-2
lxcfs: 2.0.5-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
Code:
root@pve1:~# zpool status -v
pool: rpool
state: ONLINE
scan: scrub repaired 0 in 0h5m with 0 errors on Mon Dec 26 14:09:15 2016
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
ata-Samsung_SSD_850_EVO_500GB_S2XXXXXXXXXXXXX-part2 ONLINE 0 0 0
ata-Samsung_SSD_850_EVO_500GB_S2XXXXXXXXXXXXX-part2 ONLINE 0 0 0
ata-Samsung_SSD_850_EVO_500GB_S2XXXXXXXXXXXXX-part2 ONLINE 0 0 0
ata-Samsung_SSD_850_EVO_500GB_S2XXXXXXXXXXXXX-part2 ONLINE 0 0 0
ata-Samsung_SSD_850_EVO_500GB_S2XXXXXXXXXXXXX-part2 ONLINE 0 0 0
ata-Samsung_SSD_850_EVO_500GB_S2XXXXXXXXXXXXX-part2 ONLINE 0 0 0
Code:
root@pve1:~# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 2.72T 222G 2.50T - 3% 7% 1.00x ONLINE -
Code:
root@pve1:~# sysctl vm.swappiness
vm.swappiness = 10
Code:
root@pve1:~# service ksmtuned status
● ksmtuned.service - Kernel Samepage Merging (KSM) Tuning Daemon
Loaded: loaded (/lib/systemd/system/ksmtuned.service; disabled)
Active: inactive (dead)
Code:
root@pve1:~# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 30465440 0 152460 0 0 3157 1 324 28 1 3 96 0 0
Code:
root@pve1:~# zfs get all rpool
NAME PROPERTY VALUE SOURCE
rpool type filesystem -
rpool creation Sun Dec 25 13:23 2016 -
rpool used 156G -
rpool available 1.60T -
rpool referenced 192K -
rpool compressratio 1.12x -
rpool mounted yes -
rpool quota none default
rpool reservation none default
rpool recordsize 128K default
rpool mountpoint /rpool default
rpool sharenfs off default
rpool checksum on default
rpool compression lz4 local
rpool atime off local
rpool devices on default
rpool exec on default
rpool setuid on default
rpool readonly off default
rpool zoned off default
rpool snapdir hidden default
rpool aclinherit restricted default
rpool canmount on default
rpool xattr sa local
rpool copies 1 default
rpool version 5 -
rpool utf8only off -
rpool normalization none -
rpool casesensitivity sensitive -
rpool vscan off default
rpool nbmand off default
rpool sharesmb off default
rpool refquota none default
rpool refreservation none default
rpool primarycache all default
rpool secondarycache all default
rpool usedbysnapshots 0 -
rpool usedbydataset 192K -
rpool usedbychildren 156G -
rpool usedbyrefreservation 0 -
rpool logbias latency default
rpool dedup off default
rpool mlslabel none default
rpool sync standard local
rpool refcompressratio 1.00x -
rpool written 192K -
rpool logicalused 111G -
rpool logicalreferenced 40K -
rpool filesystem_limit none default
rpool snapshot_limit none default
rpool filesystem_count none default
rpool snapshot_count none default
rpool snapdev hidden default
rpool acltype off default
rpool context none default
rpool fscontext none default
rpool defcontext none default
rpool rootcontext none default
rpool relatime off default
rpool redundant_metadata all default
rpool overlay off default