ZFS 2x2TB HDD pool with SSD LOG + L2ARC - slow writes, high IO wait? Need your advice

bitblue.lab

Member
Oct 7, 2015
I have a node running Proxmox 4 with 2x2TB drives and 2 SSDs. The two 2TB drives form a mirrored ZFS pool used as storage for the VMs; a 25 GB partition from each SSD is mirrored as the log device, and the remaining partitions on both SSDs serve as L2ARC read cache, about 150 GB in total.
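For reference, a pool with the layout described above could be created roughly like this (device and partition names here are illustrative guesses, not taken from this setup):

```shell
# Illustrative sketch only -- adjust device/partition names to your system.
# Mirrored 2x2TB data vdev:
zpool create storagepool mirror /dev/sdc /dev/sdd
# Mirrored 25 GB SSD partitions as the SLOG (dedicated ZIL device):
zpool add storagepool log mirror /dev/sda3 /dev/sdb3
# Remaining SSD partitions as (striped, non-redundant) L2ARC:
zpool add storagepool cache /dev/sda4 /dev/sdb4
```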

PVEPERF on the ZFS pool showed 5000+ FSYNCS/SECOND right after installation; I am not running it now because there are 7 VMs on the pool.

The SSD log (ZIL) never allocates more than 20-50 MB.
The L2ARC fills up to 20-25 GB used.

Writes inside a Windows Server 2008 R2 VM are slow: when I copy a file it starts fast, then stalls at 0 MB/s, and after some time it picks up again.

IO wait is high, 30-65% most of the time, while average CPU stays under 15-20% because the clients' VMs are not heavily used. I haven't done any custom tuning except setting the ZFS ARC max to 8 GB and min to 4 GB, and enabling lz4 compression.
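For context, on Proxmox these ARC limits are normally set via ZFS module parameters in a modprobe options file; the snippet below mirrors the 8 GB / 4 GB figures mentioned and is an assumption about how this box is configured, not taken from it:

```
# /etc/modprobe.d/zfs.conf -- values are in bytes
options zfs zfs_arc_max=8589934592 zfs_arc_min=4294967296
```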

In iotop the disks rarely exceed 15-20 MB/s, and most of the time they stay below that.

zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storagepool   791G  1.04T    246    327  1.23M  2.56M


               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storagepool   791G  1.04T      0    537      0  2.84M
  mirror     791G  1.04T      0    523      0  2.41M
    sdc         -      -      0    439      0  2.69M
    sdd         -      -      0    388      0  2.37M
logs            -      -      -      -      -      -
  mirror    26.0M  24.8G      0     13      0   440K
    sda3        -      -      0     13      0   440K
    sdb3        -      -      0     13      0   440K
cache           -      -      -      -      -      -
  sda4      11.5G  63.5G     20      0  45.0K      0
  sdb4      11.4G  63.5G     17    161  57.5K   459K
----------  -----  -----  -----  -----  -----  -----

The system has 64 GB RAM and an Intel E5-1630 v3 CPU; the drives are connected to 6 Gb/s SATA ports without any RAID card.

I'd appreciate your recommendations or advice. I've used Proxmox for more than two years and was very happy with it; now I've upgraded to this better server and switched to ZFS, but the IO wait and slow writes are wearing me out.
 
Code:
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdc               0.00     0.00    0.80  359.40     3.20  3618.40    20.11     1.45    2.80  153.00    2.47   2.47  88.88

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdd               0.00     0.00    0.60  353.00     3.20  4555.20    25.78     1.12    3.16  484.00    2.34   2.40  84.72


               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storagepool   792G  1.04T      3    342  16.0K  2.69M
  mirror     792G  1.04T      3    334  16.0K  2.45M
    sdc         -      -      1    333  8.00K  2.47M
    sdd         -      -      1    337  8.00K  2.49M
logs            -      -      -      -      -      -
  mirror    20.6M  24.9G      0      7      0   244K
    sda3        -      -      0      7      0   244K
    sdb3        -      -      0      7      0   244K
cache           -      -      -      -      -      -
  sda4      12.3G  62.7G      1      0  5.00K      0
  sdb4      12.2G  62.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

This is what my system currently shows.
 
Code:
NAME          SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
storagepool  1.81T   792G  1.04T         -    28%    42%  1.00x  ONLINE  -


 zpool get all storagepool
NAME         PROPERTY                    VALUE                       SOURCE
storagepool  size                        1.81T                       -
storagepool  capacity                    42%                         -
storagepool  altroot                     -                           default
storagepool  health                      ONLINE                      -
storagepool  guid                        12614093431441474024        default
storagepool  version                     -                           default
storagepool  bootfs                      -                           default
storagepool  delegation                  on                          default
storagepool  autoreplace                 off                         default
storagepool  cachefile                   -                           default
storagepool  failmode                    wait                        default
storagepool  listsnapshots               off                         default
storagepool  autoexpand                  off                         default
storagepool  dedupditto                  0                           default
storagepool  dedupratio                  1.00x                       -
storagepool  free                        1.04T                       -
storagepool  allocated                   792G                        -
storagepool  readonly                    off                         -
storagepool  ashift                      12                          local
storagepool  comment                     -                           default
storagepool  expandsize                  -                           -
storagepool  freeing                     0                           default
storagepool  fragmentation               28%                         -
storagepool  leaked                      0                           default
storagepool  feature@async_destroy       enabled                     local
storagepool  feature@empty_bpobj         active                      local
storagepool  feature@lz4_compress        active                      local
storagepool  feature@spacemap_histogram  active                      local
storagepool  feature@enabled_txg         active                      local
storagepool  feature@hole_birth          active                      local
storagepool  feature@extensible_dataset  enabled                     local
storagepool  feature@embedded_data       active                      local
storagepool  feature@bookmarks           enabled                     local
storagepool  feature@filesystem_limits   enabled                     local
storagepool  feature@large_blocks        enabled                     local


zfs list -o name,compression,recordsize
NAME                       COMPRESS  RECSIZE
storagepool                     lz4     128K
storagepool/vm-100-disk-1       lz4        -
storagepool/vm-101-disk-1       lz4        -
storagepool/vm-102-disk-1       lz4        -
storagepool/vm-103-disk-2       lz4        -
storagepool/vm-103-disk-3       lz4        -
storagepool/vm-104-disk-1       lz4        -
storagepool/vm-109-disk-1       lz4        -
storagepool/vm-110-disk-1       lz4        -
storagepool/vm-111-disk-1       lz4        -
storagepool/vm-112-disk-1       lz4        -
storagepool/vm-280-disk-1       lz4        -
storagepool/vm-280-disk-2       lz4        -
storagepool/vm-300-disk-1       lz4        -
storagepool/vm-500-disk-1       lz4        -
storagepool/vm-600-disk-1       lz4        -
storagepool/vm-600-disk-2       lz4        -
 
What do you mean by that? All my virtual machines are on storagepool. When I copy a big file it goes fast for, say, the first 500 MB, then drops to 0 and hangs for a bit, then continues slowly at 20-30 MB/s or less, and the pattern repeats.
 
How should I do that test - with dd? Can you help with that? As I said in the topic, my two problems are slow write speed and high IO wait.
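For what it's worth, a rough dd comparison like the sketch below is a common way to separate the two write paths: `oflag=sync` forces every block through the ZIL/SLOG, while the plain run shows buffered (async) throughput. The target path is a placeholder of mine - point it at a directory on storagepool.

```shell
# Rough write-test sketch -- TARGET is a placeholder, use a dataset on storagepool.
TARGET=${TARGET:-/tmp/zfs-write-test}
mkdir -p "$TARGET"

# Synchronous 4K writes: each block waits for stable storage,
# so this exercises the ZIL/SLOG path (similar to pveperf's fsync test).
dd if=/dev/zero of="$TARGET/sync.bin" bs=4k count=2560 oflag=sync

# Buffered writes for comparison: these collect in RAM and are flushed
# with the transaction group, showing async throughput instead.
dd if=/dev/zero of="$TARGET/async.bin" bs=4k count=2560
```

Watch `zpool iostat -v 1` while each run is going; only the sync run should light up the log mirror.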
 
This is while downloading a 10 GB file at 112 MB/s:
Code:
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storagepool   794G  1.04T      0    849  4.00K   106M
  mirror     794G  1.04T      0    842  4.00K   105M
    sdc         -      -      0    827      0   103M
    sdd         -      -      0    851  4.00K   106M
logs            -      -      -      -      -      -
  mirror    3.57M  24.9G      0      6      0   232K
    sda3        -      -      0      6      0   232K
    sdb3        -      -      0      6      0   232K
cache           -      -      -      -      -      -
  sda4      13.4G  61.6G      9      0  20.0K      0
  sdb4      13.3G  61.7G      4    143  10.5K  14.7M
----------  -----  -----  -----  -----  -----  -----

This is while copy-pasting a 10 GB file:
Code:
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storagepool   804G  1.03T     25    984   296K   121M
  mirror     804G  1.03T     25    926   296K   116M
    sdc         -      -     13    943   188K   118M
    sdd         -      -     11    926   108K   116M
logs            -      -      -      -      -      -
  mirror    6.07M  24.9G      0     57      0  5.12M
    sda3        -      -      0     57      0  5.12M
    sdb3        -      -      0     57      0  5.12M
cache           -      -      -      -      -      -
  sda4      19.2G  55.8G     15      0  34.5K      0
  sdb4      19.0G  56.0G     24      0  53.5K      0
----------  -----  -----  -----  -----  -----  -----
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storagepool   811G  1.02T     76    983  6.62M  99.1M
  mirror     811G  1.02T     76    969  6.62M  98.5M
    sdc         -      -     25    794  1.38M  98.5M
    sdd         -      -     50    786  5.24M  97.3M
logs            -      -      -      -      -      -
  mirror    93.4M  24.8G      0     13      0   628K
    sda3        -      -      0     13      0   628K
    sdb3        -      -      0     13      0   628K
cache           -      -      -      -      -      -
  sda4      20.7G  54.3G  1.01K      0   126M      0
  sdb4      20.6G  54.4G  1.01K    733   127M  19.1M
----------  -----  -----  -----  -----  -----  -----

Code:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           7.74    0.00    4.48   43.14    0.00   44.65


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          13.33    0.00    2.47   40.79    0.00   43.41
 
Code:
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdc               0.00     0.00    0.80  359.40     3.20  3618.40    20.11     1.45    2.80  153.00    2.47   2.47  88.88

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdd               0.00     0.00    0.60  353.00     3.20  4555.20    25.78     1.12    3.16  484.00    2.34   2.40  84.72


               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storagepool   792G  1.04T      3    342  16.0K  2.69M
  mirror     792G  1.04T      3    334  16.0K  2.45M
    sdc         -      -      1    333  8.00K  2.47M
    sdd         -      -      1    337  8.00K  2.49M
logs            -      -      -      -      -      -
  mirror    20.6M  24.9G      0      7      0   244K
    sda3        -      -      0      7      0   244K
    sdb3        -      -      0      7      0   244K
cache           -      -      -      -      -      -
  sda4      12.3G  62.7G      1      0  5.00K      0
  sdb4      12.2G  62.8G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

This is what my system currently shows.

It is strange that you have a lot more writes on your storagepool than on the log SSDs.
Which cache mode do you use for your VMs' disks?
 
Isn't write-back caching asynchronous writing in ZFS? Then your ZIL is useless and you have to wait for your disks - hence the low ZIL usage (already pointed out to you in other threads of yours).

Normally you fill your RAM with dirty pages and it eventually syncs to disk (every 5 seconds on ZFS, IMHO). Depending on the data being written you are limited by ONE rotational disk - and you expect fast response times with only one disk? Depending on the data, the worst case is close to random I/O, e.g. a sequential stream mixed with snapshots, where write speed can drop to a few KB/s. Your speed observations fit this scenario perfectly.
 
It is not that bad. The 5-second txg flush period is also there so the writes can be reordered to be more sequential.
File copying outside or inside the VM is asynchronous AFAIK, unless you run the whole VM in sync mode or set sync=always on its ZVOL.

One issue might be the fairly high fragmentation: 28%. Another is something I've hit myself a few times: slow I/O and low ARC usage despite lots of free memory, which an `echo 3 > /proc/sys/vm/drop_caches` cures (the ARC grows back to its limit and I/O is fast again). Until the next time.
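On ZFS on Linux the flush period mentioned above is exposed as a module parameter, and the cache workaround is the usual sysctl echo; both are sketched below (they need root, and the parameter path assumes ZoL):

```shell
# Inspect the txg flush interval (default 5 seconds on ZFS on Linux)
cat /sys/module/zfs/parameters/zfs_txg_timeout
# Drop the page cache so the ARC can grow back to its configured limit
echo 3 > /proc/sys/vm/drop_caches
```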
 
Useful information:

# cat /sys/module/zfs/parameters/zfs_arc_max
# zfs get dedup,atime,copies,primarycache,secondarycache


Can you try `# zfs set sync=disabled storagepool` to see whether it lowers the IO wait?
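Note that `sync=disabled` drops sync-write guarantees entirely (data loss on power failure), so treat it as a diagnostic only; something like this sketch reverts it afterwards:

```shell
# Diagnostic only: disables sync semantics pool-wide (unsafe on power loss)
zfs set sync=disabled storagepool
# ...measure IO wait, then restore the default behaviour...
zfs set sync=standard storagepool
```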

Can you post the SMART information for your SSDs?


One issue might be the (pretty) big fragmentation: 28%.


It's not file fragmentation, it's free-space block fragmentation. Before I rebuilt one of my pools it was at 58% and the pool still ran fine.
 
Of course there are good and bad experiences at any fragmentation level. And of course it is about free-space fragmentation, because we are talking about slow write performance: fragmented files do not affect write performance, but fragmented free space does (more time spent hunting for places to write to), just like allocating memory from a fragmented heap.

Anyway, what is interesting is that although the writes are slow (~4 MB/s per vdev), the very small reads are much, much slower (hundreds of ms of r_await at 0.6-0.8 reads/s).
 
Useful information:

# cat /sys/module/zfs/parameters/zfs_arc_max
# zfs get dedup,atime,copies,primarycache,secondarycache


Can you try `# zfs set sync=disabled storagepool` to see whether it lowers the IO wait?

Can you post the SMART information for your SSDs?





It's not file fragmentation, it's free-space block fragmentation. Before I rebuilt one of my pools it was at 58% and the pool still ran fine.

Code:
cat /sys/module/zfs/parameters/zfs_arc_max
8589934592


zfs get dedup,atime,copies,primarycache,secondarycache
NAME                       PROPERTY        VALUE           SOURCE
storagepool                dedup           off             default
storagepool                atime           off             local
storagepool                copies          1               default
storagepool                primarycache    all             local
storagepool                secondarycache  all             local
storagepool/vm-100-disk-1  dedup           off             default
storagepool/vm-100-disk-1  atime           -               -
storagepool/vm-100-disk-1  copies          1               default
storagepool/vm-100-disk-1  primarycache    all             inherited from storagepool
storagepool/vm-100-disk-1  secondarycache  all             inherited from storagepool
storagepool/vm-101-disk-1  dedup           off             default
storagepool/vm-101-disk-1  atime           -               -
storagepool/vm-101-disk-1  copies          1               default
storagepool/vm-101-disk-1  primarycache    all             inherited from storagepool
storagepool/vm-101-disk-1  secondarycache  all             inherited from storagepool
storagepool/vm-102-disk-1  dedup           off             default
storagepool/vm-102-disk-1  atime           -               -
storagepool/vm-102-disk-1  copies          1               default
storagepool/vm-102-disk-1  primarycache    all             inherited from storagepool
storagepool/vm-102-disk-1  secondarycache  all             inherited from storagepool
storagepool/vm-103-disk-2  dedup           off             default
storagepool/vm-103-disk-2  atime           -               -
storagepool/vm-103-disk-2  copies          1               default
storagepool/vm-103-disk-2  primarycache    all             inherited from storagepool
storagepool/vm-103-disk-2  secondarycache  all             inherited from storagepool
storagepool/vm-103-disk-3  dedup           off             default
storagepool/vm-103-disk-3  atime           -               -
storagepool/vm-103-disk-3  copies          1               default
storagepool/vm-103-disk-3  primarycache    all             inherited from storagepool
storagepool/vm-103-disk-3  secondarycache  all             inherited from storagepool
storagepool/vm-104-disk-1  dedup           off             default
storagepool/vm-104-disk-1  atime           -               -
storagepool/vm-104-disk-1  copies          1               default
storagepool/vm-104-disk-1  primarycache    all             inherited from storagepool
storagepool/vm-104-disk-1  secondarycache  all             inherited from storagepool
storagepool/vm-109-disk-1  dedup           off             default
storagepool/vm-109-disk-1  atime           -               -
storagepool/vm-109-disk-1  copies          1               default
storagepool/vm-109-disk-1  primarycache    all             inherited from storagepool
storagepool/vm-109-disk-1  secondarycache  all             inherited from storagepool
storagepool/vm-110-disk-1  dedup           off             default
storagepool/vm-110-disk-1  atime           -               -
storagepool/vm-110-disk-1  copies          1               default
storagepool/vm-110-disk-1  primarycache    all             inherited from storagepool
storagepool/vm-110-disk-1  secondarycache  all             inherited from storagepool
storagepool/vm-111-disk-1  dedup           off             default
storagepool/vm-111-disk-1  atime           -               -
storagepool/vm-111-disk-1  copies          1               default
storagepool/vm-111-disk-1  primarycache    all             inherited from storagepool
storagepool/vm-111-disk-1  secondarycache  all             inherited from storagepool
storagepool/vm-112-disk-1  dedup           off             default
storagepool/vm-112-disk-1  atime           -               -
storagepool/vm-112-disk-1  copies          1               default
storagepool/vm-112-disk-1  primarycache    all             inherited from storagepool
storagepool/vm-112-disk-1  secondarycache  all             inherited from storagepool
storagepool/vm-280-disk-1  dedup           off             default
storagepool/vm-280-disk-1  atime           -               -
storagepool/vm-280-disk-1  copies          1               default
storagepool/vm-280-disk-1  primarycache    all             inherited from storagepool
storagepool/vm-280-disk-1  secondarycache  all             inherited from storagepool
storagepool/vm-280-disk-2  dedup           off             default
storagepool/vm-280-disk-2  atime           -               -
storagepool/vm-280-disk-2  copies          1               default
storagepool/vm-280-disk-2  primarycache    all             inherited from storagepool
storagepool/vm-280-disk-2  secondarycache  all             inherited from storagepool
storagepool/vm-300-disk-1  dedup           off             default
storagepool/vm-300-disk-1  atime           -               -
storagepool/vm-300-disk-1  copies          1               default
storagepool/vm-300-disk-1  primarycache    all             inherited from storagepool
storagepool/vm-300-disk-1  secondarycache  all             inherited from storagepool
storagepool/vm-500-disk-1  dedup           off             default
storagepool/vm-500-disk-1  atime           -               -
storagepool/vm-500-disk-1  copies          1               default
storagepool/vm-500-disk-1  primarycache    all             inherited from storagepool
storagepool/vm-500-disk-1  secondarycache  all             inherited from storagepool
storagepool/vm-600-disk-1  dedup           off             default
storagepool/vm-600-disk-1  atime           -               -
storagepool/vm-600-disk-1  copies          1               default
storagepool/vm-600-disk-1  primarycache    all             inherited from storagepool
storagepool/vm-600-disk-1  secondarycache  all             inherited from storagepool
storagepool/vm-600-disk-2  dedup           off             default
storagepool/vm-600-disk-2  atime           -               -
storagepool/vm-600-disk-2  copies          1               default
storagepool/vm-600-disk-2  primarycache    all             inherited from storagepool
storagepool/vm-600-disk-2  secondarycache  all             inherited from storagepool

SSD SMART report
Code:
=== START OF INFORMATION SECTION ===
Model Family:     Intel 730 and DC S3500/S3700 Series SSDs
Device Model:     INTEL SSDSC2BB300G4
Serial Number:    BTWL34460AHP300PGN
LU WWN Device Id: 5 5cd2e4 04b51a5a6
Firmware Version: D2010370
User Capacity:    300,069,052,416 bytes [300 GB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 T13/2015-D revision 3
SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Nov 25 23:36:29 2015 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled


=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED


General SMART Values:
Offline data collection status:  (0x02) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (    2) seconds.
Offline data collection
capabilities:                    (0x79) SMART execute Offline immediate.
                                        No Auto Offline data collection support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        (   2) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.


SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0032   100   100   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       10534
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       68
170 Available_Reservd_Space 0x0033   100   100   010    Pre-fail  Always       -       0
171 Program_Fail_Count      0x0032   100   100   000    Old_age   Always       -       0
172 Erase_Fail_Count        0x0032   100   100   000    Old_age   Always       -       0
174 Unsafe_Shutdown_Count   0x0032   100   100   000    Old_age   Always       -       62
175 Power_Loss_Cap_Test     0x0033   100   100   010    Pre-fail  Always       -       624 (59 517)
183 SATA_Downshift_Count    0x0032   100   100   000    Old_age   Always       -       5
184 End-to-End_Error        0x0033   100   100   090    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
190 Temperature_Case        0x0022   078   078   000    Old_age   Always       -       22 (Min/Max 18/26)
192 Unsafe_Shutdown_Count   0x0032   100   100   000    Old_age   Always       -       62
194 Temperature_Internal    0x0022   100   100   000    Old_age   Always       -       31
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       0
199 CRC_Error_Count         0x003e   100   100   000    Old_age   Always       -       0
225 Host_Writes_32MiB       0x0032   100   100   000    Old_age   Always       -       200029
226 Workld_Media_Wear_Indic 0x0032   100   100   000    Old_age   Always       -       369
227 Workld_Host_Reads_Perc  0x0032   100   100   000    Old_age   Always       -       0
228 Workload_Minutes        0x0032   100   100   000    Old_age   Always       -       37660
232 Available_Reservd_Space 0x0033   100   100   010    Pre-fail  Always       -       0
233 Media_Wearout_Indicator 0x0032   100   100   000    Old_age   Always       -       0
234 Thermal_Throttle        0x0032   100   100   000    Old_age   Always       -       0/0
241 Host_Writes_32MiB       0x0032   100   100   000    Old_age   Always       -       200029
242 Host_Reads_32MiB        0x0032   100   100   000    Old_age   Always       -       142118


SMART Error Log Version: 1
No Errors Logged


SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      9909         -
# 2  Short offline       Completed without error       00%      9909         -
# 3  Short offline       Completed without error       00%      9909         -
# 4  Short offline       Completed without error       00%      9908         -
# 5  Short offline       Completed without error       00%      9906         -
# 6  Short offline       Completed without error       00%      9905         -
# 7  Short offline       Completed without error       00%       117         -
# 8  Short offline       Completed without error       00%       117         -
# 9  Short offline       Completed without error       00%       117         -
#10  Short offline       Completed without error       00%        22         -
#11  Short offline       Completed without error       00%        22         -
#12  Short offline       Completed without error       00%        15         -
#13  Short offline       Completed without error       00%        15         -
#14  Short offline       Completed without error       00%        13         -
#15  Short offline       Completed without error       00%        13         -
#16  Short offline       Completed without error       00%        12         -
#17  Short offline       Completed without error       00%         0         -


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.





=== START OF INFORMATION SECTION ===
Model Family:     Intel 730 and DC S3500/S3700 Series SSDs
Device Model:     INTEL SSDSC2BB300G4
Serial Number:    BTWL313300YW300PGN
LU WWN Device Id: 5 001517 8f35f7b74
Firmware Version: D2010370
User Capacity:    300,069,052,416 bytes [300 GB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 T13/2015-D revision 3
SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Nov 25 23:37:07 2015 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled


=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED


General SMART Values:
Offline data collection status:  (0x02) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (    2) seconds.
Offline data collection
capabilities:                    (0x79) SMART execute Offline immediate.
                                        No Auto Offline data collection support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        (   2) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.


SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0032   100   100   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       13609
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       32
170 Available_Reservd_Space 0x0033   100   100   010    Pre-fail  Always       -       0
171 Program_Fail_Count      0x0032   100   100   000    Old_age   Always       -       0
172 Erase_Fail_Count        0x0032   100   100   000    Old_age   Always       -       0
174 Unsafe_Shutdown_Count   0x0032   100   100   000    Old_age   Always       -       29
175 Power_Loss_Cap_Test     0x0033   100   100   010    Pre-fail  Always       -       619 (78 517)
183 SATA_Downshift_Count    0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0033   100   100   090    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
190 Temperature_Case        0x0022   078   078   000    Old_age   Always       -       22 (Min/Max 18/26)
192 Unsafe_Shutdown_Count   0x0032   100   100   000    Old_age   Always       -       29
194 Temperature_Internal    0x0022   100   100   000    Old_age   Always       -       31
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       0
199 CRC_Error_Count         0x003e   100   100   000    Old_age   Always       -       0
225 Host_Writes_32MiB       0x0032   100   100   000    Old_age   Always       -       177745
226 Workld_Media_Wear_Indic 0x0032   100   100   000    Old_age   Always       -       410
227 Workld_Host_Reads_Perc  0x0032   100   100   000    Old_age   Always       -       0
228 Workload_Minutes        0x0032   100   100   000    Old_age   Always       -       37606
232 Available_Reservd_Space 0x0033   100   100   010    Pre-fail  Always       -       0
233 Media_Wearout_Indicator 0x0032   100   100   000    Old_age   Always       -       0
234 Thermal_Throttle        0x0032   100   100   000    Old_age   Always       -       0/0
241 Host_Writes_32MiB       0x0032   100   100   000    Old_age   Always       -       177745
242 Host_Reads_32MiB        0x0032   100   100   000    Old_age   Always       -       140355


SMART Error Log Version: 1
No Errors Logged


SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%     12985         -
# 2  Short offline       Completed without error       00%     12984         -
# 3  Short offline       Completed without error       00%     12984         -
# 4  Short offline       Completed without error       00%     12983         -
# 5  Short offline       Completed without error       00%     12983         -
# 6  Short offline       Completed without error       00%     12981         -
# 7  Short offline       Completed without error       00%      1835         -
# 8  Short offline       Completed without error       00%      1823         -
# 9  Short offline       Completed without error       00%      1823         -
#10  Short offline       Completed without error       00%      1821         -
#11  Short offline       Completed without error       00%      1819         -
#12  Short offline       Completed without error       00%      1817         -
#13  Short offline       Completed without error       00%      1817         -
#14  Short offline       Completed without error       00%      1806         -
#15  Short offline       Completed without error       00%      1806         -
#16  Short offline       Completed without error       00%         9         -
#17  Short offline       Completed without error       00%         2         -
#18  Short offline       Completed without error       00%         1         -
#19  Short offline       Completed without error       00%         0         -


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
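As a side note on the SSD's wear: attribute 241 (Host_Writes_32MiB) in the output above is a count of 32 MiB units, so total host writes can be estimated with simple arithmetic (my own calculation, not a vendor tool):

```shell
# Attribute 241 Host_Writes_32MiB = 177745 (raw value from the SMART table above).
# Each unit is 32 MiB, so total host writes = 177745 * 32 MiB.
host_writes_units=177745
echo "$(( host_writes_units * 32 / 1024 )) GiB written"
```

That works out to roughly 5.4 TiB written over 13609 power-on hours, with Media_Wearout_Indicator still at 0, so this DC S3500 is nowhere near worn out and is unlikely to be the cause of the slow writes.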

I tried disabling the ZIL before, but I get the same high IO wait and the same write speeds.
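For reference, the usual way to take the ZIL/SLOG out of the picture for a test is to disable synchronous writes on the dataset. A minimal sketch, assuming the pool name "storagepool" from the iostat output above (this is unsafe for data integrity on power loss, so only use it as a temporary diagnostic and revert afterwards):

```shell
# Temporarily bypass sync writes (and therefore the SLOG) for a test:
zfs set sync=disabled storagepool

# ...repeat the file-copy test in the Windows VM and watch iowait...

# Restore the safe default when done:
zfs set sync=standard storagepool
```

If write speed and IO wait do not change with sync=disabled, the bottleneck is the async write path to the HDD mirror itself, not the log devices.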
 
