more samsung ssd woes

Discussion in 'Proxmox VE: Installation and configuration' started by zedicus, Jun 13, 2018 at 00:43.

  1. zedicus

    zedicus New Member

    Joined:
    Mar 5, 2014
    Messages:
    15
    Likes Received:
    4
    pveperf output:

    Code:
    CPU BOGOMIPS: 67194.56
    REGEX/SECOND: 1968961
    HD SIZE: 881.38 GB (rpool/ROOT/pve-1)
    FSYNCS/SECOND: 1203.74
    DNS EXT: 20.35 ms
    DNS INT: 21.52 ms (subliminal.local)


    pair of 860 evos in a zfs mirror, 1tb drives
    this is the rpool and i was hoping to share it with the VMs, but not at those speeds

    atop shows 2% on the disks
    ashift is 12
    scheduler is noop

    server has 64gb ram
    ZFS is limited to 8gb
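
    for reference, the 8gb ARC cap is set the usual way via the module option (sketch of the relevant line, value is in bytes):

    Code:
    # /etc/modprobe.d/zfs.conf -- cap the ARC at 8 GiB
    options zfs zfs_arc_max=8589934592
    # run update-initramfs -u and reboot afterwards since root is on rpool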

    can this be made tolerable or do i need to toss more $ at it?
     
  2. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    3,356
    Likes Received:
    191
    Hi,

    this is not bad for a Samsung EVO.
    What numbers did you expect, and why can't you use it for the VMs too?

    But anyway, for a real test you should use fio.

    Code:
    fio --filename=/rpool/data/fiotest.fio --bs=[4k|128k] --rw=[read|write] --name=test --direct=[0|1] --sync=[1|0] --size=10G --numjobs=1
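
    For example, the sync 4k write case (the one closest to what pveperf's fsync number measures) would be:

    Code:
    fio --filename=/rpool/data/fiotest.fio --bs=4k --rw=write --name=synctest --direct=1 --sync=1 --size=10G --numjobs=1

    Delete /rpool/data/fiotest.fio again when you are done.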
    
     
  3. zedicus

    zedicus New Member

    Joined:
    Mar 5, 2014
    Messages:
    15
    Likes Received:
    4
    i will try that today.

    i have a small old raid 5 array in my old server that gets 3300 fsyncs. i figured a pair of SSDs, even on zfs, would be way faster. guess i should have done my math instead of just assuming.
     
  4. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    3,356
    Likes Received:
    191
    This comparison is not valid, because your RAID controller must be using a write cache, which is normally SDRAM.
    SDRAM is at the moment faster than V-NAND.
    Also, the Samsung 860 EVO is a consumer SSD and not an enterprise-class SSD with high sync capability.
    If you want to increase your sync performance you can add a ZIL device like the Intel Optane memory.
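
    Adding a separate log device to an existing pool is a single command; the path below is only a placeholder, use the /dev/disk/by-id/ path of your Optane:

    Code:
    zpool add rpool log /dev/disk/by-id/nvme-<your-optane>
    zpool status rpool    # the new device shows up under a "logs" section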
     
  5. zedicus

    zedicus New Member

    Joined:
    Mar 5, 2014
    Messages:
    15
    Likes Received:
    4
    if i added an nvme ZIL, what kind of increase in fsyncs do you think i would see?
     
  6. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,278
    Likes Received:
    110
    That depends in the end on the type of workload your VMs will produce. But reviews of the Intel Optane show good 4k sequential write performance.
     
  7. zedicus

    zedicus New Member

    Joined:
    Mar 5, 2014
    Messages:
    15
    Likes Received:
    4
    mostly data storage, but there are 2 web portals hosted and 1 shared mysql database, plus the typical DC and windows VMs for virtual desktops. only about 5 of those though.

    and i push backups regularly, so for the main VM store i am more interested in performance; it is on a zfs mirror just as a 'can't hurt' scenario.

    would a 16gb optane be enough for the slog for the 1tb mirror, or would a 32gb optane be best?

    thanks for the help.
     
  8. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,278
    Likes Received:
    110
    If you have the choice of those two, why not just go for the bigger one?
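
    Capacity-wise either one is more than enough; a rough back-of-envelope, assuming ~500 MB/s of sustained sync writes (your number will differ):

    Code:
    # the SLOG only has to hold the sync writes of roughly 2 open transaction groups
    cat /sys/module/zfs/parameters/zfs_txg_timeout    # default 5 (seconds)
    # e.g. 500 MB/s x 5 s x 2 = ~5 GB of SLOG actually in use
    # so the larger Optane models mainly buy you more write IOPS and endurance, not needed space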
     
  9. zedicus

    zedicus New Member

    Joined:
    Mar 5, 2014
    Messages:
    15
    Likes Received:
    4
    actually based on IOPS i am looking at going with the 58gb.
    it is substantially faster for not much more coin.
     
  10. zedicus

    zedicus New Member

    Joined:
    Mar 5, 2014
    Messages:
    15
    Likes Received:
    4
    would it be worth reinstalling proxmox onto the optane, setting up a 32gb partition for the SLOG, and JUST hosting VMs off the SSD zfs mirror?
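
    roughly what i have in mind; device and partition numbers are just placeholders for whatever the installer leaves free:

    Code:
    # assumes the Optane is /dev/nvme0n1 and partition 4 is the free space left after the install
    sgdisk -n 4:0:+32G -t 4:bf01 /dev/nvme0n1      # new 32 GiB partition, ZFS type code
    zpool add <ssd-mirror-pool> log /dev/nvme0n1p4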
     
    #10 zedicus, Jun 13, 2018 at 17:25
    Last edited: Jun 13, 2018 at 18:03