Freezing/stutter of guest OS under high host IO

Discussion in 'Proxmox VE: Installation and configuration' started by Blake.v, Jul 10, 2019.

  1. Blake.v

    Blake.v New Member

    Joined:
    Jul 17, 2017
    Messages:
    5
    Likes Received:
    0
    This seems to be a well-known issue, but I have yet to find a solution. When transferring files from the guest VM (Windows 10 with GPU passthrough) to the ZFS array, the Windows VM becomes nearly unusable. I am fairly new to Proxmox (I was running ESXi on this hardware just fine). The Windows guest had been running well for 2-3 weeks before I installed the RAID card and set up ZFS.

    The Windows VM is running off the NVMe and is the only VM right now.

    Hardware:
    Ryzen 1700
    32 GB RAM
    ASRock x270
    1 TB NVMe drive for Proxmox, VM storage and SLOG
    LSI 9205-8i with SAS expander backplane and 4x 4 TB drives in RAIDZ1

    Thanks
     
  2. LnxBil

    LnxBil Well-Known Member

    Joined:
    Feb 21, 2015
    Messages:
    3,798
    Likes Received:
    346
    RAID card in IT-mode?
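    For reference, one quick way to check whether an LSI SAS2 HBA such as the 9205-8i is on IT firmware (assuming the Broadcom/LSI sas2flash utility is installed; exact output varies by firmware version):

    Code:
    # Confirm the HBA is visible on the PCIe bus
    lspci | grep -i lsi
    # sas2flash prints the firmware version; the Firmware Product ID line
    # shows whether the flashed firmware is IT or IR
    sas2flash -list
    # The mpt2sas driver also logs the firmware it found at boot
    dmesg | grep -i mpt2sas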
     
  3. Blake.v

    Blake.v New Member

    Joined:
    Jul 17, 2017
    Messages:
    5
    Likes Received:
    0
    Part of the problem was self-inflicted: I forgot ZFS grabs half the total RAM for its ARC by default. Lowering that to 8 GB (config sketch below) got me from total freezing to mild annoyance.
    The card is in IT mode.
    I just updated to the latest firmware to see if it would help, but it doesn't seem to have made much of a difference.
    Still seeing ~15% I/O delay and inconsistent, slow transfers (bouncing between 10 and 200 MB/s).
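    In case it helps anyone else, a minimal sketch of capping the ARC at the 8 GB mentioned above (the value is only an example; adjust to taste):

    Code:
    # Persistent cap: 8 GiB = 8 * 1024^3 bytes (overwrites any existing zfs.conf)
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
    # Refresh the initramfs so the option is applied when the zfs module loads at boot
    update-initramfs -u -k all

    # Or change it at runtime without a reboot
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max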

    This is just an old array of 5400 rpm drives for bulk media storage, but I plan to pick up twelve 7200 rpm SAS drives for a much larger and faster array, and I expect this issue to get worse.
     
  4. LnxBil

    LnxBil Well-Known Member

    Joined:
    Feb 21, 2015
    Messages:
    3,798
    Likes Received:
    346
    Unfortunately, this conclusion is wrong. The more RAM ZFS has, the more blocks are cached in the ARC and the lower the I/O delay you see on a mixed read/write system (as with any other filesystem). Another point is that the ARC is freed when you need more RAM for other things such as VMs, and because of the first point, that reduces system performance. The workload on your system also has a huge impact on performance. If you have a lot of sync writes, your single-vdev storage cannot keep up; this, and only this, problem can be fixed by a dedicated SLOG device in ZFS.

    4 drives in RAIDz1 cannot be fast, and you cannot expect miracles. If you set up a new array, try to have as many vdevs as possible (see the sketch below). The vdevs are the RAID0 layer, so multiple vdevs scale well. Having ESXi on the RAID is a much better option performance-wise, but you will not have the cool features ZFS offers. ZFS can be fast, but you need the hardware for it. In a direct comparison against the same disks behind a good RAID controller (BBU and write cache), ZFS loses, performance-wise.

    I'm also running a RAIDz1 on 4 disks in my home setup and it works; it's not fast, but I get to enjoy the features of ZFS. I also plan to upgrade to more vdevs to increase performance.
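    To illustrate the vdev point, here is a sketch of two layouts for the same four disks (pool name "tank" and the short device names are placeholders; use /dev/disk/by-id paths in practice):

    Code:
    # One RAIDz1 vdev: capacity of ~3 disks, but roughly single-vdev random I/O
    zpool create tank raidz1 sda sdb sdc sdd

    # Two mirror vdevs: capacity of ~2 disks, but ZFS stripes across both vdevs,
    # so random I/O scales with the number of vdevs
    zpool create tank mirror sda sdb mirror sdc sdd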
     
  5. Blake.v

    Blake.v New Member

    Joined:
    Jul 17, 2017
    Messages:
    5
    Likes Received:
    0
    This is not an issue with ZFS. This is not a new array; it's mostly full of media.
    The VM is NOT on the ZFS array, and the ZFS array is not subject to mixed read/write.

    The VM is installed on the NVMe drive, and Proxmox is sharing the ZFS array via Samba.
    If I transfer a bunch of files from the VM's C: or D: drive (a passed-through SSD) to the mapped network drive, I see huge I/O delay and stuttering in the VM.
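    One way to sanity-check that path, assuming the Samba share sits on a dataset like tank/media (the name is a placeholder), is to look at the dataset's sync setting: with sync=standard only writes the client explicitly flags as synchronous go through the ZIL/SLOG, while sync=always forces everything through it.

    Code:
    # Show the properties most relevant to the Samba copy workload
    zfs get sync,recordsize,compression,atime tank/media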
     
  6. LnxBil

    LnxBil Well-Known Member

    Joined:
    Feb 21, 2015
    Messages:
    3,798
    Likes Received:
    346
    Is the SLOG involved in your copy action? Can you monitor that? What NVMe do you have?
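    A sketch of how that could be watched while a copy is running (pool name "tank" is a placeholder):

    Code:
    # Per-vdev statistics refreshed every second; the logs section shows
    # whether the SLOG partition is seeing writes during the copy
    zpool iostat -v tank 1

    # Identify the NVMe model (nvme-cli package)
    nvme list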
     