Search results

  1. Low Performance on VMs stored over NFS

    OK, thanks a lot, I installed a new VM following those best practices. These are the FIO results on Proxmox: root@pve-gvip02:~# cd /mnt/pve/pve-nfs-boot/ root@pve-gvip02:/mnt/pve/pve-nfs-boot# fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=4g --iodepth=1...
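The fio invocation in the snippet above is truncated after `--iodepth=1`; a full random-write run with those visible flags could look like the sketch below. Any flag after `--iodepth=1` (such as `--end_fsync`) is an assumption, not taken from the original post, and the mount path is the one shown in the snippet.

```shell
# Sketch of the 4k random-write fio benchmark from the snippet.
# Everything after --iodepth=1 is assumed; the original post is truncated there.
cd /mnt/pve/pve-nfs-boot/
fio --name=random-write \
    --ioengine=posixaio \
    --rw=randwrite \
    --bs=4k \
    --numjobs=1 \
    --size=4g \
    --iodepth=1 \
    --end_fsync=1   # assumed flag: flush at the end so cached writes don't inflate the result
```

With `--bs=4k --iodepth=1 --numjobs=1` this measures worst-case single-threaded small-block latency, which is exactly where NFS-backed VM storage tends to look much slower than sequential tests suggest.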
  2. Low Performance on VMs stored over NFS

    I'm benchmarking with CrystalDiskMark on Windows and with dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync on Linux. I don't have iSCSI available right now, but I'm trying to get it over FC to test. Windows VM config file: agent: 1 bios: ovmf boot: order=ide0 cores: 18 efidisk0...
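The dd one-liner in the snippet writes 16k blocks of 64 KiB (1 GiB total); a minimal sketch is below, with the block count reduced so it finishes quickly. The output path `/tmp/dd-bench.bin` is chosen here for illustration; in the original post the test file sits on the NFS-backed storage being measured.

```shell
# Write test in the style of the snippet's dd command, scaled down:
# 16 blocks of 64 KiB = 1 MiB instead of 1 GiB.
# conv=fdatasync makes dd flush data to disk before reporting throughput,
# so page-cache writes don't inflate the numbers.
dd if=/dev/zero of=/tmp/dd-bench.bin bs=64k count=16 conv=fdatasync
```

Note that dd is a sequential large-block test; it measures a best-case pattern, so it can report healthy throughput even when random 4k I/O (the pattern VMs mostly generate) is slow.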
  3. ZFS with Raid Controller

    Thanks for the reply. In this case I already have RAID and standby disks defined in hardware, and I would like to leave it that way since I have support for the server, disks, and RAID controller. The only feature I'd like to take advantage of in ZFS is the ability to mirror the VMs'...
  4. Low Performance on VMs stored over NFS

    Hi guys, I have a NAS connected to my Proxmox 7.3 cluster in which to store the VMs' disks. If I test the read/write speed on the nodes I get around 500-800 MB/s, but if I test the speed inside the VMs it's at 50-100 MB/s. The tests were carried out on both Windows and Linux VMs with the same...
  5. ZFS with Raid Controller

    Hi guys, I would like to know if there is any way I can use ZFS (to be able to use replication) when I have a hardware RAID controller. Can ZFS be configured on a single disk and have the RAID handled by the hardware? My hardware is 2x HPE ProLiant DL360 Gen10 8SFF CTO with HPE Smart...
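What the question above describes is technically possible: the RAID controller presents one logical volume to the OS, and ZFS is created as a single-vdev pool on top of it. A minimal sketch is below; `/dev/sdb` and the pool name `tank` are assumed placeholders, not taken from the posts.

```shell
# Sketch: single-vdev ZFS pool on top of a hardware RAID logical volume,
# so Proxmox replication (which requires ZFS storage) becomes available.
# /dev/sdb is an assumed device name for the controller's logical disk.
# ashift=12 assumes 4 KiB physical sectors on the underlying disks.
zpool create -o ashift=12 tank /dev/sdb
```

The usual caveat applies: with hardware RAID underneath, ZFS can detect corruption via checksums but has no redundant copy to repair from, since it sees only one "disk". Redundancy and rebuilds stay entirely in the controller's hands.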