Hello,
Inside the Proxmox GUI I created a 4-member RAID10 ZFS pool (4 TB SAS HDDs).
I placed a VM disk on this pool and wanted to copy a big repo from a physical machine into the VM (SCSI disk, discard=on, iothread=1, cache=none):
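For reference, a 4-disk RAID10 pool like the one the GUI creates should correspond roughly to a stripe over two 2-way mirrors; this is only a sketch, and the pool name and device paths are assumptions:

```shell
# 4-disk "RAID10" in ZFS terms: stripe over two 2-way mirrors.
# ashift=12 assumes 4K-sector disks; adjust names/paths to your system.
zpool create -o ashift=12 tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd
```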
source (physical machine):
tar -cpf - repo | mbuffer -s 256k -m 2G -O 192.168.100.50:9090
destination (proxmox VM):
mbuffer -s 256k -m 2G -I 9090 | tar -xpf -
But this does not work at all!
- the mbuffer quickly fills up at the destination
- the VM becomes completely unresponsive - every command takes ages
- the proxmox host load jumps to 40-50
- the IOPS on a single HDD (iostat) vary between 500 and 800 (way too much for an HDD)
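Since the buffer fills up as soon as the sender outruns the pool, one workaround is to throttle the sender below what the HDD mirrors can sustain. A sketch using mbuffer's rate limiting; the 60 MB/s figure is an assumption to tune, not a measured value:

```shell
# Source side: -R caps the outgoing (write) rate so the destination
# zvol is not pushed faster than the HDDs can absorb.
tar -cpf - repo | mbuffer -s 256k -m 2G -R 60M -O 192.168.100.50:9090

# Destination side unchanged:
mbuffer -s 256k -m 2G -I 9090 | tar -xpf -
```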
For comparison, a healthy run looks like this:
- the mbuffer at the destination stays at 0%
- I get a constant 110 MiB/s transfer rate (1 Gbit network connection)
- the VM stays responsive
- the host load increases only slightly
- the IOPS on a single HDD (iostat) vary between 60 and 130
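The per-disk IOPS figures above can be watched live while the copy runs; the pool name `tank` is a placeholder:

```shell
# Per-vdev / per-disk I/O of the ZFS pool, refreshed every 5 seconds:
zpool iostat -v tank 5

# Per-device IOPS and utilization (r/s, w/s, %util) on the host:
iostat -x 5
```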
- In the test above, the zvol generates way too many IOPS for an HDD
- Repeating the test on an SSD ZFS zvol works better, but still generates 2000-3000 IOPS on the SSD
- I've tried different volblocksize values and guest filesystems (xfs, ext4), but that did not improve the results
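A back-of-envelope calculation shows why a small volblocksize hurts here. Assuming the old Proxmox default of 8 KiB volblocksize (an assumption; check with `zfs get volblocksize`) and the 110 MiB/s line rate from the healthy run:

```shell
# Rough write-amplification estimate for the HDD zvol case.
RATE_KIB=$((110 * 1024))      # 110 MiB/s line rate, in KiB/s
VOLBLOCK_KIB=8                # assumed zvol volblocksize (old Proxmox default)
WRITES_PER_SEC=$((RATE_KIB / VOLBLOCK_KIB))
# In a 2x2 RAID10 layout each block lands on one mirror pair, so each
# HDD sees roughly half of the logical block writes.
PER_DISK=$((WRITES_PER_SEC / 2))
echo "${WRITES_PER_SEC} block writes/s total, ~${PER_DISK} per HDD"
```

That works out to 14080 block writes per second, far beyond what a spinning disk can do unless ZFS manages to aggregate them into large sequential writes, which is consistent with the stall.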