root@pve2:~# pveperf
CPU BOGOMIPS: 76799.52
REGEX/SECOND: 4762554
HD SIZE: 441.71 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 1008.93
So I installed Proxmox on a new server (pveperf output above). Looking at the fio benchmark results, the NVMe drive seems to perform quite well.
Formatted as ext4, I see:
fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=4g --blocksize=4096k --ioengine=libaio --fsync=100 --iodepth=32 --direct=1 --numjobs=1 --runtime=60
READ: bw=1548MiB/s (1623MB/s), 1548MiB/s-1548MiB/s (1623MB/s-1623MB/s), io=4096MiB (4295MB), run=2646-2646msec
WRITE: bw=1500MiB/s (1573MB/s), 1500MiB/s-1500MiB/s (1573MB/s-1573MB/s), io=4096MiB (4295MB), run=2730-2730msec
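(For completeness: the WRITE line presumably comes from a second run with --rw=write and otherwise identical parameters, i.e. something like this:)
# assumed write counterpart to the read command above, not quoted verbatim
fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=4g --blocksize=4096k --ioengine=libaio --fsync=100 --iodepth=32 --direct=1 --numjobs=1 --runtime=60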
Using all the fancy features of ZFS seems to come with lower write performance. On ZFS I get:
READ: bw=1210MiB/s (1268MB/s), 1210MiB/s-1210MiB/s (1268MB/s-1268MB/s), io=4096MiB (4295MB), run=3386-3386msec
WRITE: bw=509MiB/s (534MB/s), 509MiB/s-509MiB/s (534MB/s-534MB/s), io=4096MiB (4295MB), run=8047-8047msec
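For anyone comparing runs: these are the pool properties I would inspect first, since they commonly influence ZFS write throughput (rpool as in the pveperf output above; inspection only, no changes):
# show properties that often affect write performance on ZFS
zfs get recordsize,compression,sync,atime rpool
# check whether the pool's ashift matches the NVMe sector size
zpool get ashift rpool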
But now, using a ZFS block device (zvol) formatted with ext4 inside a Debian 10 VM, the numbers drop even lower:
READ: bw=303MiB/s (318MB/s), 303MiB/s-303MiB/s (318MB/s-318MB/s), io=1936MiB (2030MB), run=6380-6380msec
WRITE: bw=339MiB/s (355MB/s), 339MiB/s-339MiB/s (355MB/s-355MB/s), io=2160MiB (2265MB), run=6380-6380msec
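To check the VM side, the disk configuration can be inspected and, as is often suggested, switched to virtio-scsi with an IO thread. VMID 100 and the volume name below are placeholders, not my actual config:
# show the current disk configuration of the VM
qm config 100
# commonly suggested setup: virtio-scsi-single controller plus an IO thread on the disk
qm set 100 --scsihw virtio-scsi-single --scsi0 local-zfs:vm-100-disk-0,iothread=1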
Using NTFS on Windows, the results look better (measured with a different benchmark):
READ: 640MB/s
WRITE: 251MB/s
It might be the ext4 on top of ZFS, so I set up a RAID1 of two SATA SSDs and passed the devices through to the VM:
READ: bw=116MiB/s (121MB/s), 116MiB/s-116MiB/s (121MB/s-121MB/s), io=1936MiB (2030MB), run=16745-16745msec
WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=2160MiB (2265MB), run=16745-16745msec
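(To be clear about what "passed through" means here: the whole block devices were attached to the VM by stable ID, roughly like this; the by-id path and VMID are placeholders:)
# attach a whole physical disk to the VM by its stable device path
qm set 100 --scsi1 /dev/disk/by-id/ata-EXAMPLE-SSD_SERIAL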
Caching was set to None; Writeback did not really improve this either.
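(In qm terms, those cache modes correspond to the cache option on the disk, e.g., with placeholder VMID and volume name:)
# the two cache modes I compared
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=none
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback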
Can someone please confirm these numbers or help me get better IO performance?
With ESXi I lose only about 5% with disk passthrough. Will I need to pass through the whole controller?
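In case controller passthrough turns out to be the answer: as far as I understand, that would mean enabling IOMMU and handing the whole PCI device to the VM, roughly like this (the PCI address is a placeholder for the actual NVMe/SATA controller):
# requires IOMMU enabled in BIOS and on the kernel command line (e.g. intel_iommu=on)
qm set 100 --hostpci0 0000:03:00.0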