Hi all,
I have a Proxmox cluster that uses shared LUNs (LVM) from an IBM FlashSystem 9500. The storage is attached to the hosts via 16Gb Fibre Channel for main use and via 100G Ethernet for iSCSI/NVMe over TCP. The storage can supposedly reach 1M+ IOPS.
I was doing various tests the other day and realised that the performance was not what I expected. Can you advise whether the numbers I give below are good or bad, and what my expectations should be?
The hosts use 10G adapters for the Ethernet storage connections. Proxmox uses a Linux bridge (vmbr2), which is what the VM uses for its storage traffic. The VM's storage NIC is VirtIO with 4 queues, and the VM uses MTU 1500.
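For reference, the storage NIC of the test VM is defined roughly like this in the VM config (a sketch: the VM ID, the net1 index and the MAC are placeholders, not my actual values):

# /etc/pve/qemu-server/<vmid>.conf -- storage NIC as described above; queues and mtu are the two knobs mentioned
net1: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr2,queues=4,mtu=1500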
This is the command I use:
fio --name=iscsi-test --filename=/dev/xxxx --rw=randrw --bs=4k --iodepth=32 --numjobs=4 --runtime=20s --time_based --group_reporting --direct=1 --ioengine=libaio --rwmixread=70
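Only --filename changes between runs; a simple loop like the one below (device names are the ones from my results, treat them as placeholders) runs the identical workload against each attachment type:

for dev in /dev/sdb /dev/sdc /dev/mapper/mpatha /dev/nvme0n1; do
    fio --name=iscsi-test --filename=$dev --rw=randrw --bs=4k --iodepth=32 \
        --numjobs=4 --runtime=20s --time_based --group_reporting \
        --direct=1 --ioengine=libaio --rwmixread=70
done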
These are the results:
sdb: VirtIO disk attached to the VM from the host, which reaches the storage over FC16.
read: IOPS=36.0k, BW=140MiB/s (147MB/s)
write: IOPS=15.5k, BW=60.4MiB/s (63.3MB/s)

nvme0n1: disk attached to the VM via NVMe over TCP, single path (NVMe/TCP initiator running inside the VM).
read: IOPS=41.2k, BW=161MiB/s (169MB/s)
write: IOPS=17.7k, BW=69.2MiB/s (72.6MB/s)

sdc: disk attached to the VM via iSCSI, single path (iSCSI initiator running inside the VM).
read: IOPS=40.6k, BW=159MiB/s (166MB/s)
write: IOPS=17.5k, BW=68.2MiB/s (71.5MB/s)

/dev/mapper/mpatha: disk attached to the VM via iSCSI using two paths (iSCSI initiator running inside the VM; see the path-check commands below the results).
read: IOPS=47.6k, BW=186MiB/s (195MB/s)
write: IOPS=20.5k, BW=80.0MiB/s (83.9MB/s)

nvme0n1: disk attached to the VM via NVMe over TCP using two paths (NVMe/TCP initiator running inside the VM).
read: IOPS=40.0k, BW=156MiB/s (164MB/s)
write: IOPS=17.2k, BW=67.1MiB/s (70.4MB/s)
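For the two-path runs, this is roughly the kind of commands used inside the VM to bring up and check the paths (a sketch: the second portal IP and the subsystem NQN are placeholders, and the sysfs path assumes the first NVMe subsystem):

# iSCSI multipath: confirm both paths are active
multipath -ll /dev/mapper/mpatha

# NVMe/TCP: add a second connection to the same subsystem; native NVMe multipath merges them
nvme connect -t tcp -a <second-portal-ip> -s 4420 -n <subsystem-nqn>
nvme list-subsys /dev/nvme0n1
cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy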
Given that fio is doing random read/write, are the above numbers too low? When I do sequential operations I get the full 10G throughput with far fewer IOPS.
Is the 60K-70K total IOPS my limit, when the storage claims millions of IOPS? Is there anything I can do to improve performance?
Regards!
sp
PS: I haven't run tests on the hosts themselves (over FC/iSCSI/NVMe over TCP); in any case I am mainly interested in the performance of the VMs.