Proxmox vs ESXi Storage Performance - Tuning iSCSI?

You should increase the benchmark size as I wrote, and you will see that your iSCSI peak advantages go away.
Edit: It's not uncommon for a PVE cluster to have >100 machines (VM or LXC) and tens of TB of data, and you cannot extrapolate benchmark results of 8 to 16 GB based on your preferred iSCSI protocol to, e.g., 16 TB in reality.

This is using 128GB test files; I bumped everything up to be WAY above the RAM available to the client and host.

What filesystem do you use under the hood of your NFS?
Whatever Pure's file system is, which I'm sure is some proprietary erasure coded file system.
 
This is using 128GB test files
"--numjobs=8 --size=1G" = 8G total in fio line 1
"--numjobs=8 --size=1G --iodepth=64" = 8G total in fio line 2
"--size=1G --iodepth=64" = 1G total in fio line 3
"--numjobs=8 --size=2G --iodepth=64" = 16G total in fio line 4
(fio's --size is per job, so each line's total dataset is numjobs × size.)
Whatever Pure's file system is, which I'm sure is some proprietary erasure coded file system.
Looks like pure block storage per the X20 datasheet, with no filesystem; a filesystem is created on the volume at the host if wanted.
https://www.purestorage.com/content/dam/pdf/de-de/datasheets/ds-flasharray-x-de-de.pdf
 
"--numjobs=8 --size=1G" = is 8G in fio line1
"--numjobs=8 --size=1G --iodepth=64" = is 8G in fio line2
"--size=1G --iodepth=64" = is 1G in fio line3
"--numjobs=8 --size=2G --iodepth=64" = is 16G in fio line4

Right, I changed those to --size=128G on all the scripts.
 
Oh, and I also bumped the time up from 60s to 360s, just to make sure it was all warmed up.
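For reference, with those changes one of the job lines would look roughly like the sketch below; the device path, block size, and ioengine are assumptions on my part, not the actual scripts:

# 8 jobs x 128G each, run time-based for 360s; /dev/mapper/mpatha, bs=1M and libaio are placeholders
fio --name=seq-read --filename=/dev/mapper/mpatha --rw=read --bs=1M \
    --ioengine=libaio --direct=1 --numjobs=8 --size=128G --iodepth=64 \
    --time_based --runtime=360 --group_reporting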
 
Hey all,

Just wanted to get some help on this again.

I'm trying to verify that the Proxmox host is using all available NICs for the multipath setup.

I have two NICs configured, x.x.254.60 and x.x.254.61. When I use `iscsiadm -m session -P3`, I only see the .254.60 address being used.
 
I have two NICs configured, x.x.254.60 and x.x.254.61. When I use `iscsiadm -m session -P3`, I only see the .254.60 address being used.
You may want to use some of the tips described here:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/#prepare-the-shared-storage-device

Specifically, confirm your storage's multipath reporting behavior: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/#multipath-reporting-behaviors


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
It's not specific to LVM, but it is a critical part of setting up multipath properly to use with LVM later.


Pretty much. Behavior C and Behavior D.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox

Right, we use behavior C. Two controllers, with two connections each. So on the Proxmox client, we see 4 paths.

However, it looks like the Proxmox client is only using one of the NIC interfaces, while the client has two interfaces available.

When running
iscsiadm -m session -P3

I see the following:
[screenshot: iscsiadm -m session -P3 output]
All of the sessions list 10.10.254.60, but .61 is also available.



I went through the SUSE guide and created the two ifaces.
After doing that, I ran the following commands:
iscsiadm -m discovery -t st -p 10.10.254.50:3260 --interface=ens2f0np0 --discover
iscsiadm -m discovery -t st -p 10.10.254.50:3260 --interface=ens2f1np1 --discover

Both commands return 8 paths for some reason; it should just be 4 from each, from my understanding.

Plus, when I run iscsiadm -m session -P3
I can see there are now multiple iface names, but they show the incorrect IP address.
[screenshot: session output showing the new iface names with the wrong IP]

Possibly because the hwaddress says "default" instead of what the iface lists?
[screenshot: iface settings showing hwaddress = default]
 
Are you saying that all your IPs are on the same subnet? If so, then what you have is a "custom" Behavior A. That is likely your issue, as this is a very uncommon config.

Take a look at your routing table; the first interface that came up is likely the "primary" one for communication on that subnet.
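As a quick way to check (using the portal IP you mentioned; the commands are plain iproute2):

# Shows which local interface and source IP the kernel picks to reach the portal
ip route get 10.10.254.50
# Full routing table, to see which interface owns the storage subnet route
ip route show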

The best practice is to have two subnets. Otherwise, you will need to do a lot of custom iscsiadm CLI work.

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I may have made progress; I got the iface mapped to the MAC.
[screenshot: iface output with the MAC mapping]
Now there's a correct IP-to-MAC mapping.
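For anyone else hitting this, the binding is done with iscsiadm commands along these lines; the iface names and MAC addresses below are examples, not the exact values from my setup:

# Create one iface per NIC and pin it to that NIC's MAC address
iscsiadm -m iface -I iscsi-ens2f0np0 --op=new
iscsiadm -m iface -I iscsi-ens2f0np0 --op=update -n iface.hwaddress -v aa:bb:cc:dd:ee:01
iscsiadm -m iface -I iscsi-ens2f1np1 --op=new
iscsiadm -m iface -I iscsi-ens2f1np1 --op=update -n iface.hwaddress -v aa:bb:cc:dd:ee:02
# Re-run discovery through both ifaces, then log in
iscsiadm -m discovery -t st -p 10.10.254.50:3260 -I iscsi-ens2f0np0 -I iscsi-ens2f1np1 --discover
iscsiadm -m node --login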

What I'm trying to replicate is how multipath is set up on an ESXi server, where the interfaces are on the same subnet but both Ethernet interfaces get used.

edit:
I'm trying to identify whether that's why iSCSI seems to have such bad performance compared to the exact same equipment running ESXi. We followed the best practice guide from Pure on setting this up.
 
What I'm trying to replicate is how multipath is setup on a ESXi server.
Sounds like you are trying to mimic Port Binding.

At this point you are operating well outside of the PVE framework, i.e., directly in the Linux iSCSI kernel/userland. You can certainly get this working with enough effort. I'd recommend thoroughly documenting things, as you will need to do the same "song and dance" on new hosts.

I imagine you can't re-IP your storage side for isolated subnets, because that would be my suggestion for a long-term installation.

Good luck



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Yeah, that would make sense. From what it looks like, we have an iSCSI software adapter, which has port binding.
[screenshot: ESXi iSCSI software adapter port binding]
Each iSCSI port group then has 4 active connections, one for each of the available connections on the storage target. Thus, each host has 8 connections.
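For reference, that port binding corresponds to the ESXi CLI roughly as below; the adapter and VMkernel port names here are examples, not the ones from our hosts:

# Bind each VMkernel port to the software iSCSI adapter on the ESXi side
esxcli iscsi networkportal add -A vmhba64 -n vmk1
esxcli iscsi networkportal add -A vmhba64 -n vmk2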

I'll see if I can get this working under Linux. I'm hoping to see a performance increase if possible, as the 30-40% decrease in speed and IOPS is kind of disheartening.
 
I'll see if I can get this working under Linux. I'm hoping to see a performance increase if possible, as the 30-40% decrease in speed and IOPS is kind of disheartening.
Out of curiosity, did you check with your SAN vendor whether they have a guide on getting same-subnet multipath configured with Linux?


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Okay, I've confirmed that adding the MAC to the iface for multipath is now allowing both interfaces to be used.
[screenshot: iscsiadm/multipath output showing both interfaces in use]
With that setup, I was able to get the following through iSCSI multipathing.

Read IO: 200K IOPS
Read Throughput: 4GB/s
Write IO: 99K IOPS
Write Throughput: 2.3GB/s
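For completeness, path usage can be cross-checked with something like the following (output omitted; device and iface names will differ per setup):

# Confirm all paths are active and the sessions are spread across both NICs
multipath -ll
iscsiadm -m session -P3 | grep -E 'Iface Netdev|Iface HWaddress|Current Portal'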
 
"All flash network appliance with dual controllers with two 25GbE each, all four NICs used for multipath" -
the theoretical network bandwidth could serve 2 × 6 = 12 GB/s from all flash (RAID set?); no info on SSD/NVMe type and count yet.
So how to relate the results to the appliance's capability in its config is still unknown.
 
"All flash network appliance with dual controllers with two 25GbE each, all four NICs used for multipath" -
the theoretical network bandwidth could serve 2 × 6 = 12 GB/s from all flash (RAID set?); no info on SSD/NVMe type and count yet.
So how to relate the results to the appliance's capability in its config is still unknown.
Pure X20 appliance
 
Pure X20 appliance
What I could find is up to 300,000 IOPS @ 32k for the X20 (so the math says that's 9.6 GB/s at that I/O size). The X20 can take up to 3 I/O cards per controller, so your dual ports per controller are completely sufficient. If you are happy with your results and measurements on Proxmox compared to VMware before, then it's fine.
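For reference, the rough arithmetic behind those numbers:

25 Gbit/s ≈ 3.1 GB/s per link
2 links per controller ≈ 6.2 GB/s; 2 controllers ≈ 12.5 GB/s raw network bandwidth
300,000 IOPS × 32 KB ≈ 9.6 GB/s

So at that datasheet spec, the network links are not the limiting factor.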