Hi all,
Please, could someone explain what might be wrong with our new storage that it performs so badly? We are slowly moving from VMware to PVE, and also from an old EMC VNXe3200 to two SuperMicro storage servers based on the X10SRi-F-O with SLES HA. Those SuperMicro servers are not in production yet, and the testing so far is not encouraging. We are using NFS for the shared storages, and here are the numbers:
Bash:
#pveperf on NVMe RAID0 over NFS
CPU BOGOMIPS: 211209.12
REGEX/SECOND: 2172544
HD SIZE: 744.86 GB (proxmox01:/nvme)
FSYNCS/SECOND: 2669.72
DNS EXT: 25.32 ms
DNS INT: 0.88 ms
#pveperf on WD-RED RAID5 over NFS
CPU BOGOMIPS: 211209.12
REGEX/SECOND: 2242008
HD SIZE: 5566.31 GB (proxmox03:/wdred)
FSYNCS/SECOND: 1381.81
DNS EXT: 26.93 ms
DNS INT: 0.88 ms
#pveperf on TOSHIBA MG04ACA400E RAID10 over NFS
CPU BOGOMIPS: 211209.12
REGEX/SECOND: 2120654
HD SIZE: 502.96 GB (SLESHAstorage:/exports)
FSYNCS/SECOND: 24.65
DNS EXT: 25.98 ms
DNS INT: 0.90 ms
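For reference, the SLES HA result works out to roughly 40 ms per fsync. A minimal way to reproduce this kind of synchronous-write test outside pveperf is a small `dd` loop with `oflag=sync`, run once directly on the storage node and once over the NFS mount, to see whether the latency comes from the disks/controller or from the NFS layer. This is only a sketch; the directory argument is a placeholder (e.g. `/exports` on the SLES node):

```shell
#!/bin/sh
# Rough sync-write benchmark. Run directly on the SLES node against the
# exported filesystem, then from a PVE node against the NFS mount, and
# compare. DIR is a placeholder argument; defaults to /tmp.
DIR=${1:-/tmp}
START=$(date +%s)
# 200 x 4 KiB writes; oflag=sync forces each write to stable storage
dd if=/dev/zero of="$DIR/fsync-test" bs=4k count=200 oflag=sync 2>/dev/null
END=$(date +%s)
echo "200 sync writes took $((END - START)) s"
rm -f "$DIR/fsync-test"
```

If the local run is fast but the NFS run is slow, the problem is in the NFS/export layer; if both are slow, it is the disks or the controller cache.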
Looking at the numbers, I have no clue what I could do to increase FSYNCS on the SLES HA storage - even a PC-grade WD RED RAID5 on an onboard Intel or Marvell SATA controller is outperforming our enterprise-grade SuperMicro HA storage with an LSI SAS3008 controller. I have already tried tuning TCP, NFS parameters, ext4/XFS mount options, the BIOS and the controller (not that there was much I could do there), and nothing helped. Moving an 80 GB qcow2 disk took 12 minutes from NVMe to the SLES HA storage, 5 minutes from NVMe to the EMC, and 4 minutes from the SLES HA storage to NVMe. So reading is not an issue, and writing an 80 GB file to the SLES HA storage with rsync also flies (311.13 MB/s, 4m23s), but the disk moves and the FSYNCS number scare me a lot. I would be glad for any hint. Thank you.
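Two things I would still double-check on the SLES node, since fast streaming writes plus terrible fsync rates usually point at synchronous commits hitting disks with no usable write cache: the export options (NFS exports default to `sync`, so every COMMIT from the PVE client must reach stable storage) and the volatile write-cache setting of the member disks behind the SAS3008. A hedged sketch of the checks - `/dev/sda` is a placeholder, and the commands are guarded so the script is safe to run anywhere:

```shell
#!/bin/sh
# Diagnostic checks for slow NFS fsync, to run on the SLES storage node.
# 1) Export options: "sync" (the default) forces every NFS COMMIT to disk.
command -v exportfs >/dev/null 2>&1 && exportfs -v
# 2) Volatile write cache of a disk behind the SAS3008 (WCE bit). With the
#    cache off and no BBU-backed controller cache, every fsync is a full
#    platter write. /dev/sda is a placeholder; repeat per member disk.
command -v sdparm >/dev/null 2>&1 && sdparm --get WCE /dev/sda
exit 0
```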
MartyZ