Hello, today I was doing some more tests on the production cluster instead of the lab cluster. When I run fio with a bigger iodepth on the production cluster, I get better results, so it looks like the 100Mbit problem is only in the lab.
PRODUCTION: fio --ioengine=libaio --direct=1 --sync=1...
Yes, I was also thinking about that, but there is only one active interface on the node, which is also used for the internet connection in the VMs. I've been downloading ISOs at about 37MB/s, so it's definitely not 100Mbit on the server side.
On the storage side, reads from NTFS reached 1.2GB/s...
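For anyone checking the same thing: the negotiated link speed can be read straight from sysfs, even on nodes without ethtool installed. A minimal sketch (interface names will differ; virtual NICs such as lo often report no speed, shown here as "?"):

```shell
#!/bin/sh
# Print the negotiated speed of every network interface from sysfs.
# Interfaces without a negotiated link (e.g. lo) fail the read, so
# fall back to "?" for those.
for f in /sys/class/net/*/speed; do
  iface=$(basename "$(dirname "$f")")
  speed=$(cat "$f" 2>/dev/null || echo '?')
  printf '%s: %s Mbit/s\n' "$iface" "$speed"
done
```

A physical NIC stuck at "100" here would explain an ~11MB/s ceiling immediately.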
Actually it is almost the same when using 16k, 32k or 64k blocks, almost as if it were capped at 11MB/s:
iodepth=1, 16k: READ: bw=10.3MiB/s (10.8MB/s), 10.3MiB/s-10.3MiB/s (10.8MB/s-10.8MB/s), io=616MiB (646MB), run=60003-60003msec
iodepth=1, 32k: READ: bw=10.1MiB/s (10.6MB/s)...
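A flat ceiling across block sizes does line up with the 100Mbit suspicion mentioned earlier in the thread. Back-of-envelope arithmetic (assumption: the bottleneck is a 100Mbit path somewhere between node and storage):

```shell
# 100 Mbit/s line rate converted to MB/s:
awk 'BEGIN { printf "%.1f MB/s raw\n", 100e6 / 8 / 1e6 }'
# -> 12.5 MB/s raw; after TCP/iSCSI framing overhead, roughly
# 10-11 MB/s effective, matching the ceiling at every block size.
```

If reads were latency-bound instead, the bandwidth would roughly double with each doubling of block size, which the 16k/32k/64k numbers above clearly don't do.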
Thanks, I am looking at it but don't really see what I should change.
Meanwhile I tried the fio benchmark tool, but it just confirms the results I got with the backup speeds:
Locally stored VM:
fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based...
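For anyone wanting to reproduce this, single runs like the one above can be swept over block sizes and queue depths in one go. A minimal sketch; the TARGET path is a placeholder, and it prints the commands by default (dry run) so nothing is touched until you opt in:

```shell
#!/bin/sh
# Sweep fio over block sizes and queue depths.
# TARGET is a hypothetical device path; point it at a test LV or file.
# RUN defaults to echo (dry run); invoke with RUN='' to execute fio.
TARGET=${TARGET:-/dev/mapper/test-lv}
RUN=${RUN-echo}
for bs in 4k 16k 32k 64k; do
  for depth in 1 16; do
    $RUN fio --name=seqread --filename="$TARGET" \
        --ioengine=libaio --direct=1 --rw=read \
        --bs="$bs" --numjobs=1 --iodepth="$depth" \
        --runtime=30 --time_based --group_reporting
  done
done
```

Comparing the iodepth=1 and iodepth=16 columns side by side makes it obvious whether the limit is per-request latency or raw link bandwidth.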
Hello,
I've found a few threads about a similar problem, but none of them has a solution.
We have a 3-node cluster with multipathed iSCSI storage on an IBM Storwize SAN. On Proxmox 6.4, which we use, a shared LVM volume group stores the VMs for all nodes. We didn't notice any performance...