Hello,
I have a weird issue: I cannot seem to move past the 1 Gbps mark when migrating VM disks to a Synology NAS.
The setup:
3 nodes connected to a 100Gbps Mikrotik switch
2 Synology NASes connected to the same Mikrotik switch using 25Gbps each
What I tested so far:
PVE Node to Synology NAS bandwidth using iperf3: ~20Gbps
DD-ing a file on the Synology itself got me around 1.6GB/s - RAID 5 over 4 Samsung SSDs
From these tests I concluded that the network is fine and can go well over 1Gbps, and that the disks are capable of writing far more than ~110 MB/s.
Still, when I migrate a disk from a PVE local disk to the NAS, or from one NAS to the other, I can't get past ~110MB/s write speed.
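For what it's worth, ~110 MB/s is almost exactly the usable payload rate of a saturated 1GbE link, which is why the number looks suspiciously like a 1 Gbps path or limit somewhere along the way:

```bash
# ~110 MB/s expressed in megabits: 110 MB/s * 8 bits/byte = 880 Mbps,
# i.e. roughly 1GbE line rate after protocol overhead.
echo "$((110 * 8)) Mbps"
```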
What I already tried:
1. Setting the Datacenter -> Migration settings -> Network to the correct network (100Gbps subnet => adapter)
2. Increasing the Synology I/O queue depth from 64 to 128, since Synology says this can improve throughput on 10/40GbE networks.
3. Using NFS storage instead of iSCSI - there shouldn't be a huge difference, and unfortunately there isn't.
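Since the observed rate sits right at 1GbE line rate, one more thing I checked on the PVE side is whether any bandwidth limit is configured - PVE can cap migrations and disk moves via `bwlimit` entries (values in KiB/s) in the datacenter or storage config. A quick way to look, assuming the standard config locations:

```bash
# Look for any bandwidth limits that could cap disk moves/migrations.
# /etc/pve/datacenter.cfg and /etc/pve/storage.cfg are the standard
# PVE config locations; values are in KiB/s.
grep -H bwlimit /etc/pve/datacenter.cfg /etc/pve/storage.cfg 2>/dev/null \
  || echo "no bwlimit entries found"
```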
Something "weird" that I noticed: when the transfer starts, the first few GBs go through in a matter of seconds, but then the rate dials down. So the network is clearly capable of delivering the data - the question is what's throttling it, PVE or Synology?
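That initial burst is consistent with the writes landing in RAM cache first. A sketch of a write test that forces a flush to disk at the end (`conv=fdatasync`), so the cached burst can't inflate the average - the target path is just an example, point it at the actual NFS/iSCSI mount:

```bash
# Sustained-write check: conv=fdatasync flushes to disk before dd reports,
# so the RAM-cache burst at the start doesn't inflate the average rate.
# TARGET is an assumed path -- replace it with the real NAS mount point.
TARGET="${TARGET:-/tmp/ddtest.bin}"
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync status=progress
rm -f "$TARGET"
```

For a meaningful result, raise `count` well past the NAS's RAM size so caching can't absorb the whole write.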
Edit #2: I have tried
Bash:
dd if=/dev/zero of=/synology/file bs=1G count=300 status=progress
and it was constantly over 1GB/s, so I don't think it's the disks slowing down...?
What else can I do to try to fix this?
Thank you!