[SOLVED] iSCSI disk migration speed

hac3ru

Member
Mar 6, 2021
Hello,

I have a weird issue: I cannot seem to move past the 1 Gbps mark when migrating VM disks to a Synology NAS.

The setup:
3 nodes connected to a 100Gbps Mikrotik switch
2 Synology NASes connected to the same Mikrotik switch using 25Gbps each

What I tested so far:
PVE node to Synology NAS bandwidth using iperf3: ~20 Gbps (a rough example of the command is below)
dd-ing a file on the Synology itself got me around 1.6 GB/s - RAID 5 over 4 Samsung SSDs
From these tests I concluded that the network is fine and can go well over 1 Gbps, and that the disks are capable of writing far more than ~110 MB/s.
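
For reference, the bandwidth test was just a plain iperf3 run between a PVE node and the NAS, roughly like this (the address and stream count are only examples):
Bash:
# on the Synology
iperf3 -s
# on the PVE node; -P 4 runs four parallel streams for 30 seconds
iperf3 -c 10.10.10.50 -P 4 -t 30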

Still, when I'm migrating a disk from a PVE local disk to the NAS, or from one NAS to the other, I can't seem to go over ~110 MB/s write speed.

What I already tried:
1. Setting Datacenter -> Migration settings -> Network to the correct network (the 100 Gbps subnet => adapter); see the snippet after this list
2. Editing the Synology I/O Queue Depth from 64 to 128 since Synology says that it can enhance throughput on 10/40GbE networks.
3. NFS storage instead of iSCSI - even though there shouldn't be a huge difference, and unfortunately there isn't: it's just as slow.
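
For point 1, the migration network setting ends up as a single line in /etc/pve/datacenter.cfg, roughly like this (the subnet is just an example - use whatever subnet carries the 100 Gbps links):
Bash:
# /etc/pve/datacenter.cfg
migration: secure,network=10.10.10.0/24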

Something "weird" that I saw, when the transfer starts, I'm transferring the first few GBs in a matter of seconds, but then it dials down. This means that the network is indeed capable of delivering the data. The question is what's stopping it - PVE or Synology?
Edit #2: I have tried to
Bash:
dd if=/dev/zero of=/synology/file bs=1G count=300 status=progress
and it stayed constantly over 1 GB/s, so I don't think it's the disks slowing down...?

What else can I do to try to fix this?

Thank you!
 
dd if=/dev/zero of=/synology/file bs=1G count=300 status=progress
This is not a valid write test. If you have an intelligent filesystem that detects the zeros, it will just write holes and no actual data. Therefore, please use fio for any kind of comparable benchmark.
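
For example, a sequential write test with direct I/O looks something like this (the path and sizes are placeholders - adjust them to your mount point):
Bash:
fio --name=seqwrite --filename=/synology/fio-test --size=10G --rw=write --bs=1M --direct=1 --ioengine=libaio --iodepth=32 --numjobs=1 --group_reporting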
 
Bash:
fio --filename img --size=100GB --direct=1 --rw=randrw --bs=64k --numjobs=4 --name=test-01
returns a constant write speed of 1.8 GB/s. The test ran for about 5 minutes.

Anyway, since we're talking about RAID 5 over 4 SSDs, they should be capable of writing way over 100 MB/s - you can get to 100 MB/s with HDDs + RAID 5 these days, so I don't expect the disks to be the issue.
 
a constant write speed of 1.8 GB/s. The test ran for about 5 minutes.
Perfect, so it's not the disk speed.

you can get to 100 MB/s with HDDs + RAID 5 these days, so I don't expect the disks to be the issue.
You can get 230 MB/s from a 10k SAS drive ... and have been able to for years.

Still, when I'm migrating a disk from PVE local disk to the NAS or from a NAS to the other, I can't seem to go over ~110MB/s write speed.
Just to be clear ... what is "other"?

local -> NAS
NAS -> local

are most probably limited by the local disk speed?

What about

iSCSI -> NFS (both should be on the NAS)

is that also slow?
 
"Other" means NAS-01 to NAS-02.
So I tried:
local -> NAS-01
NAS-01 -> local
NAS-01 -> NAS-02
NAS-02 -> NAS-02
The speeds are all ~110 MB/s.

iSCSI -> NFS was about the same, 110MB/s

But, something even weirder: now it works via iSCSI from NAS-02 -> NAS-01 at 800 MB/s. This is after I ran fio (I know, it makes absolutely no sense). Still, I've transferred about 1 TB since my last reply, so ... what the hell?

Edit #1: I can see the bandwidth on the switch: the NAS-02 port is receiving (from the NAS to the switch) at 6 Gbps, I can see 6 Gbps RX and TX on both of the server's ports, and I see all of that traffic going out as TX on the NAS-01 port... So it's definitely not the network, I'd say.
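
(For reference, the same counters can be watched on the PVE node itself - the interface name below is just an example:)
Bash:
# prints the RX/TX byte counters of the migration NIC once per second
watch -n1 'ip -s link show ens1f0np0'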

Edit #2: I don't get what's going on. For the last 20 minutes I've been moving disks around and it works as expected. I'll give it the night and test again in the morning...
 
Hello,

I'm glad to say that everything works fine.
I don't think it was a RAID scrub, as the local test using fio worked perfectly. I have no clue what happened....
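
(For anyone who hits this later: a RAID scrub or resync in progress would show up on the NAS itself, e.g.:)
Bash:
# run on the Synology over SSH; an active scrub/resync shows a progress line per md device
cat /proc/mdstat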

Still, thank you for your time! I appreciate it!
 
I am dealing with this issue. It's taking absurdly long to migrate a 300 GB disk to an iSCSI LVM LUN. The VM is offline. The only connection between the NAS and the Proxmox server is 10 Gb. I attempted to run the above fio command (which I am not familiar with), and I assume that since I am not using iSCSI direct mode I'm unable to run it: it complains about being out of space, and I don't see where in the shell I can access the LVM volume. Maybe the problem is LVM? That seems... odd, though.
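
In case it helps anyone, fio can also be pointed at a throwaway LV on the iSCSI-backed volume group instead of a file - the VG/LV names below are only placeholders, and it must never be pointed at an LV that holds a real VM disk:
Bash:
lvs                                     # find the volume group backed by the iSCSI LUN
lvcreate -L 20G -n fio-test <vg-name>   # scratch LV used only for benchmarking
fio --name=seqwrite --filename=/dev/<vg-name>/fio-test --size=10G --rw=write --bs=1M --direct=1 --ioengine=libaio --iodepth=32 --group_reporting
lvremove /dev/<vg-name>/fio-test        # clean up afterwards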

I was wondering why the performance was awful when running VMs off this storage. This issue is only with Proxmox; all other devices on my network communicate at full speed.

Edit: Read speeds are great. Still digging into this; I'll reply if I figure out what's going on. I kind of have to, otherwise I won't be using iSCSI.
 
