@wbumiller Sorry to ping you, do you know if this is a known limitation with the dd command in qemu-img? Maxing out throughput at 75MB/s on NVMe storage.
qemu-img dd -f raw -O raw osize=32212254720 if=/root/test-netcat-plaindd.raw of=/root/test-local-qemu-dd.raw
qemu-img dd -f raw -O raw bs=16M osize=32212254720 if=/root/test-netcat-plaindd.raw of=/root/test-local-qemu-dd.raw

Why don't you use a shared storage method, such as NFS? You will need to tune NFS a bit, but we managed to migrate VMs with 8-9 TB overnight with this method.

Appreciate it!
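On the qemu-img dd throughput question above: it might be worth comparing against qemu-img convert, which (unlike the dd subcommand) can use several coroutines and out-of-order writes. Just a sketch reusing the same paths, not something I've measured on this hardware:

# parallel coroutines (-m) and out-of-order writes (-W), with progress (-p)
qemu-img convert -p -f raw -O raw -m 16 -W /root/test-netcat-plaindd.raw /root/test-local-qemu-dd.raw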
We have a few 10TB+ VMs to move and they wouldn't be done within a week of starting them lol
I learned on the last large cutover that the current tool slows down over time after the first few TB: it went from roughly 30 minutes per 60GB to about 60 minutes per 60GB. I was praying it would be done before Monday morning, and luckily it finished at 10PM on Sunday.
I'll keep cracking away at it and see if I can come up with something realistic. I really need to just set up another 25GbE host so I don't have to bum others to test lol.
Why don't you use a shared storage method, such as NFS? You will need to tune NFS a bit, but we managed to migrate VMs with 8-9 TB overnight with this method.
Let me share some figures from our migrations.
Native ESXi importer: about 110-130 MB/s, which is fine for our standard VM size (roughly 30 min per VM), but anything above 500GB becomes too slow.
NFS import: We set up two NFS servers, one physical box with a RAID controller and SATA SSDs, and one NFS server running as a VM on Ceph (NVMe disks). The first step is to storage-migrate the VMs to NFS; you can do it live.
By tuning NFS (I do not recall which options we used), ESXi was copying data at about 1.1GB/s to the NFS server on Ceph and about 700MB/s to the physical NFS server. At that stage you need to power off the VM and start the import on PVE. qm disk import was running at about 330MB/s per disk, so if you have multiple disks you can achieve quite a good transfer rate. During this import the NFS server on Ceph also did much better, especially when importing several disks at the same time, but I do not recall the actual numbers.
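As a rough illustration only (the export path, subnet and hostname below are placeholders, and not necessarily the options we actually used), typical NFS throughput tuning looks like an async export on the server plus large rsize/wsize and nconnect on the client mount:

# /etc/exports on the NFS server (async trades crash safety for speed)
/export/migration 10.0.0.0/24(rw,async,no_subtree_check)

# mount on the importing PVE node: large transfer sizes, multiple TCP connections
mount -t nfs -o rw,rsize=1048576,wsize=1048576,nconnect=8 nfs-server:/export/migration /mnt/migration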
We have not tried running the VM from the vmdk in place and importing it while it is running.
I do not want to discourage this discussion about improving the performance; I just want to share a "workaround" for anyone struggling with import speed.
I hope the native import speed improves, as the NFS import is more complicated and we still need to migrate about 110 VMs with 50TB of data.
This looks awesome! I'll give this a test tomorrow and post the results.
Hm, netcat isn't encrypted though, or am I missing something?
Well, I prefer a slow transfer to one without encryption, but to each their own. Imho it would be quite a bad idea to integrate a "fast migration procedure" into PVE if it needs clear-text transfer over the network.

Correct, in testing the encryption process is a pretty big bottleneck. I can try it again with the beefier hardware, but the main issue is that VMware has hampered the speed at which SSH can transfer files.
While I agree security is a good concern, in this case, is it? At least in our environment (and I know not everyone does it this way), we have all of our servers on the same L2 network dedicated to VM storage traffic, so none of it ever leaves the switch stack to be intercepted.
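If encryption has to stay in the path, one thing that may be worth testing (just a sketch; the hostname and paths are placeholders, I haven't checked which ciphers ESXi's sshd accepts, and VMware's SSH throttling may apply regardless) is pulling the flat vmdk over ssh with a cheaper AEAD cipher:

# run from the PVE node; -c picks the cipher for the transfer
ssh -c aes128-gcm@openssh.com root@esxi-host 'dd if=/vmfs/volumes/datastore1/testvm/testvm-flat.vmdk bs=16M' | dd of=/root/testvm-flat.raw bs=16M status=progress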
@spencerh
Do you mind doing a short test for me? In my testing I have run into something interesting, but I need to verify whether it's a hardware/config issue or something with ESXi itself.
When running a single-threaded iperf3 test from Proxmox to ESXi, it gets ~25Gbps as expected, but the other way around hits about 16Gbps. After updating the Intel E810 driver it now hits ~20Gbps, but still not the maximum. I just want to make sure it's not affecting the transmit limit. Considering there seems to be some type of 500MB/s limit on a TCP stream in ESXi, I don't think it'll matter, but I want to be sure.
I looked around the internet and there's lots of talk about people hitting some type of 500MB/s limit in ESXi. It seems a little too "clean" of a number, as it's suspiciously 4x1GbE. If that can be solved or at least increased, that would make a single import even faster.
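One quick way to narrow that down (both are standard iperf3 flags; same address/port as the runs below) would be parallel streams, plus a reverse run so the direction flips without swapping the server and client roles:

iperf3 -c 192.168.0.123 -p 8000 -P 4 -t 10    # 4 parallel streams
iperf3 -c 192.168.0.123 -p 8000 -R -t 10      # server sends, client receives

If the aggregate with -P 4 climbs well past the single-stream number, the ceiling is per-connection rather than per-host.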
Update on the code: I'm re-implementing the change on top of a fresh copy of version 1.0.1 of the FUSE application. It's much cleaner code to work on, and I'm making sure it's done correctly, with all of the checks and management in place so it works as intended every time.
[root@my-esx:~] /usr/lib/vmware/vsan/bin/iperf3.copy -s -B 192.168.0.123 -p 8000
-----------------------------------------------------------
Server listening on 8000 (test #1)
-----------------------------------------------------------
Accepted connection from 192.168.0.150, port 53776
[ 5] local 192.168.0.123 port 8000 connected to 192.168.0.150 port 53790
iperf3: getsockopt - Function not implemented
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 872 MBytes 7.30 Gbits/sec
iperf3: getsockopt - Function not implemented
[ 5] 1.00-2.00 sec 1.07 GBytes 9.24 Gbits/sec
iperf3: getsockopt - Function not implemented
[ 5] 2.00-3.00 sec 1.08 GBytes 9.24 Gbits/sec
iperf3: getsockopt - Function not implemented
[ 5] 3.00-4.00 sec 1.07 GBytes 9.24 Gbits/sec
iperf3: getsockopt - Function not implemented
[ 5] 4.00-5.00 sec 1.07 GBytes 9.18 Gbits/sec
iperf3: getsockopt - Function not implemented
[ 5] 5.00-6.00 sec 1.09 GBytes 9.35 Gbits/sec
iperf3: getsockopt - Function not implemented
[ 5] 6.00-7.00 sec 1.09 GBytes 9.35 Gbits/sec
iperf3: getsockopt - Function not implemented
[ 5] 7.00-8.00 sec 1.09 GBytes 9.35 Gbits/sec
iperf3: getsockopt - Function not implemented
[ 5] 8.00-9.00 sec 1.09 GBytes 9.35 Gbits/sec
iperf3: getsockopt - Function not implemented
[ 5] 9.00-10.00 sec 1.09 GBytes 9.35 Gbits/sec
iperf3: getsockopt - Function not implemented
[ 5] 10.00-10.01 sec 5.25 MBytes 8.99 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.01 sec 10.6 GBytes 9.10 Gbits/sec receiver
root@pve:~# iperf3 -c 192.168.0.123 -t 10 -i 5 -f g -p 8000
Connecting to host 192.168.0.123, port 8000
[ 5] local 192.168.0.150 port 53790 connected to 192.168.0.123 port 8000
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-5.01 sec 5.15 GBytes 8.85 Gbits/sec 1289 1.49 MBytes
[ 5] 5.01-10.01 sec 5.44 GBytes 9.35 Gbits/sec 0 2.00 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.01 sec 10.6 GBytes 9.10 Gbits/sec 1289 sender
[ 5] 0.00-10.01 sec 10.6 GBytes 9.10 Gbits/sec receiver
iperf Done.
root@pve:~# iperf3 -s -B 192.168.0.150 -p 8000
-----------------------------------------------------------
Server listening on 8000 (test #1)
-----------------------------------------------------------
Accepted connection from 192.168.0.123, port 55395
[ 5] local 192.168.0.150 port 8000 connected to 192.168.0.123 port 53091
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.07 GBytes 9.21 Gbits/sec
[ 5] 1.00-2.00 sec 1.09 GBytes 9.34 Gbits/sec
[ 5] 2.00-3.00 sec 1.09 GBytes 9.34 Gbits/sec
[ 5] 3.00-4.00 sec 1.08 GBytes 9.31 Gbits/sec
[ 5] 4.00-5.00 sec 1.09 GBytes 9.34 Gbits/sec
[ 5] 5.00-6.00 sec 1.09 GBytes 9.34 Gbits/sec
[ 5] 6.00-7.00 sec 1.09 GBytes 9.34 Gbits/sec
[ 5] 7.00-8.00 sec 1.09 GBytes 9.34 Gbits/sec
[ 5] 8.00-9.00 sec 1.09 GBytes 9.34 Gbits/sec
[ 5] 9.00-10.00 sec 1.08 GBytes 9.31 Gbits/sec
[ 5] 10.00-10.01 sec 4.62 MBytes 9.36 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.01 sec 10.9 GBytes 9.32 Gbits/sec receiver
[root@my-esx:~] /usr/lib/vmware/vsan/bin/iperf3.copy -c 192.168.0.150 -t 10 -i 5 -f g -p 8000
Connecting to host 192.168.0.150, port 8000
[ 5] local 192.168.0.123 port 53091 connected to 192.168.0.150 port 8000
iperf3: getsockopt - Function not implemented
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-5.00 sec 5.42 GBytes 9.32 Gbits/sec 4236316672 0.00 Bytes
iperf3: getsockopt - Function not implemented
[ 5] 5.00-10.00 sec 5.43 GBytes 9.33 Gbits/sec 58650624 0.00 Bytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.9 GBytes 9.32 Gbits/sec 0 sender
[ 5] 0.00-10.01 sec 10.9 GBytes 9.32 Gbits/sec receiver
iperf Done.
Is the netcat-dd branch the code you did the testing with? I was going to try to compile it and do some testing in the meantime, but I wanted to make sure I was looking at the right thing.

./target/release/esxi-folder-fuse --test-fuse --use-fuse-streaming --esxi-host 10.20.30.40 --esxi-disk /vmfs/volumes/nvme-storage/windows-test/windows-test.vmdk --dest /nvme-storage/esxi-import-test/windows-test.qcow2 --dst-format qcow2 --block-size 4M
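In case it saves someone else a step, this is roughly how I'd expect to build that branch (assuming it's a plain Cargo project, which the ./target/release path suggests):

git checkout netcat-dd
cargo build --release    # should produce ./target/release/esxi-folder-fuse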