Why do LXC migrations between ZFS pools use Rsync?

outback-engineer

Sep 18, 2024
Hi, I have recently been moving some large LXC containers between two ZFS pools on the same machine and I've noticed something a little odd. Whereas moving a VM basically screams along as fast as the disks can manage, moving the containers is much slower: in my case, I'm getting ~70 MB/s when moving container volumes vs several GB/s for VM disks (the VM migrations also show much more consistent disk utilisation).

When I noticed that LXC migrations were causing a lot of ZFS L2ARC writes but VM migrations weren't, I figured something was up, so I had a peek at the running processes. Sure enough, there was Rsync ticking along on a single CPU core...

Is it intended that Rsync is used to move LXC ZFS subvolumes around? And is there anything simple I can do to speed up the process?

Cheers
 
it's the easiest way to get storage-agnostic moving ;) for VMs, qemu-img can do a lot of the heavy lifting (since access and moving/converting happen on the block layer).

we could maybe special-case ZFS->ZFS copying/moving, since we already have support for zfs send/recv-based transfer in the storage layer
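In the meantime, if you want to move a container volume between pools by hand, a zfs send/recv pipeline along these lines should work. The pool and dataset names below are made up for illustration, so substitute your own, and stop the container first so the snapshot is consistent:

```shell
# Hypothetical dataset names -- substitute your own pools and CT ID.
# Stop the container first for a consistent copy, e.g.: pct stop 101
zfs snapshot rpool/data/subvol-101-disk-0@migrate
zfs send rpool/data/subvol-101-disk-0@migrate | zfs recv tank/data/subvol-101-disk-0
# Then point the container config at the new storage, verify it boots,
# and clean up the migration snapshot on both sides:
zfs destroy rpool/data/subvol-101-disk-0@migrate
zfs destroy tank/data/subvol-101-disk-0@migrate
```

Unlike rsync, this streams the dataset at the block level in a single sequential pass, which is why VM moves between ZFS storages are so much faster.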
 
Ah, I had a hunch that might have been the case (after all, send/receive only works on ZFS) - I was mainly curious if there was a simple way to speed up the process (like running several copies of Rsync in parallel?)
Thanks Fabian
 
