Moving Large Directory Between ZFS Datasets – Safe Strategy

shalak

Member
May 9, 2021
Hi all,

I have an LXC container on Proxmox that stores ~9.7TB of data under /zbiornik-alpha/subvol-100-disk-0/nas. This is a ZFS dataset with refquota=15T, and it's about 70% full. I’d like to move /nas into a separate ZFS dataset (/zbiornik-alpha/shared-nas) so it can be mounted into multiple containers.

Here’s the concern:

If I run, on Proxmox:
Code:
mv /zbiornik-alpha/subvol-100-disk-0/nas /zbiornik-alpha/shared-nas/
...will that move operation:
  1. Copy each file and delete it immediately after (i.e., low temporary space use), or
  2. Copy all files first and only delete the source at the end (i.e., requires 9.7TB of free space)?

I’ve read that mv across ZFS datasets behaves like cp + rm, but I want to confirm the actual behavior, especially under low free-space conditions (~4.3TB available on the whole ZFS pool). I'm trying to avoid a failed move or a partial state.
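For reference, this is roughly how I'm reading the space numbers (plain zfs commands, dataset names as above):
Code:
# pool-wide and per-dataset space accounting
zfs list -o space zbiornik-alpha
zfs get refquota,used,available zbiornik-alpha/subvol-100-disk-0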

Any insight or best practices appreciated!
 
Hmm, not sure how this would help me. I understand that I would need to:
- snapshot the subvol-100-disk-0
- clone it
- mount the clone in e.g. /mnt/clone
- and then mv /mnt/clone/* /zbiornik-alpha/shared-nas/

How does this change the situation? I'm still moving across datasets, am I not? It's still not a simple inode change, but an actual cp + rm...
 
To answer your question: it will do it the "2" way, i.e. copy everything first and only delete the source at the end, so it temporarily needs the full amount of space.
I strongly assume that using zfs clone from one dataset to another will also need the full space, but it will not delete the source at the end; that has to be done manually.
 
Hi all,
  1. Copy each file and delete it immediately after (i.e., low temporary space use), or
  2. Copy all files first and only delete the source at the end (i.e., requires 9.7TB of free space)?
;) I realized you mentioned you don't have another ~10TB of free space to store a second copy. If you create a snapshot and then mv the data from the snapshot to the destination, you still need that extra ~10TB, because mv will not free the source data: the deleted files just move from the live dataset into the snapshot!
For method 1, the mv command will do what you need. I recommend adding the -v flag and sending the output to a file (e.g. mv -v /zbiornik-alpha/subvol-100-disk-0/nas /zbiornik-alpha/shared-nas/ | tee -a mv.log) so you can keep an eye on anything that needs handling. You don't need an additional 9.7TB of free space for that.
For method 2, you really do need an additional ~10TB of free space to store all the data at the destination (/zbiornik-alpha/shared-nas/), same as with zfs clone.
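One small note on that command (a shell detail, nothing ZFS-specific): tee only captures stdout, so if you also want possible errors from mv in the log, redirect stderr as well, something like:
Code:
mv -v /zbiornik-alpha/subvol-100-disk-0/nas /zbiornik-alpha/shared-nas/ 2>&1 | tee -a mv.log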
 
For method 1
...
For method 2

I don't get what you mean. I didn't post 2 methods, I posted 2 possible outcomes of the `mv` command and I'm wondering which is the correct one.

But your clone suggestion got me thinking: what if I do a snapshot, clone it as shared-nas, then promote the clone and delete subvol-100-disk-0? Will that effectively achieve what I want (i.e. decoupling the ZFS dataset from the LXC container and whatever Proxmox might do to an LXC-specific dataset)?
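Rough sketch of what I mean (dataset names from above; @move is just a placeholder snapshot name, and I haven't verified the destroy step):
Code:
zfs snapshot zbiornik-alpha/subvol-100-disk-0@move
zfs clone zbiornik-alpha/subvol-100-disk-0@move zbiornik-alpha/shared-nas
# promote the clone so it no longer depends on the origin dataset
zfs promote zbiornik-alpha/shared-nas
# after the promote, subvol-100-disk-0 is the dependent side and can be removed
zfs destroy zbiornik-alpha/subvol-100-disk-0
# optionally drop the now-unneeded snapshot
zfs destroy zbiornik-alpha/shared-nas@move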

I strongly assume that using zfs clone from one dataset to another will also need the full space, but it will not delete the source at the end; that has to be done manually.

Hmmm, isn't the snapshot/clone feature a copy-on-write type of situation?
 
I'm not sure I caught your thoughts. A zfs snapshot is copy-on-write, as you said. A zfs clone, on the other hand, is a full copy created from a snapshot. So if you decide to use zfs clone, you need to prepare at least 10TB of free space before you do it. If you don't have enough space, zfs clone may not be useful for this situation.
 
I’d like to move
I did not really read everything here, but you can move a dataset this way: man zfs-rename. Without a snapshot, without cloning.
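For example (dataset names taken from this thread; double-check the exact paths with zfs list first):
Code:
# rename the dataset in place; no data gets copied
zfs rename zbiornik-alpha/subvol-100-disk-0 zbiornik-alpha/shared-nas
# optionally give it an explicit mountpoint afterwards
zfs set mountpoint=/zbiornik-alpha/shared-nas zbiornik-alpha/shared-nas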
 
I did not really read everything here, but you can move a dataset this way: man zfs-rename. Without a snapshot, without cloning.
Thanks! After zfs rename, will such a dataset no longer be managed by the Proxmox GUI? I worry about situations like this, for example: I detach subvol-100-disk-0 from LXC 100 and rename it to my-dataset, then sometime in the future I remove LXC 100 and create a new container with the same ID; it won't magically treat my-dataset as the "first disk of container 100" or something?
 
Thanks! After zfs rename, will such a dataset no longer be managed by the Proxmox GUI?
Things like this are completely "working under the hood" --> yes, there might be dragons.

For Proxmox (the software) the configured disk from Container 100 has just vanished. Without further work this will lead to errors of course. I would just edit /etc/pve/lxc/100.conf manually to remove any reference.
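For illustration only (the storage name and options here are assumptions, so compare against your actual 100.conf), the reference to remove would look something like one of these:
Code:
rootfs: zbiornik-alpha:subvol-100-disk-0,size=15T
mp0: zbiornik-alpha:subvol-100-disk-0,mp=/nas,size=15T
If you later want the renamed dataset available in several containers, a bind mount point in the form mp0: /zbiornik-alpha/shared-nas,mp=/nas is the usual way to reference a plain host path.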

Delivering the missing disclaimer: do always have a 3-2-1 backup and verify that "restore" works.
 
I worry about...
Why not test the concrete behavior? Just create a test-Container, add some storage, run backup/restore, "zfs rename" the virtual disk, start the Container, and so on. This approach comes for free and teaches you the behavior of your actual system.
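Very roughly, and every ID, template name and storage name below is a placeholder to adapt (a sketch, not a recipe):
Code:
# throwaway container just for observing the behavior
pct create 999 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst --rootfs zbiornik-alpha:8
vzdump 999
# rename its disk behind Proxmox's back and see what complains
zfs rename zbiornik-alpha/subvol-999-disk-0 zbiornik-alpha/renamed-test
pct start 999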

Just saying...

PS: also note that mount points may behave differently than expected: especially when restoring disks that were excluded from backup, a restore may delete an actual disk. (So I have read; I haven't been in that situation yet...)
 
The mv command will go file by file: copy, then delete. That is a problem if mv is interrupted (you don't know where it stopped). Rsync has a --remove-source-files option, which works better if interrupted. Neither of those cares what filesystem it is on. Although if you do use snapshots, changes obviously need to be tracked, so a mv will not result in reduced disk usage. I think that is what you're confused about?
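For example, a resumable move could look roughly like this (paths from this thread, standard rsync and GNU find flags):
Code:
# copy, deleting each successfully transferred file from the source as it goes
rsync -avh --remove-source-files /zbiornik-alpha/subvol-100-disk-0/nas /zbiornik-alpha/shared-nas/
# rsync leaves the (now empty) directory tree behind; clean it up afterwards
find /zbiornik-alpha/subvol-100-disk-0/nas -depth -type d -empty -delete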

As others have said, ZFS has better options for renaming a volume, which doesn't use extra space and keeps snapshots intact. You can make the config changes manually; everything is 'somewhere' in /etc/pve as plain text.
 