Yup, decided to test this myself, and I can confirm:
- created a new CT 404 with an additional non-rootfs mount
- it automatically created /zbiornik-alpha/subvol-404-disk-0
- I started it and created some files
- I stopped the container
- ran zfs rename...
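For anyone who wants to repeat it, the sequence was roughly this (the target name `renamed-test` is just an example; the container has to be stopped before the rename):

```
pct stop 404                                    # container must be offline
zfs rename zbiornik-alpha/subvol-404-disk-0 \
           zbiornik-alpha/renamed-test          # rename the mount-point dataset
zfs list -r zbiornik-alpha                      # confirm the new name shows up
```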
The mv command will go file by file, copying and then deleting. That is a problem if mv is interrupted (you don’t know where it stopped). Rsync has a `--remove-source-files` option, which works better if interrupted. Neither of those cares what filesystem it is...
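For example, assuming the source and target paths from this thread, an interruptible move could look like this (re-running the same command resumes where it stopped):

```
rsync -aHAX --remove-source-files \
    /zbiornik-alpha/subvol-100-disk-0/nas/ \
    /zbiornik-alpha/shared-nas/
# --remove-source-files deletes each file only after it has been transferred;
# empty directories are left behind on the source and need a separate pass:
find /zbiornik-alpha/subvol-100-disk-0/nas/ -mindepth 1 -type d -empty -delete
```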
Why not test the concrete behavior? Just create a test container, add some storage, run backup/restore, `zfs rename` the virtual disk, start the container, and so on. This approach comes for free and teaches you the behavior of your actual...
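A minimal dry run of that, sketched (the CT ID, template, and storage names are placeholders; adjust to your setup and use the actual dump filename):

```
pct create 999 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --rootfs zbiornik-alpha:8                  # throwaway test container
pct set 999 --mp0 zbiornik-alpha:8,mp=/data    # extra non-rootfs mount point
vzdump 999 --storage local --mode stop         # back it up
pct destroy 999 --purge
pct restore 999 /var/lib/vz/dump/vzdump-lxc-999-*.tar.zst \
    --storage zbiornik-alpha                   # restore and see what comes back
```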
Thanks! After `zfs rename`, will such a dataset no longer be managed by the Proxmox GUI? I worry about situations like, for example: I detach subvol-100-disk-0 from LXC 100 and then rename it to my-dataset, and sometime in the future I'll remove LXC...
To your question: it will do it way "2", first copying everything and automatically deleting everything at the end, so it will temporarily need the full amount of space.
Using zfs clone from one dataset to another will, I strongly assume, also need the full space, but it will not delete the...
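This is easy to check directly, e.g. (the snapshot and clone names here are made up):

```
zfs snapshot zbiornik-alpha/subvol-100-disk-0@space-test
zfs clone zbiornik-alpha/subvol-100-disk-0@space-test zbiornik-alpha/clone-test
zfs list -o name,used,refer -r zbiornik-alpha    # see how much the clone uses
zfs destroy zbiornik-alpha/clone-test            # clean up
zfs destroy zbiornik-alpha/subvol-100-disk-0@space-test
```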
I don't get what you mean. I didn't post 2 methods, I posted 2 possible outcomes of the `mv` command and I'm wondering which is the correct one.
But your clone suggestion got me thinking: what if I do a snapshot, clone it as shared-nas, then...
Hmm, not sure how this would help me. As I understand it, I would need to (roughly as in the sketch below):
- snapshot the subvol-100-disk-0
- clone it
- mount the clone in e.g. /mnt/clone
- and then mv /mnt/clone/* /zbiornik-alpha/shared-nas/
How does this change the situation...
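Spelled out, that would be something like this (the snapshot/clone names are just examples):

```
zfs snapshot zbiornik-alpha/subvol-100-disk-0@before-move
zfs clone zbiornik-alpha/subvol-100-disk-0@before-move zbiornik-alpha/clone-tmp
zfs set mountpoint=/mnt/clone zbiornik-alpha/clone-tmp   # mount it at /mnt/clone
mv /mnt/clone/* /zbiornik-alpha/shared-nas/              # still a cross-dataset copy+delete
```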
Hi all,
I have an LXC container on Proxmox that stores ~9.7TB of data under /zbiornik-alpha/subvol-100-disk-0/nas. This is a ZFS dataset with refquota=15T, and it's about 70% full. I’d like to move /nas into a separate ZFS dataset...
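Concretely, what I have in mind is something like this (shared-nas doesn't exist yet):

```
zfs create -o refquota=15T zbiornik-alpha/shared-nas   # the new standalone dataset
mv /zbiornik-alpha/subvol-100-disk-0/nas/* /zbiornik-alpha/shared-nas/
```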