Question on pve-zsync

juppzupp

May 8, 2020
Hi,
I seem to misunderstand pve-zsync.
I am trying to pull the data drive from one LXC to another on a different host as a one-time action.

on host 1 in 199.conf :
mp1: Video:subvol-100-disk-0,mp=/media/vda,size=3580G

on host 2:
pve-zsync sync --source 192.168.0.213:Video/subvol-199-disk-0 --dest 127.0.0.1:Video/subvol-199-disk-0 --verbose
But when I start the container (created locally, I just added the mp1 manually), I get this:
root@tester:~# cd /media/vda/
root@tester:/media/vda# ls
subvol-199-disk-0
root@tester:/media/vda# find .
.
./subvol-199-disk-0
root@tester:/media/vda#
Instead of the contents (video files), I get an empty directory with the name of the subvol.
I must be missing something obvious, but can't see it.
Any pointers? Thanks.
 
as a one time action
Then I personally would do it manually:
create a snapshot first and then send the snapshot over to the target node.

Code:
zfs snapshot Video/subvol-199-disk-0@{your snapshot}
zfs send Video/subvol-199-disk-0@{your snapshot} | ssh root@{other host} zfs recv Video/subvol-199-disk-0

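Not needed for a one-time copy, but if the source keeps changing while the full transfer runs, an incremental send afterwards only transfers the delta instead of the whole dataset again (a sketch; the snapshot names @base and @update and the host name are examples):

```shell
# Full initial transfer from an example @base snapshot
zfs snapshot Video/subvol-199-disk-0@base
zfs send Video/subvol-199-disk-0@base | ssh root@otherhost zfs recv Video/subvol-199-disk-0

# Later: snapshot again and send only the changes since @base
zfs snapshot Video/subvol-199-disk-0@update
zfs send -i @base Video/subvol-199-disk-0@update | ssh root@otherhost zfs recv Video/subvol-199-disk-0
```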
And if you want to check what kind of ZFS datasets are there, don't take a look at the file system itself, but run zfs list :)
 
Thanks. Is there a way to make the already-copied filesystem available to the new container?
It's close to 4 TB; if I start from scratch (snapshot, send/recv), that's another 5 to 6 hours.
I can see the files on the host, but the LXC shows just the "subvol" directory, which is empty.
 
Please post the output of zfs list inside [CODE][/CODE] tags so we know what the situation is.
 
Here we go:
root@h3plus2:~# zfs list
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
Video                                       1.77T  1.74T   104K  /Video
Video/subvol-100-disk-0                     1.77T  1.74T    96K  /Video/subvol-100-disk-0
Video/subvol-100-disk-0/subvol-100-disk-0   1.77T  1.74T  1.77T  /Video/subvol-100-disk-0/subvol-100-disk-0
Video/subvol-199-disk-0                      256K  8.00G    96K  /Video/subvol-199-disk-0
Video/subvol-199-disk-0/subvol-199-disk-0     96K  1.74T    96K  /Video/subvol-199-disk-0/subvol-199-disk-0
rpool                                       27.5G  1.73T   104K  /rpool
rpool/ROOT                                  2.60G  1.73T    96K  /rpool/ROOT
rpool/ROOT/pve-1                            2.60G  1.73T  2.60G  /
rpool/data                                  24.9G  1.73T   120K  /rpool/data
rpool/data/subvol-100-disk-0                2.15G  1.73T  2.14G  /rpool/data/subvol-100-disk-0
rpool/data/subvol-101-disk-0                1.70G  1.73T  1.70G  /rpool/data/subvol-101-disk-0
rpool/data/subvol-103-disk-0                 919M  1.73T   919M  /rpool/data/subvol-103-disk-0
rpool/data/subvol-104-disk-0                 702M  1.73T   702M  /rpool/data/subvol-104-disk-0
rpool/data/subvol-106-disk-0                1.07G  1.73T  1.07G  /rpool/data/subvol-106-disk-0
rpool/data/subvol-107-disk-0                3.58G  1.73T  3.58G  /rpool/data/subvol-107-disk-0
rpool/data/subvol-108-disk-0                9.19G  1.73T  9.19G  /rpool/data/subvol-108-disk-0
rpool/data/subvol-109-disk-0                1.07G  1.73T  1.07G  /rpool/data/subvol-109-disk-0
rpool/data/subvol-110-disk-0                 636M  1.73T   634M  /rpool/data/subvol-110-disk-0
rpool/data/subvol-111-disk-0                 293M  7.71G   293M  /rpool/data/subvol-111-disk-0
rpool/data/subvol-117-disk-0                 411M  1.73T   411M  /rpool/data/subvol-117-disk-0
rpool/data/subvol-125-disk-0                1.25G  1.73T  1.25G  /rpool/data/subvol-125-disk-0
rpool/data/subvol-199-disk-0                 301M  7.71G   301M  /rpool/data/subvol-199-disk-0
rpool/data/subvol-901-disk-0                 951M  1.73T   951M  /rpool/data/subvol-901-disk-0
rpool/data/subvol-906-disk-0                 814M  1.73T   814M  /rpool/data/subvol-906-disk-0
root@h3plus2:~#

On the host:
root@h3plus2:~# cd /Video/subvol-100-disk-0/
root@h3plus2:/Video/subvol-100-disk-0# find .
./subvol-100-disk-0
./subvol-100-disk-0/10303_20230515181200.ts
./subvol-100-disk-0/1711_20171102215700.ts
./subvol-100-disk-0/30303_20220217233200.ts.0.100x56.png
./subvol-100-disk-0/1702_20180222204200.ts.0.100x75.png
./subvol-100-disk-0/2305_20191225010200.ts
./subvol-100-disk-0/1641_20170601191200.ts
./subvol-100-disk-0/1712_20180613190000.ts
...truncated for readability

In the container:
root@tester:~# cd /media/vda/
root@tester:/media/vda# find .
./subvol-100-disk-0
root@tester:/media/vda#
 
I would try something like:
Code:
zfs rename Video/subvol-100-disk-0 Video/old-subvol-100
zfs rename Video/old-subvol-100/subvol-100-disk-0 Video/subvol-100-disk-0

Video/subvol-199-disk-0/subvol-199-disk-0 does not contain any data. You can see that it currently refers to only 96K of (meta)data.
 
That worked, thanks.
I played with some smaller disks and figured out that if I change this
pve-zsync sync --source 192.168.0.213:Video/subvol-199-disk-0 --dest 127.0.0.1:Video/subvol-199-disk-0 --verbose
to that
pve-zsync sync --source 192.168.0.213:Video/subvol-199-disk-0 --dest 127.0.0.1:Video --verbose

it works from the beginning.
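That behaviour is consistent with pve-zsync appending the source dataset's base name under --dest, which is why pointing --dest at the full dataset name produced the doubled, empty nested dataset. A sketch of the assumed path logic (an illustration, not pve-zsync's actual code):

```shell
# Hypothetical illustration: the target dataset is "<dest>/<basename of source>".
src="Video/subvol-199-disk-0"

# dest pointing at the parent pool -> the expected dataset name:
dest="Video"
echo "${dest}/${src##*/}"    # Video/subvol-199-disk-0

# dest pointing at the full dataset name -> a doubled, nested dataset:
dest="Video/subvol-199-disk-0"
echo "${dest}/${src##*/}"    # Video/subvol-199-disk-0/subvol-199-disk-0
```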
 
Having moved a couple of machines now, I see the size of the disks in the GUI as "0B". They boot fine.
root@h3plus2:~# pvesm list local-zfs
Volid                        Format  Type     Size  VMID
local-zfs:subvol-100-disk-0  subvol  rootdir  0     100
local-zfs:subvol-101-disk-0  subvol  rootdir  0     101
local-zfs:subvol-103-disk-0  subvol  rootdir  0     103
local-zfs:subvol-104-disk-0  subvol  rootdir  0     104
local-zfs:subvol-106-disk-0  subvol  rootdir  0     106
local-zfs:subvol-107-disk-0  subvol  rootdir  0     107
local-zfs:subvol-108-disk-0  subvol  rootdir  0     108
local-zfs:subvol-109-disk-0  subvol  rootdir  0     109
local-zfs:subvol-110-disk-0  subvol  rootdir  0     110
local-zfs:subvol-117-disk-0  subvol  rootdir  0     117
local-zfs:subvol-125-disk-0  subvol  rootdir  0     125
local-zfs:subvol-901-disk-0  subvol  rootdir  0     901
local-zfs:subvol-906-disk-0  subvol  rootdir  0     906
root@h3plus2:~#
compared to
root@h3plus2:~# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool                        27.7G  1.73T   104K  /rpool
rpool/ROOT                   2.93G  1.73T    96K  /rpool/ROOT
rpool/ROOT/pve-1             2.93G  1.73T  2.93G  /
rpool/data                   24.7G  1.73T   120K  /rpool/data
rpool/data/subvol-100-disk-0 2.14G  1.73T  2.14G  /rpool/data/subvol-100-disk-0
rpool/data/subvol-101-disk-0 1.70G  1.73T  1.70G  /rpool/data/subvol-101-disk-0
rpool/data/subvol-103-disk-0  921M  1.73T   921M  /rpool/data/subvol-103-disk-0
rpool/data/subvol-104-disk-0  704M  1.73T   704M  /rpool/data/subvol-104-disk-0
rpool/data/subvol-106-disk-0 1.08G  1.73T  1.08G  /rpool/data/subvol-106-disk-0
rpool/data/subvol-107-disk-0 3.58G  1.73T  3.58G  /rpool/data/subvol-107-disk-0
rpool/data/subvol-108-disk-0 9.28G  1.73T  9.28G  /rpool/data/subvol-108-disk-0
rpool/data/subvol-109-disk-0 1.07G  1.73T  1.07G  /rpool/data/subvol-109-disk-0
rpool/data/subvol-110-disk-0  636M  1.73T   635M  /rpool/data/subvol-110-disk-0
rpool/data/subvol-117-disk-0  411M  1.73T   411M  /rpool/data/subvol-117-disk-0
rpool/data/subvol-125-disk-0 1.25G  1.73T  1.25G  /rpool/data/subvol-125-disk-0
rpool/data/subvol-901-disk-0  951M  1.73T   951M  /rpool/data/subvol-901-disk-0
rpool/data/subvol-906-disk-0  814M  1.73T   814M  /rpool/data/subvol-906-disk-0
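The 0B size is most likely cosmetic: for ZFS subvols, Proxmox VE derives the reported size from the dataset's refquota property, and a plain zfs send/recv does not carry properties over unless you send them explicitly (e.g. zfs send -p). A possible fix, sketched below; the dataset and the 8G value are examples, use the size from each container's config:

```shell
# Check whether the received subvol has a refquota set at all:
zfs get -H -o value refquota rpool/data/subvol-100-disk-0

# If it prints "none", set it to the size from the container's config
# (example value; adjust per container):
zfs set refquota=8G rpool/data/subvol-100-disk-0
```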
 
