root@backup:/D4BACKUP# ./pve-zsync.sh
ERROR: in path
ERROR: in path
ERROR: in path
...
...
./pve-zsync.sh: line 1: $'\r': command not found
./pve-zsync.sh: line 2: $'\r': command not found
But some syncs do complete. I just upgraded to v6; with v5 it was running fine. I am syncing to different...
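The $'\r': command not found lines usually mean pve-zsync.sh has picked up Windows-style CRLF line endings (for example after editing or copying the file on a Windows machine). A minimal sketch of stripping them, assuming the script sits in /D4BACKUP as in the prompt above:

root@backup:/D4BACKUP# sed -i 's/\r$//' pve-zsync.sh
# or, if the dos2unix package is installed:
root@backup:/D4BACKUP# dos2unix pve-zsync.sh

After that the shell should stop trying to run the stray carriage returns as commands.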
For those having the same issue (and I am sure there are many): I shrank the disk inside the KVM, then set the dataset size, and extended it again. The dataset size dropped back down to the in-use size.
It did not make any difference. But you already gave me an idea: I will shrink the disk to the used size and extend it again. I feel like that will reduce the dataset to the used amount. What do you think?
I want to reduce the size of the backups taken by Proxmox, the ZFS snapshots, and the dataset size of the KVMs. As time passes, with disk reads/writes/deletes, the disk size grows unnecessarily, so I need to release that free space. Was I able to explain it?
This is what I want to learn. I tried zero-filling and defragmenting the disk, but they do not seem to help much. Only a couple of GBs were reclaimed.
So you suggest shrinking the partition to the used size and extending it again? Do you think cloning the KVM from Proxmox might help?
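Before shrinking anything it may help to check where the space actually sits on the ZFS side. A quick sketch; the dataset name rpool/data/vm-118-disk-0 is only an example, adjust it to your pool layout:

root@s1:~# zfs list -o name,used,referenced,usedbysnapshots,refreservation,volsize rpool/data/vm-118-disk-0
root@s1:~# zfs get used,logicalused,refreservation rpool/data/vm-118-disk-0

If used is much larger than what the guest reports, the extra space is held by the zvol itself (blocks written and later freed inside the KVM) rather than by snapshots.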
None of them:
root@s1:~# zfs list -t snapshot
no datasets available
Am I missing something? Could this be something to do inside the KVM, like defragmentation?
I noticed that an older ZFS dataset uses more space than it really needs. More clearly, the KVM disk has only 51.53 GB in use.
When I move this dataset to a new node, it moves 152 GB:
root@s1:~# pve-zsync sync --source 118 --dest 1.2.3.4:D4 --verbose
send from @rep_default_2019-12-08_20:59:49 to...
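You can check how much a full send of that snapshot would transfer without actually sending it, using a dry run. The dataset name below is assumed; only the snapshot suffix is taken from the output above:

root@s1:~# zfs send -nv rpool/data/vm-118-disk-0@rep_default_2019-12-08_20:59:49

If the estimated size is close to 152 GB, the blocks freed inside the guest are still allocated on the zvol, which would match the difference between the 51.53 GB in use and the amount actually transferred.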
Better than that, I am only allowing traffic between the nodes and from my PC. My PC has a dynamic IP, so I update the firewall rules with the following bash script in case the IP changes:
updateip.sh
# crontab -e : */5 * * * * sh /updateip.sh
# /etc/init.d/cron restart...
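For reference, a minimal sketch of what such an update script could look like. The dynamic-DNS hostname and the plain iptables rule are assumptions for illustration; adjust them to however the node's firewall is actually configured:

#!/bin/sh
# updateip.sh - refresh the firewall rule when my PC's dynamic IP changes
HOST="mypc.example.dyndns.org"     # assumed dynamic-DNS name of my PC
STATE=/var/run/updateip.last       # last IP seen by this script

NEWIP=$(dig +short "$HOST" | tail -n1)
[ -z "$NEWIP" ] && exit 0          # lookup failed, try again on the next cron run

OLDIP=$(cat "$STATE" 2>/dev/null)
if [ "$NEWIP" != "$OLDIP" ]; then
    # remove the rule for the old IP (if any) and allow the new one
    [ -n "$OLDIP" ] && iptables -D INPUT -s "$OLDIP" -j ACCEPT 2>/dev/null
    iptables -I INPUT -s "$NEWIP" -j ACCEPT
    echo "$NEWIP" > "$STATE"
fi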