Hi
Yes, I didn't do it that way, but it looks correct to me.
In any case, try it. I think dd will use the destination's blocksize, unlike zfs send/recv.
Hi
If I understand correctly, if the link is broken during a full synchronization, it starts again from the beginning.
Does pve-zsync handle that, or do you have to do it manually?
Is there no way to remove the failed snapshot from the destination and resynchronize from the good snapshot on the source, you...
Hello,
I have a few questions about pve-zsync, for my peace of mind:
- What happens if the link between the two nodes breaks during a full sync and the network comes back several hours later?
- What happens if the source node restarts during a full sync?
- What happens if the destination node restarts during a full sync?
Regards
Hi
Storage with ZFS is not simple for me to understand either!
But the size of your VM is the volsize property (932G); you need to go into the Windows storage manager to see the used and unused space.
rpool/data/vm-100-disk-3 used 911G
rpool/data/vm-100-disk-3 usedbysnapshots...
Hi
Post your zfs get all rpool/data/vm-100-disk-3
and your /etc/pve/storage.cfg.
The default 8K blocksize consumes a lot of space.
I found that a 32K blocksize is the best trade-off for disk-space usage, and I have good results running Windows on RAIDZ2.
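To illustrate why a small volblocksize can waste space on RAIDZ2, here is a back-of-the-envelope sketch (my own simplification, not from any tool in this thread). It uses the usual RAIDZ allocation model: data sectors plus parity per stripe row, padded up to a multiple of parity + 1. The 6-disk pool and ashift=12 (4K sectors) are assumptions; adjust for your layout.

```python
import math

def raidz_alloc_sectors(block_bytes, disks, parity, sector=4096):
    """Sectors ZFS allocates on RAIDZ for one logical block:
    data sectors + parity per stripe row, padded to a multiple of (parity + 1)."""
    data = math.ceil(block_bytes / sector)
    rows = math.ceil(data / (disks - parity))   # stripe rows needed
    total = data + rows * parity
    pad = (-total) % (parity + 1)               # round up to multiple of p+1
    return total + pad

# Assumed layout: 6-disk RAIDZ2, 4K sectors (ashift=12)
for vbs in (8 * 1024, 32 * 1024):
    alloc = raidz_alloc_sectors(vbs, disks=6, parity=2)
    print(f"volblocksize={vbs // 1024}K -> {alloc * 4} KiB allocated "
          f"({alloc * 4096 / vbs:.2f}x the logical size)")
```

With these assumptions, an 8K block allocates 24 KiB (3x overhead) while a 32K block allocates 48 KiB (1.5x), which is roughly why the larger blocksize uses space more coherently in my setup.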
regards
Steeve
Hello
Is there a way to compare two zvols or datasets?
I want to make sure that the replication is OK!
I searched, and apart from comparing two snapshots, I didn't find any way to compare two zvols or datasets.
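One workaround I can think of (a sketch, not a confirmed method from this thread) is to checksum the two zvol device nodes and compare the digests. The /dev/zvol/... paths below are hypothetical; note this is only meaningful when the VM is stopped, or when run against snapshot clones, since a running guest changes the data underneath you.

```python
import hashlib

def stream_digest(path, chunk=1 << 20):
    """SHA-256 of a block device or file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def zvols_match(dev_a, dev_b):
    """Compare two zvols byte-for-byte via their device nodes."""
    return stream_digest(dev_a) == stream_digest(dev_b)

# Hypothetical device paths -- adjust to your pool/dataset names:
# zvols_match("/dev/zvol/rpool/data/vm-100-disk-3",
#             "/dev/zvol/backup/data/vm-100-disk-3")
```

For datasets (as opposed to zvols) the same idea would have to walk the mounted file tree instead, since there is no single device node to read.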
Regards
Steeve
Hi,
Have you found a solution? The only one that works for me is to mount manually, and every time I reboot I have to repeat this manipulation.
PS: Deleting the dev folder does not work.
Recreating the cache and updating the kernel do not work either.
Options in storage.cfg:
mkdir no...
Hi Fabian
Thank you very much!! For the moment I have mounted manually with the zfs mount -O zmarina/isos command (and the same for its children), and I found my data again. I configured the dataset in the storage.cfg file with the options:
is_mountpoint yes
mkdir 0
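For reference, a minimal sketch of what the full directory-storage entry in /etc/pve/storage.cfg might look like; the storage name "isos" and the content type are assumptions, only the path and the two options above come from my setup:

```
dir: isos
        path /zmarina/isos
        content iso
        is_mountpoint yes
        mkdir 0
```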
This relates to the thread:
(Bug?) Proxmox VE 6.0-4 -...
Hello everybody
I hate this experience! Losing my data, especially with ZFS!
I had dedicated a RAIDZ2 pool to my Windows VMs and my ISOs.
After the upgrade from 5.x to 6.x, during which I got no errors, the zvols were still there (wow) but my ISOs in their dedicated dataset were not.
The disk space is still occupied...
Hi,
I installed the latest version of PVE, with Windows Server 2019 on a ZFS zvol. I notice that each time I start or restart the VM, I get the power-on event monitor window.
Do you have this problem?
pveversion -v
proxmox-ve: 5.4-1 (running kernel: 4.15.18-14-pve)
pve-manager: 5.4-6 (running...