Congratulations everyone on the ZFS 2.2.0 release!
Given that ZFS 2.2.0 itself has a substantial number of optimisations, I'm curious whether native (zoned) ZFS filesystem support for LXC will be implemented in Proxmox.
I think this could have some good benefits, for example:
- probably some...
Thanks for your answer, Fabian. Sorry, I didn't make it clear. I understand that when we are talking about a distributed system, it totally depends on the application. If the app can't handle that situation, then of course one node may be confused about the state of the other node. Totally agree...
It's interesting to read... Has anybody actually experienced data corruption of any kind, or a corrupted snapshot, during a power loss with sync=disabled?
To my understanding, the consequences will be exactly the same as if the power loss had happened ~5 seconds earlier with sync=standard. Am I wrong?
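For reference, this is the property being discussed; a minimal sketch of inspecting and toggling it, assuming a hypothetical dataset named tank/vmdata (not from the posts):

```shell
# Hypothetical dataset "tank/vmdata" -- adjust to your pool layout.
zfs get sync tank/vmdata             # show the current sync policy
zfs set sync=disabled tank/vmdata    # ignore sync requests; only periodic txg commits persist
zfs set sync=standard tank/vmdata    # restore the default, POSIX-compliant behaviour
```

With sync=disabled, an application's fsync() returns before data reaches stable storage, so a power loss can drop up to roughly one transaction-group interval of recent writes; the pool itself stays consistent.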
In my case the pveproxy process and 3 pveproxy worker processes were using the volume, according to this command:
Restarting pveproxy helped:
PVE 7.3-4
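One way to check which processes hold a volume open (a hedged sketch; the exact command from the post is not shown, and the device path here is a placeholder) and then release it:

```shell
# Hypothetical device path -- substitute your volume.
fuser -vm /dev/mapper/pve-vm--100--disk--0   # list processes using the device
systemctl restart pveproxy                   # restart the Proxmox API proxy holding it
```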
drbdadm -- --overwrite-data-of-peer primary vm-221-disk-1
drbdsetup primary --force vm-221-disk-1
Neither of these commands worked for me until I brought down the resource on the other nodes. After that, the first one succeeded; I didn't try the second.
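For context, a sketch of the order that worked, assuming vm-221-disk-1 is the resource name on all nodes:

```shell
# On each of the OTHER nodes, take the resource down first:
drbdadm down vm-221-disk-1
# Then, on the node that should become primary, discard the peers' data:
drbdadm -- --overwrite-data-of-peer primary vm-221-disk-1
```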
The VM args in shantanu's post didn't work for me; they are probably outdated. I use:
args: -drive id=stick,if=none,format=raw,file=/home/stick.img -device nec-usb-xhci,id=xhci -device usb-storage,bus=xhci.0,drive=stick
Source: https://qemu-project.gitlab.io/qemu/system/devices/usb.html
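The same args can also be set through the Proxmox CLI; a sketch assuming a hypothetical VMID 100 (the VMID and image path are placeholders):

```shell
# Hypothetical VMID 100; pass the raw QEMU args via the VM config:
qm set 100 --args '-drive id=stick,if=none,format=raw,file=/home/stick.img -device nec-usb-xhci,id=xhci -device usb-storage,bus=xhci.0,drive=stick'
```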
If you use...
Thank you for the clarification, Fabian.
Unfortunately we can't use DRBD until the size issue is fixed. I hope it will be fixed soon.
Best regards, Albert
Thanks for the link, Fabian,
Currently we can live with offline migration, but what has really blown my mind is that the migration process alters data! I mean the disk size.
If I understand correctly, this was done deliberately for debugging purposes, wasn't it?
So it is not related to DRBD and I...
This really is a shame. I can't even migrate to DRBD offline because of the inconsistency in size after migration:
On ZFS:
root@dc2:~# blockdev --getsize64 /dev/sda
2151677952
After moving to DRBD:
root@dc2:~# blockdev --getsize64 /dev/sda
2155372544 (+3694592 bytes = 3608K)
Back to ZFS...
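The size delta reported above can be checked with simple shell arithmetic (DRBD size minus ZFS size):

```shell
# Difference between the DRBD and ZFS block device sizes reported above.
echo $(( 2155372544 - 2151677952 ))          # prints 3694592 (bytes)
echo $(( (2155372544 - 2151677952) / 1024 )) # prints 3608 (KiB)
```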