Congratulations everyone on the ZFS 2.2.0 release!
Given that ZFS 2.2.0 itself brings a substantial number of optimisations, I'm curious whether the native (zoned) ZFS filesystem support for LXC will be implemented in Proxmox.
I think this could have some good benefits, for example:
- probably some...
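For reference, the delegation workflow that OpenZFS 2.2 ships could look roughly like this on the host (only a sketch: the dataset name and PID are placeholders, and Proxmox would still need to wire it into its LXC tooling):
# zfs create rpool/ct-delegated
# zfs zone /proc/<container-init-pid>/ns/user rpool/ct-delegated
The container could then manage child datasets and snapshots on its own (given access to /dev/zfs), and 'zfs unzone' reverses the delegation.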
Thanks for your answer, Fabian. Sorry, I didn't make it clear. I understand that when we are talking about a distributed system, it totally depends on the application: if the app can't handle that situation, then of course one node may be confused about the state of the other node. Totally agree...
It's interesting to read... Has anybody really experienced data corruption of any kind, or a corrupted snapshot, during a power loss with sync=disabled?
To my understanding, the consequences would be exactly the same as if the power loss had happened ~5 seconds earlier with sync=standard. Am I wrong?
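For anyone who wants to check or toggle this per dataset, it's just (the dataset name here is only an example):
# zfs get sync rpool/data/vm-100-disk-0
# zfs set sync=disabled rpool/data/vm-100-disk-0
# zfs set sync=standard rpool/data/vm-100-disk-0
The ~5 seconds above is the default zfs_txg_timeout, i.e. the interval at which a transaction group is committed to disk anyway.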
In my case the pveproxy process and 3 pveproxy worker processes were using the volume, according to this command:
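Something along these lines will show the holders (the device path is just a placeholder):
# fuser -vm /dev/<volume-device>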
Restarting pveproxy helped:
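i.e. something like:
# systemctl restart pveproxy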
PVE 7.3-4
drbdadm -- --overwrite-data-of-peer primary vm-221-disk-1
drbdsetup primary --force vm-221-disk-1
Neither of these commands worked for me until I brought down the resource on the other nodes. After that, the first one succeeded; I didn't try the second.
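For reference, bringing the resource down on the other nodes is just (same resource name assumed):
# drbdadm down vm-221-disk-1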
The VM args in shantanu's post didn't work for me; they are probably outdated. I use:
args: -drive id=stick,if=none,format=raw,file=/home/stick.img -device nec-usb-xhci,id=xhci -device usb-storage,bus=xhci.0,drive=stick
Source: https://qemu-project.gitlab.io/qemu/system/devices/usb.html
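If you'd rather not edit the VM config file by hand, the same line can presumably be set with qm as well (VM ID 100 is just an example):
# qm set 100 --args "-drive id=stick,if=none,format=raw,file=/home/stick.img -device nec-usb-xhci,id=xhci -device usb-storage,bus=xhci.0,drive=stick"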
If you use...
Thank you for the clarification, Fabian.
Unfortunately we can't use DRBD until the size issue is fixed. Hope it will be fixed soon.
Best regards, Albert
Thanks for the link, Fabian,
Currently we can live with offline migration, but what has really blown my mind is that the migration process alters the data! I mean the disk size.
If I understand correctly, this was done deliberately for debugging purposes, wasn't it?
So it is not related to DRBD and I...
This really is a shame. I can't even migrate to DRBD offline because of the inconsistency in size after migration:
On ZFS:
root@dc2:~# blockdev --getsize64 /dev/sda
2151677952
After moving to DRBD:
root@dc2:~# blockdev --getsize64 /dev/sda
2155372544 (+3694592 bytes = 3608K)
Back to ZFS...
Hi, you can assign different VLAN tags to the different VM groups you want to isolate, as well as use 'trunks='. I've never come across such a use case myself, but for cutting untagged traffic completely your patch is a nice solution.
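A minimal sketch of what I mean (VM IDs, bridge and VLAN IDs are just examples):
# qm set 101 --net0 virtio,bridge=vmbr0,tag=10
# qm set 102 --net0 "virtio,bridge=vmbr0,tag=20,trunks=20;30"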
Thanks for the advice, it might be reasonable in some circumstances. Currently I have to set it even if some nodes fail to start, and remove the flag only during a period of minimum cluster load.
You are not serious, are you? :) There are numerous reasons why you might need to shut down the cluster...
Hi, @saalih416, I actually did it wrong: I used the .img from the .tar.gz file. The names and sizes were the same, as far as I remember, so I thought it was the same image. The README in the currently available .tar.gz archives points out that the image in the archive is a raw ext4 partition. Probably I missed the README...
Hi,
Can someone explain how to shut down a hyper-converged cluster properly?
I suppose the steps should be as follows:
1. Shut down all VMs on every node
2. Set the following flags:
# ceph osd set noout
# ceph osd set nobackfill
# ceph osd set norecover
3. Shut down the nodes only after all...
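And I assume that after powering the nodes back on, once the OSDs are up again, the flags should be unset in the same way:
# ceph osd unset noout
# ceph osd unset nobackfill
# ceph osd unset norecover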