I would remove it from the cluster and handle it as "separated" - https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_remove_a_cluster_node
When it is repaired, you will need to join the cluster again. Note that this might not be trivial, as...
Thanks for the heads-up! I've noticed that myself in the meantime.
It might be interesting to understand why I didn't see any speed difference in my tests:
This was probably due to sshuttle, which in this configuration (standard buffer or no...
The setting in "Add: Sync Job Pull - Pull Direction" accepts a "Rate Limit:" in MiB/s, i.e. mebibytes per second.
Most (but not all) bandwidth diagrams show bits per second.
Any chance this was the culprit...?
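For reference, the unit gap is easy to underestimate: there is an 8x factor (bytes vs. bits) on top of the 1 MiB = 1048576 bytes base. A quick sanity check (the 10 MiB/s value below is illustrative, not from the original post):

```shell
# Convert a sync-job rate limit given in MiB/s to bit/s and Mbit/s.
# 1 MiB = 1048576 bytes, 1 byte = 8 bits, 1 Mbit = 10^6 bits.
mib_per_s=10                               # illustrative limit
bits_per_s=$(( mib_per_s * 1048576 * 8 ))  # 83886080 bit/s
echo "${mib_per_s} MiB/s = ${bits_per_s} bit/s (~$(( bits_per_s / 1000000 )) Mbit/s)"
```

So a 10 MiB/s limit already corresponds to roughly 84 Mbit/s on a bandwidth graph, which can easily mask a real speed difference.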
If I recall correctly, you should also be able to use `host` plus a disabled flag for nested virtualization to get good performance and the maximum feature set of your CPU.
-cpu x86-64-v3,... and -cpu host,... are not the same thing. x86-64-v3 is a named CPU model / ABI baseline, while host is host passthrough. In QEMU terms, named models expose a predefined, stable feature set, and host passthrough exposes the host...
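To make the distinction concrete, this is roughly how the two choices look in a PVE guest config (`/etc/pve/qemu-server/<vmid>.conf`; the surrounding options are omitted here):

```
# Named model: predefined, stable baseline feature set, migration-friendly
cpu: x86-64-v3

# Host passthrough: exposes the host CPU's features, but ties the guest
# to hosts with an identical (or superset) CPU
cpu: host
```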
But only if the storage hardware offers it. The OP mentions a SAN; those often cannot do that and are instead attached via Fibre Channel. If the storage hardware offers NFS, that would also be my recommendation to the OP, because that...
Really? I see that fundamentally differently. Your RAID6 is actually missing some interesting features. Even if the focus there is on "small systems"...
Basically, in this scenario you always need an odd number of votes, i.e. 3, 5, 7 and so on, and one of them (a QDevice is resource-efficient here and can run more or less anywhere, including in the cloud) absolutely must...
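A rough sketch of why the odd total matters: quorum is a strict majority of votes, so two votes tolerate zero failures while three tolerate one. The vote counts below are illustrative (an external QDevice is added to a two-node cluster with `pvecm qdevice setup <qdevice-ip>`, after installing corosync-qdevice on the nodes and corosync-qnetd on the external host):

```shell
# Quorum = floor(votes/2) + 1 (strict majority).
# 2 nodes -> 2 votes, quorum 2: a single failure already loses quorum.
# 2 nodes + QDevice -> 3 votes, quorum 2: one loss is tolerated.
for votes in 2 3; do
  echo "total votes: ${votes}, quorum: $(( votes / 2 + 1 ))"
done
```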
That is exactly what I see.
One running backup per node in the cluster.
I have 3 nodes, so I get 3 simultaneous backups.
If you have only 1 node, you are stuck at 1 backup at a time, though.
That's a very fair point. While adding a full PVE node might offer the minor convenience of seeing the arbiter's status directly in the WebUI, the administrative overhead and potential complexities you mentioned—especially regarding storage...
Another thing to consider: as soon as you add the QDevice, the cluster members can log in as root on the QDevice via SSH without additional authentication. So you really shouldn't use the QDevice VM for anything else. This is especially important...
Thank you so much for sharing your experience—this is truly valuable. It helps me better understand the lower limits of what a Q-device requires to function effectively.
Your insights further confirm how flexible the Q-device setup can be...
Well, OCFS2 was actually never the first choice, precisely because support for it (both on the PVE side and from hardware vendors) is so lousy ;) But it does allow using snapshots with qcow2, and it works similarly...
Any x86 (Intel) OS with BIOS or UEFI support should work. Theoretically you could emulate almost any other known CPU as well, but that gets tricky.
You can reference other hypervisors like Nutanix, Red Hat, Canonical as well as documentation...
Basically, any operating system that can run on a modern x86-64 system should run in Proxmox VE too. Depending on the use case and the specific OS there may be other considerations; for example, Red Hat and co. (Alma/Rocky) demand that the...