The setting in "Add: Sync Job Pull - Pull Direction" accepts a "Rate Limit:" in MiB/s, i.e. mebibytes per second.
Most (but not all) bandwidth figures are given in bits per second instead.
Any chance this was the culprit...?
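If the mix-up is indeed bits vs. bytes, the factor is roughly 8.4x, since 1 Mbit/s = 10^6 bit/s while 1 MiB/s = 8 x 2^20 bit/s. A quick sanity check (the 1 Gbit/s link speed is just an assumed example):

```shell
# Convert a link speed given in Mbit/s into the MiB/s unit the rate limit field expects.
# 1 Mbit/s = 1,000,000 bit/s; 1 MiB/s = 8 * 1,048,576 bit/s
mbit=1000   # assumed example: a 1 Gbit/s link
awk -v m="$mbit" 'BEGIN { printf "%.1f MiB/s\n", m * 1000000 / (8 * 1048576) }'
# prints: 119.2 MiB/s
```

So entering the raw "1000" from a bits-per-second figure into a MiB/s field would set a limit roughly 8.4 times higher than intended.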
If I recall correctly, you should also be able to use host plus a disabled flag for nested virtualization to get good performance and the maximum feature set of your CPU.
-cpu x86-64-v3,... and -cpu host,... are not the same thing. x86-64-v3 is a named CPU model / ABI baseline, while host is host passthrough. In QEMU terms, named models expose a predefined, stable feature set, and host passthrough exposes the host...
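To illustrate the difference on the QEMU command line (a minimal sketch only; the machine and memory options are placeholders, and `-vmx` assumes an Intel host where you want nested virtualization switched off):

```shell
# Named model: a fixed, migration-friendly baseline feature set
qemu-system-x86_64 -machine q35,accel=kvm -cpu x86-64-v3 -m 1024 -display none

# Host passthrough: all features of the physical CPU, minus any you subtract per flag
qemu-system-x86_64 -machine q35,accel=kvm -cpu host,-vmx -m 1024 -display none
```

The named model stays identical across hosts (good for live migration), while host passthrough tracks whatever the physical CPU offers.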
But only if the storage hardware offers that. The OP mentions a SAN, and those often can't do it; they are attached via Fibre Channel instead. If the storage hardware does offer NFS, that would also be my recommendation to the OP, because that...
Really? I see this fundamentally differently. Your RAID6 is actually missing some interesting features. Even if the focus there is on "small systems"...
Basically, in this scenario you always need an odd number of votes, i.e. 3, 5, 7 and so on, and one of them (a qdevice is resource-friendly here and can run more or less anywhere, including in the cloud) absolutely must be...
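A rough sketch of the qdevice steps, assuming a Debian-based machine as the external vote (the IP address is a placeholder):

```shell
# On the external qdevice machine (small VM, container, or cloud instance):
apt install corosync-qnetd

# On the cluster side: corosync-qdevice is needed on every cluster node,
# then the setup is run once from any one node:
apt install corosync-qdevice
pvecm qdevice setup 192.0.2.10   # placeholder IP of the qnetd host

# Afterwards, check that the qdevice actually contributes a vote:
pvecm status
```

The setup step needs root SSH access from the cluster node to the qdevice host.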
That is exactly what I see.
1 running backup per node in the cluster.
I have 3 nodes, so I get 3 simultaneous backups.
If you have only 1 node, you are stuck at 1 backup at a time, though.
That's a very fair point. While adding a full PVE node might offer the minor convenience of seeing the arbiter's status directly in the WebUI, the administrative overhead and potential complexities you mentioned—especially regarding storage...
Another thing to consider: as soon as you add the qdevice, the cluster members can log in as root on the qdevice via SSH without additional authentication. So you really shouldn't use the qdevice VM for anything else. This is especially important...
Thank you so much for sharing your experience—this is truly valuable. It helps me better understand the lower limits of what a Q-device requires to function effectively.
Your insights further confirm how flexible the Q-device setup can be...
Well, ocfs2 was never really the first choice, precisely because support for it (both on the PVE side and from hardware vendors) is so lousy ;) But it does allow using snapshots with qcow2, and it works similarly out...
Any x86 (Intel) OS with BIOS or UEFI support should work. Theoretically you could emulate almost any other known CPU as well, but that gets tricky.
You can reference other hypervisors like Nutanix, Red Hat, Canonical as well as documentation...
Basically, any operating system that is able to run on a modern x86/64 system should run in Proxmox VE too. Depending on the use case and the specific OS there might be other considerations; for example, Red Hat and co. (Alma/Rocky) demand that the...
There was a recent discussion on the German forum (use something like DeepL to translate):
https://forum.proxmox.com/threads/sicherung-von-sicherungen.179854/
Basically you can export the discs and config of a vm from a snapshot with the...
1st) I would make a "dummy" restore on another server.
2nd) After that, I can use the PVE-integrated vzdump backup to store a 100% copy of your VM on an external device.
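The second step can be sketched with vzdump directly (VMID 100 and the mount point are assumed examples):

```shell
# Full backup of the restored VM to an externally mounted device
vzdump 100 --mode snapshot --compress zstd --dumpdir /mnt/external
```

This writes a self-contained dump (disks plus VM config) that can be restored on any PVE host, independent of the PBS datastore.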
Depends on where you are looking from.
When I start a backup job on my cluster with five nodes... I get five backups running at the same time, one per node. The PBS handles this fine.
Adding a Hyper-V-based PVE host just for quorum is short-sighted. You are introducing additional, unnecessary management overhead:
Storage pools will now need to be restricted to specific nodes.
If you add NAS storage later and do not restrict...