Basically, in this scenario you always need an odd number of votes, i.e. 3, 5, 7 and so on, and one of them (a QDevice is resource-friendly here and can run more or less anywhere, including in the cloud) absolutely has to...
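The odd-vote-count rule comes down to majority arithmetic: a cluster stays quorate only while a strict majority of votes is reachable. A minimal sketch (illustrative only, not Proxmox/corosync code):

```python
def quorum_votes(total_votes: int) -> int:
    """Votes required for a strict majority (quorum)."""
    return total_votes // 2 + 1

# An even total tolerates no more failures than the odd total below it,
# which is why adding a 4th full node gains nothing over 3 + nothing:
for n in (2, 3, 4, 5):
    needed = quorum_votes(n)
    tolerated = n - needed
    print(f"{n} votes: need {needed}, tolerate {tolerated} failure(s)")
```

With 3 votes you tolerate 1 failure; with 4 votes you still tolerate only 1, so a lightweight QDevice as the tie-breaking third vote is the cheap way to get there.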
My issue was that I had upgraded the driver on the host, which worked perfectly fine, but when I was creating the LXC I was installing an old driver, which created a driver mismatch. What I did was not install the driver in the LXC and then the...
@TheMat556 Were you able to resolve your issue?
I suspect this might be related to a problem with the 6.17 kernel:
https://bugzilla.kernel.org/show_bug.cgi?id=220693
https://bbs.archlinux.org/viewtopic.php?id=310008
Perhaps the Proxmox team...
577 PGs.
PVE Datacenter "Ceph" view usage shows: 4.48 TiB of 18.38 TiB
Each node's storage entry shows: Usage: 26.41% (1.63 TB of 6.18 TB)
(which is 1/3 of the Datacenter view)
# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW...
That is exactly what I see.
1 running backup per node in the cluster.
I have 3 nodes, so I get 3 simultaneous backups.
If you have only 1 node, you are stuck at 1 backup at a time though.
That's a very fair point. While adding a full PVE node might offer the minor convenience of seeing the arbiter's status directly in the WebUI, the administrative overhead and potential complexities you mentioned—especially regarding storage...
Another thing to consider: as soon as you add the QDevice, the cluster members can log in as root on the QDevice via SSH without additional authentication. So you really shouldn't use the QDevice VM for anything else. This is especially important...
Thank you so much for sharing your experience—this is truly valuable. It helps me better understand the lower limits of what a Q-device requires to function effectively.
Your insights further confirm how flexible the Q-device setup can be...
Well, ocfs2 was actually never the first choice, precisely because support for it (both on the PVE side and from hardware vendors) is so lousy ;) But it does allow using snapshots with qcow2, and it works similarly out...
This is expected behavior. No reads appear on the NVMe because of how BlueStore's WAL and RocksDB DB devices work:
RocksDB WAL (Write-Ahead Log) is write-only during normal operation. It's a sequential journal for crash consistency. It...
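The write-only pattern described above can be sketched with a toy append-only log (purely illustrative, not Ceph or RocksDB code; all names here are made up): during normal operation the log device only receives sequential appends, and it is read back only during crash recovery.

```python
import os
import tempfile

class WriteAheadLog:
    """Toy append-only WAL: normal operation is write-only;
    the log is read back only on recovery."""

    def __init__(self, path: str):
        self.path = path
        self.writes = 0  # appends during normal operation
        self.reads = 0   # reads (recovery only)

    def append(self, record: bytes) -> None:
        # Sequential append for crash consistency; no read needed.
        with open(self.path, "ab") as f:
            f.write(record + b"\n")
        self.writes += 1

    def recover(self) -> list:
        # Only a crash-recovery path ever reads the log back.
        self.reads += 1
        with open(self.path, "rb") as f:
            return f.read().splitlines()

wal = WriteAheadLog(os.path.join(tempfile.mkdtemp(), "wal.log"))
for i in range(3):
    wal.append(b"op-%d" % i)
print(wal.writes, wal.reads)   # normal operation: 3 writes, 0 reads
recovered = wal.recover()      # reads happen only on recovery
```

That is why monitoring tools show a steady write stream but essentially zero reads on a dedicated WAL/DB device while the OSD is healthy.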
No, it was intended to be long-term storage for seldom-used data which I am backing up daily with PBS. I understand that the pool is completely dependent on a single host. I have also found that the small-block write performance (4k) is pretty...
Not an option since I run Hyper-V guests ;-) - there is just no good "one size fits all" solution - either stick to the defaults, or be prepared to try a LOT of flags to get it running with the host CPU setting (if at all)
Thanks for clearing that up, wasn't sure about this.
But then - when using `curl --interface <INTERFACE> 1.1.1.1`, does the command still use the default route, causing connectivity issues when the interface specified is non-default?
My...
Hi all,
we're facing a little annoyance with our tape backup. Our tape library (Quantum SuperLoader 3) apparently has no option to run through all available LTO-9 tapes and calibrate/initialize them. What we can do is load them manually using the...