Hi Victor,
One other 'temporary' thing that you may configure, if there is a critical need for all OSDs to be up, is to change the allocation size for each OSD from 64k to 4k using the 'bluefs_shared_alloc_size' parameter [0], which you can...
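For reference, a minimal sketch of applying that override cluster-wide via the standard config interface (the 4096 value follows the 64k-to-4k suggestion above; each OSD needs a restart to pick it up):

    # lower the BlueFS allocation unit for all OSDs, then restart them one at a time
    ceph config set osd bluefs_shared_alloc_size 4096
    systemctl restart ceph-osd@<id>.service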
The failure domain must never be the OSD.
With failure domain = host, you have at most one copy or one chunk of an erasure-coded object per host. All the other copies or chunks live on other hosts.
That is why you need at least three hosts for...
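For illustration, a hedged sketch of how that looks on the CRUSH side (the first rule name is the stock default, the second is just an example):

    # inspect the default replicated rule; "type": "host" is the failure domain
    ceph osd crush rule dump replicated_rule
    # create a new replicated rule with an explicit host failure domain
    ceph osd crush rule create-replicated rep_host default host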
You don't need PCI passthrough for LXC; you just need to install the proper NVIDIA driver for the hardware and the deployed kernel. You are better off creating an installation script, especially if you intend on having multiple nodes with GPUs...
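As a rough sketch, assuming the driver is already installed on the host, the container config only needs the device nodes bound in (device paths and major numbers vary per node; check ls -l /dev/nvidia* first):

    # /etc/pve/lxc/<vmid>.conf -- bind the NVIDIA device nodes, no PCIe passthrough
    lxc.cgroup2.devices.allow: c 195:* rwm
    lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
    lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
    lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file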
I think you need to carefully consider what your end goal is. PCIe passthrough is not a good citizen in a PVE cluster, since VMs pinned to PCIe devices not only cannot be migrated anywhere, but are also liable to hang the host. If you MUST use PCIe passthrough...
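A quick way to spot guests that are already pinned like that, using the standard pmxcfs layout:

    # list hostpciN entries across all nodes' VM configs
    grep -r "^hostpci" /etc/pve/nodes/*/qemu-server/*.conf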
In a cluster you don't need, or even want, to back up a host. Everything important lives in /etc/pve, which exists on all nodes. If you DID back up a host(s), you'd open the possibility of restoring a node that has been removed from the cluster and...
iSCSI is deprecated in the Ceph project and should not be used any more.
And there is no need to back up a single Proxmox node (if you have a cluster).
You may want to back up the VM config files, but everything else is really not that important...
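If you do want just those, a minimal sketch (standard /etc/pve paths; the destination is only an example):

    # grab all VM and container configs from every node in the cluster
    tar czf "/root/pve-guest-configs-$(date +%F).tar.gz" \
        /etc/pve/nodes/*/qemu-server /etc/pve/nodes/*/lxc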
Ceph can deploy NVMe-oF gateways. You need to find hardware that is able to boot from that.
Or you use PXE network boot, where the initrd contains everything necessary to continue with a Ceph RBD as the root device.
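A rough sketch of what such an initrd hook might do (pool, image, cephx user, and paths are all assumptions; the initrd would also need a minimal ceph.conf with the mon addresses):

    # inside the initramfs, before switching to the real root
    modprobe rbd
    rbd map rootpool/rootimg --id bootclient --keyring /etc/ceph/keyring
    mount /dev/rbd0 /sysroot    # device name assumed; see /dev/rbd/<pool>/<image>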
Thanks for the heads-up. Pretty sure most were created with Ceph Reef, except a few that got recreated recently with Squid 19.2.3. I'm aware of that bug, but given that I don't use EC pools (the Ceph bug report mentions it seems to only happen on OSD...
As others have already pointed out, this hits OSDs that are fairly full (~75%+) and heavily fragmented.
v19.2.3 already ships a race condition fix (https://ceph.io/en/news/blog/2025/v19-2-3-squid-released/) that prevents new corruption, but it...
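To check whether an OSD is in the risk zone, fullness and fragmentation can be inspected roughly like this (osd.0 is just an example; run the daemon command on the node hosting that OSD):

    ceph osd df                                         # per-OSD utilization
    ceph daemon osd.0 bluestore allocator score block   # fragmentation, 0 = none, 1 = extreme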
Hey, thanks a lot.
By central storage I meant a central Ceph storage cluster, i.e. no HCI Ceph. We suspect that this is more performant.
Do you see any problems or disadvantages there?
Have these OSDs been deployed with 19.2?
You may be seeing this bug: https://docs.clyso.com/blog/critical-bugs-ceph-reef-squid/#squid-deployed-osds-are-crashing
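A quick, hedged way to check is the OSD metadata; the exact field names vary by release, so inspect the full output if the grep comes up empty:

    ceph osd metadata 0 | grep -E 'ceph_version|alloc'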
Hi!
I'm not sure whether this will fit into the concept of HA itself, but if the guest is already on shared storage only, then it is easily possible to recover that VM to a running node by just moving the guest's configuration file to the running...
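Concretely, that boils down to a single move inside the clustered config filesystem, run on any quorate node (node names and VMID are placeholders):

    mv /etc/pve/nodes/deadnode/qemu-server/100.conf \
       /etc/pve/nodes/livenode/qemu-server/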
This is the Proxmox Community Support Forum, and as a rule the name should say it all here, so please do stay somewhat on topic.
The thread has also generated a few reports and spam triggers, something obviously...
I prefer the social media posts of a recognized and renowned security expert over the social media (forum post) of an AI advocate, since I believe the former knows the security implications better than an AI bro.
Your...
Kevin Beaumont is a recognized IT security expert and has, among other things, worked for Microsoft.
But sure, I'll form my own opinion. That has always worked out well, after all.