If the storages differ between the nodes, you need to mark accordingly on which nodes each storage is available. When you edit a storage, select the nodes in the top right.
By default, all storages are available to all nodes...
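For reference, the same restriction can also be set on the CLI; a minimal sketch, with the storage ID and node names being placeholders:
# limit a storage to the nodes where it actually exists
pvesm set ceph-vm --nodes pve1,pve2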
Set it to warn; then you will see what the ideal value would be, without the autoscaler acting on its own. You should have something in the ballpark of 100 PGs/OSD. If you have too few, it can impact performance and also recovery speed/impact in case you lose a node/OSD.
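A quick sketch of the relevant commands, with the pool name as a placeholder:
# switch the autoscaler to warn-only for a pool, then check its recommendation
ceph osd pool set <pool> pg_autoscale_mode warn
ceph osd pool autoscale-status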
The HW looks good so far.
If I understand it correctly, the PVE hosts connect to the Ceph cluster via 25 Gbit/s, while the Ceph nodes themselves use 100 Gbit/s?
I would verify that the network performs as expected, as in, do iperf / iperf3 checks...
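A minimal sketch of such a check between two nodes (the IP is a placeholder):
# on the first node
iperf3 -s
# on the second node, run a few parallel streams against it
iperf3 -c 10.0.0.1 -P 4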
Some more details would be good to know (a few commands to gather them are sketched after this list):
* Disk model of the OSDs
* Network speed for the physical Ceph network(s)
* General specs of the servers, like CPU and RAM
* The output of cat /etc/pve/ceph.conf and cat /etc/network/interfaces, please paste it...
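To gather the hardware details, something along these lines should work (standard tools; the interface name is a placeholder):
# disk models of the OSDs
lsblk -o NAME,MODEL,SIZE,ROTA
# CPU and RAM
lscpu
free -h
# link speed of the Ceph network interface
ethtool eno1 | grep Speed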
Overall it looks okay. But especially in the beginning it could be simplified.
Create a new HDD-only rule:
ceph osd crush rule create-replicated replicated_hdd default host hdd
Then assign that rule to the existing pools. Some rebalancing is...
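Assigning the rule would look roughly like this, with the pool name as a placeholder:
# point an existing pool at the new HDD-only rule
ceph osd pool set <pool> crush_rule replicated_hdd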
A cluster across 2 rooms / locations is explained here, with Ceph as storage: It would in principle also work with other storages. If you don't use HCI Ceph, the third vote can also be a QDevice instead of a full node...
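If the QDevice route is of interest, the rough setup looks like this (IP is a placeholder; see the Proxmox VE documentation on corosync-qdevice for the full procedure):
# on all cluster nodes
apt install corosync-qdevice
# on the external host providing the third vote
apt install corosync-qnetd
# then, from one cluster node
pvecm qdevice setup 192.0.2.10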
You can check whether we already have a feature request in our bugtracker. If we do, please chime in:
https://bugzilla.proxmox.com
Without checking in detail, fetching that info would mean extending the storage plugin API, so a little bit more...
These are a good starting point:
https://pve.proxmox.com/wiki/Proxmox_VE_API
https://pve.proxmox.com/pve-docs/api-viewer/
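A small example of what a raw API call can look like, using an API token; host, user, and token name here are placeholders:
# list the cluster nodes via the REST API
curl -k -H "Authorization: PVEAPIToken=root@pam!monitoring=<secret>" \
  https://pve.example.com:8006/api2/json/nodes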
And it is true: Proxmox VE does not use libvirt at all, but handles the interaction with KVM/QEMU directly.
The Proxmox VE cluster puts a storage lock in place so the other nodes know not to modify the metadata.
As with any other shared storage, there is only ever one active VM process accessing the data. Either the source VM, or after the handover...
Only partially related, but why does this need SMB? Have you looked at NFS exports? They should also be a valid option and should work without cephadm or ceph orch, last time I checked (it has been a while).
https://docs.ceph.com/en/latest/cephfs/nfs/
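As a simpler alternative to the mgr-based NFS export described in those docs, one can also mount CephFS on a host and re-export it with the kernel NFS server; a rough sketch, where the mount point, client network, and keyring path are assumptions:
# mount CephFS (assumes a client keyring is already set up)
mount -t ceph <mon-ip>:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# /etc/exports entry, then apply it with: exportfs -ra
/mnt/cephfs 192.168.1.0/24(rw,no_subtree_check)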
I didn't go through this 6-year-old thread, since things have changed quite a bit since then. Please check out this part of the Proxmox VE 8 to 9 upgrade guide: https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#VM_Memory_Consumption_Shown_is_Higher
That...
What exactly are you trying to set the MAC to? There are restrictions on what counts as a unicast MAC: https://en.wikipedia.org/wiki/MAC_address#Ranges_of_group_and_locally_administered_addresses
If I see it correctly, that should be a "BD:...."...
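To spell out the rule from that Wikipedia article with a small worked example: the two lowest bits of the first octet decide whether a MAC is usable on a NIC. 0xBD is 1011 1101 in binary, so its least significant bit is 1, which marks a group/multicast address and is therefore not a valid unicast MAC for an interface. An octet such as 0xBE (1011 1110) has the multicast bit cleared and the locally-administered bit set, so a MAC starting with BE: would be an acceptable locally administered unicast address.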
I think this is the reason! A shared LVM cannot be of the type thin, but must be a regular/thick LVM!
In a thin LVM you can only have one writer -> local host only.
If you need snapshots, you can enable the new "Snapshot as a Volume Chain"...
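For reference, a thick shared LVM in /etc/pve/storage.cfg looks roughly like this; the storage ID and VG name are placeholders:
lvm: san-lvm
        vgname vg_san
        content images
        shared 1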
Well, live migration should generally always work. With non-shared storage it will also transfer the disks of the guests, and that can take a long time.
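Such a migration can be triggered like this, with the VM ID and target node as placeholders:
# online migration including local disks
qm migrate 100 pve2 --online --with-local-disks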
So if you followed the multipath guide and still have some issues, the question would be...