Overall it looks okay. But especially in the beginning it could be simplified.
Create a new HDD-only rule:
ceph osd crush rule create-replicated replicated_hdd default host hdd
Then assign that rule to the existing pools. Some rebalancing is...
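As a concrete sketch of those two steps (the pool name `mypool` below is a placeholder, replace it with your actual pool names):

```shell
# Create the HDD-only replicated rule, then point an existing pool at it.
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd pool set mypool crush_rule replicated_hdd

# Watch the resulting rebalancing progress:
ceph -s
```

Repeat the `ceph osd pool set` step for each pool that should be restricted to HDDs.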
A cluster across 2 rooms/locations is explained here, with Ceph as storage: in principle this would also work with other storage types. If you are not using HCI Ceph, the third vote can also happily be a QDevice instead of a full node...
You can check whether we already have a feature request in our bugtracker. If we do, please chime in:
https://bugzilla.proxmox.com
Without checking in detail, fetching that info would mean extending the storage plugin API, so a little bit more...
These are a good starting point:
https://pve.proxmox.com/wiki/Proxmox_VE_API
https://pve.proxmox.com/pve-docs/api-viewer/
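To make that concrete, here is a minimal sketch of calling the Proxmox VE REST API with an API token over HTTPS. The host name and token values are placeholders you would replace with your own; the endpoint path and `PVEAPIToken` header format are from the API documentation linked above.

```python
# Minimal Proxmox VE API client sketch using only the standard library.
# API_HOST, TOKEN_ID, and TOKEN_SECRET are placeholders.
import json
import urllib.request

API_HOST = "pve.example.com"          # placeholder hostname
TOKEN_ID = "root@pam!monitoring"      # placeholder API token ID
TOKEN_SECRET = "xxxxxxxx-xxxx"        # placeholder token secret

def api_url(path: str) -> str:
    """Build the full URL for an API path, e.g. /nodes."""
    return f"https://{API_HOST}:8006/api2/json{path}"

def api_get(path: str):
    """GET an API path using token authentication."""
    req = urllib.request.Request(api_url(path))
    req.add_header("Authorization", f"PVEAPIToken={TOKEN_ID}={TOKEN_SECRET}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]

# Example (needs a reachable cluster): list nodes and their status.
# for node in api_get("/nodes"):
#     print(node["node"], node["status"])
```

The same endpoints can be explored interactively in the API viewer linked above before scripting against them.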
And it is true: Proxmox VE does not use libvirt at all, but handles the interaction with KVM/QEMU directly.
The Proxmox VE cluster puts a storage lock in place so the other nodes know not to modify the metadata.
As with any other shared storage, there is only ever one active VM process accessing the data. Either the source VM, or after the handover...
Only partially related, but why does this need SMB? Have you looked at NFS exports? They should also be a valid option and should work without cephadm or ceph orch, last time I checked (it has been a while):
https://docs.ceph.com/en/latest/cephfs/nfs/
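Without the orchestrator, this boils down to running NFS-Ganesha with its Ceph FSAL yourself. A minimal hand-written export sketch (all names and IDs below are placeholders; the cephx user must have CephFS capabilities):

```
# /etc/ganesha/ganesha.conf -- minimal CephFS export, values are placeholders
EXPORT {
    Export_Id = 100;
    Path = /;                     # path inside the CephFS
    Pseudo = /cephfs;             # NFSv4 pseudo path that clients mount
    Access_Type = RW;
    Protocols = 4;
    FSAL {
        Name = CEPH;
        User_Id = "nfs.client";   # cephx user used by ganesha
    }
}
```

With the orchestrator, the `ceph nfs` subcommands from the linked documentation generate an equivalent configuration for you.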
I didn't go through this 6-year-old thread, since things have changed quite a bit since then. Please check out this part of the Proxmox VE 8 to 9 upgrade guide: https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#VM_Memory_Consumption_Shown_is_Higher
That...
What exactly are you trying to set the MAC to? There are restrictions regarding what counts as a unicast MAC: https://en.wikipedia.org/wiki/MAC_address#Ranges_of_group_and_locally_administered_addresses
If I see it correctly, that should be a "BD:...."...
I think this is the reason! A shared LVM cannot be of type thin; it must be a regular/thick LVM!
With a thin LVM there can only be one writer -> local host only.
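A quick sketch of setting up the thick variant on a shared LUN (the device path and VG name are placeholders):

```shell
# On shared block storage (e.g. an iSCSI/FC LUN via multipath), create a
# *thick* volume group -- no thin pool, since thin metadata can only be
# written safely by a single host at a time.
pvcreate /dev/mapper/mpatha
vgcreate vg_shared /dev/mapper/mpatha

# Add it as storage type "lvm" (not "lvm-thin") with the shared flag:
pvesm add lvm shared-lvm --vgname vg_shared --shared 1
```

The logical volumes for the guest disks are then created by Proxmox VE as regular (thick) LVs inside that VG.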
If you need snapshots, you can enable the new "Snapshot as a Volume Chain"...
Well, live migration should generally always work. With non-shared storage it will also transfer the disks of the guests, and that can take a long time.
So if you followed the multipath guide and still have some issues, the question would be...
To add to @bbgeek17: check out the multipath guide, which also covers the finalization with a shared LVM on top: https://pve.proxmox.com/wiki/Multipath
And with just 2 nodes, you need to add a 3rd vote to the cluster, as otherwise if you...
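Setting up that third vote as a QDevice is quick (the IP below is a placeholder for your external machine):

```shell
# On the external machine (any Debian-based host works):
apt install corosync-qnetd

# On one of the two Proxmox VE cluster nodes:
apt install corosync-qdevice
pvecm qdevice setup 192.0.2.10

# Verify -- the cluster should now report 3 expected votes:
pvecm status
```

The external machine does not need to run Proxmox VE itself; it only provides the tie-breaking vote.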
Do I understand it correctly: when you run the command, the machine just does a reset/reboot without any warning? If that is true, then something is off, as that should not happen. Did you check the memory, for example? The Proxmox VE...
If it is all happening on the same host, you could consider using vmbr interfaces without a bridge port, or SDN simple zones without a gateway. Those two should technically be the same under the hood.
That way you can have a completely isolated...
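For the vmbr variant, a minimal sketch in `/etc/network/interfaces` (the bridge name `vmbr1` is a placeholder, any unused one works):

```
# Host-internal bridge with no physical ports -- guests attached to it can
# only reach each other and the host, not the outside network.
auto vmbr1
iface vmbr1 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```

Attaching guest NICs to this bridge gives you the isolated network; an SDN simple zone without a gateway achieves the same result via the GUI.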
Is the node able to connect to shop.proxmox.com via https (port 443) and is the firewall not breaking the encryption with its own certificate?
If you have further issues, please open a ticket at our technical support portal...
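A quick way to check both points from the node's shell (this is a diagnostic sketch; output details vary by environment):

```shell
# Can we reach the shop via HTTPS at all?
curl -vI https://shop.proxmox.com

# Who issued the certificate we actually receive? If the issuer is your
# firewall/proxy vendor instead of a public CA, TLS interception is active.
openssl s_client -connect shop.proxmox.com:443 \
    -servername shop.proxmox.com </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer
```

If the issuer line shows your firewall's own CA, the subscription check will fail until that host is excluded from SSL inspection.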