Recent content by aaron

  1. aaron

    [SOLVED] Adding a new separated pool to existing Ceph

    Overall it looks okay, but especially in the beginning it could be simplified. Create a new HDD-only rule: ceph osd crush rule create-replicated replicated_hdd default host hdd Then assign that rule to the existing pools. Some rebalancing is possible, but your data is safe. Then add the SSDs...
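The two steps described above (create an HDD-only CRUSH rule, then point the existing pools at it) can be sketched as the CLI calls to run, assembled here in Python for illustration; the pool names are hypothetical placeholders, not from the original thread:

```python
# Sketch of the Ceph CLI sequence described above; pool names are made up.
def crush_migration_commands(pools, rule="replicated_hdd"):
    """Return the ceph commands to create an HDD-only rule and assign it."""
    cmds = [f"ceph osd crush rule create-replicated {rule} default host hdd"]
    # Pointing each existing pool at the new rule may trigger some
    # rebalancing, but the data itself stays safe throughout.
    cmds += [f"ceph osd pool set {pool} crush_rule {rule}" for pool in pools]
    return cmds

for cmd in crush_migration_commands(["rbd", "cephfs_data"]):
    print(cmd)
```

Only after the existing pools use a device-class-specific rule should the SSDs be added, so they are not pulled into the HDD pools.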
  2. aaron

    Cluster fully fault-tolerant against fire etc.

    A cluster across 2 rooms/locations, with Ceph as storage, is explained here; in principle it would also work with other storage types. If you don't use HCI Ceph, the third vote can also be a QDevice instead of a full node. https://pve.proxmox.com/wiki/Stretch_Cluster
  3. aaron

    How to precisely check the actual disk usage of Ceph RBD?

    You can check whether we already have a feature request in our bugtracker: https://bugzilla.proxmox.com If we do, please chime in. Without checking in detail, fetching that info would mean extending the storage plugin API, so it is a little more involved. But definitely make it known on the...
  4. aaron

    How to precisely check the actual disk usage of Ceph RBD?

    rbd -p <pool> du would be the quickest way
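Since rbd du reports a table of provisioned versus actually used space per image, summarizing it can be scripted. A minimal sketch, assuming the usual NAME/PROVISIONED/USED column layout; the sample output below is illustrative, not captured from a real cluster:

```python
# Hedged sketch: parse the tabular output of `rbd -p <pool> du`.
# SAMPLE is a hypothetical capture for illustration only.
SAMPLE = """\
NAME         PROVISIONED  USED
vm-100-disk-0      32 GiB  11 GiB
vm-101-disk-0      16 GiB   4 GiB
<TOTAL>            48 GiB  15 GiB
"""

def used_per_image(output):
    """Map each image name to its actually used space."""
    usage = {}
    for line in output.splitlines()[1:]:           # skip the header row
        name, _prov, _punit, used, unit = line.split()
        if name != "<TOTAL>":                      # footer repeats the sum
            usage[name] = f"{used} {unit}"
    return usage

print(used_per_image(SAMPLE))
```

In practice you would feed the function the captured stdout of the rbd command instead of the sample string.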
  5. aaron

    NetApp ONTAP deploy support

    https://pve.proxmox.com/wiki/Proxmox_VE_API https://pve.proxmox.com/pve-docs/api-viewer/index.html And it is true: Proxmox VE does not use libvirt at all, but handles the interaction with kvm/qemu directly.
  6. aaron

    LVM on shared storage

    See https://forum.proxmox.com/threads/cluster-aware-fs-for-shared-datastores.164933/#post-840004
  7. aaron

    Cluster aware FS for shared datastores?

    The Proxmox VE cluster puts a storage lock in place so the other nodes know not to modify the metadata. As with any other shared storage, there is only ever one active VM process accessing the data. Either the source VM, or after the handover, when the state has been fully migrated, the target...
  8. aaron

    Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    Only partially related, but why does this need SMB? Have you looked at NFS exports? They should also be a valid option and should work without cephadm or ceph orch, last time I checked (it has been a while): https://docs.ceph.com/en/latest/cephfs/nfs/
  9. aaron

    [SOLVED] Wrong memory usage monitoring

    I didn't go through this 6-year-old thread, since things have changed quite a bit since then. Please check out this part of the Proxmox VE 8 to 9 upgrade guide: https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#VM_Memory_Consumption_Shown_is_Higher That should explain the situation and the requirements to...
  10. aaron

    Improvement to an error message requested

    Yes, that happens when you start writing and then get briefly distracted in between ;)
  11. aaron

    Improvement to an error message requested

    What exactly are you trying to set the MAC to? There are restrictions on what counts as a unicast MAC: https://en.wikipedia.org/wiki/MAC_address#Ranges_of_group_and_locally_administered_addresses If I see this correctly, it is supposed to become a "BD:...."? A D in the second position makes it a multicast MAC...
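The distinction above can be checked programmatically: the I/G bit, the least significant bit of the first octet, decides unicast vs. multicast, which is why the second hex digit matters. A minimal sketch with made-up example addresses:

```python
def is_multicast(mac):
    """True if the MAC's I/G bit (LSB of the first octet) is set."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 1)

# 0xBD: second hex digit D (binary 1101) has the low bit set -> multicast,
# so it is rejected where a unicast address is required.
print(is_multicast("BD:00:00:00:00:01"))
# 0xBC: low bit clear -> a valid unicast address.
print(is_multicast("BC:00:00:00:00:01"))
```

For a manually assigned NIC address you typically want a locally administered unicast MAC, e.g. one whose second hex digit is 2, 6, A, or E.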
  12. aaron

    Cluster two node active with shared storage in FC connection

    I think this is the reason! A shared LVM cannot be of type thin, but must be a regular/thick LVM! With a thin LVM you can only have one writer -> local host only. If you need snapshots, you can enable the new "Snapshot as a Volume Chain" feature when you add the DC -> Storage configuration...
  13. aaron

    Cluster two node active with shared storage in FC connection

    Well, live migration should usually always work. With non-shared storage it will also transfer the disks of the guests, and that can take a long time. So if you followed the multipath guide and still have some issues, the question would be: what exactly are you running into? Any errors or...
  14. aaron

    Cluster two node active with shared storage in FC connection

    To add to @bbgeek17: check out the multipathing guide, which also covers the finalization with a shared LVM on top: https://pve.proxmox.com/wiki/Multipath And with just 2 nodes, you need to add a 3rd vote to the cluster, as otherwise, if you lose/shut down one node, the remaining one would only have 50%...
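The quorum arithmetic behind that advice can be made concrete: a Proxmox VE cluster needs a strict majority of the total votes, so 1 of 2 is only 50% and not enough, while a third vote (extra node or QDevice) restores a majority after one loss. A minimal illustration:

```python
def has_quorum(votes_present, total_votes):
    """A cluster has quorum only with a strict majority of all votes."""
    return votes_present > total_votes / 2

# Two-node cluster: losing one node leaves 1 of 2 votes -> 50%, no quorum.
print(has_quorum(1, 2))
# With a third vote (full node or QDevice), 2 of 3 is a majority.
print(has_quorum(2, 3))
```

This is why a two-node setup stalls as soon as either node goes down, while the three-vote setup survives the loss of any single vote.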