aaron's latest activity

  • aaron
    If the storages differ between the nodes, you have to mark accordingly on which nodes they are available. When you edit a storage, select the nodes in the top right. By default, all storages are available to all nodes...
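The same node restriction can also be set on the CLI with `pvesm set`; a small sketch, assuming a hypothetical storage ID `local-zfs2` and node names `pve1`/`pve2`:

```shell
# restrict a storage (hypothetical ID "local-zfs2") to the nodes that actually have it
pvesm set local-zfs2 --nodes pve1,pve2
# without a --nodes restriction (the default), the storage is offered on all nodes
```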
  • aaron
    aaron replied to the thread Proxmox with ceph performance.
    Set it to warn, then you will see what the ideal would be, without it acting by itself. You should have something in the ballpark of 100 PGs/OSD. If you have too few, it can impact performance and also recovery speed/impact in case you lose a node/OSD.
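As a sketch of the "set it to warn" step, assuming a hypothetical pool name `vm-pool`:

```shell
# make the PG autoscaler only warn instead of acting on its own
ceph osd pool set vm-pool pg_autoscale_mode warn
# compare the current PG_NUM against the autoscaler's suggested NEW PG_NUM
ceph osd pool autoscale-status
```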
  • aaron
    aaron replied to the thread Proxmox with ceph performance.
    The HW looks good so far. If I understand it correctly, the PVE hosts connect to the Ceph cluster via 25Gbit/s? While the Ceph nodes themselves use 100Gbit/s? I would verify that the network performs as expected, as in, do iperf / iperf3 checks...
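A minimal iperf3 check between a PVE host and a Ceph node might look like this (the address `192.0.2.10` is a placeholder for one of the Ceph nodes):

```shell
# on the Ceph node (server side):
iperf3 -s
# on the PVE host (client side), against the Ceph node's storage-network address:
iperf3 -c 192.0.2.10 -P 4   # 4 parallel streams; expect something close to line rate
```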
  • aaron
    aaron replied to the thread Proxmox with ceph performance.
    Some more details would be good to know: * Disk model of the OSDs * Network speed for the physical Ceph network(s) * General specs of the servers, like CPU and RAM * cat /etc/pve/ceph.conf and cat /etc/network/interfaces please paste the output...
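The requested details could be collected on one of the nodes roughly like this (a sketch; `lscpu` and `free` stand in for "general specs"):

```shell
cat /etc/pve/ceph.conf        # Ceph config as seen by Proxmox VE
cat /etc/network/interfaces   # physical NICs / bridges for the Ceph network(s)
lscpu                         # CPU model and core count
free -h                       # installed RAM
```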
  • aaron
    Overall it looks okay. But especially in the beginning it could be simplified. create new HDD only rule: ceph osd crush rule create-replicated replicated_hdd default host hdd Then assign that rule to the existing pools. Some rebalancing is...
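The rule creation from the post, plus the assignment step, could be sketched as follows (pool name `rbd` is a hypothetical example):

```shell
# create a new replicated rule restricted to HDD-class OSDs (from the post)
ceph osd crush rule create-replicated replicated_hdd default host hdd
# assign that rule to an existing pool; this triggers some rebalancing
ceph osd pool set rbd crush_rule replicated_hdd
```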
  • aaron
    A cluster across 2 rooms / locations is explained here, with Ceph as storage: In principle this would also work with other storages. If you don't use HCI Ceph, the third vote can just as well be a QDevice instead of a full node...
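Adding the QDevice vote is a two-step affair; a sketch, assuming the external host (placeholder address `192.0.2.50`) already runs `corosync-qnetd`:

```shell
# run on one of the cluster nodes; registers the external quorum device
pvecm qdevice setup 192.0.2.50
# the cluster should now show one additional expected vote
pvecm status
```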
  • aaron
    You can check if we already have a feature request in our bugtracker or not. If we do, please chime in: https://bugzilla.proxmox.com Without checking in detail, fetching that info would mean extending the storage plugin API, so a little bit more...
  • aaron
    rbd -p <pool> du would be the quickest way
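Spelled out with a hypothetical pool name `vm-disks`:

```shell
# per-image provisioned vs. actually used size for the whole pool
rbd -p vm-disks du
# or for a single image only
rbd du vm-disks/vm-100-disk-0
```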
  • aaron
    aaron reacted to bbgeek17's post in the thread NetApp ONTAP deploy support with Like.
    These are a good starting point: https://pve.proxmox.com/wiki/Proxmox_VE_API https://pve.proxmox.com/pve-docs/api-viewer/ Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
  • aaron
    aaron replied to the thread NetApp ONTAP deploy support.
    https://pve.proxmox.com/wiki/Proxmox_VE_API https://pve.proxmox.com/pve-docs/api-viewer/index.html and it is true, Proxmox VE does not use libvirt at all, but handles the interaction with kvm/qemu directly.
  • aaron
    aaron replied to the thread LVM on shared storage.
    See https://forum.proxmox.com/threads/cluster-aware-fs-for-shared-datastores.164933/#post-840004
  • aaron
    The Proxmox VE cluster puts a storage lock in place so the other nodes know not to modify the metadata. As with any other shared storage, there is only ever one active VM process accessing the data. Either the source VM, or after the handover...
  • aaron
    only partially related, but why does this need SMB? Have you looked at NFS exports? They should also be a valid option and should work without cephadm or ceph orch, last time I checked (has been a while) https://docs.ceph.com/en/latest/cephfs/nfs/
  • aaron
    I didn't go through this 6 year old thread, since things have changed quite a bit since then. Please check out this part of the Proxmox VE 8 to 9 upgrade guide https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#VM_Memory_Consumption_Shown_is_Higher That...
  • aaron
    yes, that's what happens when you start writing and then get briefly distracted in between ;)
  • aaron
    I was faster there, Aaron ;-) Thanks for linking Wikipedia, and good morning to Vienna!
  • aaron
    What exactly are you trying to set the MAC to? There are restrictions here on what counts as a unicast MAC: https://en.wikipedia.org/wiki/MAC_address#Ranges_of_group_and_locally_administered_addresses If I see this correctly, a "BD:...."...
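The unicast restriction comes down to the least significant bit of the first octet: 0 means unicast, 1 means multicast, and only unicast addresses can be assigned to an interface. A quick check in shell (0xBE as a locally administered unicast alternative is my example, not from the thread):

```shell
# least significant bit of the first octet: 0 = unicast, 1 = multicast
echo $(( 0xBD & 1 ))   # prints 1 -> "BD:..." is multicast, not usable as an interface MAC
echo $(( 0xBE & 1 ))   # prints 0 -> "BE:..." is unicast (and locally administered)
```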
  • aaron
    How? Any errors in the task log?
  • aaron
    I think this is the reason! A shared LVM cannot be of the type thin, but must be a regular/thick LVM! In a thin LVM you can only have one writer -> local host only. If you need snapshots, you can enable the new "Snapshot as a Volume Chain"...
  • aaron
    Well, live migration should usually work always. With a non-shared storage it will also transfer the disks of the guests, and that can take a long time. So if you followed the multipath guide and still have some issues, the question would be...