Search results

  1. gurubert

    Deploying multiple proxmox host easily?

    I would also assume that automating a Debian installation and then adding the Proxmox repos and packages would be the easiest way. Have you looked at https://fai-project.org/ ?
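
    A minimal sketch of the "install Debian, then add Proxmox" step (assuming a Debian 12 "bookworm" base system; repository, key URL and package names follow the Proxmox wiki, adapt as needed):

      # add the Proxmox VE no-subscription repository and its signing key
      echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-install-repo.list
      wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
        -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

      # pull in the Proxmox VE packages on top of the Debian installation
      apt update && apt full-upgrade
      apt install proxmox-ve postfix open-iscsi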
  2. gurubert

    Fragen zu einem Ceph Cluster Setup

    2x 25G is better than 2x 40G because of the lower latencies. NVMe does not need an HBA controller, since those drives are attached directly to the PCI bus.
  3. gurubert

    Two Ceph pool with seperated hard disks in each node

    The command-line tool "ceph" can be used to create CRUSH rules and set device classes.
  4. gurubert

    Two Ceph pool with seperated hard disks in each node

    Assign a different device class to the "B" disks and create CRUSH rules that utilize these device classes. Assign the CRUSH rules to the pools and they will only use the OSDs from their device class.
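
    A hedged sketch of that approach with the "ceph" CLI (OSD IDs, class, rule and pool names are placeholders):

      # tag the "B" disks with their own device class
      ceph osd crush rm-device-class osd.4 osd.5 osd.6
      ceph osd crush set-device-class classB osd.4 osd.5 osd.6

      # replicated CRUSH rule that only selects OSDs of that class,
      # then bind the pool to it
      ceph osd crush rule create-replicated rule-classB default host classB
      ceph osd pool set poolB crush_rule rule-classB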
  5. gurubert

    [SOLVED] How to enable tagged vlan1 for VM by default?

    Use another VLAN ID as PVID for that bridge. We use 4063 for that purpose:

      auto vmbr0
      iface vmbr0 inet static
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 1-3 5-99 101-4094
          bridge-pvid 4063
  6. gurubert

    Ceph PG error

    IMHO 80GB for the WAL is way too much. It should be sufficient to have 80GB of extra SSD space per HDD OSD, which will then contain both DB and WAL. OSDs will automatically put WAL and DB together if you only specify one DB device on creation.
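
    A small sketch of creating such an OSD with a single DB device (device paths are placeholders; with only a block.db device given, the WAL is co-located on it):

      # HDD as data device, one ~80GB SSD/NVMe partition for DB (and WAL)
      ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1

      # or via the Proxmox tooling
      pveceph osd create /dev/sdb --db_dev /dev/nvme0n1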
  7. gurubert

    Ceph PG error

    I forgot: the MGR needs that pool and will recreate it by itself. :)
  8. gurubert

    Ceph PG error

    The .mgr pool is the one with the missing PG; you need to remove it and recreate it with the same name. The MGR stores its information in that pool.
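
    A possible sequence for that, sketched (this discards whatever the MGR had stored in the pool; pool deletion must be enabled first):

      ceph config set mon mon_allow_pool_delete true
      ceph osd pool delete .mgr .mgr --yes-i-really-really-mean-it

      # restart the managers so they recreate the .mgr pool
      systemctl restart ceph-mgr.target
      ceph config set mon mon_allow_pool_delete false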
  9. gurubert

    Ceph PG error

    Yes, this should not affect the other pool.
  10. gurubert

    Ceph PG error

    You seem to have removed the OSDs too fast and the pool was not able to migrate its PG to other OSDs. And now it cannot find the PG any more. You can try to remove the pool and recreate it. It is called ".mgr" with a dot in front.
  11. gurubert

    Ceph PG error

    What happened to the pool with the id 1 (usually the pool called ".mgr")? Its only placement group is in state "unknown", which is not good. Please try to restart OSDs 2, 3 and 7 with "systemctl restart ceph-osd@2.service" etc. on their nodes. If that does not fix the issue please post the...
  12. gurubert

    Ceph PG error

    You use images of multiple KBs in size to post text of a few bytes. Why? Please attach the text files here somehow.
  13. gurubert

    Ceph PG error

    Please post the output of "ceph -s", "ceph osd df tree" and "ceph pg dump".
  14. gurubert

    [SOLVED] Probleme mit Datapool

    The error is also mentioned here: https://forum.proxmox.com/threads/rbd-error-rbd-listing-images-failed-2-no-such-file-or-directory-500.120661/ Were images migrated and the migration aborted?
  15. gurubert

    [SOLVED] Probleme mit Datapool

    What does "ceph -s" say on the nodes?
  16. gurubert

    Ceph - Reduced data availability: 1 pg inactive, 1 pg incomplete

    Are both OSDs gone? You could try to set min_size to 1 on the affected pool(s). This will allow Ceph to restore the copies from the last remaining one.
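
    For example (pool name is a placeholder; raise min_size again once recovery has finished):

      ceph osd pool set mypool min_size 1
      # ... wait for recovery to complete ...
      ceph osd pool set mypool min_size 2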
  17. gurubert

    [SOLVED] Ceph - poor write speed - NVME

    Have you tried to attach the virtual disks as SCSI disks with the virtio-scsi-single controller?
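
    A sketch with qm (VMID, storage and disk names are placeholders):

      # switch the VM to the virtio-scsi-single controller
      qm set 100 --scsihw virtio-scsi-single
      # attach the disk as a SCSI device, e.g. with iothread enabled
      qm set 100 --scsi0 cephpool:vm-100-disk-0,iothread=1,discard=on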
  18. gurubert

    New Ceph cluster advice

    Both CPUs have 8 cores / 16 threads, so 16 cores and 32 threads in total per node. An SSD OSD should have 2 CPU threads available, which is the case here. The 2660 is a bit slower; you will have to test how much this affects real workloads.
  19. gurubert

    [SOLVED] Unexpected Behavior with Multiple Network Interfaces

    Use bond mode active-passive for that, with the faster link as the active one. If your switch supports LACP you could also use that and utilize both connections.
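
    A minimal /etc/network/interfaces sketch for the active-backup variant (interface names are placeholders; bond-primary picks the faster link):

      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-mode active-backup
          bond-primary eno1
          bond-miimon 100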