Search results

  1. gurubert

    Second CEPH pool for SSD

    ceph osd pool set $poolname crush_rule $rulename will assign a rule to an existing pool. Ceph will then move all the PGs with their objects according to the new rule.
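    A minimal sketch of the workflow, assuming a replicated pool named `ssd_pool` and SSD OSDs carrying the `ssd` device class (names are illustrative):

    ```
    # create a replicated rule that only selects OSDs with device class "ssd",
    # using "host" as the failure domain under the default root
    ceph osd crush rule create-replicated ssd_rule default host ssd

    # assign the new rule to the existing pool; Ceph then migrates the PGs
    ceph osd pool set ssd_pool crush_rule ssd_rule
    ```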
  2. gurubert

    Fragen zu CEPH, Netzwerk und Bandbreiten im Homelab

    Linbit offers a DRBD plugin for Proxmox. That can be a better fit for such small environments.
  3. gurubert

    Fragen zu CEPH, Netzwerk und Bandbreiten im Homelab

    With current Ceph this is no longer necessary. In the past, IO on SSDs/NVMes was not yet that good, so running 2 OSDs on one device was recommended. But if that device fails, both OSDs are lost with it.
  4. gurubert

    Fragen zu CEPH, Netzwerk und Bandbreiten im Homelab

    Even if you get a fast network card, you will not be happy with this absolute minimal setup. Only one OSD per node and only three nodes is simply too little; Ceph cannot play to its strengths with that. The performance will also come nowhere near the possible...
  5. gurubert

    Is there a way to use Terraform to add newly created VM to an existing backup job?

    You could define the backup jobs to backup all VMs from a specific pool. Then add the new VM to that pool and it will be automatically backed up.
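    As an illustration, assuming a pool named `backup-pool` and a VM ID of 105 (both hypothetical), the new VM could be added to that pool from the CLI, e.g. from a Terraform provisioner:

    ```
    # add VM 105 to the pool that the backup job selects;
    # the next run of the job will then pick it up automatically
    pvesh set /pools/backup-pool --vms 105
    ```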
  6. gurubert

    How to migrate Debian VM to Bare Metal

    dd is a command line tool to write bytes from and to a file or block device.
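    A rough sketch of such a migration with `dd`, assuming the VM disk is a ZFS zvol `rpool/data/vm-100-disk-0` and the bare-metal target disk is `/dev/sda` (both are placeholders; double-check the target device, `dd` overwrites it without asking):

    ```
    # with the VM shut down, stream the raw disk image over SSH onto the target disk
    dd if=/dev/zvol/rpool/data/vm-100-disk-0 bs=4M status=progress \
      | ssh root@baremetal-host 'dd of=/dev/sda bs=4M'
    ```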
  7. gurubert

    Ceph 19.2 adding storage for CephFS 'cephfs' failed

    You should add that you need to first get the OSD map with ceph osd getmap -o om.
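    For context, a sketch of what the exported OSD map can then be used for, e.g. inspecting the CRUSH map offline (file names are arbitrary):

    ```
    # dump the current OSD map to a file
    ceph osd getmap -o om

    # extract the CRUSH map from it and decompile it into readable text
    osdmaptool om --export-crush cm
    crushtool -d cm -o crushmap.txt
    ```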
  8. gurubert

    accessing the root partition of a pve installation

    /etc/pve is a FUSE mount that gets its data from the Proxmox database. Without a running Proxmox there is nothing in /etc/pve.
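    The backing store is the pmxcfs database, so the data can still be read offline, e.g. from a rescue system; a sketch, assuming the usual path and the `tree` table of config.db:

    ```
    # read the pmxcfs backing store directly while pve-cluster is not running
    sqlite3 /var/lib/pve-cluster/config.db "SELECT name FROM tree;"
    ```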
  9. gurubert

    upgrade ceph a minor version

    All nodes in the Proxmox cluster should run the same versions of the software.
  10. gurubert

    Ceph 19.2 adding storage for CephFS 'cephfs' failed

    It looks like the kernel builtin CephFS client is too old to talk to the MDS.
  11. gurubert

    Question for PVE Staff/Devs - Enabling cephadm, what are the known (with evidence) problems?

    IMHO you need to choose the method of management for the Ceph cluster. It is either with the tools and the GUI of Proxmox or with the cephadm orchestrator, but not both at the same time. You can setup Ceph with cephadm alongside Proxmox on the same hardware and use it as storage for Proxmox.
  12. gurubert

    Problem with PVE 8.1 and OCFS2 shared storage with io_uring

    `virtio` may be something completely different. Please switch to `scsi` and the `virtio-scsi-single` virtual HBA in the VM.
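    A sketch of that switch on the CLI, assuming VM ID 105 with its disk currently attached as `virtio0` on storage `local-lvm` (IDs and names are placeholders):

    ```
    # detach the disk from the virtio-blk bus (the volume becomes an "unused" disk)
    qm set 105 --delete virtio0

    # switch the virtual HBA and re-attach the same volume on the SCSI bus
    qm set 105 --scsihw virtio-scsi-single --scsi0 local-lvm:vm-105-disk-0
    ```

    The boot order may need to be pointed at the new `scsi0` device afterwards.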
  13. gurubert

    New install. Proxmox + Ceph cluster config questions.

    You could add a standby-replay MDS instance that will be active faster than a cold standby. Each CephFS needs at least one active MDS. With 8 CephFS you would need 8 MDS instances. Do you have the capacities to run these? Multiple MDS per CephFS are needed when the load is too high for just...
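    Standby-replay is enabled per file system; a minimal example, assuming a file system named `cephfs`:

    ```
    # let one standby MDS follow the active MDS journal so failover is faster
    ceph fs set cephfs allow_standby_replay true
    ```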
  14. gurubert

    Cluster A and ClusterB same management network subnet

    The VM config does not reside in the VM storage directory like it is with VMware. You would need to copy or move the NNN.conf file from /etc/pve of one cluster to the other. And make sure there is no ID conflict and the storages and network bridges are all named the same. Otherwise you would...
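    A rough sketch of the move, with a hypothetical VM ID 105 and target node name:

    ```
    # on a node of the source cluster: take the config out of that cluster ...
    mv /etc/pve/qemu-server/105.conf /root/105.conf

    # ... and place it into /etc/pve of the target cluster (the VM must be stopped,
    # ID 105 must be free there, and all referenced storages/bridges must exist)
    scp /root/105.conf root@target-node:/etc/pve/qemu-server/105.conf
    ```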
  15. gurubert

    [SOLVED] Switch redundancy for cluster network?

    You can have multiple cluster network links between the Proxmox nodes. This allows e.g. to have two 1G NICs per node and two "cheap" separate 1G switches for the cluster networks.
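    For illustration, both links can be configured when creating and joining the cluster (addresses are placeholders):

    ```
    # create the cluster with two redundant corosync links on separate switches
    pvecm create mycluster --link0 10.10.10.1 --link1 10.10.20.1

    # join another node, again announcing both of its link addresses
    pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 10.10.20.2
    ```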
  16. gurubert

    Unexpected Ceph Outage - What did I do wrong?

    This is not expected. Is there any IO going on after you physically removed the drive? The OSD process will only die once it can no longer access the drive, which it only attempts when there is IO. As for the complete failure, there is far too little information here to debug it. How many MONs did you deploy?
  17. gurubert

    Access Proxmox managed Ceph Pool from standalone node

    Ceph is either IPv6 or IPv4, but not dual stack AFAIK. Does your standalone node have a layer 2 connection in fc00::/64? If not, you need to route the traffic, which is OK for a Ceph client.
  18. gurubert

    Backing up bare metal machine in network

    I don't think that your first method "running the backup from the PBS machine/VM connecting through the network" is even implemented in PBS. To back up physical machines you need to run the PBS client on them. https://pbs.proxmox.com/docs/backup-client.html
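    A minimal invocation of the client on the physical machine (user, host and datastore names are placeholders):

    ```
    # back up the root filesystem of this physical host to a PBS datastore
    proxmox-backup-client backup root.pxar:/ \
      --repository backupuser@pbs@pbs.example.com:datastore1
    ```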
  19. gurubert

    1 ceph or 2 ceph clusters for fast and slow pools?

    I do not think that this will be necessary as the IOPS provided by the NVMe drives will make these OSDs grab CPU and memory resources. HDD OSD processes usually sit and wait for IO from the HDDs.
  20. gurubert

    1 ceph or 2 ceph clusters for fast and slow pools?

    Don't run Ceph in VMs. Just use the hardware in one cluster. As the clients exchange data directly with the OSDs, there is no bottleneck where both fast and slow drives would clash (except for the NICs, but these would also be shared in your scenario).