Search results

  1. gurubert

    Proxmox Repository List of CDN Hosts

    IMHO this would be easier in that situation than trying to play catch-up with changing CDN IPs.
  2. gurubert

    VM poor storage performance

    Single-thread performance, as you get with one disk inside one VM, will always be disappointing, especially with small block sizes. You can only throw faster hardware at the problem. BTW: your rados bench run uses 16 parallel threads and a block size of 4 MB. This will show you nearly the maximum...
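    For comparison, such a run looks roughly like this (the pool name `testpool` is an assumption):

        # 16 parallel threads writing 4 MB objects for 60 seconds
        # (-t 16 and 4 MB are the rados bench defaults)
        rados bench -p testpool 60 write -t 16 -b 4194304
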
  3. gurubert

    Import OVA: working storage 'cephfs' does not support 'images' content type or is not file based.

    Have you added 'images' in the storage config for CephFS? After importing, you can still migrate the image to the RBD pool. Or select the target storage in the import dialog.
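    A minimal sketch of enabling that via the CLI (the storage ID `cephfs` is an assumption; note that --content replaces the whole list, so keep the types already configured):

        # allow VM disk images on the CephFS storage, keeping the existing types
        pvesm set cephfs --content backup,iso,vztmpl,images
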
  4. gurubert

    VM poor storage performance

    Get multiple OSDs per node and a 25G network. You are running the bare minimum for a working system.
  5. gurubert

    Proxmox\Ceph Multipath strategy?

    Usually bonding is used with LACP and a corresponding switch stack to provide network redundancy. As Ceph uses multiple TCP connections, both physical links are utilized.
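    As a sketch, such an LACP bond in /etc/network/interfaces could look like this (the interface names eno1/eno2 are assumptions):

        auto bond0
        iface bond0 inet manual
                bond-slaves eno1 eno2
                bond-miimon 100
                bond-mode 802.3ad
                # layer3+4 hashing spreads Ceph's many TCP connections across both links
                bond-xmit-hash-policy layer3+4
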
  6. gurubert

    Proxmox 8.2 which kernels are possible?

    No elaborate shared storage needs to be attached to reproduce the error. I have documented the necessary steps here: http://gurubert.de/ocfs2_io_uring.html
  7. gurubert

    Second CEPH pool for SSD

    `ceph osd pool set $poolname crush_rule $rulename` will assign a rule to an existing pool. Ceph will then move all the PGs with their objects according to the new rule.
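    A short sketch of the full sequence (rule and pool names are assumptions):

        # create a replicated rule restricted to OSDs with device class 'ssd'
        ceph osd crush rule create-replicated ssd-only default host ssd
        # assign the rule to an existing pool; Ceph rebalances the PGs accordingly
        ceph osd pool set mypool crush_rule ssd-only
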
  8. gurubert

    Questions about CEPH, network and bandwidth in the homelab

    Linbit offers a DRBD plugin for Proxmox. That can be the better choice for such small environments.
  9. gurubert

    Questions about CEPH, network and bandwidth in the homelab

    This is no longer necessary with current Ceph. In the past, the IO path for SSDs/NVMes was not yet as optimized, so 2 OSDs per device were recommended. But when the device fails, both OSDs are lost with it.
  10. gurubert

    Questions about CEPH, network and bandwidth in the homelab

    Even if you get a fast network card, you will not be happy with this absolute minimum setup. Only one OSD per node and only three nodes is simply too little; Ceph cannot play to its strengths with that. The performance will also come nowhere near the possible...
  11. gurubert

    Is there a way to use Terraform to add newly created VM to an existing backup job?

    You could define the backup job to back up all VMs from a specific pool. Then add the new VM to that pool and it will be backed up automatically.
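    A rough sketch of this approach (the pool name, storage name, and VM ID are assumptions):

        # add the newly created VM to the pool
        pvesh set /pools/prod --vms 105
        # a job (or manual run) selecting by pool then covers all pool members
        vzdump --pool prod --storage backup-store --mode snapshot
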
  12. gurubert

    How to migrate Debian VM to Bare Metal

    dd is a command-line tool that copies bytes from and to files or block devices.
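    For example, a raw VM disk could be written to the target machine's drive like this (the file and device names are assumptions; double-check of= before running):

        # copy the exported raw image onto the bare-metal disk
        dd if=vm-disk.raw of=/dev/sdX bs=4M status=progress conv=fsync
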
  13. gurubert

    Ceph 19.2 adding storage for CephFS 'cephfs' failed

    You should add that you first need to get the OSD map with `ceph osd getmap -o om`.
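    A sketch of how the dumped map is commonly used afterwards (the pool ID 1 is an assumption; the exact follow-up steps are not shown in the snippet):

        # dump the current OSD map to a file
        ceph osd getmap -o om
        # test PG mappings offline against the dumped map
        osdmaptool om --test-map-pgs --pool 1
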
  14. gurubert

    accessing the root partition of a pve installation

    /etc/pve is a FUSE mount that gets its data from the Proxmox database. Without a running Proxmox there is nothing in /etc/pve.
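    As a sketch: on a node where the cluster stack is down (e.g. from a rescue system chroot), the database copy can still be mounted locally:

        # start the Proxmox cluster filesystem in local mode to expose /etc/pve
        pmxcfs -l
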
  15. gurubert

    upgrade ceph a minor version

    All nodes in the Proxmox cluster should run the same versions of the software.
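    A quick way to verify this (assuming a working cluster):

        # compare the installed package versions per node
        pveversion -v
        # show which release each Ceph daemon type is running
        ceph versions
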
  16. gurubert

    Ceph 19.2 adding storage for CephFS 'cephfs' failed

    It looks like the kernel builtin CephFS client is too old to talk to the MDS.
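    One hedged workaround, assuming the storage ID is `cephfs`: let Proxmox use the userspace FUSE client instead of the kernel client:

        # switch the CephFS storage from the kernel mount to ceph-fuse
        pvesm set cephfs --fuse 1
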
  17. gurubert

    Question for PVE Staff/Devs - Enabling cephadm, what are the known (with evidence) problems?

    IMHO you need to choose one method of management for the Ceph cluster: either the Proxmox tools and GUI, or the cephadm orchestrator, but not both at the same time. You can set up Ceph with cephadm alongside Proxmox on the same hardware and use it as storage for Proxmox.
  18. gurubert

    Problem with PVE 8.1 and OCFS2 shared storage with io_uring

    `virtio` may be something completely different. Please switch to `scsi` and the `virtio-scsi-single` virtual HBA in the VM.
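    A sketch of the switch for an existing disk (the VM ID 100 and the volume name are assumptions):

        # attach the disk as SCSI on a virtio-scsi-single controller
        qm set 100 --scsihw virtio-scsi-single --scsi0 local-lvm:vm-100-disk-0
        # make sure the VM still boots from the reattached disk
        qm set 100 --boot order=scsi0
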
  19. gurubert

    New install. Proxmox + Ceph cluster config questions.

    You could add a standby-replay MDS instance that will become active faster than a cold standby. Each CephFS needs at least one active MDS. With 8 CephFS filesystems you would need 8 MDS instances. Do you have the capacity to run these? Multiple MDS per CephFS are needed when the load is too high for just...
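    A minimal sketch of enabling standby-replay (the filesystem name `cephfs` is an assumption):

        # let a standby MDS follow the active MDS's journal for faster takeover
        ceph fs set cephfs allow_standby_replay true
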