Search results

  1. herzkerl

    [SOLVED] Ceph Object RGW

    I've been having the same 500 - Internal Server Error for realm/zone/zonegroup—finally found the solution right here on the forums...
  2. herzkerl

    Ceph Dashboard only shows default pool usage under “Object” -> “Overview”

    Hello everyone, I’m running a Proxmox cluster with Ceph and noticed something odd in the web interface. Under “Object” -> “Overview”, the “Used capacity” metric appears to show only the data stored in the default pool, while ignoring other pools (including erasure-coded pools). It shows only...
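    A hedged way to cross-check this from the CLI, assuming shell access to a node in the cluster: `ceph df detail` reports usage per pool, including erasure-coded pools, which can be compared against the dashboard's "Used capacity" figure.

    ```shell
    # Cluster-wide and per-pool usage; the STORED/USED columns cover
    # every pool, including erasure-coded ones.
    ceph df detail

    # If the discrepancy concerns RGW object storage specifically,
    # per-bucket statistics can help narrow it down.
    radosgw-admin bucket stats
    ```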
  3. herzkerl

    snapshot needed by replication - run replication first

    Hi everyone, I’m running a Proxmox VE cluster (PVE 8.x) with three nodes and have a daily ZFS replication job configured. One of our VMs has a snapshot named Test (created on 2024-11-18) that I would like to delete, but I’m receiving the following error when trying to remove it: TASK ERROR...
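    This error usually means the snapshot is still referenced as the current replication base. A hedged sketch of the usual resolution, assuming shell access on the node; the job ID 100-0 and VM ID 100 are placeholders:

    ```shell
    # Show replication jobs and their state.
    pvesr status

    # Trigger the job manually so it re-syncs and moves on from the
    # old base snapshot (100-0 is a placeholder job ID).
    pvesr schedule-now 100-0

    # Afterwards the stale snapshot can normally be removed.
    qm delsnapshot 100 Test
    ```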
  4. herzkerl

    Cloud-init re-enables IPv6 despite sysctl disabled setting

    Hi everyone, I’m currently facing an issue with disabling IPv6 on a VM using Cloud-init. For this particular machine, I didn’t configure any IPv6 settings in Cloud-init (left it as static but empty). Despite having net.ipv6.conf.all.disable_ipv6 = 1 set in sysctl.conf, the VM still gets an IPv6...
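    One common cause: cloud-init regenerates the guest's network configuration on every boot and can bring IPv6 back up regardless of an entry in sysctl.conf. A hedged sketch for a Debian/Ubuntu guest, pinning the sysctl in a drop-in and, optionally, telling cloud-init to leave networking alone:

    ```shell
    # Persist the setting in a drop-in that is applied at every boot.
    cat <<'EOF' > /etc/sysctl.d/90-disable-ipv6.conf
    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    EOF
    sysctl --system

    # Optionally stop cloud-init from rewriting the network config.
    cat <<'EOF' > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
    network: {config: disabled}
    EOF
    ```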
  5. herzkerl

    ERROR: online migrate failure - Failed to complete storage migration: block job (mirror) error: drive-efidisk0: Input/output error (io-status: ok)

    It might have been a network issue after all: we set up a bond (LACP, hash policy layer3+4); after changing to a single-NIC config on the old host, the migration worked just fine. EDIT: It could also be due to the different sizes of the EFI image. When trying to move from local-zfs to Ceph I'm...
  6. herzkerl

    ERROR: online migrate failure - Failed to complete storage migration: block job (mirror) error: drive-efidisk0: Input/output error (io-status: ok)

    I've been giving "remote migration" a try for the first time today, moving machines live from a single host running ZFS to a new cluster running Ceph. It worked tremendously well, without issues and on the first try, for all VMs but one, which always fails with the following errors. I tried...
  7. herzkerl

    [SOLVED] Ceph Object Gateway (RadosGW), multi-part uploads and ETag

    It's been a while, but after the recent update to ceph version 18.2.4 this issue seems to have been fixed. Finally!
  8. herzkerl

    WARN: running QEMU version does not support backup fleecing - continuing without

    I've enabled backup fleecing for one of our hosts, but keep getting this error message. Unfortunately, I can't find anything about the minimum QEMU version needed. Also, when I edit the machine settings and set the version to latest, it keeps resetting to 5.1 for some Windows VMs.
  9. herzkerl

    Proxmox VE 8.2 released!

    Updated yesterday without any issues.
  10. herzkerl

    noVNC not working behind HAproxy

    I put our three nodes behind HAproxy using round-robin. When accessing via HAproxy, most of the time, noVNC doesn't work and says "failed to connect to server"—sometimes it works, though. Accessing noVNC via any individual host's IP works just fine. Not sure how to fix that, looking forward to...
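    A likely explanation: the noVNC console ticket is issued by one node, and the follow-up websocket connection must reach that same node, so round-robin balancing breaks it intermittently. A hedged haproxy.cfg fragment with source-IP stickiness; node names and addresses are placeholders:

    ```
    backend pve_gui
        # Keep each client on one node, so the noVNC websocket hits
        # the node that issued the console ticket.
        balance source
        server pve1 192.0.2.11:8006 check ssl verify none
        server pve2 192.0.2.12:8006 check ssl verify none
        server pve3 192.0.2.13:8006 check ssl verify none
    ```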
  11. herzkerl

    [SOLVED] Ceph Object Gateway (RadosGW), multi-part uploads and ETag

    We'd been having issues with a particular (Synology) Hyper Backup for quite some time. Synology support analyzed the log files and told us that there's no ETag returned after a multi-part upload: 2023-11-21T13:20:55+01:00 NAS img_worker[20149]: (20149) [err] transfer_s3.cpp:355 ETag is empty...
  12. herzkerl

    [SOLVED] Ceph block.db on SSD mirror (md-raid, LVM mirror, ZFS mirror)

    I'll leave the link to my Reddit post here – I had already answered the question about the point of all this over there. Some interesting approaches were described there as well, but in the end it was all too complex and, above all, too error-prone for me. On top of that, during the rebuild I also...
  13. herzkerl

    Erasure code and min/max in Proxmox

    4+2, and then changed the rule to spread the data across two OSDs on each of the three nodes, following a few blog posts like this one. Probably not enough. I don't think I've looked at your documentation regarding EC pools, but I've read quite a bit through Ceph's. Thank you for pointing me to...
  14. herzkerl

    Erasure code and min/max in Proxmox

    I set up an erasure coded pool via Ceph Dashboard, and changed the rule later by manually editing the CRUSH map: rule erasurecode { id 6 type erasure step set_chooseleaf_tries 5 step set_choose_tries 100 step take default class hdd-cached step choose indep 3 type host...
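    For reference, a hedged reconstruction of what a complete rule of this shape looks like for a 4+2 profile across three hosts with two OSDs each (the device class hdd-cached is taken from the excerpt; verify against your own CRUSH map):

    ```
    rule erasurecode {
        id 6
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd-cached
        # pick 3 hosts, then 2 OSDs on each: 6 shards for k=4, m=2
        step choose indep 3 type host
        step chooseleaf indep 2 type osd
        step emit
    }
    ```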
  15. herzkerl

    Ceph block.db on SSD mirror (md-raid, lvm mirror or ZFS mirror)

    I want to set up a mirror consisting of two SSDs to use as a DB/WAL device for several (HDD) OSDs. The system doesn't have a hardware RAID controller, so basically there are three options (I guess): md-raid or ZFS mirror (but /dev/zd0 or /dev/md127 are not accepted by pveceph create) or LVM...
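    A hedged sketch of the LVM route, which avoids the rejected /dev/zd0 and /dev/md127 device nodes; device names and the DB size are placeholders, and it assumes pveceph accepts an LV path for --db_dev:

    ```shell
    # Mirror the two SSDs with LVM RAID1 (placeholders /dev/sda, /dev/sdb).
    pvcreate /dev/sda /dev/sdb
    vgcreate ceph-db /dev/sda /dev/sdb
    lvcreate --type raid1 -m 1 -L 120G -n db-osd0 ceph-db

    # Hand the mirrored LV to pveceph as the block.db device
    # (assumption: --db_dev takes an LV path here).
    pveceph osd create /dev/sdc --db_dev /dev/ceph-db/db-osd0
    ```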
  16. herzkerl

    [SOLVED] Ceph block.db on SSD mirror (md-raid, LVM mirror, ZFS mirror)

    I want to create a mirror/RAID1 from two SSDs and then put the block.db of several (HDD) OSDs on it. There's (of course) no hardware RAID, so I see three options here: ZFS mirror – but /dev/zd0 is not accepted by pveceph create; MD mirror, but...
  17. herzkerl

    Query wearout via console?

    Thanks for the detailed replies! How do you handle this with your hosts: do you go by the wearout value in Proxmox or the SMART data, or do you stick strictly to the TBW? Or do you only replace disks once they've actually failed?
  18. herzkerl

    Query wearout via console?

    Don't you feel silly insinuating that I can't correctly read a number in a UI? I don't understand your 'tone' at all... Should I really post a picture here showing 77%? One should be able to manage that without a screenshot – I have...