Recent content by herzkerl

  1. herzkerl

    [SOLVED] Ceph Object RGW

    I've been having the same 500 - Internal Server Error for realm/zone/zonegroup—finally found the solution right here on the forums...
  2. herzkerl

    Ceph Dashboard only shows default pool usage under “Object” -> “Overview”

    Hello everyone, I’m running a Proxmox cluster with Ceph and noticed something odd in the web interface. Under “Object” -> “Overview”, the “Used capacity” metric appears to show only the data stored in the default pool, while ignoring other pools (including erasure-coded pools). It shows only...
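
    As a minimal sketch (nothing here is taken from the post above), the dashboard figures can be cross-checked on the CLI from any node with a Ceph admin keyring:

        # per-pool STORED/USED figures, including erasure-coded pools
        ceph df detail
        # pool definitions: replicated vs. EC profile, application tags
        ceph osd pool ls detail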
  3. herzkerl

    snapshot needed by replication - run replication first

    Hi everyone, I’m running a Proxmox VE cluster (PVE 8.x) with three nodes and have a daily ZFS replication job configured. One of our VMs has a snapshot named Test (created on 2024-11-18) that I would like to delete, but I’m receiving the following error when trying to remove it: TASK ERROR...
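
    A minimal sketch of the sequence the error text itself suggests; the job ID 100-0 and VM ID 100 are placeholders (the snapshot name Test is the one mentioned above):

        pvesr list                        # list configured replication jobs
        pvesr run --id 100-0 --verbose    # trigger the replication job manually, as the error suggests
        qm delsnapshot 100 Test           # then retry deleting the snapshot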
  4. herzkerl

    Cloud-init re-enables IPv6 despite sysctl disabled setting

    Hi everyone, I’m currently facing an issue with disabling IPv6 on a VM using Cloud-init. For this particular machine, I didn’t configure any IPv6 settings in Cloud-init (left it as static but empty). Despite having net.ipv6.conf.all.disable_ipv6 = 1 set in sysctl.conf, the VM still gets an IPv6...
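
    For context, a minimal sketch of how this setting is usually persisted inside the guest (the file name is just an example): the 'all' key applies to interfaces that already exist, while 'default' applies to interfaces brought up later.

        # /etc/sysctl.d/99-disable-ipv6.conf
        net.ipv6.conf.all.disable_ipv6 = 1
        net.ipv6.conf.default.disable_ipv6 = 1

        # reload all sysctl configuration files without rebooting
        sysctl --system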
  5. herzkerl

    ERROR: online migrate failure - Failed to complete storage migration: block job (mirror) error: drive-efidisk0: Input/output error (io-status: ok)

    It might have been a network issue after all: we set up a bond (LACP, hash policy layer3+4), and after changing to a single NIC config on the old host the migration worked just fine. EDIT: It could also be due to the different sizes of the EFI image. When trying to move from local-zfs to Ceph I'm...
  6. herzkerl

    ERROR: online migrate failure - Failed to complete storage migration: block job (mirror) error: drive-efidisk0: Input/output error (io-status: ok)

    I've been giving "remote migration" a try for the first time today, moving machines live from a single host running ZFS to a new cluster running Ceph. It worked tremendously well, without issues and on the first try, for all VMs but one, which always fails with the following errors. I tried...
  7. herzkerl

    [SOLVED] Ceph Object Gateway (RadosGW), multi-part uploads and ETag

    It's been a while, but after the recent update to Ceph version 18.2.4 this issue seems to have been fixed. Finally!
  8. herzkerl

    WARN: running QEMU version does not support backup fleecing - continuing without

    I've enabled backup fleecing for one of our hosts, but I keep getting this warning. Unfortunately I can't find anything about the minimum QEMU version needed. Also, when I edit the machine settings and set the version to 'latest', it keeps resetting to 5.1 for some Windows VMs.
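
    Two illustrative commands that may help here (VM ID and storage names are placeholders; the --fleecing option format is the one documented for PVE 8.2 and may differ elsewhere): the first shows which QEMU binary version a running VM was actually started with, the second sets fleecing for a single vzdump run.

        # QEMU version the guest is currently running on
        qm status 101 --verbose | grep running-qemu
        # one-off backup with the fleecing image placed on local-zfs
        vzdump 101 --storage backup-store --fleecing enabled=1,storage=local-zfs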
  9. herzkerl

    Proxmox VE 8.2 released!

    Updated yesterday without any issues.
  10. herzkerl

    noVNC not working behind HAproxy

    I put our three nodes behind HAproxy using round-robin. When accessing via HAproxy, most of the time, noVNC doesn't work and says "failed to connect to server"—sometimes it works, though. Accessing noVNC via any individual host's IP works just fine. Not sure how to fix that, looking forward to...
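
    For context, a minimal haproxy.cfg sketch of the setup described (three PVE nodes, round-robin); addresses, names and the certificate path are placeholders, not taken from the post:

        frontend pve_gui
            mode http
            bind :8006 ssl crt /etc/haproxy/pve.pem
            default_backend pve_nodes

        backend pve_nodes
            mode http
            balance roundrobin
            server pve1 192.0.2.11:8006 ssl verify none check
            server pve2 192.0.2.12:8006 ssl verify none check
            server pve3 192.0.2.13:8006 ssl verify none check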
  11. herzkerl

    [SOLVED] Ceph Object Gateway (RadosGW), multi-part uploads and ETag

    We'd been having issues with a particular (Synology) Hyper Backup for quite some time. Synology support analyzed the log files and told us that there's no ETag returned after a multi-part upload: 2023-11-21T13:20:55+01:00 NAS img_worker[20149]: (20149) [err] transfer_s3.cpp:355 ETag is empty...
  12. herzkerl

    [SOLVED] Ceph block.db on an SSD mirror (md-raid, LVM mirror, ZFS mirror)

    I'll leave the link to my post on Reddit here – I had already answered the question about the point of it all there: Some interesting approaches were also described there, but in the end it was all too complex for me and, above all, error-prone. On top of that, during the rebuild I also...
  13. herzkerl

    Erasure code and min/max in Proxmox

    4+2, and then changed the rule to spread the data to two OSDs on each of the three nodes, according to a few blog posts like this one. Probably not enough. I don't think I've looked at your documentation regarding EC pools, but I've read quite a bit through Ceph's. Thank you for pointing me to...
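
    For reference, spreading a 4+2 profile as two chunks per host across three hosts is commonly expressed with a CRUSH rule along these lines; this is only a sketch (rule name and id are placeholders), not necessarily the rule from the blog post:

        # pick 3 hosts, then 2 OSDs inside each host -> 6 placements = k+m
        rule ec42_two_per_host {
            id 42
            type erasure
            step set_chooseleaf_tries 5
            step set_choose_tries 100
            step take default
            step choose indep 3 type host
            step chooseleaf indep 2 type osd
            step emit
        }

    With that layout a single failed host already removes two of the six chunks, i.e. all of the m=2 redundancy, and with a min_size of k+1 (the usual default) the pool would pause I/O in that state, which is presumably what "probably not enough" refers to.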