Search results

  1. herzkerl

    Ceph Dashboard only shows default pool usage under “Object” -> “Overview”

    Hello everyone, I’m running a Proxmox cluster with Ceph and noticed something odd in the web interface. Under “Object” -> “Overview”, the “Used capacity” metric appears to show only the data stored in the default pool, while ignoring other pools (including erasure-coded pools). It shows only...
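    One way to cross-check what the dashboard shows is to read the per-pool usage directly through librados. A minimal Python sketch, assuming python3-rados is installed on a cluster node and that /etc/ceph/ceph.conf plus a usable keyring are in place (both paths are assumptions):

        import rados  # python3-rados, the librados binding shipped with Ceph

        # Connect using the local cluster configuration (path is an assumption).
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        try:
            total = 0
            for pool in cluster.list_pools():
                ioctx = cluster.open_ioctx(pool)
                stats = ioctx.get_stats()  # per-pool stats, e.g. 'num_bytes', 'num_objects'
                ioctx.close()
                print(f"{pool}: {stats['num_bytes'] / 2**30:.1f} GiB, {stats['num_objects']} objects")
                total += stats['num_bytes']
            print(f"all pools combined: {total / 2**40:.2f} TiB")
        finally:
            cluster.shutdown()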
  2. herzkerl

    snapshot needed by replication - run replication first

    Hi everyone, I’m running a Proxmox VE cluster (PVE 8.x) with three nodes and have a daily ZFS replication job configured. One of our VMs has a snapshot named Test (created on 2024-11-18) that I would like to delete, but I’m receiving the following error when trying to remove it: TASK ERROR...
  3. herzkerl

    Cloud-init re-enables IPv6 despite sysctl disabled setting

    Hi everyone, I’m currently facing an issue with disabling IPv6 on a VM using Cloud-init. For this particular machine, I didn’t configure any IPv6 settings in Cloud-init (left it as static but empty). Despite having net.ipv6.conf.all.disable_ipv6 = 1 set in sysctl.conf, the VM still gets an IPv6...
  4. herzkerl

    ERROR: online migrate failure - Failed to complete storage migration: block job (mirror) error: drive-efidisk0: Input/output error (io-status: ok)

    I've been giving "remote migration" a try for the first time today, moving machines live from a single host running ZFS to a new cluster running Ceph. It worked tremendously well—without issues and on the first try—for all VMs but one, which always fails with the following errors. I tried...
  5. herzkerl

    WARN: running QEMU version does not support backup fleecing - continuing without

    I've enabled backup fleecing for one of our hosts, but keep getting this warning. Unfortunately, I can't find anything about the minimum QEMU version needed. Also, when I edit the machine settings and set the version to latest, it keeps resetting to 5.1 for some Windows VMs.
  6. herzkerl

    noVNC not working behind HAproxy

    I put our three nodes behind HAproxy using round-robin. When accessing via HAproxy, most of the time, noVNC doesn't work and says "failed to connect to server"—sometimes it works, though. Accessing noVNC via any individual host's IP works just fine. Not sure how to fix that, looking forward to...
  7. herzkerl

    [SOLVED] Ceph Object Gateway (RadosGW), multi-part uploads and ETag

    We'd been having issues with a particular (Synology) Hyper Backup for quite some time. Synology support analyzed the log files and told us that there's no ETag returned after a multi-part upload: 2023-11-21T13:20:55+01:00 NAS img_worker[20149]: (20149) [err] transfer_s3.cpp:355 ETag is empty...
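    The behaviour is easy to probe outside Hyper Backup with a small boto3 script that runs a multi-part upload against the gateway and prints the ETag returned by CompleteMultipartUpload. A sketch with placeholder endpoint, bucket and credentials (none of these values come from the thread):

        import boto3  # generic S3 client, works against RadosGW's S3 API

        s3 = boto3.client(
            "s3",
            endpoint_url="https://rgw.example.com",   # placeholder RadosGW endpoint
            aws_access_key_id="ACCESS_KEY",           # placeholder credentials
            aws_secret_access_key="SECRET_KEY",
        )

        bucket, key = "test-bucket", "multipart-etag-test"  # placeholder names
        part_size = 8 * 1024 * 1024                         # 8 MiB, above the 5 MiB S3 minimum

        mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
        parts = []
        for number in (1, 2):
            resp = s3.upload_part(
                Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
                PartNumber=number, Body=b"x" * part_size,
            )
            parts.append({"PartNumber": number, "ETag": resp["ETag"]})

        done = s3.complete_multipart_upload(
            Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
            MultipartUpload={"Parts": parts},
        )
        # An empty or missing ETag here would match the error Hyper Backup logged.
        print("ETag after CompleteMultipartUpload:", done.get("ETag"))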
  8. herzkerl

    Erasure code and min/max in Proxmox

    I set up an erasure-coded pool via the Ceph Dashboard and changed the rule later by manually editing the CRUSH map: rule erasurecode { id 6 type erasure step set_chooseleaf_tries 5 step set_choose_tries 100 step take default class hdd-cached step choose indep 3 type host...
  9. herzkerl

    Ceph block.db on SSD mirror (md-raid, lvm mirror or ZFS mirror)

    I want to set up a mirror consisting of two SSDs, to use that as a DB/WAL device for several (HDD) OSDs. The system doesn't have a hardware RAID controller, so basically there are three options (I guess): md raid or a ZFS mirror — but neither /dev/zd0 nor /dev/md127 is accepted by pveceph create — or LVM...
  10. herzkerl

    [SOLVED] Ceph block.db on SSD mirror (md-raid, lvm mirror, ZFS mirror)

    I'd like to create a mirror/RAID1 out of two SSDs and then put the block.db of several (HDD) OSDs on it. There is (of course) no hardware RAID, so I see three options here: a ZFS mirror – however, /dev/zd0 is not accepted by pveceph create; an MD mirror, however...
  11. herzkerl

    Migrations to one node very slow

    I'm sometimes seeing migrations slow down for some reason. This time I turned on maintenance mode on one node—the first few VMs migrated within minutes (avg. speed around 130 MiB/s), but there are other migrations that hover around 50-200 KiB/s. Any ideas what could be the culprit here? The...
  12. herzkerl

    sync group failed - connection closed because of a broken pipe

    For a while now we've had issues syncing some backups to our offsite PBS. Upgrading to v3 didn't change that, unfortunately. The task would fail with the following message: sync group vm/208 failed - connection closed because of a broken pipe. That only happens for about 50% of...
  13. herzkerl

    Ceph OSD block.db on NVMe / Sizing recommendations and usage

    Dear community, the HDD pool on our 3-node Ceph cluster was quite slow, so we recreated the OSDs with block.db on NVMe drives (enterprise Samsung PM983/PM9A3). The sizing recommendations in the Ceph documentation suggest 4% to 6% of the 'block' size: block.db is either 3.43% or around 6%...
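    The 4-6% guidance translates into concrete numbers quickly; a rough sizing helper in Python, using example OSD sizes that are assumptions rather than the cluster's actual disks:

        # block.db sizing at 4-6 % of the data device, per the Ceph docs' guidance.
        TiB = 1024**4
        GiB = 1024**3

        for hdd_tib in (8, 12, 16):          # example OSD sizes, not from the thread
            for pct in (0.04, 0.06):
                db = hdd_tib * TiB * pct
                print(f"{hdd_tib} TiB OSD at {pct:.0%} -> block.db ≈ {db / GiB:.0f} GiB")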
  14. herzkerl

    Actual size of a VM disk on Ceph

    Hello everyone, how can I determine the actual size of a VM disk that lives on Ceph? In 108.conf the virtual disk is created with 25 TB and discard=on: scsi1: ceph-hdd-size-2:vm-108-disk-0,backup=0,discard=on,iothread=1,size=25480G I can also see the 25 TB here...
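    One way to answer this is to ask Ceph for the allocated size of the RBD image behind that disk. A sketch that shells out to rbd du with JSON output; the pool is assumed to match the storage name from the scsi1 line, and the JSON field names may vary between Ceph releases:

        import json
        import subprocess

        # Image behind scsi1 in 108.conf: storage "ceph-hdd-size-2", disk "vm-108-disk-0".
        pool, image = "ceph-hdd-size-2", "vm-108-disk-0"

        out = subprocess.run(
            ["rbd", "du", f"{pool}/{image}", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout

        data = json.loads(out)
        for img in data.get("images", []):
            used = img.get("used_size", 0)           # allocated extents (what is really used)
            prov = img.get("provisioned_size", 0)    # the 25480G the VM sees
            print(f"{img.get('name')}: {used / 2**40:.2f} TiB used of {prov / 2**40:.2f} TiB provisioned")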
  15. herzkerl

    Feature Request: Ceph Object Gateway Support (via UI)

    I'd love to use S3 via Ceph! Not having to set up another server for S3-compatible storage (e.g. MinIO) would be a nice benefit. I know there's some unofficial documentation (mainly blog posts that are a few years old)—but having that right within the Proxmox web interface/CLI would be...
  16. herzkerl

    sync group failed - authentication failed - invalid ticket - expired

    Hello everyone, unfortunately I'm now also getting the error mentioned in the title on one of our sync jobs (we currently sync from three customers). Log below. There are several threads about this error in the forum, but most of them refer to a bug that was fixed in 2020, and others remained...
  17. herzkerl

    Exchange log files

    Is there any way to run a post-backup command/script within a Windows Server running Exchange to truncate the log files after a successful backup?
  18. herzkerl

    ZFS scrubbing on a virtualized PBS

    How sensible is it to also run ZFS scrubbing inside the VM of a virtualized PBS (on Proxmox VE – in our case backed by Ceph, at our customers' sites by ZFS)? Everything should already be covered from the "outside", so those resources could be saved – or am I mistaken?
  19. herzkerl

    [SOLVED] Windows Server 2019 boot environment and virtIO

    I just had a Windows Server 2019 reboot into "automatic startup repair", which it couldn't perform because it wasn't able to find the drive(s). I then changed the disk type to scsi and repaired the disk manually—CLI, chkdsk /f C:—which did work. Changed the disk back to virtIO and Windows was...
  20. herzkerl

    Backing up virtual disks at different frequencies

    Hello everyone, for a customer's file server VM we have so far been running two backups: • an image-based backup of the entire VM at 6:00 and 18:00 daily – previously with Active Backup for Business, now with Proxmox Backup Server; and • a file-based backup of the files under D:\Daten...