ceph

  1. Ceph - VM with high IO wait

    Hello everyone, I have spent a lot of time trying to figure out what is causing this IO wait. On this cluster, VMs that do a high amount of IO show a lot of IO wait (around 30k read I/O, 50% IO wait). Summary of my setup: PVE proxmox-ve: 9.0.0 (running kernel: 6.17.2-2-pve) pve-manager: 9.0.18...
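A useful first step for IO wait like this is separating device latency from Ceph latency; two stock diagnostics, with no cluster-specific assumptions:

```shell
# Per-OSD commit/apply latency in milliseconds -- one slow OSD
# drags down every VM whose PGs land on it
ceph osd perf

# Per-device utilisation and await on the node hosting the busy VM
iostat -x 1
```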
  2. OSD Segmentation faults (safe_timer)

    Since the last minor upgrade at the end of January, we have had an OSD crashing every few days. The OSDs recover by themselves. The journalctl output looks very similar every time. It started a few days after the last minor update, so my assumption is that it may be related. Do you have any ideas or tips on which information/logs...
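For a crash that recurs every few days, the crash module plus the OSD's journal are usually the most useful things to attach to a report; a sketch, assuming the affected daemon is osd.3 (substitute your own ID):

```shell
# Crashes Ceph itself has recorded, with full backtraces
ceph crash ls
ceph crash info <crash-id>   # use an ID from the list above

# Journal for the affected OSD around the crash window
journalctl -u ceph-osd@3 --since "-2 days"
```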
  3. [SOLVED] Adding a new separated pool to existing Ceph

    I'm playing with a cluster composed of 3 machines with 3 HDD OSDs per machine (9 OSDs in total). It is a test environment for learning, but I don't want to destroy it with this "expansion". I have a VM with an application that does not tolerate the slow HDD performance, and since I have 3 SSD...
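The usual way to keep an SSD-only pool separate from existing HDD pools is a CRUSH rule keyed on device class, which leaves the HDD OSDs untouched; a minimal sketch (rule, pool name and PG counts are examples):

```shell
# The new OSDs should show CLASS "ssd" here before going further
ceph osd tree

# Replicated rule that only selects OSDs of class "ssd", one per host
ceph osd crush rule create-replicated replicated-ssd default host ssd

# New pool bound to that rule; existing HDD pools are unaffected
ceph osd pool create ssd-pool 32 32 replicated replicated-ssd
```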
  4. Ceph rbd du shows usage 2-4x higher than inside VM

    I've noticed VMs that show much higher usage via rbd du than inside the VM, for example: NAME PROVISIONED USED vm-119-disk-0 500 GiB 413 GiB vm-122-disk-0 140 GiB 131 GiB Inside the VMs, df shows 95G and 63G of used space, respectively. Both of these are Debian 12, which has...
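A gap like this is commonly unreturned discards: blocks deleted inside the guest stay allocated in the RBD image until the guest issues TRIM and the virtual disk has discard enabled. A sketch (the pool name is a placeholder):

```shell
# Inside the guest: hand freed blocks back to Ceph
# (requires "Discard" enabled on the disk in the VM's hardware settings)
fstrim -av

# On a Proxmox node: re-check the actual allocation afterwards
rbd du <pool>/vm-119-disk-0
```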
  5. How to precisely check the actual disk usage of Ceph RBD?

    Hi everyone, I've run into a serious issue while managing a Proxmox VE Ceph environment. A user created a lot of VMs and ended up filling the entire Ceph cluster. The problem is that when I look at the RBD storage in the WebUI, I can only see the "Provisioned Size" of each disk. I can't tell which...
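On the CLI the actual per-image allocation is visible even though the WebUI only shows provisioned sizes; a sketch (the pool name is a placeholder):

```shell
# PROVISIONED vs USED for every image in the pool
rbd du -p <pool>

# Pool-level totals, raw vs stored usage
ceph df detail
```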
  6. Storage for small clusters, any good solutions?

    Hi there, the title may be a bit misleading: I know there are good solutions that work for many, but my workplace and I face a bit of a dilemma. I know I'm opening this can of worms again, and this is also partly me venting some frustration, so I'm sorry about that. We want to use...
  7. /etc/init.d/ceph warnings

    I am on the latest Proxmox 9.1.5 with Ceph 19.2.3 Squid. These warnings are bothering me a lot and I am scared to touch anything related to Ceph. systemd-sysv-generator: SysV service '/etc/init.d/ceph' lacks a native systemd unit file… Please update package to include a native systemd unit file...
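The warning itself is cosmetic as long as the native units are what actually runs Ceph, which is easy to confirm before touching anything:

```shell
# Ceph on PVE is driven by native systemd units, not the SysV script
systemctl status ceph.target
systemctl list-units 'ceph-mon@*' 'ceph-mgr@*' 'ceph-osd@*'
```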
  8. Rook External Ceph connectivity from VMs in EVPN overlay network

    Environment: Proxmox VE cluster with 2 nodes (node94: 10.129.56.94, node107: 10.129.56.107); Ceph cluster running on the Proxmox nodes (public_network: 10.129.56.0/24); Proxmox SDN EVPN zone (madp) for VM networking; VMs are on the EVPN overlay network 172.16.0.0/16. Goal: Configure Rook External...
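Before involving Rook, it is worth confirming the overlay can reach the monitors at all; a sketch run from a VM on the 172.16.0.0/16 network, against the standard monitor ports:

```shell
# msgr2 and legacy msgr1 monitor ports on one of the Ceph nodes
nc -zv 10.129.56.94 3300
nc -zv 10.129.56.94 6789
```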
  9. Ceph cache tier alternative

    Hi! I’m planning to migrate from TrueNAS to Ceph because I want node-based redundancy. I was planning to use HDDs with an SSD cache tier on top to boost performance, since I work with heavy file sequences and want to saturate my 10 Gbit network. However, I found that the cache tier is...
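With cache tiering deprecated, the common substitute is putting each HDD OSD's RocksDB/WAL on flash at OSD-creation time; a sketch with placeholder device names:

```shell
# HDD carries the data; an SSD/NVMe partition carries BlueStore's DB and WAL
ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1
```

On Proxmox, the equivalent is selecting a separate DB disk when creating the OSD in the GUI or with pveceph.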
  10. proxmox ceph performance with consumer grade samsung ssd

    Hello all, I have a 3-node Proxmox cluster with Ceph. Each node has 2x 4TB Samsung 870 QVO SSDs. I have noticed my VMs being really slow, and I was wondering how much of that is because of the SSDs. I have checked my network and everything else. I'm here just to confirm whether what the AI assistant is...
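A quick way to quantify the SSDs' share of the problem is to measure synchronous write latency directly, since that is what Ceph's write path stresses; a sketch using fio (the device name is a placeholder, and the test destroys the device's contents):

```shell
# 4k synchronous random writes at queue depth 1 -- the Ceph worst case.
# WARNING: this writes to the raw device and destroys its data.
fio --name=synctest --filename=/dev/sdX --rw=randwrite --bs=4k \
    --iodepth=1 --numjobs=1 --fsync=1 --direct=1 \
    --runtime=60 --time_based
```

Enterprise SSDs with power-loss protection typically sustain high sync-write IOPS here; QLC consumer drives like the 870 QVO often collapse to a small fraction of that once their SLC cache is exhausted.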
  11. CEPH 17.2.8 BluestoreDB bug

    Hello everyone, I need your support. We recently updated our cluster to PVE 8.4.16 and CEPH 17.2.8, and only after the update did we read an article saying we urgently need to upgrade off this version because there's a critical error in BlueStore. Can you tell me if the error is...
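Whether the cluster is actually running an affected build is visible from the daemons themselves, which is safer than guessing from package versions:

```shell
# Version actually running in every mon/mgr/osd
ceph versions

# Any BLUESTORE_* warnings the cluster has already raised
ceph health detail
```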
  12. Changing RAM for PVE not possible (cluster, CEPH, HA)

    Good morning, I am running an HA cluster with CEPH storage here. About 30 VEs (Linux, Windows) are spread across 3 nodes. Hardware: CPUs, RAM and storage are amply available. For some time now I have been unable to change the RAM size, even with the VE powered off. Neither...
  13. Ceph migration from bond (LACP) interfaces to OpenFabric

    Hello, I would like to convert my 3-node cluster, each node with 2 x 25Gb (bond0 LACP) connected to a switch, to a full mesh over direct connections. Since I have no free NICs left, as I understand it this is not quite so simple... or is it after all? Hence my question: is it possible to run an OpenFabric...
  14. VM disk is readonly after ERROR: "qmp command 'backup' failed - got timeout"

    For the past few weeks, I’ve been experiencing a recurring issue with one of our VMs during backup: - The backup fails on a specific VM, and afterward, the filesystem on the VM switches to read-only mode (screenshot available). - The CPU usage then spikes to 100% until I force a shutdown. -...
  15. Slow ceph operation

    Hi, I have a 3-node cluster with 2 x Kioxia 1.92TB SAS enterprise SSDs. Disk operations on the VMs are very slow. The network config is as follows: each node has 1 x 1Gb NIC for the Ceph public network (the same network as Proxmox itself) and 1 x 10Gb NIC for the Ceph cluster network. I'm no expert in Ceph and...
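One likely culprit: all client (VM) traffic goes over the public network, so a 1 Gb public NIC caps every VM's disk I/O at roughly 110 MB/s regardless of the 10 Gb cluster link, which only carries replication. The relevant fragment of /etc/pve/ceph.conf (subnets are examples):

```ini
[global]
    # VM/client traffic -- this is the network that needs the fast NIC
    public_network = 10.10.10.0/24
    # OSD replication/recovery traffic
    cluster_network = 10.10.20.0/24
```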
  16. Experiencing slow OSDs after upgrading Ceph version 18 to 19 in Proxmox v8.4

    Since we upgraded our production Proxmox from version 8.2.x to 8.4.14 and Ceph from version 18.2.1 to 19.2.3, we have been observing slow OSDs since day 2 of the upgrade. We have a daily backup of our production VMs from PVE to PBS running from 21:00 to ~04:00. When it's time for the database VM backup (time...
  17. Ceph installer GUI question

    Hi guys! Newbie here, so please be gentle :) In the Ceph installer (Proxmox VE 9, 3-node cluster), when I have to choose the "ceph-public" and "ceph-cluster" networks, it only lets me choose the local IPs of my configured NICs (I have installed dedicated NICs for ceph-public and ceph-cluster)...
  18. Creation of LXC via API (CEPH) without precreating vm storage

    I want to create an LXC or VM via the API; currently I'm working on LXC. But first of all, I noticed the API doc is plain... bad, IMO. Anyway: I need to add a rootfs for the LXC, because of course I do. And I am using CEPH for my cluster. So I want to create, for example, a disk of 1G on the Ceph storage CL1. Can I do...
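For the disk-allocation part specifically: in PVE you do not pre-create the volume. Passing "storage:size" in --rootfs tells the API to allocate a fresh volume of that many GiB during creation; a sketch via pvesh, where the node, VMID, hostname and template are placeholders:

```shell
# Allocates a new 1 GiB rootfs volume on Ceph storage CL1 as part of creation
pvesh create /nodes/<node>/lxc \
    --vmid 200 \
    --ostemplate local:vztmpl/<template>.tar.zst \
    --rootfs CL1:1 \
    --hostname api-test
```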
  19. PVE 9.1 with Kernel 6.17 - Unstable?

    Dear community, I've experienced many issues with PVE 9.1 and the new kernel 6.17. I'm unable to name them all exactly, but there were I/O hangs, kernel stack traces and so on. 6.17.2-1-pve was the worst; it got a bit better with 6.17.2-2-pve. Yesterday I had I/O timeouts while the PBS backup was...
  20. Ceph RBD Image Usage After Creating Snapshot

    In Ceph Squid, I created a 200GB RBD image in the Ceph Mgr dashboard, mounted it, and stored 10GB of data; the dashboard then showed the usage as 5%. Then I created a snapshot of that RBD image, and now the dashboard shows the RBD image usage as 0% even though there...
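The drop to 0% reflects how RBD accounts for snapshots: once a snapshot exists, data written before it is charged to the snapshot, and the head only shows what changed afterwards. rbd du makes this visible per snapshot (pool and image names are placeholders):

```shell
# One line per snapshot plus one for the head; USED adds up across them
rbd du <pool>/<image>

# fast-diff should be enabled for accurate usage numbers
rbd info <pool>/<image> | grep features
```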