ceph

  1. T

    Ceph installer GUI question

    Hi guys! Newbie here, so please be gentle :) In the Ceph installer (Proxmox VE 9, 3-node cluster), when I have to choose the "ceph-public" and "ceph-cluster" networks, it only lets me choose the local IPs of my configured NICs (I have installed dedicated NICs for ceph-public and ceph-cluster)...
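    Picking a node-local IP here is expected: the wizard derives the subnet from the address you select and writes both CIDRs into /etc/pve/ceph.conf. A minimal sketch of the result, with hypothetical subnets:

        [global]
            public_network  = 10.0.10.0/24
            cluster_network = 10.0.20.0/24

    So as long as each dedicated NIC carries an IP in the right subnet, choosing that IP is the whole job.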
  2. Z

    Creation of LXC via API (Ceph) without pre-creating VM storage

    I want to create an LXC or VM via the API. Currently working on LXC. BUT I noticed first of all the API doc is plain... bad imo. Anyways. I NEED to add a rootfs for the LXC, because ofc I do?! And I am using Ceph for my cluster. So I want to create, for example, a disk of 1G on Ceph storage CL1. Can I do...
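    For reference, the rootfs volume does not have to be pre-created: passing <storage>:<size-in-GiB> makes the API allocate it on the fly. A hedged sketch, with hypothetical VMID, node, and template names (CL1 and the 1G size are from the post):

        # CLI equivalent of the API call
        pct create 101 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
            --rootfs CL1:1 --memory 512
        # the same through the REST API via pvesh
        pvesh create /nodes/pve1/lxc --vmid 101 \
            --ostemplate local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
            --rootfs CL1:1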
  3. B

    PVE 9.1 with Kernel 6.17 - Unstable?

    Dear Community, I've experienced many issues with PVE 9.1 and the new kernel 6.17. I can't name them all exactly, but there were I/O hangs, kernel stack traces, and so on. 6.17.2-1-pve was the worst; it got a bit better with 6.17.2-2-pve. Yesterday I had I/O timeouts while the PBS backup was...
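    A common stopgap while such regressions get sorted out is pinning the last known-good kernel; a sketch (the version string is hypothetical, check what the list command shows on your node):

        proxmox-boot-tool kernel list
        proxmox-boot-tool kernel pin 6.14.11-2-pve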
  4. A

    Ceph RBD Image Usage After Creating Snapshot

    On Ceph Squid, I created a 200GB RBD image in the Ceph Mgr Dashboard. I mounted that RBD image and stored 10GB of data, and the dashboard showed the usage as 5%. Then I created a snapshot of that RBD image, and now the dashboard shows the RBD image usage as 0% even though there...
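    The dashboard percentage is misleading once snapshots exist, because used space gets attributed to the snapshot rather than the image head. rbd du shows the per-snapshot breakdown; a sketch with hypothetical pool/image names and illustrative output:

        rbd du mypool/myimage
        # NAME           PROVISIONED  USED
        # myimage@snap1  200 GiB      10 GiB
        # myimage        200 GiB      0 B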
  5. T

    Ceph intermittent slow OSD

    I'm seeing crazy low performance in Ceph. I've traced it to OSDs sometimes being very slow... see below. In 3 tests, the first is fairly slow, then fast (as fast as I expect this OSD to be), and the third is crazy slow. root@pm3:~# ceph tell osd.3 bench { "bytes_written": 1073741824...
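    To see whether only some OSDs are intermittently slow, it can help to run the same bench across all of them and compare throughput; a small sketch:

        for id in $(ceph osd ls); do
            printf 'osd.%s: ' "$id"
            ceph tell "osd.$id" bench | grep bytes_per_sec
        done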
  6. B

    Proper OSD replacement procedure

    In our cluster there are currently three hosts with four 3.84TB RI OSDs each. I want to replace the four OSDs with 3.2TB MU SSDs, eventually adding a fifth OSD later on. Currently there is only around 2.2TB used per OSD, so this should work. Ceph version is 19.2.3-pve2...
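    For the record, the usual one-at-a-time replacement flow on PVE looks roughly like this (OSD id and device name hypothetical); wait for HEALTH_OK and a finished rebalance between disks:

        ceph osd out 3                     # drain; watch 'ceph -s' until rebalanced
        systemctl stop ceph-osd@3
        pveceph osd destroy 3 --cleanup    # remove the OSD and wipe the old disk
        pveceph osd create /dev/nvme2n1    # create the replacement OSD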
  7. P

    How to set up NAS

    Hello, this is my first post. I have 3 Dell OptiPlex 7060 Micro machines which I have upgraded; each of them has the following specifications: - 4TB M.2 NVMe - 4TB SATA SSD - 512GB M.2 NVMe Now what I want is to make a Ceph pool across the 3 machines using only the 512GB disks, so I...
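    Limiting a pool to just the 512GB disks is usually done with CRUSH device classes; a hedged sketch (class, rule, and pool names hypothetical):

        ceph osd crush rm-device-class osd.2           # clear the auto-assigned class
        ceph osd crush set-device-class small osd.2    # repeat for each 512GB OSD
        ceph osd crush rule create-replicated small-only default host small
        ceph osd pool set mypool crush_rule small-only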
  8. yboujraf

    Ceph storage hosted on Proxmox nodes shared with K3S cluster?

    Dear all, I am facing a choice about sharing the existing Ceph storage from the Proxmox cluster with a K3S cluster. Is it best practice to do that, or is a separation of concerns needed, with each cluster having its own storage? If Proxmox manages the Ceph storage, is that good governance? Some clarifications are welcome. Best Regards,
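    Whichever way the governance question is decided, the common pattern for sharing a Proxmox-managed Ceph cluster with Kubernetes (e.g. via ceph-csi) is a dedicated pool plus a capability-restricted client key, so K3S can never touch the PVE pools. A sketch with hypothetical names:

        ceph osd pool create k3s-rbd 64
        rbd pool init k3s-rbd
        ceph auth get-or-create client.k3s \
            mon 'profile rbd' osd 'profile rbd pool=k3s-rbd'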
  9. A

    MinIO is dead... What's next?

    Hey all, I'm the administrator for a bunch of servers here at the university (serving both students and researchers). We have quite a lot of data science / machine learning projects, and we store the data in locally hosted S3 buckets. For the last few years, I have just been spinning up a separate...
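    If the cluster already runs Ceph, the RADOS Gateway is the built-in S3 endpoint; note that pveceph does not manage RGW, so the gateway itself has to be set up by hand. Once it is running, creating an S3 user is a one-liner (uid and name hypothetical):

        radosgw-admin user create --uid=ml-projects --display-name="ML Projects"
        # prints the access_key / secret_key pair for the S3 clients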
  10. J

    Ceph OSD woes after NVMe hotplug

    We're in the process of validating a PVE cluster setup that will be deployed to prod some time in 2026, and for that purpose, we've spun up the MVC (Minimum Viable Cluster) that mimics, except in node count, what we're planning to have by then. As a result, we have three modern Dell boxen with...
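    After a hotplug, an LVM-backed OSD typically has to be re-activated before its systemd unit will start again; a hedged sketch:

        ceph-volume lvm list            # confirm the OSD is still known on the new device
        ceph-volume lvm activate --all  # recreate the tmpfs mounts and start the OSDs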
  11. F

    [SOLVED] Ceph Installation offline

    Hello everyone, I'm new to Proxmox and would like to set up Ceph, but my server doesn't have direct internet access. Has anyone done this before or have any tips on how to install Ceph offline? Any advice or experiences would be greatly appreciated! Thanks in advance and best regards,
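    One approach is mirroring the Ceph repository on an internet-connected machine and pointing apt at the copy; a rough sketch (the paths, the trixie suite, and the no-subscription component are assumptions, match them to your PVE release):

        # on a machine with internet access
        wget -r -np http://download.proxmox.com/debian/ceph-squid/
        # copy the mirror to the offline server, then there:
        echo 'deb [trusted=yes] file:/opt/mirror/debian/ceph-squid trixie no-subscription' \
            > /etc/apt/sources.list.d/ceph-offline.list
        apt update && pveceph install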
  12. A

    Proxmox with Ceph - Pool Quota definitions

    Hello everyone, I have the following issue: I would like to define quota thresholds for all pools defined within my Ceph cluster. These quota definitions work, but warnings and critical notices are not visible within the Proxmox Ceph dashboard tab. When following...
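    For reference, the quota side itself is plain Ceph and set per pool; it is only the surfacing of the resulting health warnings in the PVE tab that is in question. A sketch (pool name and limits hypothetical):

        ceph osd pool set-quota mypool max_bytes 107374182400    # 100 GiB
        ceph osd pool set-quota mypool max_objects 1000000
        ceph osd pool get-quota mypool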
  13. M

    Ceph - Reduced data availability: 3 pgs inactive

    A while back I had an event that caused my Ceph cluster to crash. By design I had backups of everything that mattered. However, I wanted to see if I could fix the cluster and maybe try to bring back some VMs that won't start due to the crash. I had a lot of `pgs inactive`, but managed to get...
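    The usual starting point for chasing inactive PGs is asking Ceph which ones are stuck and what they are waiting on (the PG id below is hypothetical):

        ceph health detail
        ceph pg dump_stuck inactive
        ceph pg 2.1a query    # shows which OSDs the PG needs to come back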
  14. S

    duplicate ceph mon/mgr

    Hello :) My Ceph dashboard shows duplicate entries for the Ceph mon/mgr on node01 for some reason. It doesn't seem to affect anything, but I'd like to get rid of it :D pveversion 9.0.10, Ceph 19.2.3. I found this thread with almost the same problem as mine, changing /etc/hostname from FQDN to...
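    If the cause is indeed an FQDN in /etc/hostname, the fix that thread sketches amounts to restoring the short name and restarting the daemons (node name per the post):

        hostnamectl set-hostname node01    # short name, not the FQDN
        systemctl restart ceph-mon@node01 ceph-mgr@node01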
  15. K

    rbd-mirror

    Hi, I have 2 Ceph clusters: one is a hyper-converged Proxmox cluster and the other one is cephadm-managed. I want to mirror them to test if it's possible. But following the Proxmox documentation about rbd-mirroring, rbd-mirror won't start. Am I missing something? Thanks for your help :) !
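    For comparison, the upstream bootstrap flow for snapshot-based mirroring looks like this (pool and site names hypothetical); the rbd-mirror daemon must actually be installed and running on the receiving side:

        # on cluster A
        rbd mirror pool enable mypool image
        rbd mirror pool peer bootstrap create --site-name site-a mypool > token
        # on cluster B (where rbd-mirror runs)
        rbd mirror pool enable mypool image
        rbd mirror pool peer bootstrap import --site-name site-b mypool token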
  16. J

    Ceph full-mesh ( no switch) performance issues when live migrating Windows VM

    Hello Everyone, I seem to struggle with the Ceph full-mesh cluster I created. I made one using a bond on each node over the two 25Gbps interfaces of the fiber NIC. Later I added an additional corosync link; I'm not sure whether it takes over if the other one is down. I have created a pool...
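    One thing worth checking for slow live migrations is whether the migration traffic actually uses the fast mesh at all; PVE takes that from datacenter.cfg. A sketch with a hypothetical CIDR:

        # /etc/pve/datacenter.cfg
        migration: secure,network=10.15.15.0/24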
  17. N

    [SOLVED] CEPH: public and cluster network

    Hello all, I am adding Ceph to a 3-node cluster. On each machine, I have one 10GbE and one 1GbE link available. What would be the better way of configuring my network? - 10 GbE Public network, 1 GbE Cluster network - 10 GbE Cluster network, 1 GbE Public network - 10 GbE both networks Thank...
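    Since replication traffic tends to dominate, a 1 GbE leg on either role can bottleneck the whole cluster; with a single fast link, putting both networks on the 10 GbE is the common choice. In /etc/pve/ceph.conf (subnet hypothetical) that is simply:

        [global]
            public_network  = 10.10.10.0/24
            cluster_network = 10.10.10.0/24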
  18. L

    Ceph freeze when a node reboots on Proxmox cluster

    Hello everyone, I’m currently facing a rather strange issue on my Proxmox cluster, which uses Ceph for storage. My infrastructure consists of 8 nodes, each equipped with 7 NVMe drives of 7.68 TB. Each node therefore hosts 7 OSDs (one per drive), for a total of 56 OSDs across the cluster...
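    Independent of the root cause, planned reboots usually go smoother with rebalancing suppressed for the duration; the standard sketch:

        ceph osd set noout      # before rebooting the node
        # ... reboot ...
        ceph osd unset noout    # once the node's OSDs are back up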
  19. T

    3 servers, 3 cables, 1 ceph network?

    1. Can you take 3 servers with 4-port networking and connect them in this fashion: A-B, A-C, B-C, with 3 cables, and have redundant networking? (Literal redundancy would require 6 cables and eat all ports, I think.) 2. Are there three nets in the end? (and then possibly connect them all together...
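    The usual answer to both questions is a routed full mesh: three point-to-point links carrying one logical Ceph network, with per-peer /32 routes (and, in the fallback variant, detour routes via the third node to survive a single cable failure). A sketch of one node's /etc/network/interfaces, addresses and NIC names hypothetical:

        auto ens19
        iface ens19 inet static
            address 10.15.15.50/24
            up ip route add 10.15.15.51/32 dev ens19
            down ip route del 10.15.15.51/32

        auto ens20
        iface ens20 inet static
            address 10.15.15.50/24
            up ip route add 10.15.15.52/32 dev ens20
            down ip route del 10.15.15.52/32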