ceph

  1. Procedure for cycling CEPH keyrings cluster-wide

    Hello, I want to cycle / renew all CEPH keyrings across the cluster as part of my security maintenance procedures. My environment: Proxmox VE 8.2.8, CEPH 18.2.4. Components where I want to cycle the keyrings: MON & client.admin, MGR, MDS. Current situation: I tried to rotate the keys in the...
  2. CEPH advice

    I want some advice regarding CEPH. I'd like to use it in the future when I have a 3-node cluster. The idea is to have 2 NVMe SSDs per node: one 1TB SSD for the OS and one 4TB SSD for the CEPH storage. Is this a good approach? Btw, I'm thinking of WD SN850X or Samsung 990 Pro SSDs.
  3. How to safely enable KRBD in a 5-node production environment running 7.4.19

    I’m running a 5-node Proxmox cluster on version 7.4.19 with two storage pools: one on spindle drives and the other on SSDs. Both pools host live VMs, but performance is slower than expected, with high IOWait states. I’ve read that enabling KRBD can improve performance, but I haven’t found clear...
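    For reference, KRBD is a per-storage flag in PVE rather than a cluster-wide Ceph setting. A minimal sketch of what an RBD storage entry with KRBD enabled might look like in /etc/pve/storage.cfg (the storage and pool names here are invented):

```
rbd: ceph-ssd
        pool ssd-pool
        content images,rootdir
        krbd 1
```

    Note that flipping krbd only changes how a guest maps its disks the next time it starts, so it can be rolled out gradually by live-migrating or restarting VMs one at a time.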
  4. Please, can you help me

    We use the PVE8.2+CEPH hyper-converged architecture in the production environment, with a total of 8 physical nodes. These 8 physical nodes use exactly the same hardware configuration; the server model is Dell R750. However, in the past six months or so, there has been a physical node crash, and...
  5. On node crash, OSD is down but stays "IN" and all VMs on all nodes stay in error and unusable

    Hello, I work for multiple clients and one of them wanted us to create a Proxmox cluster to ensure fault tolerance and a good hypervisor that's cost-efficient. It's the first time we've put a Proxmox cluster in a production environment for a client; we've only used single-node Proxmox. Client...
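    On the "down but stays IN" point: a down-but-in OSD is expected to be tolerated as long as pools stay at or above min_size, and by default Ceph only marks a down OSD out after mon_osd_down_out_interval (600 seconds) has elapsed, at which point recovery starts. A hedged sketch of inspecting and tightening that timeout (the 300s value is just an example):

```
ceph config get mon mon_osd_down_out_interval
ceph config set mon mon_osd_down_out_interval 300
```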
  6. Ceph erasure code plugin set to clay using pveceph pool create

    I think the answer is "no", but I wanted to check whether there was support in the "pveceph pool create" command to specify which erasure code plugin it should use - specifically "clay". Or, would we need to create the pool in ceph manually and add it as specified here...
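    For context, `pveceph pool create` does not appear to expose a switch for the erasure-code plugin, so the usual route is to define the profile with plain Ceph tooling first and build the pool on top of it. A rough sketch, with the profile name and k/m/d values made up for illustration:

```
ceph osd erasure-code-profile set clay-profile plugin=clay k=4 m=2 d=5 crush-failure-domain=host
ceph osd pool create ec-clay erasure clay-profile
```

    The resulting pool can then be added to PVE as storage afterwards, as the linked instructions describe.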
  7. Ceph Mon frequently crashing on one machine

    I have a 2-node cluster running ceph (I know that's not ideal). On one machine, the ceph-mon service crashes frequently. Looking at the syslogs for the last crash, it was preceded by: Dec 09 00:00:46 <node> ceph-mon[1207324]: 2024-12-09T00:00:46.474-0800 7ebf05a006c0 -1 received...
  8. Enable NVMEoF Ceph Pool With Proxmox VE managed Ceph

    I think the subject says it all. I'm looking for a guide that shows step by step how to create an NVMEoF pool with a Ceph cluster managed by Proxmox VE. I tried following the guide at https://docs.ceph.com but I didn't get far, because it says right in the requirements that the Ceph...
  9. CEPH OSDs presumably losing their connection

    Hi, I've noticed some behavior: when I have PVE download ISOs to the CEPH storage, some OSDs apparently lose their connection. This does not happen, for example, during a VM restore to CEPH storage. RADOS benchmarks also show no errors. To start with, I have 3 screenshots; what else I...
  10. CEPH '1 PG inconsistent'

    Hi, I have a 3x node Proxmox cluster running ceph with a mixture of NVMe and SAS hard drives built from some used hardware. I've logged into the dashboard this morning and was greeted with an error in the CEPH dashboard saying 'Possible data damage: 1 pg inconsistent'. I've tried a few things...
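    For what it's worth, the usual first steps for a single inconsistent PG are to identify the PG, inspect the inconsistency, and then ask Ceph to repair it (the PG id below is a placeholder):

```
ceph health detail
rados list-inconsistent-obj <pgid> --format=json-pretty
ceph pg repair <pgid>
```

    On used hardware it is also worth checking SMART data on the drives backing that PG, since a scrub error is often an early sign of a failing disk.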
  11. Ceph + MTU 9000

    Hello, I am experiencing an issue with configuring the MTU for the interface used by Ceph. When I set the MTU to 9000 on both the server interface and the physical switch, I can successfully ping with a maximum packet size of 8958. However, I am unable to access resources in the Ceph cluster...
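    As a quick sanity check on the arithmetic: on a plain untagged IPv4 path, the largest ping payload that fits in one frame is the MTU minus 28 bytes of headers, so MTU 9000 should allow 8972 bytes. A small sketch of that calculation (pure arithmetic, no assumptions about the cluster):

```python
def max_ping_payload(mtu: int) -> int:
    """Largest ICMP echo payload for a given MTU:
    MTU minus 20-byte IPv4 header minus 8-byte ICMP header."""
    return mtu - 20 - 8

print(max_ping_payload(9000))  # 8972 on an untagged IPv4 path
```

    That only 8958 bytes fit here (14 bytes less than expected) hints at extra per-packet overhead somewhere on the path, which is worth identifying before trusting the end-to-end MTU for Ceph traffic.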
  12. Sudden bad IO wait and 100% disk utilisation on Ceph disk. How to troubleshoot?

    Hi, recently I updated to 8.3 (and with it, Ceph to 18.2.4). My hardware: an Intel N100, a 1TB NVMe (local storage) and a SATA Samsung 870 QVO for Ceph. I also messed around with microcode updates (which I'm currently figuring out how to revert) and the CPU powersaving governor (which I already undid)...
  13. Ceph issue on Proxmox

    Hi, not sure if this is the correct place to ask these questions, but here goes. Please correct me if I am wrong. I am running a three-node cluster with Proxmox 8.3.0 and a 10GbE mesh network to run Ceph. I was using Ceph "Quincy" and decided to upgrade to Ceph "Squid". All went well, all ceph...
  14. Import OVA: working storage 'cephfs' does not support 'images' content type or is not file based

    Hi, I can't import an OVA with Ceph. I get the error "scsi0: import working storage 'cephfs' does not support 'images' content type or is not file based." I use cephfs for OVAs, ISOs etc. and a ceph pool for my VMs/CTs. Have I made a mistake, or is it not possible to import via Ceph? proxmox-ve...
  15. VM poor storage performance

    Hello there, I have a 5-node PM cluster with Ceph configured; each node has a 3.84 TB SSD for the OSD and 1 NVMe drive for WAL/DB usage. The Ceph network is on a 10G link and the Proxmox management network is on a 1G link. The CEPH benchmark is as below: READ ~= 1000 MB/s, WRITE ~= 1000 MB/s. I create a Windows...
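    One thing that often dominates Windows guest storage results on Ceph is the virtual disk configuration rather than the Ceph layer itself. A hedged sketch of a commonly suggested starting point in the VM config (storage and disk names are placeholders, and the VirtIO drivers must be installed in the guest):

```
scsihw: virtio-scsi-single
scsi0: ceph-pool:vm-100-disk-0,cache=writeback,discard=on,iothread=1
```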
  16. Second CEPH pool for SSD

    Hi! I'm currently using a 3-node PVE cluster with a CEPH pool based on HDDs (80TB total). Now I've added 2 SSDs to each node and want to create a second, separate pool (10TB total). I read https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_device_classes but some things are not clear to me...
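    For reference, the device-class chapter linked above boils down to one CRUSH rule per device class plus a pool pinned to that rule, so the HDD and SSD pools stay on their own disks. A rough sketch (rule and pool names are invented):

```
ceph osd crush rule create-replicated replicated_ssd default host ssd
pveceph pool create ssd-pool --crush_rule replicated_ssd --add_storages
```

    The existing HDD pool would likewise need its own hdd-class rule, otherwise it can start placing data on the new SSDs as well.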
  17. Ceph / Cluster Networking Question

    We've been using a traditional SAN with iSCSI for over 10 years; it has been ultra reliable. Now we're looking at Ceph and have built a 3-server Ceph cluster with Dell R740xds. Each device has six interfaces, three to one switch, three to another. One port is public internet, one port is public ceph...
  18. Ceph OSD using wrong device identifier

    Hello! I have been messing around with ceph to see if it will properly augment my NAS in a small subset of tasks, but I noticed that if a disk is removed and put back in, the ceph cluster doesn't detect that until reboot. This is because it is defined using the /dev/sdX format instead of the...
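    On the /dev/sdX point: those names are assigned at enumeration time and can change across hot-plugs and reboots, while the symlinks under /dev/disk/by-id/ are stable. A hedged sketch of cross-checking which physical device backs which OSD:

```
ls -l /dev/disk/by-id/
ceph-volume lvm list
```

    Using the by-id path when (re)creating an OSD avoids the identifier drifting after a re-plug.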
  19. Small Business Cluster with 2 New Servers and QDevice

    Background: I work for a small business that provides a 24-hour service on our servers and requires as close to 100% uptime as possible. Our old IT company sold us 2 identical Dell R420 servers several years ago with a single 6-core processor, 4x 3.5" 600GB 10K SAS HDDs in RAID10, and 16GB RAM and...
  20. Windows Server Performance

    Hello, we have had a Proxmox cluster running for some time now. We run 4 servers, each with 128 x AMD EPYC 7543 32-Core Processor (2 sockets), 512GB RAM, and Ceph with 16 OSDs in total. With Linux server performance everything is optimal; here we have no problems with...
