ceph

  1. [SOLVED] Removing ceph DB disk

    Hello, I've added some more drives to our 3-node Ceph cluster, started creating OSDs, and accidentally created an LVM Ceph (DB) disk instead of an OSD. I do not need a separate DB disk. How can I destroy it and re-create it as a regular OSD? Actually, I made the same mistake on two nodes. Here's output...
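    One way to recover from this, sketched below under the assumption that the stray DB volume lives on /dev/sdb (a hypothetical device name) and no OSD is using it yet: zap the LVM state, then re-create the disk as a plain OSD.

    ```shell
    # Wipe the accidentally created DB LVM (destructive: erases the device)
    ceph-volume lvm zap /dev/sdb --destroy

    # Re-create the disk as a regular OSD with no separate DB device
    pveceph osd create /dev/sdb
    ```

    Double-check the device path with `lsblk` before zapping, since the command destroys everything on that disk.
    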
  2. Question about CEPH Topology

    Hi everyone, I would like some help regarding CEPH topology. I have the following environment:
    - 5x servers (PVE01, 02, 03, 04, 05)
    - PVE01, 02, and 03 in one datacenter; PVE04 and 05 in another datacenter
    - 6x disks in each (3x HDD and 3x SSD), all of the same capacity/model
    I would like...
  3. How to setup a HA schema using Proxmox over VPN

    Hello there! I'm trying to configure a Proxmox cluster over a VPN across 2 different geographic sites, with a total of 3 nodes. I'd like to enable a Ceph cluster using OSD disks, but it looks like Ceph monitors and OSD configurations need to use a network in the same segment, like 192.168.2.0/24. If...
  4. [HELP] Backup Failing

    Hi. I have a brand new four-node Proxmox cluster up and running. I've configured a CEPH pool called "StandardStorage" (3:2) where the two VMs are stored. The target for the backup is called PLFS2Storage, an NFS mount. Running "Backup" from the node -> Virtual Machine page fails because it...
  5. [SOLVED] Can a Ceph disk be used as a Windows Failover Cluster Shared Volume?

    Hello and thank you for your time. TL;DR - Can a Ceph disk be used as a Windows Failover Cluster Shared Volume? If yes, what particular VM configurations need to be made in order to make the WFC accept the disk? Related forum threads - Support for Windows Failover Clustering, hpe 1060 storage...
  6. [SOLVED] Ceph Recovery Process stopped and can no longer see OSDs in tab

    This morning a loss of power to one of the servers in the Ceph cluster knocked out the cluster. After I got that server up and running again, the Ceph cluster started to recover, and at that time I could see the OSDs in the OSD tab. But it got to 72.36% and the recovery stopped, and I can no...
  7. Hardware Feedback - Proxmox Ceph Cluster (3-4 nodes)

    After going down many rabbit holes, I have finally come to the conclusion that the best solution (for my office) is a Proxmox cluster with 4 nodes. Depending on my final build, I might be able to get by with only 3. For now, I will use both Proxmox Backup and Veeam to back up my VMs to a TrueNAS box...
  8. PVE 8.2.2 and Ceph: all OSDs report slow ops

    Hello! I'm testing a Ceph setup on a single node; 12 HDDs are connected via a SAS9211-4i controller:

    root@pve-1:~# ceph osd tree
    ID  CLASS  WEIGHT    TYPE NAME       STATUS  REWEIGHT  PRI-AFF
    -1         32.74786  root default
    -3         32.74786      host pve-1
     0    hdd   2.72899          osd.0...
  9. [SOLVED] Ceph Public/Private And What Goes Over the network

    Good day all. I'm new to Ceph. I have set up a 6-node cluster. It's a mixture of SSD and SAS drives. With the SAS drives I use an SSD partition for the DB. What I'm experiencing is that my VMs are slow: boot is slow, opening programs is slow, etc. The 10.0.45.0/24 network is 10Gig; the...
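    For context on the public/private split being asked about: the public network carries client (VM) and monitor traffic, while the optional cluster network carries OSD replication and recovery traffic. A minimal ceph.conf sketch, using the 10.0.45.0/24 subnet from the post; the second subnet is a hypothetical example:

    ```ini
    [global]
        # Client/monitor traffic (VM reads and writes travel here)
        public_network = 10.0.45.0/24
        # OSD-to-OSD replication and recovery traffic (hypothetical subnet)
        cluster_network = 10.0.46.0/24
    ```

    If no cluster_network is set, replication traffic shares the public network, which can compete with VM I/O.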
  10. CEPH on three separate nodes

    Hi, a client wants to have 3 nodes in 3 datacenters, each with local iSCSI storage connected. Is it possible to configure one shared storage on top of that for all hosts? Ceph?
  11. Had to remove 1 Node from Proxmox/Ceph

    Once I had 4 nodes running with Ceph and one could fail. Now that one has failed and been removed. Is it possible to establish redundancy with the remaining 3 nodes? From ceph -s:

      data:
        volumes: 2/2 healthy
        pools:   7 pools, 193 pgs
        objects: 139.48k objects, 538 GiB
        usage:   1.6...
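    With 3 remaining nodes, 3-way replication is still possible as long as each pool's size/min_size fit the node count. A hedged sketch of the commands to check and adjust this; "mypool" is a placeholder pool name:

    ```shell
    # List pools and their current replication settings
    ceph osd pool ls detail

    # Ensure 3 replicas, with a minimum of 2 required for I/O
    ceph osd pool set mypool size 3
    ceph osd pool set mypool min_size 2

    # Watch recovery re-establish the third replica
    ceph -s
    ```
    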
  12. [SOLVED] CEPH Reef osd still shutdown

    Hi everyone, I'm working with a 3-node cluster running Ceph 17, and I'm about to upgrade. I also added a new node to the cluster and installed Ceph 18.2. The first OSD I created seems OK, yet after a few moments it shuts down. Here is what I can find in the logs: May 18 15:34:44 node4...
  13. Number of disks on Ceph storage?

    Currently, I have 1 disk in each server (3 total) used for Ceph shared storage. I noticed that if 2 out of 3 disks stop working, then the whole Ceph cluster also stops working. I am not sure if the same (n/2 + 1) formula applies here as well. Why it is concerning is because we are using CephFS as...
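    The behavior described is consistent with the default pool settings rather than a quorum formula: with size=3 and min_size=2, each object needs 2 reachable replicas before the pool accepts I/O, so losing 2 of 3 single-OSD hosts drops below min_size and blocks the pool. A sketch of how to inspect those values; the pool name is a placeholder:

    ```shell
    # Show replication settings for a pool (placeholder name "cephfs_data")
    ceph osd pool get cephfs_data size      # typically 3
    ceph osd pool get cephfs_data min_size  # typically 2
    ```

    Note the monitors have their own (n/2 + 1) quorum requirement on top of this, so with 3 monitor nodes, losing 2 nodes also loses monitor quorum.
    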
  14. CEPH vs RAID

    Hello, I would like to ask you a few questions about CEPH. Currently, I'm working on critical infrastructure for my client, and I'm planning to use servers with hardware RAID. My first question is whether I should opt for CEPH or classic RAID on these servers? The second question is whether it's...
  15. Understanding Ceph Fundamentals

    Good day all. I have just set up a 4-node cluster with Ceph. A few questions to help me understand what is happening: 1. I know data is stored over multiple servers. However, when I run a VM, is the "hard drive" a copy on the local machine for IO, or does the VM have to read and write over the...
  16. Reducing Size of Ceph fs

    Greetings, I currently have a 1.12 TB CephFS which has 60 GB of data on it that I don't want to lose. However, 1.11 TB is much too large; I need to reduce it to 512 GB or even 256 GB. I have set the target size to 512 GB, however the available size of the fs refuses to move from 1.11 TB. Why...
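    For what it's worth, CephFS does not shrink toward a "target size"; the reported available space reflects the capacity of the underlying pool. If the goal is a hard cap on how much the filesystem can hold, one option is a directory quota, sketched below assuming the filesystem is mounted at /mnt/cephfs (a hypothetical mount point):

    ```shell
    # Cap the CephFS root at 512 GiB (512 * 2^30 = 549755813888 bytes)
    setfattr -n ceph.quota.max_bytes -v 549755813888 /mnt/cephfs

    # Verify the quota is set
    getfattr -n ceph.quota.max_bytes /mnt/cephfs
    ```

    The quota limits usage without changing the advertised pool capacity, which may or may not be what the reporting tools in question expect.
    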
  17. [SOLVED] Three-nodes Ceph mesh network with bond and VLANs

    Hi everyone, I'm designing a new 3-node cluster. It's not the first cluster I've built, but it is the first with a full-mesh Ceph network (no dedicated switch). Each node has a dual 10G NIC, and the nodes are already physically connected to each other. So I've been reading the docs and I think the best...
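    One of the approaches documented in the Proxmox wiki for a 3-node full mesh is a broadcast-mode bond over the two direct links on each node. A hedged /etc/network/interfaces sketch for one node; the interface names eno1/eno2, the address, and the 10.15.15.0/24 subnet are placeholders:

    ```ini
    auto bond0
    iface bond0 inet static
        address 10.15.15.50/24
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode broadcast
    # The Ceph public/cluster network would then be 10.15.15.0/24
    ```

    Broadcast mode duplicates frames onto both links, trading some bandwidth for simplicity; routed setups (e.g. with OSPF or static routes) are the usual alternative when full link utilization matters.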
  18. Having trouble clearing some ceph warnings: Reduced data availability & Slow ops

    Hey all, I'm having trouble clearing some warnings from my Ceph cluster.
    1.) HEALTH_WARN: Reduced data availability: 1 pg inactive
        pg 1.0 is stuck inactive for 5m, current state unknown, last acting []
    2.) HEALTH_WARN: 2 slow ops, oldest one blocked for 299 sec, daemons [osd.0,osd.1] have...
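    A common first diagnostic pass for both warnings, sketched below with no guarantee it resolves them; pg 1.0 and osd.0 are taken from the warning text itself, and the last command must run on the host where osd.0 lives:

    ```shell
    # Ask the cluster why pg 1.0 has an empty acting set
    ceph pg 1.0 query

    # Check which OSDs are up/in and how CRUSH has placed them
    ceph osd tree

    # Inspect the slow daemon's in-flight operations via its admin socket
    ceph daemon osd.0 dump_ops_in_flight
    ```

    An empty "last acting []" often means CRUSH cannot map the PG to any OSD at all, which points at crush rules or OSD membership rather than slow hardware.
    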
  19. Ceph failure - missing /etc/pve/ceph directory

    I have an 8.1.4 Proxmox Ceph cluster. I'm trying to add a new host that is running the latest 8.2.2 to the cluster, but when I try to add the OSD I get the failure "pveceph configuration not initialized - missing '/etc/pve/ceph'". There's a commit that went in on Feb 16, 2024 that looks like...
  20. Proxmox + Ceph: VMs won't start + dreadful performance

    I have been searching far and wide for this, but I can't seem to find a solution. In one of our testing clusters we're experimenting with Ceph, but so far the journey has been bumpy at best. The actual trigger for this post is the fact that one of our VMs locked up and I can't seem to get it to...