ceph

  1. K

    Shared storage over Fiber channel

    Hi all, I am looking for fast storage for an HA cluster. Currently I use Ceph, but it is terribly slow (I have 3 nodes with enterprise-class NVMe drives). I read many articles about it and its performance. In my opinion the bottleneck is the network synchronization. So I am looking for another...
  2. K

    Ceph install via offline mirror (POM) fails

    Dear all, I am new to Proxmox, so please bear with me. I am currently setting up an offline 4-node test cluster with PVE in order to convince my boss to move away from standalone ESXi servers to one decent PVE cluster in the future. We work in an offline environment, so I installed a Proxmox Offline...
  3. R

    Best way to separate traffic and configure the PVE network

    Hi, we're building a 4-node PVE cluster with NVMe Ceph storage. Available NICs: we have several NICs available: NIC1: 2 x 10G + 2 x 1G, NIC2: 2 x 10G, NIC3: 2 x 100G. Traffic/Networks: now we need (I think) the following traffic separations: PVE Management, PVE Cluster & Corosync, Ceph (public)...
  4. O

    Ceph - Most OSDs down and all PGs unknown after P2V migration

    I run a small single-node ceph cluster (not via Proxmox) for home file storage (deployed by cephadm). It was running bare-metal, and I attempted a physical-to-virtual migration to a Proxmox VM (I am passing through the PCIe HBA that is connected to all the disks to the VM). After doing so, all...
  5. R

    CEPH CRUSH map assignment

    Title: Ceph: use only HDDs for the data pool and SSDs for the nvme pool. Problem: I want to ensure in my Ceph cluster (Proxmox) that HDD OSDs are used only in the data pool and SSD OSDs only in the nvme pool. Current situation: OSDs: ID CLASS WEIGHT TYPE...
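
    This kind of pinning is normally done with device-class CRUSH rules rather than by editing the CRUSH map by hand. A minimal sketch, assuming the default root and the pool names from the post:

    ```shell
    # Create one rule per device class (root "default", failure domain "host").
    ceph osd crush rule create-replicated hdd-only default host hdd
    ceph osd crush rule create-replicated ssd-only default host ssd
    # Pin each pool to its rule; Ceph then rebalances the data accordingly.
    ceph osd pool set data crush_rule hdd-only
    ceph osd pool set nvme crush_rule ssd-only
    ```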
  6. F

    ceph-osd OOM

    I have a 3-node PVE 7.4-18 cluster running Ceph 15.2.17. There is one OSD per node, so pretty simple. I'm using 3 replicas, so the data should basically be mirrored across all OSDs in the cluster. Everything has been running fine for months, but I've suddenly lost the ability to get my OSDs up...
  7. L

    PVE 8.3 Ceph 19.2 monitor constantly "probing"

    Hi! I have made a lot of attempts, version changes, re-installs of monitors and everything... and now I know a lot more than I did before, but still I don't understand why this happens: remove any existing monitor from /etc/pve/ceph.conf, remove any monitor folder: /var/lib/ceph/mon/ceph-* on node...
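
    For what it's worth, the supported path for rebuilding a monitor goes through pveceph rather than deleting the config and directories manually. A hedged sketch, with the node name as a placeholder:

    ```shell
    # Remove the broken monitor, then recreate it on the current node.
    pveceph mon destroy <nodename>
    pveceph mon create
    ```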
  8. S

    Issues creating CEPH EC pools using pveceph command

    I wanted to start a short thread here because I believe I may have found either a bug or a mistake in the Proxmox documentation for the pveceph command, or maybe I'm misunderstanding, and wanted to put it out there. Either way I think it may help others. I was going through the CEPH setup for...
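
    For reference, recent pveceph versions can create erasure-coded pools directly; a minimal sketch, where the pool name "ecpool" and the k/m values are examples and must fit the number of nodes:

    ```shell
    # Creates the EC data pool plus a replicated metadata pool for RBD.
    pveceph pool create ecpool --erasure-coding k=2,m=1
    ```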
  9. F

    Ceph + Cloud-Init Troubleshooting

    I have been using Cloud-Init for the past six months and Ceph for the past three months. I tried to set up Cloud-Init to work with CephFS and RBD, but I am having trouble booting a basic virtual machine. Is there a post or tutorial available for this particular use case? I have searched...
  10. G

    Ceph stretch pools in squid

    Hi, Has anyone experimented with ceph stretch pools that seem to have appeared in squid? (not stretched clusters) It seems rather new, but rather interesting, as it may not require the whole cluster to be set to stretched, while still dealing with the guarantee of OSDs and monitors being on...
  11. H

    Question Regarding Ceph Install

    Hi, I have a quick question about Ceph. Originally, when I installed Proxmox, I had added all versions of the Ceph repo. I added the Quincy, Reef, and Squid repositories; honestly I don't know much about Ceph and didn't think much of it when adding all three version repos, and don't know if anything was...
  12. A

    [SOLVED] Ceph says osds are not reachable

    Hello all, I have a 3-node cluster set up using the guide here: https://packetpushers.net/blog/proxmox-ceph-full-mesh-hci-cluster-w-dynamic-routing/ Everything was working fine when using Ceph Quincy and Reef. However, after updating to Squid, I now get this error in the health status...
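
    A hedged note on the usual suspect here: Squid added a health check that flags OSDs whose address does not fall inside public_network, which routed full-mesh setups can trip. Making public_network cover the OSD addresses (the subnet below is an example) has been reported to clear the warning:

    ```shell
    ceph config set global public_network 10.10.10.0/24
    ```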
  13. B

    CEPH newbie

    Hello. I have three similar computers. Each of them has 2 similar-size disks. I installed PVE on one of the computers and configured its two disks as a ZFS mirror during installation. I recently did the same on a second computer and joined them as a cluster. I was about to do the same for...
  14. UdoB

    [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    Ceph is great, but it needs some resources above the theoretical minimum to work reliably. My assumptions for the following text: you want to use Ceph because... why not? you want to use High Availability - which requires Shared Storage (note that a complete solution needs more things like a...
  15. J

    Migrating from one Node Proxmox to clustered Proxmox

    Hello, I am planning to move my one-node Proxmox server with VMs to a new 3-node cluster with a Ceph pool. Now my questions: 1. What's the best way to move the VMs to the corresponding Ceph pools on the cluster, with minimal downtime? 2. Is there a way for testing to just copy a VM from the...
  16. F

    Ceph Ruined My Christmas

    Merry Christmas, everyone, if that's what you're into. I have been using Ceph for a few months now and it has been a great experience. I have four Dell R740s and one R730 in the cluster, and I plan to add two C240 M4s to deploy a mini-cloud at other locations (but that's a conversation for...
  17. J

    [SOLVED] Going to add more disks to the 8-node CEPH, documentation URL requested for CEPH

    Dear all, I have an 8-node Proxmox cluster on 8.3.2 with CEPH storage and am now planning to add more disks to CEPH. I would like to refer to the documentation and understand how to do it. Requesting the URL for adding disks to the CEPH cluster. Thanks, Joseph John
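
    The relevant manual section is the pveceph chapter of the Proxmox VE admin guide (https://pve.proxmox.com/pve-docs/chapter-pveceph.html). The short version, as a hedged sketch with /dev/sdX as a placeholder for the new disk:

    ```shell
    # Wipe the disk only if it may hold leftover data, then create the OSD on it.
    ceph-volume lvm zap /dev/sdX --destroy
    pveceph osd create /dev/sdX
    ```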
  18. F

    Ceph Config File (Separate Subnet)

    Hello everyone, I have been using Ceph for the past few months and have recently acquired the necessary hardware to set up Ceph on its own subnet, as advised in the Ceph and Proxmox documentation. I am unsure whether I have configured this correctly. Below is my configuration file, where you will also...
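
    For comparison, a minimal [global] fragment with separate networks looks roughly like this (the subnets are examples):

    ```
    [global]
        public_network = 10.10.10.0/24    # monitors and client traffic
        cluster_network = 10.10.20.0/24   # OSD replication/heartbeat traffic
    ```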
  19. X

    Procedure for cycling CEPH keyrings cluster-wide

    Hello, I want to cycle / renew all CEPH keyrings across the cluster as part of my security maintenance procedures. My environment Proxmox VE 8.2.8 CEPH 18.2.4 Components where I want to cycle the keyrings MON & client.admin MGR MDS Current situation I tried to rotate the keys in the...
  20. H

    CEPH advise

    I want some advice regarding CEPH. I'd like to use it in the future when I have a 3-node cluster. The idea is to have two NVMe SSDs per node: one 1 TB SSD for the OS and one 4 TB SSD for the CEPH storage. Is this a good approach? Btw, I'm thinking of WD SN850X or Samsung 990 Pro SSDs.
