ceph

  1. Y

    Expanding cluster and Ceph with another node

    Hi, I have a 3-node cluster with Ceph. All current nodes have 4x 3.4TB NVMe as Ceph OSDs. I want to add a node; the host is the same hardware but with 3x 1.9TB NVMe. My questions are: What is the best way of expanding the OSDs, and can it be done with different storage, or is it better to...
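
For the expansion itself, once the new node has joined the cluster, OSDs are typically added per device; Ceph weights each OSD by its capacity, so mixed sizes work, though very uneven hosts can skew data placement. A sketch, with placeholder device paths:

```shell
# On the new node: create one OSD per NVMe device.
# Device paths below are assumptions; check with lsblk first.
pveceph osd create /dev/nvme0n1
pveceph osd create /dev/nvme1n1
pveceph osd create /dev/nvme2n1

# Verify the new OSDs and their capacity-based CRUSH weights.
ceph osd tree
```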
  2. P

    How can I configure Ceph to completely disable msgr1 and use only msgr2?

    Hello, I want to configure Ceph to use msgr2 and not msgr1, to encrypt Ceph traffic. So I first set ms_bind_msgr1 = false and ms_bind_msgr2 = true in /etc/ceph/ceph.conf under the [global] section, and changed the IP addresses to v2-only addresses. The full configuration is: [global]...
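
For reference, a minimal sketch of what such a [global] section could look like, with placeholder addresses; note that msgr2 alone does not encrypt traffic, so on-wire encryption additionally needs the messenger modes set to secure:

```ini
[global]
    # Bind only the v2 protocol (default port 3300).
    ms_bind_msgr1 = false
    ms_bind_msgr2 = true
    # v2-only monitor addresses (placeholders).
    mon_host = [v2:10.0.0.1:3300],[v2:10.0.0.2:3300],[v2:10.0.0.3:3300]
    # msgr2 on its own only frames traffic; "secure" mode encrypts it.
    ms_cluster_mode = secure
    ms_service_mode = secure
    ms_client_mode = secure
```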
  3. D

    What is the best practice for CephFS in VMs?

    I am looking to let Proxmox manage Ceph so that I can mount CephFS in a bunch of VMs. However, I don't know what the best practice is for authorization. I would like to follow least privilege, so that a VM only has r/w access to CephFS. How would I generate a key that only has access to one...
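
One way to get a least-privilege key is `ceph fs authorize`, which scopes a client to a path within one filesystem; the filesystem name, path, and client ID below are assumptions:

```shell
# Create a client key with r/w access to a single CephFS subtree.
ceph fs authorize cephfs client.vm101 /vm101 rw

# Inspect the generated capabilities.
ceph auth get client.vm101
```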
  4. K

    Ceph upgrade

    Hello, after adding the new node to the cluster, I installed Ceph on it. Ceph installed itself with version 18.2.4; on the others it is version 18.2.2. I would like to do a Ceph update on the other nodes. What is the safest way to do this? If there is data on the OSDs, can it be done live, or should...
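
The usual pattern for this kind of rolling upgrade (a sketch, not a complete procedure; monitors and managers are generally upgraded before OSDs) is to pause rebalancing and upgrade one node at a time:

```shell
# Keep CRUSH from marking restarting OSDs out and rebalancing.
ceph osd set noout

# On each node, one at a time:
apt update && apt full-upgrade
systemctl restart ceph.target    # or restart individual daemons
ceph -s                          # wait for HEALTH_OK before the next node

# After the last node:
ceph osd unset noout
```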
  5. 4

    2+q cluster with shared/replicated storage

    I am building a cluster with 2 PVE hosts and a quorum host. Each PVE host is running local storage with hardware RAID. I would like to have it set up in such a way that, in the event of a node failure, all VMs running on one host can be restarted on another. Ideally with clustering, but replication...
  6. K

    Shared storage over Fiber channel

    Hi all, I am looking for fast storage for an HA cluster. Currently I use Ceph, but it is terribly slow (I have 3 nodes with enterprise-class NVMe drives). I read many articles about it and its performance. In my opinion the bottleneck is network synchronization. So I am looking for another...
  7. K

    Ceph install via offline mirror (POM) fails

    Dear, I am new to Proxmox, so please bear with me. I am currently setting up an offline 4-node test-cluster with PVE in order to convince my boss to move away from standalone ESXi-servers to 1 decent PVE-cluster in the future. We work in an offline environment so I installed a Proxmox Offline...
  8. R

    Best way to separate traffic and configure the PVE network

    Hi, we're building a 4-node PVE cluster with NVMe Ceph storage. Available NICs: We have several NICs available: NIC1: 2x 10G + 2x 1G; NIC2: 2x 10G; NIC3: 2x 100G. Traffic/networks: Now we need (I think) the following traffic separations: PVE management, PVE cluster & Corosync, Ceph (public)...
  9. O

    Ceph - Most OSDs down and all PGs unknown after P2V migration

    I run a small single-node ceph cluster (not via Proxmox) for home file storage (deployed by cephadm). It was running bare-metal, and I attempted a physical-to-virtual migration to a Proxmox VM (I am passing through the PCIe HBA that is connected to all the disks to the VM). After doing so, all...
  10. R

    CEPH CRUSH map assignment

    Title: Ceph: use only HDDs for the data pool and only SSDs for the nvme pool. Problem: I want to ensure in my Ceph cluster (Proxmox) that HDD OSDs are used only in the pool data, and SSD OSDs are used only in the pool nvme. Current situation: OSDs: ID CLASS WEIGHT TYPE...
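
Pinning pools to device classes is normally done with per-class CRUSH rules rather than by editing the CRUSH map directly; a sketch using the pool names from the question:

```shell
# One replicated rule per device class (failure domain: host).
ceph osd crush rule create-replicated rule-hdd default host hdd
ceph osd crush rule create-replicated rule-ssd default host ssd

# Assign each pool to its rule; data then migrates accordingly.
ceph osd pool set data crush_rule rule-hdd
ceph osd pool set nvme crush_rule rule-ssd
```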
  11. F

    ceph-osd OOM

    I have a 3-node PVE 7.4-18 cluster running Ceph 15.2.17. There is one OSD per node, so pretty simple. I'm using 3 replicas, so the data should basically be mirrored across all OSDs in the cluster. Everything has been running fine for months, but I've suddenly lost the ability to get my OSDs up...
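
If the OOM kills come from BlueStore cache growth, one knob worth checking is `osd_memory_target` (default around 4 GiB per OSD); a sketch, the 2 GiB value being an assumption to tune against the node's RAM:

```shell
# Lower each OSD's memory target to ~2 GiB (value in bytes).
ceph config set osd osd_memory_target 2147483648

# Confirm what a specific daemon actually uses as its target.
ceph config get osd.0 osd_memory_target
```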
  12. L

    PVE 8.3 Ceph 19.2 monitor constantly "probing"

    Hi! I have made a lot of tries, changes of version, re-installs of monitors and everything, and now I know a lot more than I did before, but still I don't understand why this happens: remove any existing monitor from /etc/pve/ceph.conf; remove any monitor folder /var/lib/ceph/mon/ceph-* on node...
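
For completeness, the clean way to rebuild a monitor under Proxmox is via pveceph rather than removing folders by hand; "pve1" below is a placeholder monitor ID:

```shell
# Destroy the stuck monitor, then recreate it on this node.
pveceph mon destroy pve1
pveceph mon create

# Watch quorum while the new monitor starts.
ceph quorum_status --format json-pretty
```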
  13. S

    Issues creating CEPH EC pools using pveceph command

    I wanted to start a short thread here because I believe I may have found either a bug or a mistake in the Proxmox documentation for the pveceph command, or maybe I'm misunderstanding, and wanted to put it out there. Either way I think it may help others. I was going through the CEPH setup for...
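
For comparison against the documentation, the pveceph form for an EC pool is roughly the following (the k/m values are an example); pveceph also creates the replicated metadata pool that RBD needs:

```shell
# Create an erasure-coded pool (2 data chunks + 1 coding chunk).
pveceph pool create ecpool --erasure-coding k=2,m=1 --pg_num 32
```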
  14. F

    Ceph + Cloud-Init Troubleshooting

    I have been using Cloud-Init for the past six months and Ceph for the past three months. I tried to set up Cloud-Init to work with CephFS and RBD, but I am having trouble booting a basic virtual machine. Is there a post or tutorial available for this particular use case? I have searched...
  15. G

    Ceph stretch pools in squid

    Hi, Has anyone experimented with ceph stretch pools that seem to have appeared in squid? (not stretched clusters) It seems rather new, but rather interesting, as it may not require the whole cluster to be set to stretched, while still dealing with the guarantee of OSDs and monitors being on...
  16. H

    Question Regarding Ceph Install

    Hi, I have a quick question about Ceph. Originally, when I installed Proxmox, I had added all versions of the Ceph repo. I added the Quincy, Reef and Squid repositories; honestly I don't know much about Ceph and didn't think much of it when adding all 3 version repos, and don't know if anything was...
  17. A

    [SOLVED] Ceph says osds are not reachable

    Hello all, I have a 3-node cluster set up using the guide here: https://packetpushers.net/blog/proxmox-ceph-full-mesh-hci-cluster-w-dynamic-routing/ Everything was working fine when using Ceph Quincy and Reef. However, after updating to Squid, I now get this error in the health status...
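
For anyone hitting the same warning: Squid added a reachability check that compares each OSD's address against `public_network`, which can trip on full-mesh/routed setups where OSDs sit on loopback addresses. A hedged sketch of the workaround, with placeholder addresses, is to list those networks explicitly:

```ini
# /etc/pve/ceph.conf, [global] section (addresses are placeholders).
[global]
    public_network = 10.10.10.1/32, 10.10.10.2/32, 10.10.10.3/32
```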
  18. B

    CEPH newbie

    Hello. I have three similar computers. Each of them has 2 similar-size disks. I installed PVE on one of the computers and configured its two disks as a ZFS mirror during installation. I recently did the same on a second computer and joined them as a cluster. I was about to do the same for...
  19. UdoB

    [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    Ceph is great, but it needs some resources above the theoretical minimum to work reliably. My assumptions for the following text: you want to use Ceph because... why not? you want to use High Availability - which requires Shared Storage (note that a complete solution needs more things like a...
  20. J

    Migrating from one Node Proxmox to clustered Proxmox

    Hello, I am planning to move my one-node Proxmox server with VMs to a new 3-node cluster with a Ceph pool. Here are my questions: 1. What's the best way to move the VMs to the corresponding Ceph pools on the cluster, with minimal downtime? 2. Is there a way, for testing, to just copy a VM from the...
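
One common approach for question 1 is backup and restore with vzdump, which also covers question 2, since a restore is effectively a copy; the storage names and VMID below are assumptions:

```shell
# On the old node: back up the VM (snapshot mode keeps it running).
vzdump 100 --storage backupstore --mode snapshot

# On the cluster, after copying the archive over: restore onto Ceph.
qmrestore /mnt/backup/vzdump-qemu-100.vma.zst 100 --storage ceph-vm
```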