ceph

  1. M

    ceph status "authenticate timed out after 300" after attempting to migrate to a new network

    Hey dear Proxmox pros, I have an issue with my Ceph cluster after attempting to migrate it to a dedicated network. I have a 3-node Proxmox cluster with Ceph enabled. Since I only had one network connection on each of the nodes, I wanted to create a dedicated and separate network only for the...
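
    A hedged first check for this kind of timeout, assuming the monitors were moved along with the network (the config path is the Proxmox default; nothing below is taken from the thread itself):

      # Compare what clients are told to connect to with what the mons actually listen on.
      grep -E 'mon_host|public_network|cluster_network' /etc/pve/ceph.conf
      ss -tlnp | grep ceph-mon          # the mon should be listening on the new public-network IP
      ceph -s --connect-timeout 10      # fails fast if no monitor is reachable at all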
  2. C

    1 ceph or 2 ceph clusters for fast and slow pools?

    Hi all, are there any recommendations for configuring pools with differing goals? I am setting up two pools with opposite goals: a fast NVMe pool for VMs and a slow CephFS using spinning drives. Considering the different objectives, I was thinking of setting up two Ceph clusters. Both would be...
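
    For reference, the alternative that often comes up in these threads is a single cluster with device-class-based CRUSH rules instead of two clusters; a minimal sketch, with made-up rule and pool names:

      # One rule per device class, then point each pool at its rule.
      ceph osd crush rule create-replicated fast-nvme default host nvme
      ceph osd crush rule create-replicated slow-hdd  default host hdd
      ceph osd pool set vm-pool     crush_rule fast-nvme
      ceph osd pool set cephfs_data crush_rule slow-hdd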
  3. C

    Ceph unavailable from single node

    Hi all, I have a 3-node cluster with Ceph storage. I want to configure the Ceph cluster to keep working with only one node (in the event of a double failure). Currently Ceph continues working correctly after the failure of one node, but as soon as two nodes are down, Ceph becomes unavailable and you...
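
    Worth noting: the usual blocker here is monitor quorum rather than the pool settings, since with three monitors two nodes down means no quorum and no I/O at all. A hedged sketch of the pool side only (the pool name is a placeholder, and min_size 1 is generally discouraged because it risks data loss):

      ceph osd pool set vm-pool size 3       # keep three copies, one per node
      ceph osd pool set vm-pool min_size 1   # allow I/O with a single surviving copy (risky)
      ceph quorum_status                     # shows which mons still form a quorum, if any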
  4. L

    Proxmox Ceph multiple roots that have the same host item but different weights

    I'm trying to create two Ceph pools across 3 nodes (Site A) and 3 nodes (Site B) with two different rules. The first rule is a stretched Ceph rule spanning Site A and Site B, and the second rule is a standalone rule on Site A only. Each of the 6 nodes has 3 OSDs, so there are 18 OSDs in total. I'm...
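
    Because the standard commands only place a host under one root, this kind of layout is usually done by editing the CRUSH map directly; a rough sketch of the round-trip (the actual extra root, host weights and rules still have to be written into the text file by hand):

      ceph osd getcrushmap -o crushmap.bin        # dump the compiled map
      crushtool -d crushmap.bin -o crushmap.txt   # decompile to text and add the second root/rules
      crushtool -c crushmap.txt -o crushmap.new   # recompile
      ceph osd setcrushmap -i crushmap.new        # inject the edited map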
  5. B

    Mini PC Proxmox cluster with ceph

    Hello, I am trying to create a high-availability Proxmox cluster with three nodes and three SSDs, but I can't seem to get it working. The monitoring on two of the three nodes isn't working, and I cannot actually create a VM on the share...
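
    A hedged starting point for the monitor side, assuming the cluster was set up with Proxmox's own tooling (the node name is a placeholder):

      ceph -s                           # shows which monitors are in quorum and which are missing
      systemctl status ceph-mon@pve2    # check the mon service on an affected node
      pveceph mon destroy pve2          # if it never came up cleanly, remove it...
      pveceph mon create                # ...and recreate it on that node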
  6. F

    Swapping Boot Drives

    I'm operating a Dell PowerEdge R740 server as a Ceph node. The current system boot configuration uses a single SAS SSD for the OS drive. I'm planning to migrate to a Dell BOSS M.2 SATA drive via PCIe adapter (two drives as RAID 1) (https://www.ebay.com/itm/296760576190) to optimize the...
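
    Whatever boot media ends up being used, the Ceph side of a planned OS-drive swap is usually just a maintenance window; a minimal sketch:

      ceph osd set noout     # stop Ceph from rebalancing while the node is offline
      # ...power down, swap the boot drives, reinstall/restore, rejoin the cluster...
      ceph osd unset noout   # let recovery settle once the OSDs are back
      ceph -s                # confirm HEALTH_OK before touching the next node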
  7. N

    Ceph Reef for Windows x64 Install Leads to Automatic Repair Mode

    Hey everyone, I’m running into a serious issue while trying to install Ceph Reef (Windows x64) on my Windows laptop. The installation seems to go smoothly until it prompts me to restart the PC. However, after the restart, my computer goes into automatic repair mode, and I can't seem to get it...
  8. F

    Advice on configuring a cluster w/o an external NAS

    I have 3 HPE ProLiant DL360 Gen10 servers that I would like to configure as a 3-node Proxmox cluster. Each node has 6 x 2TB data drives, for a total of 12TB per node, plus 2 system drives in a mirror. Most configurations I have seen rely on an external NAS as the shared storage space. I am especially...
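
    For reference, the usual answer to "shared storage without a NAS" on a 3-node cluster is Ceph on exactly those data drives; a hedged sketch with placeholder device names and subnet (run per node where noted):

      pveceph install                        # on every node
      pveceph init --network 10.10.10.0/24   # once, on the first node (subnet is a placeholder)
      pveceph mon create                     # on each node that should run a monitor
      pveceph osd create /dev/sdb            # repeat per data drive, per node
      pveceph pool create vm-pool            # replicated pool usable as shared VM storage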
  9. t.lamprecht

    Ceph 19.2 Squid Stable Release and Ceph 17.2 Quincy soon to be EOL

    Hi Community! The recently released Ceph 19.2 Squid is now available on all Proxmox Ceph repositories (test, no-subscription and enterprise) to install or upgrade. Upgrades from Reef to Squid: You can find the upgrade how-to here: https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid New...
  10. U

    CEPH cluster analysis and improvement

    Howdy, I have a CEPH cluster built on a former vSAN cluster. Right now, there are three nodes, although I do have three more identical servers available. CEPH is slow and I know that I am not getting any new hardware, so I would like to make it run as well as possible. Hosts are BL460G9 with...
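
    Before tuning anything, it usually helps to put numbers on "slow"; a hedged baseline measurement, with a placeholder pool name:

      rados bench -p vm-pool 30 write --no-cleanup   # raw cluster write throughput and latency
      rados bench -p vm-pool 30 rand                 # random-read pass against the same objects
      rados -p vm-pool cleanup                       # remove the benchmark objects afterwards
      ceph osd perf                                  # spot per-OSD latency outliers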
  11. D

    Ceph problem

    Hello, we have a Proxmox server cluster with CEPH storage. Today one of the nodes (1/3) crashed; we rebooted it physically, but one of the VMs no longer boots because its disk is not visible in the CEPH pool. With the command rbd ls "poolname" the disk doesn't appear. Any ideas?
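
    A few hedged checks that usually come up for a disappearing RBD image (the pool name is the one from the post, everything else is generic):

      rbd ls -l poolname                        # long listing, in case the short one is misleading
      rados -p poolname ls | grep rbd_header    # header objects of every image actually in the pool
      rbd trash ls poolname                     # the image may have been moved to the RBD trash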
  12. F

    Questions regarding our first Proxmox / CEPH installation

    Hello, I am currently in the middle of installing our first cluster with Proxmox and still have a question or two. Hardware specs: 6 servers, each with 2x AMD Epyc 9124, 768GB RAM, 6x 3.84TB NVMe SSD, 2x 2-port 25G NIC. Workload: ~100-120 Windows/Linux VMs. On the servers I have...
  13. J

    Hyper-converged 100GbE MTU settings and best practices...

    Hello, we recently built a 3-node PVE hyper-converged cluster with Ceph and I was wondering about the following: for the MTU size on interfaces, does it matter where it is applied? I believe I've found that on a Linux bridge it is unnecessary, as it inherits the MTU from the bond, but what about the...
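
    A quick, hedged way to check where a non-default MTU actually took effect and whether it holds end to end (interface names and the peer address are placeholders):

      ip link show bond0 | grep -o 'mtu [0-9]*'   # effective MTU on the bond
      ip link show vmbr0 | grep -o 'mtu [0-9]*'   # and on the bridge that sits on top of it
      ping -M do -s 8972 10.10.10.12              # 9000-byte frames minus 28 bytes of headers;
                                                  # "message too long" means something in the path is smaller
      # persist the value with an "mtu 9000" line per interface in /etc/network/interfaces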
  14. P

    Proxmox Ceph Missing RGW Module

    I've just installed radosgw with apt install radosgw on my Proxmox Ceph 18.2.4 Reef cluster, as well as the Ceph dashboard, in the hope of managing the object storage from it. However, the dashboard throws a number of internal errors because it requires the rgw module to be enabled. When checking the...
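
    A hedged sketch of the checks usually suggested for this dashboard error on Reef (whether enabling the mgr rgw module alone satisfies the dashboard is an assumption; the credentials step is the documented one):

      ceph mgr module ls | grep -i rgw     # is the rgw mgr module present and enabled?
      ceph mgr module enable rgw           # enable it if it is only listed as available
      ceph dashboard set-rgw-credentials   # let the dashboard configure its RGW access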
  15. itNGO

    Timeframe for Ceph 19.2?

    Any guess for Ceph 19.2 implementation on the horizon?
  16. F

    Connection refused from 2 of 4 nodes on a cluster

    Hi, I get the error message "595 Connection refused" when I try to manage 2 of the nodes in a 4-node cluster. This is a production cluster and every node comes with one dedicated dual-port 10-gig NIC, with one port for HA and one for Ceph. The management network is on the default 1-gig NIC. Checking the logs, I see...
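
    For what it's worth, "595 Connection refused" in the GUI usually points at the proxy/certificate layer between nodes rather than at Ceph; a hedged first pass on one of the affected nodes:

      pvecm status                          # is the node actually in corosync quorum?
      systemctl status pveproxy pvedaemon   # are the GUI backends running at all?
      pvecm updatecerts --force             # regenerate and redistribute the node certificates
      systemctl restart pveproxy            # pick up the new certificates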
  17. A

    Ceph 2+4 layout considerations

    Hi y'all, I'm thinking about creating a Ceph pool with an EC 2+4 scheme. Even though I did intensive Google research, I could not find any reported experience with that. My idea is this: the Ceph cluster is spread across two fault domains (latency < 1 ms, 40 disks on each side, all NVMe SSDs, lots of CPU...
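
    For concreteness, a minimal sketch of what a 2+4 profile and pool look like (names, PG count and the host failure domain are assumptions, not taken from the thread):

      ceph osd erasure-code-profile set ec24 k=2 m=4 crush-failure-domain=host
      ceph osd erasure-code-profile get ec24
      ceph osd pool create ec24-pool 128 128 erasure ec24
      ceph osd pool set ec24-pool allow_ec_overwrites true   # required for RBD/CephFS on EC pools
      # placing exactly 3 of the 6 chunks in each fault domain needs a custom CRUSH rule on top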
  18. S

    Proxmox & Ceph layout

    We have two data centers (stretched cluster) with 4 servers each in a VMware vSAN config (with RAID 1 mirroring), each with 20 disks / 60TB total raw capacity. We would like to convert this cluster to Proxmox with Ceph as the storage. I am listing out only the current vSAN storage layout...
  19. B

    Are the Proxmox OS drives an IOPS-limiting factor?

    Hi, we want to procure a 3-node Proxmox cluster with Ceph. The configuration of each node would be as follows: Memory: 384GB; CPU: 40 cores (80 threads); OS: 2 enterprise SSD drives in RAID1; Ceph OSD: 5 x enterprise mixed-use NVMe SSD drives. (All will be connected to the on-board storage...
  20. W

    FRR OpenFabric creating a loop (?) in full mesh Ceph setup after reconnecting the interface

    Hello, I have 4 Proxmox nodes with 2x 10G interfaces. I am testing out a configuration without a switch. I have followed the tutorial at https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server. The only difference between my setup and the setup in the tutorial is the addition of the...
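
    A hedged set of checks for a suspected loop in the OpenFabric mesh after an interface comes back (the routes to look at are whatever the wiki setup assigned):

      vtysh -c 'show openfabric topology'    # does each node still see only the expected paths?
      vtysh -c 'show openfabric interface'   # circuit state of the point-to-point links
      vtysh -c 'show ip route'               # watch for routes flapping between the two mesh ports
      systemctl status frr                   # and whether fabricd restarted cleanly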