ceph

  1. CEPH: public and cluster network

    Hello all, I am adding Ceph to a 3-node cluster. On each machine, I have one 10GbE and one 1GbE link available. What would be the better way of configuring my network? - 10GbE public network, 1GbE cluster network - 10GbE cluster network, 1GbE public network - 10GbE for both networks Thank...
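    For reference, the two networks are declared in ceph.conf; a minimal sketch, with placeholder subnets, showing where each link would be assigned (the public network carries client, MON, and MDS traffic; the cluster network carries OSD replication and recovery):

    ```
    [global]
        # placeholder subnets - substitute your own
        public_network  = 192.168.10.0/24
        cluster_network = 192.168.20.0/24
    ```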
  2. Ceph freeze when a node reboots on Proxmox cluster

    Hello everyone, I’m currently facing a rather strange issue on my Proxmox cluster, which uses Ceph for storage. My infrastructure consists of 8 nodes, each equipped with 7 NVMe drives of 7.68 TB. Each node therefore hosts 7 OSDs (one per drive), for a total of 56 OSDs across the cluster...
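    As an aside, before a planned node reboot the usual precaution is to stop Ceph from starting recovery while that node's OSDs are briefly down; a sketch of the standard maintenance flags (run on any node with an admin keyring, against a live cluster):

    ```
    # prevent down OSDs from being marked "out" during the maintenance window
    ceph osd set noout
    # ... reboot the node, wait for its OSDs to rejoin ...
    ceph osd unset noout
    ```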
  4. 3 servers, 3 cables, 1 ceph network?

    1. Can you take 3 servers with 4-port networking and connect them in this fashion: A - B, A - C, B - C, with 3 cables, and have redundant networking? (Literal redundancy would require 6 cables and eat all ports, I think.) 2. Are there three nets in the end? (And then possibly connect them all together...
  5. Proxmox Cluster 3 nodes, Monitors refuse to start

    Hi all, I am facing a strange issue. After using a Proxmox PC for my self-hosted apps, I decided to play around and create a cluster to dive deeper into the HA topics. I downloaded the latest ISO and built up a cluster from scratch. My cluster works, I can see every node, my Ceph storage says...
  6. Question - Run PVE Ceph & Non-PVE Ceph within the same cluster?

    Hi there, we're planning on migrating one of our Ceph platforms to new Ceph versions; this currently runs 'regular' Ceph from the official download.ceph.com repository on Ubuntu 22.04. Now we'd like to swap this out for Proxmox VE Ceph, primarily because of the extra bug-fixes & cherry...
  7. Moving ceph IPs from one interface to another.

    Because someone (hint: me) designed the Ceph network poorly, I would like to make changes to it. For that, I need to move it from running a separate public/cluster network on a 2x25GbE bond to my primary interface, which is a 2x10GbE bond. While it will be a slower network, it's a temporary...
  8. Proxmox 9 in a 3-node cluster with CEPH: does VM migration use the VMBR0 network? How to get faster VM migration speed?

    Hi all, is it correct that by default Proxmox 9 in a 3-node cluster configuration with CEPH uses the VMBR0 network for VM migration? The current node network config does use a separate network for CEPH Cluster (10.10.10.x/24), CEPH Public (10.10.20.x/24), COROSYNC (172.16.1.x/24) and VMBR0...
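    By default, PVE migrates over the network used when the cluster was joined, which is often the vmbr0 subnet; a dedicated migration network can be set cluster-wide in /etc/pve/datacenter.cfg. A sketch, reusing the Ceph public subnet from the post as a placeholder:

    ```
    # /etc/pve/datacenter.cfg
    migration: secure,network=10.10.20.0/24
    ```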
  9. SDN / Ceph Private Network

    Hello, With the major SDN enhancements introduced in Proxmox 9.0, is it now recommended to use these built-in SDN features to separate Ceph network traffic, rather than relying on traditional VLANs configured through a switch or OPNsense?
  10. [SOLVED] Ceph not working / showing HEALTH_WARN

    Hi, I'm fairly new to Proxmox and only just set up my first actual cluster made of 3 PCs/nodes. Everything seemed to be working fine and showed up correctly and I started to set up Ceph for a shared storage. I gave all 3 PCs an extra physical SSD for the shared storage additional to the Proxmox...
  11. Impact of Changing Ceph Pool hdd-pool size from 2/2 to 3/2

    Scenario I have a Proxmox VE 8.3.1 cluster with 12 nodes, using CEPH as distributed storage. The cluster consists of 96 OSDs, distributed across 9 servers with SSDs and 3 with HDDs. Initially, my setup had only two servers with HDDs, and now I need to add a third node with HDDs so the pool can...
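    Going from size 2 to size 3 (keeping min_size 2) stores a third copy of every object, so raw usage on the HDD device class grows by 50%; a quick sketch of the arithmetic, using a made-up figure for the pool's stored data:

    ```python
    def raw_usage(stored_tib: float, size: int) -> float:
        """Raw capacity consumed by a replicated pool: logical data x replica count."""
        return stored_tib * size

    stored = 40.0  # TiB of logical data in the pool (example figure)
    before = raw_usage(stored, 2)  # raw TiB at size 2
    after = raw_usage(stored, 3)   # raw TiB at size 3
    print(before, after, after - before)
    ```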
  12. CEPH cache disk (tcabernoch)

    Hello folks Sorry. This is long-ish. It's been a saga in my life ... I'm not clear on exactly how to deploy a cache disk with CEPH. I've read a lot of stuff about setting up CEPH, from Proxmox and the CEPH site. Browsed some forum posts. Done a bunch of test builds. I'm an old VMware guy...
  13. How to improve RDP user experience (Proxmox + Ceph NVMe + Mellanox fabric)

    Hi everyone, I’d like to ask for advice on improving user experience in two Windows Terminal Servers (around 15 users each, RDP/UDP). After migrating from two standalone VMware hosts (EPYC 9654, local SSDs) to a Proxmox + Ceph cluster, users feel sessions are slightly slower or less...
  14. PVE Full Mesh reconfiguration - Ceph (got timeout 500)

    Hello everyone, I have a 3-node PVE cluster that I am using for testing and learning on. On this PVE cluster, I recently created a CEPH pool and had 4 VMs residing on it. My PVE cluster was connected to my Mellanox 40GbE switch but I wanted to explore reconfiguring it for full mesh, which I...
  15. mount cephfs error

    Hey, I get an error when I try to mount cephfs in one of my LXCs: modprobe: FATAL: Module ceph not found in directory /lib/modules/6.14.11-3-pve - failed to load ceph kernel module (1) - mount error: ceph filesystem not supported by the system. I have followed the steps from the ceph documentation...
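    A container shares the host's kernel, so the ceph module has to be available and loaded on the PVE host itself, not inside the LXC; a sketch of the usual host-side check:

    ```
    # on the Proxmox host, not in the container
    modprobe ceph
    lsmod | grep ceph
    ```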
  16. Empty Ceph pool still uses storage

    I have just finished migrating my VMs on the cluster from an HDD pool to an SSD pool, but now that there are no VM disks or other Proxmox-related items left on the pool, it is still using ~7.3 TiB of what I assume is orphaned data. This cluster is currently running PVE 8 with Ceph 18, but has been...
  17. Effects of editing /etc/pve/storage.cfg monhosts on live datastores

    I'm running a PVE 8.4 cluster. The datastore our VMs use is an external Ceph cluster running 19.2.2. Yesterday, I found a "design flaw" in our setup. In /etc/pve/storage.cfg, there's this: rbd: pve content images krbd 0 monhost mon.ourdomain.com pool pve username blah...
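    For comparison, an RBD entry in storage.cfg normally lists every monitor address rather than a single DNS name, so PVE can fail over if one MON is unreachable; a sketch with placeholder addresses and username:

    ```
    rbd: pve
        content images
        krbd 0
        monhost 192.0.2.11 192.0.2.12 192.0.2.13
        pool pve
        username admin
    ```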
  18. CEPH Experimental POC - Non-Prod

    I have a 3 node cluster. Has a bunch of drives, 1TB cold rust, 512 warm SataSSD And three 512 non-PLP NVMe that are Gen3. (1 Samsung SN730 and 2 Inland TN320) I know not to expect much - this is pre-prod - plan is to eventually get PLPs next year. 10Gb Emulex CNA is working very well with FRR...
  19. Ceph latency with cephfs volume for Elasticsearch with replicas

    We've been using Proxmox with Ceph for years. Our typical cluster is: Proxmox 8.3.5, Ceph 18.2.4, 10 servers, 3 enterprise SSD OSDs per server, 20Gbs between the servers, 10Gbs between VMs and the Ceph public network for cephfs mounts, 1 pool for VM deployment, 2 subvolumes/pools: 1 Elastic data...
  20. CPU recommendation - 3-node cluster

    Hello everyone, I am currently planning to build a cluster with 3 nodes and Ceph as storage, connected via 10G. It will run a variety of workloads: - various Windows VMs (DC, FS, IIS, etc.) - Linux VMs (Tailscale router, several Docker VMs, Plex, Loxberry, etc.) - LXC (Cloudflare DDNS, paperless...