Ceph

  1. F

    SDN / Ceph Private Network

    Hello! With the major SDN enhancements introduced in Proxmox 9.0, is it now recommended to use these built-in SDN features to separate Ceph network traffic, rather than relying on traditional VLANs configured through a switch or OPNsense?
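
    Whichever layer provides the isolation, Ceph itself only sees the resulting subnets. A minimal sketch of the relevant /etc/pve/ceph.conf section, with hypothetical addresses:

        [global]
            # front-side traffic: clients and monitors (hypothetical subnet)
            public_network  = 10.10.10.0/24
            # back-side traffic: OSD replication and heartbeats (hypothetical subnet)
            cluster_network = 10.10.20.0/24

    Whether those subnets ride on an SDN VNet or a switch-side VLAN is transparent to Ceph.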
  2. P

    [SOLVED] Ceph not working / showing HEALTH_WARN

    Hi, I'm fairly new to Proxmox and only just set up my first actual cluster made of 3 PCs/nodes. Everything seemed to be working fine and showed up correctly, and I started to set up Ceph for shared storage. I gave all 3 PCs an extra physical SSD for the shared storage, in addition to the Proxmox...
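
    A first diagnostic step for any HEALTH_WARN is to ask Ceph which check is actually failing:

        # show the specific warning(s) behind HEALTH_WARN
        ceph health detail
        # overall state: mons, OSDs up/in, PG status
        ceph -s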
  3. R

    Impact of Changing Ceph Pool hdd-pool size from 2/2 to 3/2

    Scenario: I have a Proxmox VE 8.3.1 cluster with 12 nodes, using Ceph as distributed storage. The cluster consists of 96 OSDs, distributed across 9 servers with SSDs and 3 with HDDs. Initially, my setup had only two servers with HDDs, and now I need to add a third node with HDDs so the pool can...
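
    The change itself is two commands, sketched here with the pool name from the title; Ceph then backfills a third copy of every object, so expect recovery traffic and increased raw usage:

        # go from 2 to 3 replicas; keep min_size at 2
        ceph osd pool set hdd-pool size 3
        ceph osd pool set hdd-pool min_size 2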
  4. tcabernoch

    CEPH cache disk

    Hello folks. Sorry, this is long-ish; it's been a saga in my life... I'm not clear on exactly how to deploy a cache disk with Ceph. I've read a lot about setting up Ceph, from Proxmox and the Ceph site, browsed some forum posts, and done a bunch of test builds. I'm an old VMware guy...
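
    The usual modern answer is not a cache tier but placing the BlueStore DB/WAL on the faster device when each OSD is created. A sketch via pveceph, with hypothetical device names (check `pveceph osd create --help` on your version for the size options):

        # HDD holds the data; the NVMe device holds RocksDB and the WAL
        pveceph osd create /dev/sdb --db_dev /dev/nvme0n1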
  5. M

    How to improve RDP user experience (Proxmox + Ceph NVMe + Mellanox fabric)

    Hi everyone, I’d like to ask for advice on improving user experience in two Windows Terminal Servers (around 15 users each, RDP/UDP). After migrating from two standalone VMware hosts (EPYC 9654, local SSDs) to a Proxmox + Ceph cluster, users feel sessions are slightly slower or less...
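
    One knob often tuned for latency-sensitive Windows guests on Ceph is the disk attachment; a hedged sketch (VMID and volume name are hypothetical, and writeback caching trades crash-safety for latency):

        # per-disk IO thread with the single-queue VirtIO SCSI controller
        qm set 101 --scsihw virtio-scsi-single
        qm set 101 --scsi0 ceph-pool:vm-101-disk-0,iothread=1,cache=writeback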
  6. B

    PVE Full Mesh reconfiguration - Ceph (got timeout 500)

    Hello everyone, I have a 3-node PVE cluster that I am using for testing and learning. On this PVE cluster, I recently created a Ceph pool and had 4 VMs residing on it. My PVE cluster was connected to my Mellanox 40GbE switch, but I wanted to explore reconfiguring it for full mesh, which I...
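
    A "got timeout (500)" in the GUI after a network change usually means the monitors are no longer reachable at their configured addresses. The settings to reconcile, assuming a hypothetical mesh subnet:

        # /etc/pve/ceph.conf must match the new mesh network
        [global]
            public_network  = 10.15.15.0/24
            cluster_network = 10.15.15.0/24
            mon_host = 10.15.15.50 10.15.15.51 10.15.15.52

    Monitors bind their address at creation time, so moving them to a new subnet generally means destroying and recreating them one at a time.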
  7. M

    mount cephfs error

    Hey, I get an error when I try to mount CephFS on one of my LXCs:

        modprobe: FATAL: Module ceph not found in directory /lib/modules/6.14.11-3-pve
        failed to load ceph kernel module (1)
        mount error: ceph filesystem not supported by the system

    I have followed the steps from the Ceph documentation...
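
    Containers share the host kernel, so the module has to be loaded on the PVE host, not inside the LXC. A minimal sketch:

        # on the Proxmox host (not in the container):
        modprobe ceph
        # make it persistent across reboots
        echo ceph > /etc/modules-load.d/ceph.conf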
  8. B

    Empty Ceph pool still uses storage

    I have just finished migrating my VMs on the cluster from an HDD pool to an SSD pool, but now that there are no VM disks or other Proxmox-related items left on the pool, it is still using ~7.3 TiB of what I assume is orphaned data. This cluster is currently running PVE 8 with Ceph 18, but has been...
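
    A few commands that usually locate such leftovers, using the pool name from the post:

        # any RBD images still in the pool, and their actual usage
        rbd ls -p hdd-pool
        rbd du -p hdd-pool
        # deleted-but-not-purged images sit in the RBD trash
        rbd trash ls -p hdd-pool
        # per-pool usage as Ceph accounts it
        ceph df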
  9. S

    Effects of editing /etc/pve/storage.cfg monhosts on live datastores

    I'm running a PVE 8.4 cluster. The datastore our VMs use is an external Ceph cluster running 19.2.2. Yesterday, I found a "design flaw" in our setup. In /etc/pve/storage.cfg, there's this:

        rbd: pve
            content images
            krbd 0
            monhost mon.ourdomain.com
            pool pve
            username blah...
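
    For comparison, the usual shape lists several monitor addresses so that one dead mon (or one stale DNS record) doesn't take the datastore definition down with it; a sketch with hypothetical IPs and username:

        rbd: pve
            content images
            krbd 0
            monhost 10.0.0.11 10.0.0.12 10.0.0.13
            pool pve
            username pve-client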
  10. A

    CEPH Experimental POC - Non-Prod

    I have a 3 node cluster. It has a bunch of drives: 1 TB cold rust, a 512 GB warm SATA SSD, and three 512 GB non-PLP Gen3 NVMe drives (1 Samsung SN730 and 2 Inland TN320). I know not to expect much - this is pre-prod - the plan is to eventually get PLPs next year. A 10Gb Emulex CNA is working very well with FRR...
  11. T

    Ceph latency with cephfs volume for Elasticsearch with replicas

    We've been using Proxmox with Ceph for years. Our typical cluster is:
    - Proxmox 8.3.5, Ceph 18.2.4
    - 10 servers, 3 enterprise SSD OSDs per server
    - 20 Gb/s between the servers, 10 Gb/s between VMs and the Ceph public network for CephFS mounts
    - 1 pool for VM deployment
    - 2 subvolumes/pools; 1 Elastic data...
  12. G

    CPU Recommendation - 3-Node Cluster

    Hi everyone, I'm currently planning to build a cluster with 3 nodes and Ceph as storage, with a 10G connection. It will run a mix of workloads: - various Windows VMs (DC, FS, IIS, etc.) - Linux VMs (Tailscale router, several Docker VMs, Plex, Loxberry, etc.) - LXC (Cloudflare DDNS, paperless...
  13. A

    Ceph NVMe-oF gateways with PVE

    I'm using PVE on three nodes and have Ceph installed as distributed storage on these nodes. I now want to access the Ceph storage from other servers as well, via NVMe-oF. So what I have to do is extend my node configuration so that the PVE nodes also act as NVMe-oF targets. In principle...
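
    One way to sketch this without extra gateway software is to map an RBD image with krbd and export it through the kernel NVMe target over configfs. All names, NQNs, and addresses below are hypothetical, and this sidesteps the upstream ceph-nvmeof gateway (which, as far as I know, is built around cephadm rather than PVE's tooling):

        rbd map mypool/myimage                  # appears as /dev/rbd0
        modprobe nvmet
        modprobe nvmet-tcp
        cd /sys/kernel/config/nvmet
        mkdir subsystems/nqn.2025-01.io.example:rbd0
        echo 1 > subsystems/nqn.2025-01.io.example:rbd0/attr_allow_any_host
        mkdir subsystems/nqn.2025-01.io.example:rbd0/namespaces/1
        echo /dev/rbd0 > subsystems/nqn.2025-01.io.example:rbd0/namespaces/1/device_path
        echo 1 > subsystems/nqn.2025-01.io.example:rbd0/namespaces/1/enable
        mkdir ports/1
        echo tcp  > ports/1/addr_trtype
        echo ipv4 > ports/1/addr_adrfam
        echo 10.0.0.1 > ports/1/addr_traddr
        echo 4420 > ports/1/addr_trsvcid
        ln -s /sys/kernel/config/nvmet/subsystems/nqn.2025-01.io.example:rbd0 ports/1/subsystems/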
  14. P

    CEPH Erasure Coded Configuration: Review/Confirmation

    First, let me contextualize our setup: we have a 3-node cluster, where we will be using Ceph for storage hyperconvergence. We are familiarizing ourselves with Ceph and would love to have someone more experienced chime in. All of our storage hardware is SSDs (24x 2TB NVMe, 8 per server)...
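
    For reference, the canonical 3-node EC shape is k=2, m=1 with a host failure domain; a sketch with hypothetical names (k=2/m=1 only tolerates one host down, which is why many 3-node clusters stay with 3-way replication):

        ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host
        ceph osd pool create ecpool erasure ec-2-1
        # or via PVE tooling (verify the syntax on your version):
        # pveceph pool create ecpool --erasure-coding k=2,m=1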
  15. C

    [SOLVED] Ceph broken after iSCSI multipath

    We are currently testing Proxmox as a VMware replacement. I did the training courses and, based on them, was able to build a cluster with 3 nodes and Ceph without major problems. Today I wanted to add our iSCSI NAS, but that didn't quite work out. I could access the LUN and...
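
    A classic failure mode here is multipathd claiming Ceph's block devices once multipath is configured. If that is what happened, a blacklist sketch for /etc/multipath.conf (patterns are illustrative; match them to your actual devices):

        blacklist {
            devnode "^rbd[0-9]*"
            # optionally also blacklist the local OSD disks, e.g. by WWID
        }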
  16. K

    Proxmox deployment

    Hello everyone. Quick backstory: we have 2 main programs, both of which use SQL Server on Windows. We used Hyper-V, but it wasn't very fast, nor secure enough (periodic backups). Both of these systems are crucial for our company, so the best way would be to have some sort of continuity of...
  17. S

    Unattended install of Ceph using pveceph

    I am attempting to write an Ansible playbook that sets up my Proxmox cluster. One issue I'm running into is that pveceph install doesn't have a non-interactive option. I've tried setting DEBIAN_FRONTEND=noninteractive on the task like this:

        - name: Install Ceph
          environment:
            DEBIAN_FRONTEND...
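
    A workaround people commonly reach for is piping the confirmation in and selecting the repository explicitly; a sketch to wrap in ansible.builtin.shell (the --repository flag exists on recent PVE, but verify with `pveceph help install`):

        # the prompt comes from pveceph itself, so DEBIAN_FRONTEND alone may not help
        yes | pveceph install --repository no-subscription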
  18. V

    [SOLVED] PG stuck incomplete

    Hey folks, stashing this here as it's the only solution that worked for me, and I will undoubtedly need it again. Given:

        $> ceph health detail
        ...
        [WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive, 1 pg incomplete
            pg 7.188 is incomplete, acting [5,10,43] (reducing pool...
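
    For the record, the usual last-resort shape of this fix, hedged heavily because mark-complete can discard unreplicated writes; the OSD id and PG id are taken from the output above:

        # stop the acting primary for the stuck PG first
        systemctl stop ceph-osd@5
        ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-5 \
            --pgid 7.188 --op mark-complete
        systemctl start ceph-osd@5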
  19. K

    Ceph Storage question

    Hi, this seems to be the best place/forum to ask questions about Ceph :) My understanding of Ceph is that the underlying storage is OSDs, and these are distributed between nodes. Pools are then created that sit on top of the OSDs... I think OSDs are broken into PGs and PGs are assigned to pools, I think...
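
    The mapping actually runs the other way around: a pool is divided into PGs, and CRUSH maps each PG onto a set of OSDs. The whole hierarchy can be inspected directly (pool name hypothetical):

        ceph osd tree                 # OSDs and the hosts they live on
        ceph osd pool ls detail       # pools with their pg_num and size
        ceph pg ls-by-pool mypool     # each PG and the OSDs it maps to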
  20. C

    ceph Erasure Coding temporary override failure domain

    I'm recreating my Ceph cluster due to a configuration corruption. I will be reusing the same hardware. The problem is I don't have enough hard drives for two Ceph clusters, but there is enough capacity. I know that you can't change the size of an erasure-coded pool, but is there any way to override...
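
    While k and m are fixed for an EC pool, the failure domain lives in the CRUSH rule, and a pool's rule can be swapped. A sketch with hypothetical names (the k/m values must match the pool's existing profile):

        # temporary rule that only requires distinct OSDs, not distinct hosts
        ceph osd erasure-code-profile set ec-osd k=4 m=2 crush-failure-domain=osd
        ceph osd crush rule create-erasure ec-osd-rule ec-osd
        ceph osd pool set mypool crush_rule ec-osd-rule
        # later, switch back to a host-based rule the same way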