ceph

  1. G

    Ceph on 10Gb NIC, which NVMe?

    Greetings, I have just created my account here, since I am assembling a homelab Proxmox cluster with 3 nodes, each with dual 10Gb NICs. I want to use Ceph as the backend for VM storage for learning purposes, and I also wish to migrate my own infrastructure onto Proxmox soon, as I hope it...
  2. M

    3 Server Ceph Cluster 100Gbit-Backend / 10 Gbit Frontend

    Hello, my hyperconverged Proxmox cluster with Ceph (19.2.1) has 3 servers. All have Threadripper Pro (Zen 3 or Zen 4, 16-32 cores), 256 GB RAM, and at first 1 NVMe OSD (Kioxia CM7r) per server. The frontend network has multiple redundant 10Gbit NICs for VMs and clients; the backend network is only for Ceph...
  3. W

    Network Config Suggestions w/Ceph

    I'm in need of some help from the seasoned professionals out there. I'm setting up a 5-node cluster with Ceph that will run around 30-40 VMs. Each node has two 4-port 10G NICs. I'm going to use LACP to bond one port from each NIC to create four connections on each server. However, this...
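A bond like the one described can be sketched in the Debian/Proxmox `/etc/network/interfaces` style; the interface names, address, and bridge layout below are assumptions for illustration, not taken from the thread:

```
# /etc/network/interfaces (sketch; NIC names and addresses are placeholders)
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp66s0f0    # one port from each 4-port NIC
    bond-mode 802.3ad                  # LACP; the switch ports must be in an LACP group too
    bond-miimon 100
    bond-xmit-hash-policy layer3+4     # hash per flow so traffic spreads across links

auto vmbr0
iface vmbr0 inet static
    address 192.168.10.11/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

Note that a single TCP stream still uses only one 10G link; LACP adds aggregate bandwidth and failover, not per-connection speed.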
  4. D

    Ceph networking guide

    In the docs there is only information about separating the Ceph storage and public networks, but it doesn't seem to be that easy; see e.g. https://forum.proxmox.com/threads/vm-storage-traffic-on-ceph.117137/ VLAN setup: Proxmox MGMT, Ceph storage, Ceph monitors, VMs. When copying data between VMs...
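The separation the docs describe boils down to two subnets in `ceph.conf`; the subnets below are placeholders, not from the thread:

```
# /etc/pve/ceph.conf (sketch; subnets are placeholders)
[global]
    public_network  = 10.10.30.0/24   # monitors and all client (VM disk) traffic
    cluster_network = 10.10.40.0/24   # OSD-to-OSD replication and heartbeats only
```

Monitors and VM storage traffic always use the public network; the cluster network only offloads replication between OSDs, so VM-to-storage traffic cannot be moved onto it.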
  5. B

    proxmox ceph slow ops, oldest one blocked for 2531 sec

    Hello, yesterday I updated all the hosts in my Proxmox cluster. After restarting the OSDs one by one for the new version, client I/O in my Ceph cluster almost stopped. There is no problem on the network side or with disk health. Restarting all Ceph services and hosts did not solve the...
  6. G

    10G write speed in VM | Ceph?

    Hello everyone, I am currently planning a new Proxmox cluster for my homelab. Because my firewall is supposed to run in the cluster, the cluster should have shared storage for the VM disks and HA features. The firewall cannot run as a cluster, as only one external IP (PPPoE) is available and double...
  7. M

    New proxmox cluster - config help

    Hi, I am currently installing a new Proxmox cluster to replace VMware and wanted to get some hints/pointers on the configuration. The setup is as follows: 7 hosts, 2x 128GB M.2 SSD for the OS per host, 12x 1.6TB SAS SSD per host, 4x 10Gb network interfaces. If I understand correctly, if you want shared storage...
  8. K

    Ceph MDS OOM killed on weekends

    Hi, I have a 4-node PVE cluster with CephFS deployed, and since a couple of months ago I have been getting MDS OOM kills. Sometimes the MDS is deployed on another node and gets stuck in clientreplay status, so I need to restart that MDS again to regain access to CephFS from all clients. I checked scheduled jobs or...
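One common knob in this situation is the MDS cache target; a minimal sketch, assuming the default limit is being overrun (the 8 GiB value is an example, not taken from the thread):

```
# ceph.conf sketch; the value is an example
[mds]
    mds_cache_memory_limit = 8589934592   # ~8 GiB cache target, in bytes
```

The limit is a target rather than a hard cap, so the MDS RSS can exceed it; leave headroom below the node's available RAM.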
  9. T

    Migrating Proxmox HA-Cluster with Ceph to new IP Subnet (Reup)

    Hey there, I am in the process of migrating my entire cluster, consisting of three nodes, to a new subnet. Old addresses: 10.10.20.11/24, 10.10.20.12/24, 10.10.20.13/24 New addresses: 10.10.0.10/24, 10.10.0.11/24, 10.10.0.12/24 I have already updated all necessary files, following a...
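For a migration like this, the new addresses typically have to land in several files at once; a sketch using the thread's new subnet, with hostnames assumed:

```
# /etc/pve/corosync.conf (excerpt; increment config_version when editing)
nodelist {
  node {
    name: pve1
    nodeid: 1
    ring0_addr: 10.10.0.10
  }
}

# /etc/pve/ceph.conf (excerpt)
[global]
    mon_host = 10.10.0.10 10.10.0.11 10.10.0.12
    public_network = 10.10.0.0/24
```

/etc/hosts and /etc/network/interfaces need the same treatment on every node, and Ceph monitors bound to the old addresses usually have to be destroyed and recreated rather than edited in place.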
  10. C

    Upgrade cluster from 7.3-4 to 8.x with Ceph storage configuration

    Hello, I am running 7.3-4 in a cluster with 3 nodes and I plan to do an upgrade to 8.x. I know this full guide: https://pve.proxmox.com/wiki/Upgrade_from_7_to_8#Actions_step-by-step but I can't find an answer to all my questions: should I first remove a server from the Ceph cluster...
  11. S

    [SOLVED] ceph - anyone had experience using mon_osd_auto_mark_in=true?

    I have been having fun with my network; sometimes this results in an OSD being marked down and out. My understanding is: after some period, when the network is back, the OSD should be marked up automatically (I am unclear how long this takes), but it will never be marked as in automatically...
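For reference, the option in question and the related down-to-out timer live in the monitor section; a sketch with the documented defaults noted in comments:

```
# ceph.conf sketch
[mon]
    mon_osd_auto_mark_in = true       # mark any booting OSD "in" automatically (default: false)
    mon_osd_down_out_interval = 600   # seconds a "down" OSD waits before being marked "out" (default: 600)
```

Without auto_mark_in, an OSD that comes back is marked "up" on its own but stays "out" until `ceph osd in <id>` is run (OSDs that were automatically marked out are the exception, governed by mon_osd_auto_mark_auto_out_in).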
  12. M

    Proxmox multiple Ceph for HA

    Example: I have 3 servers and Ceph for Proxmox HA, using 1 disk per server. Now I have to add a new VM with its own isolated storage on 1 new disk per server, but I want that VM to also have the HA feature. Is it possible to have VMs working on two different Ceph storages and still work with HA...
  13. P

    Advice on first attempt at proxmox & ceph hybrid cluster

    Hi all, so I've recently decided to try Proxmox. I'll be honest: it was primarily the Ceph integration that initially sold it to me, for the following reasons. In production we use/pay license fees for the Hitachi VSP, and to be honest, it's been very stable over the last 2-3 years. However...
  14. A

    Ceph - feasible for Clustered MSSQL?

    Looking to host a clustered MSSQL DB for an enterprise environment, using Ceph to remove a NAS/SAN as a single point of failure. Curious about performance; requirements are likely not very demanding, and there is no multi-writer outside the cluster itself. However, as I understand it, writes can be very bad with...
  15. D

    No OSDs in CEPH after recreating monitor

    Hello! I'm trying to create a disaster recovery plan for our PVE cluster, including Ceph. Our current config involves three monitors on our three servers. We'll be using three monitors and a standard pool configuration (3 replicas). I'm trying to write a manual for deleting the monitor configuration...
  16. herzkerl

    Ceph Dashboard only shows default pool usage under “Object” -> “Overview”

    Hello everyone, I’m running a Proxmox cluster with Ceph and noticed something odd in the web interface. Under “Object” -> “Overview”, the “Used capacity” metric appears to show only the data stored in the default pool, while ignoring other pools (including erasure-coded pools). It shows only...
  17. H

    Ceph deep-scrubbing performance optimization

    We are using Ceph on three nodes (10G). There is one HDD pool (3 OSDs per node) and one NVMe pool. The NVMes are also used for the HDD WALs and DBs. For each OSD, osd_max_scrubs is set to 1. During the deep-scrubbing phases (which I have limited to a few hours during the night) the cluster is...
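The usual throttles for this are the scrub window and the per-chunk sleep; a sketch where the hours and sleep value are examples, not taken from the post:

```
# ceph.conf sketch; hours and sleep value are examples
[osd]
    osd_max_scrubs = 1           # already set per the post
    osd_scrub_begin_hour = 1     # only start (deep-)scrubs between 01:00
    osd_scrub_end_hour = 5       #   and 05:00 local time
    osd_scrub_sleep = 0.1        # seconds slept between scrub chunks, easing HDD latency
```

osd_scrub_begin_hour/osd_scrub_end_hour only gate when scrubs may start; a deep scrub that begins inside the window can still run past its end.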
  18. D

    [SOLVED] Possible BlueStore DB bug in Ceph 17.2.8

    Hi all, recently the latest version of PVE Ceph Quincy (17.2.8)* was released within the enterprise repository. However, on another (non-PVE Ceph) storage cluster we experienced quite a nasty bug related to the BlueStore DB (https://tracker.ceph.com/issues/69764), causing OSDs to crash and recover...
  19. D

    [SOLVED] Node reboot while disk operation in Ceph

    Hello! I have another issue I'd appreciate community feedback on. One of our nodes crashed inexplicably last Friday during migration of a VM disk from Ceph storage to local storage. I'm attaching the Ceph log on Pastebin as it's too long: https://pastebin.com/wkxHP7rt proxmox-ve: 8.3.0...
  20. A

    Ceph - power outage and recovery

    Hello all! Recently we experienced a power outage and loss of network connectivity (the Juniper switch used by the Ceph cluster). Some Proxmox/Ceph nodes were restarted as well. Network traffic and nodes have been restored, but our cluster is in critical condition. On the monitors we...