ceph

  1. Upgrade cluster from 7.3-4 to 8.X with Ceph storage Configuration

    Hello, I am running 7.3-4 in a cluster with 3 nodes and plan to upgrade to 8.X. I know the full guide: https://pve.proxmox.com/wiki/Upgrade_from_7_to_8#Actions_step-by-step but I can't find an answer to all my questions: should I first remove a server from the Ceph cluster ...
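
    For the Ceph side of a rolling upgrade, the usual precaution is to stop the cluster from rebalancing while each node reboots. A minimal sketch of that flag handling (not the full upgrade procedure):

      # keep Ceph from marking OSDs out and rebalancing while a node reboots
      ceph osd set noout

      # ... upgrade and reboot one node at a time, wait for HEALTH_OK ...

      # re-enable normal recovery once all nodes are upgraded
      ceph osd unset noout
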
  2. [SOLVED] ceph - anyone had experience using mon_osd_auto_mark_in=true?

    I have been having fun with my network; sometimes this results in an OSD being marked down and out. My understanding is: after some period, once the network is back, the OSD should be marked up automatically (I am unclear how long this takes), but it will never be marked as in automatically...
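
    For reference, both behaviours are controlled by monitor options that can be inspected and changed at runtime; the values below are only a sketch, not a recommendation:

      # how long an OSD may stay down before the monitors mark it out (default 600 s)
      ceph config get mon mon_osd_down_out_interval

      # let booting OSDs be marked in automatically
      ceph config set mon mon_osd_auto_mark_in true
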
  3. Proxmox multiple Ceph for HA

    Example - I have 3 servers and Ceph for Proxmox HA, using 1 disk per server. Now I have to add a new VM on its own isolated storage, on 1 new disk per server, but I want that VM to also have the HA feature. Is it possible to have VMs working on two different Ceph storages and still work with HA...
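
    One way to keep such disks isolated inside the same Ceph cluster is a dedicated CRUSH rule plus pool, which HA can then use like any other storage. A rough sketch, assuming the new OSDs get a custom device class called "isolated" (all names and OSD IDs are illustrative):

      # tag the new OSDs with their own device class
      ceph osd crush rm-device-class osd.3 osd.4 osd.5
      ceph osd crush set-device-class isolated osd.3 osd.4 osd.5

      # replicated rule that only picks OSDs of that class, one replica per host
      ceph osd crush rule create-replicated isolated-rule default host isolated

      # pool on that rule, registered as a Proxmox storage so HA guests can use it
      pveceph pool create isolated-pool --crush_rule isolated-rule --add_storages
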
  4. Advice on first attempt at proxmox & ceph hybrid cluster

    Hi All, So I've recently decided to try Proxmox. I'll be honest; it was primarily the Ceph integration that initially sold it to me for the following reasons: In production we use/pay license fees for the Hitachi VSP, and to be honest, it's been very stable over the last 2/3 years. However...
  5. Ceph - feasible for Clustered MSSQL?

    Looking to host a clustered MSSQL DB for an enterprise environment, using Ceph to remove a NAS/SAN as a single point of failure. Curious about performance; the requirements are likely not very demanding, and there is no multi-writer outside the cluster itself. However... as I understand it, writes can be very bad with...
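
    To get a feel for small synchronous writes on the intended pool before committing a database to it, a benchmark sketch (pool name is illustrative):

      # 60 s of 4 KiB writes with 16 threads; keep the objects for the read test
      rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup

      # random reads against those objects, then remove them
      rados bench -p testpool 60 rand -t 16
      rados -p testpool cleanup
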
  6. No OSDs in CEPH after recreating monitor

    Hello! I'm trying to create a disaster recovery plan for our PVE cluster including CEPH. Our current config involves three monitors on our three servers. We'll be using three monitors and standard pool configuration (3 replicas). I'm trying to set up a manual for deleting monitor configuration...
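
    For the OSD part of such a recovery manual: the on-disk OSDs can usually be re-detected and restarted without touching their data. A sketch of the per-node steps:

      # show the OSD volumes ceph-volume knows about on this node
      ceph-volume lvm list

      # recreate the runtime mounts/units and start all local OSDs
      ceph-volume lvm activate --all
      systemctl restart ceph-osd.target
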
  7. Ceph Dashboard only shows default pool usage under “Object” -> “Overview” (herzkerl)

    Hello everyone, I’m running a Proxmox cluster with Ceph and noticed something odd in the web interface. Under “Object” -> “Overview”, the “Used capacity” metric appears to show only the data stored in the default pool, while ignoring other pools (including erasure-coded pools). It shows only...
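
    A quick way to cross-check what the dashboard reports is the per-pool accounting on the CLI (the second command only applies if the “Object” section refers to the RADOS Gateway):

      # cluster-wide and per-pool usage, including erasure-coded pools
      ceph df detail

      # usage as the object gateway accounts it, per bucket
      radosgw-admin bucket stats
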
  8. Ceph deep-scrubbing performance optimization

    We are using Ceph on three nodes (10G). There is one HDD pool (3 OSDs per node) and one NVMe pool; the NVMes are also used for the HDD WALs and DBs. For each OSD, osd_max_scrubs is set to 1. During the deep-scrubbing phases (I have limited this to some hours during the night) the cluster is...
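
    For reference, the usual knobs for confining and throttling (deep) scrubs look roughly like this; the values are illustrative, not recommendations:

      # restrict scrubbing to a nightly window
      ceph config set osd osd_scrub_begin_hour 23
      ceph config set osd osd_scrub_end_hour 6

      # slow scrub I/O down so client traffic on the HDDs suffers less
      ceph config set osd osd_scrub_sleep 0.2
      ceph config set osd osd_max_scrubs 1

      # stretch the deep-scrub interval (seconds; 14 days here)
      ceph config set osd osd_deep_scrub_interval 1209600
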
  9. [SOLVED] Possible bluestoreDB bug in Ceph 17.2.8

    Hi all, recently the latest version of PVE-Ceph Quincy (17.2.8) was released in the Enterprise repository. However, on another (non-PVE) Ceph storage cluster we experienced quite a nasty bug related to BluestoreDB (https://tracker.ceph.com/issues/69764), causing OSDs to crash and recover...
  10. [SOLVED] Node reboot while disk operation in Ceph

    Hello! I have another issue I'd appreciate community feedback on. One of our nodes crashed inexplicably last Friday during the migration of a VM disk from Ceph storage to a local one. I'm attaching the Ceph log on Pastebin as it's too long: https://pastebin.com/wkxHP7rt proxmox-ve: 8.3.0...
  11. Ceph - power outage and recovery

    Hello all! Recently we experienced a power outage and loss of network connectivity (a Juniper switch used by the Ceph cluster). Some Proxmox/Ceph nodes were restarted as well. Network traffic and the nodes have been restored, but our cluster is in a critical condition. On the monitors we...
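
    The first-pass triage after an event like this is usually a handful of read-only commands, roughly:

      # overall state, unhealthy PGs, stuck daemons
      ceph -s
      ceph health detail

      # which OSDs/monitors are down, and whether the monitors have quorum
      ceph osd tree
      ceph quorum_status --format json-pretty
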
  12. Ceph OSD drives disconnect when I move LXC storage to it

    Hi, I’m running Plex in an LXC container with the root disk on my local-zfs storage. However, when I try to move the storage to my ceph pool, my local OSD drives disconnect during the process. I tried doing something similar with a larger VM disk (300GB) with no issues. Likewise, when I move...
  13. [TUTORIAL] Supported Server & Components from Fujitsu / Primergy

    Hi all disclaimer: I work for Fujitsu, this is a semi-official posting but I hope it is not considered as advertising. ;) I found a couple of discussions in this forum around recommended hardware and components for Primergy servers, so maybe this will be helpful for some of you. Of course we...
  14. [SOLVED] Increase CEPH Replication during operation

    Hello, I have a cluster of 9 nodes with a default crush map, a replication (size) of 3 and a min_size of 2. Are there any points against increasing the size to 5 and min_size to 3 during operation, to tolerate more individual node failures? Enough disk space is available, I am aware of...
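
    For reference, the change itself is two online commands per pool, and the extra replicas are then created by backfill in the background (pool name is illustrative):

      # raise the replica count, then the write quorum, on a live pool
      ceph osd pool set vm-pool size 5
      ceph osd pool set vm-pool min_size 3

      # watch the backfill this triggers
      ceph -s
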
  15. [SOLVED] Network issues with Ceph after Proxmox node restart (OVS, MTU 8900)

    We are using Open vSwitch (OVS) for networking in our Proxmox VE cluster. We are experiencing a network issue after restarting a node – Ceph is not immediately ready, and HEALTH_OK status takes several minutes to appear instead of just a few seconds. Observed behavior: • Ping with MTU...
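
    With jumbo frames in play, a quick per-boot check is a don't-fragment ping at the full payload size (for MTU 8900 the ICMP payload is 8900 - 28 = 8872 bytes); interface names and addresses below are illustrative:

      # fails with "message too long" if any hop came up with a smaller MTU
      ping -M do -s 8872 -c 3 10.10.10.2

      # confirm the MTU that OVS actually applied after the reboot
      ip link show vmbr1
      ovs-vsctl list interface bond0 | grep -i mtu
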
  16. Ceph support - Not Proxmox

    There are not a whole lot of support forums on the Internet for Ceph, so if this is misplaced, I apologize. This is my little one-page platform I'm putting together: https://github.com/rlewkowicz/micro-platform I use ceph nano to launch a standalone ceph cluster...
  17. Trying Proxmox VE for the first time. Need help with the setup.

    Hi all, I have a requirement where I need to install and configure a 3-node Proxmox cluster with HCI. As part of this setup, I need to configure Ceph storage and enable High Availability (HA). Per-server configuration: 2x CPU - Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz (18 cores) with 10 RAM (32GB)...
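
    At a high level, the Ceph part of a 3-node HCI setup boils down to a few steps once the Proxmox cluster itself exists; a rough sketch (network and device names are illustrative):

      # on every node: install the Ceph packages
      pveceph install

      # on the first node only: define the Ceph network
      pveceph init --network 10.10.10.0/24

      # on every node: a monitor, a manager and the local OSDs
      pveceph mon create
      pveceph mgr create
      pveceph osd create /dev/sdb

      # once: a 3/2 replicated pool, registered as PVE storage for HA guests
      pveceph pool create vm-pool --add_storages
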
  18. Question about the network of a PVE/CEPH cluster

    Hi everyone, we are currently running a small Proxmox cluster where each node uses local storage via ZFS. Since the hardware is due for replacement, I am planning a new PVE cluster with CEPH and still have a few questions that, despite research here and via Google, I could not answer, or only partially...
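
    On the network-layout part of that question, the usual pattern is to give Ceph its own subnets, separated from Proxmox management and VM traffic; in /etc/pve/ceph.conf that ends up looking roughly like this (subnets are illustrative):

      [global]
          # client/monitor traffic (Proxmox RBD access, heartbeats)
          public_network = 10.10.10.0/24
          # OSD replication and backfill traffic, ideally on its own fast link
          cluster_network = 10.10.20.0/24
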
  19. Cephadm at Proxmox nodes

    Hello, I just want to ask: is it possible to install Ceph with cephadm on a Proxmox node, and then use RBD from that Ceph as storage for disk images / containers? I think the Ceph that is included in Proxmox is less customizable, and the Orchestrator cannot be used with Proxmox's Ceph.
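
    However the cluster is deployed, Proxmox can consume it as an external RBD storage; a sketch of the /etc/pve/storage.cfg entry (names and monitor addresses are illustrative, and the client keyring is expected under /etc/pve/priv/ceph/<storage-id>.keyring):

      rbd: external-ceph
          monhost 10.0.0.11 10.0.0.12 10.0.0.13
          pool rbd
          username admin
          content images,rootdir
          krbd 0
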
  20. [SOLVED] Unable to create OSD

    I am relatively new to Proxmox. I had Ceph installed and fully configured, and for learning purposes I tried removing it. I removed the Ceph install and rebuilt the whole Ceph setup. I have all the nodes online, but I can't create an OSD. The drives are wiped. NAME MAJ:MIN RM...
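
    A frequent cause after tearing down a previous setup is leftover LVM metadata or partition signatures that make OSD creation fail; a destructive cleanup-and-retry sketch for one disk (device name is illustrative):

      # remove old ceph-volume LVs/PVs and any remaining signatures
      ceph-volume lvm zap /dev/sdb --destroy
      wipefs -a /dev/sdb

      # then retry the OSD creation through Proxmox
      pveceph osd create /dev/sdb
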