ceph

  1. tuxis

    Dutch Proxmox VE day on 12 October 2023

    We are organising a Proxmox VE day in Ede, the Netherlands, on Thursday, 12 October 2023. In the morning, we will discuss how you can innovatively meet your virtualisation, storage and private cloud needs within your budget with Proxmox VE. We will also look at how to easily achieve...
  2. U

    [SOLVED] PVE not booting after Ceph installation

    I have a three-node cluster with PVE 8 and Ceph installed. The names are pfsense-1, pfsense-2 and r730. I have been running PVE for about a year and recently installed Ceph on these nodes. It worked well, but when I rebooted the r730 node, it would not boot (I waited for 15 hours). I reinstalled the...
  3. G

    OSD Crashing

    Hi guys, I would appreciate some assistance with this, as it is quite an urgent issue. This morning I woke up to a call from our client, and it seems that two NVMe OSDs on their site crashed without any apparent reason. The other, non-NVMe OSDs are running normally without any issues. Issue...
  4. H

    Proxmox ceph upgrade problems experienced

    Hi, we are upgrading Ceph Pacific (16.2.13) to Quincy (17.2.6). During the upgrade from Pacific we could not upgrade our OSD disks, even though the monitors, managers and metadata servers were already on 17.2.6. Since I could not upgrade the OSD disks, their versions remained at 16.2.13. Can anyone have...
  5. Z

    Ceph configuration Disappeared?

    I was in the middle of rebalancing after a set of OSDs went offline and came back up, only to find that about 20 minutes later the entire Ceph cluster was unresponsive. Upon looking into the issue, ceph.conf is now completely empty, and I have no clue how to proceed without having to manually...
  6. S

    Help recovering ceph

    Hello all, this is my first post on the forum. I have a Proxmox cluster installation with Ceph that is broken, and I need help recovering it. After a network device stole the IP address of one of the nodes, Ceph went down and never recovered. I tried troubleshooting with a Proxmox silver partner...
  7. S

    Ceph Help: Reduced data availability & Degraded data redundancy

    So I have been trying to get a single-node Proxmox Ceph server up all weekend. I have successfully done a fresh install of Proxmox 8.0.4 and Ceph Quincy, but after setting up Ceph with osd_pool_default_min_size = 2, osd_pool_default_size = 3 and osd_crush_chooseleaf_type = 0, I get Ceph warnings...
  8. M

    [SOLVED] CephFS max file size

    Hello everyone, I have 3 identical servers, each with a 16-core/32-thread AMD EPYC, 256GB RAM, 2x 1TB NVMe SSD in ZFS RAID1 as the OS, 4x 3.2TB NVMe SSD as Ceph storage for VM drives, and 2x 4TB HDD in RAID0 for fast local backup. These three servers are clustered together and connected with dedicated...
  9. T

    [SOLVED] Proxmox Ceph Zabbix

    In case this helps someone. Today I was testing Proxmox Ceph cluster monitoring with Ceph's built-in Zabbix monitoring module, following the Ceph documentation at https://docs.ceph.com/en/latest/mgr/zabbix/. But since I want to monitor the cluster through a proxy, I had...
  10. M

    Ceph cluster configuration to imitate RAID 1 on a single cluster node

    Good day everyone, I am currently working my way into configuring a Ceph cluster as a storage solution. I have also managed to create a simple Ceph cluster with the default configuration. 2 nodes with an identical layout: - 2 HDD disks - 2 SSD disks...
  11. C

    Struggling to create Ceph OSD on a node that is not part of the same gateway/LAN (4 nodes, different LANs)

    Hi everyone, I have 4 nodes in my Proxmox Datacenter. Two nodes are in North America and two are in Asia, so they are in different geographical locations and networks. Node 1 (10.10.100.1) Node 2 (10.10.100.2) Node 3 (10.100.200.1) Node 4 (10.100.200.2) I have successfully set up Ceph...
  12. D

    Cannot add disk to Ceph OSD in cluster

    Hello, I have a 3-node cluster, created OK. I want to create a Ceph storage cluster for important VMs. Furthermore, I have added an iSCSI drive to each host and created LVM on top of it. But it seems that Ceph does not like that drive being added as an OSD. Is this a limitation of the Ceph config in PX or Ceph...
  13. S

    Ceph OSD rebalancing

    Hi all, I have a setup of 3 Proxmox servers (7.3.6) running Ceph 16.2.9. I have 21 SSD OSDs: 12x 1.75TB and 9x 0.83TB. On these OSDs I have one pool with replication 1 (one copy). I have set pg_autoscale_mode to 'on', and the resulting PG count of the pool is 32. My problem is that the OSDs are very...
  14. A

    Confused on OVF Import & Ceph

    I have a VM on my QNAP under Virtualization Station. I have shut it down, exported it as an .OVF, and imported it using: qm importovf 300 ./turnkey-core-16.1-buster-amd64.ovf HDD_500GB --format qcow2 It imports just fine (it takes a while), and it goes onto a Ceph filesystem; however, after it is...
  15. T

    Clarification Regarding RBD Clients

    Hi Everyone, I'm about to deploy a sizable CEPH environment for proof of concept purposes and I want to consider benchmarks vs real world usage. Therefore, I have a few clarifying questions that I don't think my shiny PVE Advanced Certification answered: What counts as an RBD client in PVE...
  16. A

    installing ceph in pve8 nosub repo

    I'm putting together a dev cluster for Proxmox 8 within my environment, and ran into a curious "problem": when attempting to install the Ceph stack, pveceph install attempts to rewrite ceph.list to enterprise and then complains that no subscription is present. I tried to set immutable on...
  17. A

    Default cephfs and librbd volumes statuses are "Unknown"

    After following the wiki to create a hyper-converged cluster, my `cephfs` and `librbd` volumes on my host servers have a little grey question mark next to them, and a "Status: Unknown" help text when I hover over them. Since they're there by default, I figured that once I got down to the pool...
  18. DynFi User

    CEPH config: hybrid node for controller only setup

    We are working on a CEPH setup where we will have 5 nodes spread across 3 locations. One of the locations will be there only to act as a "quorum node". We are planning to have an AMD-based cluster. Can the node acting as the "quorum node" be Intel-based (we are trying to recycle hardware and limit...
  19. C

    Replace all SSDs on a 3 nodes cluster

    Hi everyone, I have a 3-node cluster running on PVE 6.4 with a total of 24 SSDs with CEPH. Considering that: - the cluster can be brought to a total stop - each node is more than capable of hosting all the machines - the new SSDs are bigger (from 960GB to 1.92TB) - I'd highly prefer not to stress the...
  20. C

    [SOLVED] rbd: sysfs write failed on TPM disks

    Hello everyone, we are running a 4-node PVE cluster with 3 nodes in a hyper-converged setup with Ceph and the 4th node just for virtualization, without its own OSDs. After creating a VM with a TPM state device on a Ceph pool, it fails to start with the error message: rbd: sysfs write failed TASK...
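
Hedged sketches for a few of the threads above follow. Every pool name, device path, OSD id and address in them is an assumed placeholder rather than something taken from the threads, and each sketch is one plausible approach, not the thread's actual resolution.

For thread 4 (Pacific to Quincy with OSDs stuck on 16.2.13), a minimal sketch of the usual per-node flow, assuming the Ceph apt repository has already been switched to Quincy:

    # check which daemons are still on 16.2.13
    ceph versions
    # on each node, pull the Quincy packages
    apt update && apt full-upgrade
    # restart that node's OSDs so they pick up the new binaries
    systemctl restart ceph-osd.target
    # only once every OSD reports 17.2.x
    ceph osd require-osd-release quincy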
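
For thread 7 (single-node install warning about reduced data availability and degraded redundancy), a sketch of shrinking the replica count for a one-node lab; the pool name "rbd" is an assumption, and pools created before osd_crush_chooseleaf_type was changed may still use a host-level CRUSH rule:

    # see which PGs are undersized and why
    ceph health detail
    ceph osd pool ls detail
    # a single host cannot satisfy three host-level replicas, so shrink the pool
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1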
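
For thread 8 (CephFS max file size), a sketch using the per-filesystem limit; "cephfs" is the default filesystem name and 4 TiB is just an example value:

    # show the current limit (the default is 1 TiB)
    ceph fs get cephfs | grep max_file_size
    # raise it to 4 TiB (the value is given in bytes)
    ceph fs set cephfs max_file_size 4398046511104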
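
For thread 9 (Zabbix monitoring through a proxy), a sketch of the mgr module setup described in the linked Ceph documentation, assuming the Zabbix proxy is reachable at 192.0.2.10 and accepts trapper items the way the server would:

    apt install zabbix-sender            # the module shells out to zabbix_sender
    ceph mgr module enable zabbix
    # point the module at the Zabbix proxy instead of the Zabbix server
    ceph zabbix config-set zabbix_host 192.0.2.10
    ceph zabbix config-set identifier my-ceph-cluster
    ceph zabbix config-show
    ceph zabbix send                     # push one batch immediately as a test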
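
For thread 10 (imitating RAID 1 on a single cluster node), a sketch of a CRUSH rule that places replicas on different OSDs of the same host; the rule and pool names are assumptions:

    # replicate across OSDs (disks) instead of hosts
    ceph osd crush rule create-replicated replicated-by-osd default osd
    # a two-copy pool on that rule behaves roughly like a RAID 1 mirror across two disks
    ceph osd pool create vmpool 64 64 replicated replicated-by-osd
    ceph osd pool set vmpool size 2
    ceph osd pool set vmpool min_size 1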
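
For thread 12 (iSCSI-backed LVM disk refused as an OSD), a sketch of the generic local-disk path; an OSD wants a raw, unused block device, and whether an iSCSI LUN makes a sensible OSD at all is a separate question. /dev/sdd is a placeholder:

    lsblk                                     # identify the device
    ceph-volume lvm zap /dev/sdd --destroy    # wipe old LVM/partition signatures
    pveceph osd create /dev/sdd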
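
For thread 13 (uneven OSD utilisation with only 32 PGs), a sketch of the usual knobs; the pool name and the pg_num target are assumptions that would need sizing for the real cluster:

    # inspect the current distribution and the autoscaler's view
    ceph osd df tree
    ceph osd pool autoscale-status
    # more PGs give CRUSH finer granularity across 21 OSDs
    ceph osd pool set <pool> pg_num 512
    # let the upmap balancer even out what remains
    ceph balancer mode upmap
    ceph balancer on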
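
For thread 16 (pveceph install rewriting ceph.list to the enterprise repository), a sketch of two no-subscription options; check pveceph help install on your build before relying on the --repository switch, and adjust the release names to your setup:

    # newer pveceph versions can be told which repository to use
    pveceph install --repository no-subscription
    # or write the no-subscription Ceph source yourself and install the packages
    echo "deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription" \
        > /etc/apt/sources.list.d/ceph.list
    apt update && apt install ceph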
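
For thread 19 (replacing all SSDs on a 3-node cluster), a sketch of one per-node flow, swapping one node at a time and letting Ceph recover in between; OSD id 12 and the device path are placeholders, and other strategies (e.g. adding the new disks alongside before removing the old ones) are equally valid:

    ceph osd set noout                  # keep Ceph from rebalancing during the swap
    # for each OSD on the node being serviced
    ceph osd out 12
    systemctl stop ceph-osd@12
    pveceph osd destroy 12              # add the cleanup option to also wipe the old disk
    # after fitting the new, larger SSD
    pveceph osd create /dev/nvme0n1
    # once the node is done and the cluster is healthy again
    ceph osd unset noout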
