ceph

  1. Ceph support - Not Proxmox

    There aren't a whole lot of support forums on the Internet for Ceph, so if this is misplaced, I apologize. This is the little one-page platform I'm putting together: https://github.com/rlewkowicz/micro-platform I use ceph-nano to launch a standalone Ceph cluster...
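
    For context, launching a throwaway cluster with ceph-nano looks roughly like this (a minimal sketch assuming the cn binary from github.com/ceph/cn; subcommands and flags can vary between releases):

        # start a single-container Ceph cluster named "mycluster"
        cn cluster start mycluster
        # check that it came up and see the exposed endpoints
        cn cluster status mycluster
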
  2. Trying Proxmox VE for the first time. Need help with the setup.

    Hi all, I have a requirement where I need to install and configure a 3-node Proxmox cluster with HCI. As part of this setup, I need to configure Ceph storage and enable High Availability (HA). Configuration of each of the 3 servers: 2 CPUs - Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz (18 cores) with 10 RAM (32GB)...
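
    For reference, the broad CLI strokes of such a setup look something like the sketch below (node addresses, the /dev/sdb device, and the 10.10.10.0/24 Ceph network are placeholder assumptions; the same steps exist in the GUI):

        # on the first node: create the Proxmox cluster
        pvecm create hci-cluster
        # on each of the other two nodes: join it
        pvecm add <ip-of-first-node>
        # on every node: install the Ceph packages
        pveceph install --repository no-subscription
        # on one node: write the initial ceph.conf with the Ceph network
        pveceph init --network 10.10.10.0/24
        # on every node: create a monitor and an OSD
        pveceph mon create
        pveceph osd create /dev/sdb
        # once OSDs are up: create a replicated pool for VM disks
        pveceph pool create vmpool
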
  3. Understanding the network of a PVE/Ceph cluster

    Hi everyone, we currently run a small Proxmox cluster with local storage via ZFS on each node. Since the hardware is due for reinvestment, I'm planning a new PVE cluster with Ceph and still have a few questions which, despite researching here and via Google, I have not or only partially...
  4. Cephadm at Proxmox nodes

    Hello, I just want to ask: is it possible for me to install Ceph with cephadm on a Proxmox node, and then use the RBD from Ceph as storage for disk images/containers? I think the Ceph that is included with Proxmox is less customizable, and the Orchestrator cannot be used with Proxmox's Ceph.
  5. [SOLVED] Unable to create OSD

    I am relatively new to Proxmox. I had Ceph installed and fully configured, and for learning purposes I tried removing it. I removed the Ceph install and rebuilt the whole Ceph setup. I have all the nodes online, but I can't create an OSD. The drives are wiped. NAME MAJ:MIN RM...
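
    (A common culprit when OSD creation fails on "wiped" drives is leftover LVM/BlueStore metadata; a minimal cleanup sketch, with /dev/sdX as a placeholder for the target disk, destructive to anything on it:)

        # destroy any leftover ceph-volume LVM state on the disk
        ceph-volume lvm zap /dev/sdX --destroy
        # belt and braces: clear filesystem signatures and the partition table
        wipefs -a /dev/sdX
        sgdisk --zap-all /dev/sdX
        # then retry the OSD creation
        pveceph osd create /dev/sdX
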
  6. Removing an OSD on failed hardware leaves the OSD service

    Hello all, here is the situation: We have a Ceph cluster on top of Proxmox on Dell hardware. One of the Dell virtual disks failed, hence the corresponding OSD failed. This is an HDD, not NVMe, and "thankfully" the BlueStore was not split out onto local NVMe disks. Anyway, we followed the...
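
    (For comparison, a full removal usually involves both the cluster side and the node's systemd side; a sketch with osd.12 as a placeholder id:)

        # cluster side: take the OSD out, then purge it from CRUSH, auth and the OSD map
        ceph osd out osd.12
        ceph osd purge 12 --yes-i-really-mean-it
        # node side: stop and remove the leftover systemd service instance
        systemctl disable --now ceph-osd@12.service
        systemctl reset-failed ceph-osd@12.service
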
  7. Local Ephemeral Storage for HA VM

    Hello, I'm just after a bit of implementation advice for planning out a cluster. Scenario: VMs running on Ceph as a high-availability cluster, but attached to each VM is an additional drive which is just for storing ephemeral data as a high-speed cache. This data does not need to survive a...
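
    (The usual way to express that intent is a second disk on node-local storage, excluded from backups; a sketch assuming a hypothetical VM 101 and a local-lvm storage. Note that a local disk normally pins the VM to its node, which is the crux of the question:)

        # add a 64 GB scratch disk on node-local storage, excluded from backups
        qm set 101 --scsi1 local-lvm:64,backup=0
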
  8. Restoring a failed backup

    The title already gets things off to a great start. We have a small problem: we need to restore a VM from backup, and unfortunately our backups are failed. Why are we asking anyway? The backup contains 5 SCSI images which originally came from a Ceph cluster. Only the...
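
    (If the archives themselves are still readable, the disk images can sometimes be pulled out of a VMA archive manually; a sketch assuming a zstd-compressed vzdump file with a placeholder name:)

        # decompress and unpack the backup archive into a directory
        zstd -d --stdout vzdump-qemu-100-XXXX.vma.zst | vma extract - /tmp/restore
        # the extracted raw disk images can then be imported, e.g. back into Ceph
        ls /tmp/restore
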
  9. Cleanup leftover Ceph OSD services/files after unsuccessful removal of the OSD

    Hello, I had OSD 23 on one of my hosts, but the drive failed, so we stopped the services and removed it. We did not need to replace it immediately (lots of OSDs in the cluster, so not a lot of capacity lost), so I didn't pay much attention to it. Sometime in the last couple of months another disk...
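
    (The leftovers are typically a stale systemd unit instance and a data directory; a cleanup sketch for the osd.23 from this post, assuming it is already gone from the cluster:)

        # confirm the stale unit and data directory are what remains of osd.23
        systemctl status ceph-osd@23.service
        ls /var/lib/ceph/osd/ceph-23
        # remove them once the OSD no longer exists in the cluster
        systemctl disable --now ceph-osd@23.service
        systemctl reset-failed ceph-osd@23.service
        umount /var/lib/ceph/osd/ceph-23 2>/dev/null
        rm -rf /var/lib/ceph/osd/ceph-23
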
  10. Ceph monitor (mon caps) permissions

    I hope this message finds you well. I would like to gain a clearer understanding of the Monitor permissions (mon caps) in Ceph. Specifically, could you explain what actions these permissions allow, and what exactly can be executed or written with them? Thank you in advance :)
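
    (In short: mon 'allow r' permits reading cluster maps and status, 'allow w' permits changing cluster state through the monitors, and 'allow x' permits executing monitor commands; profiles bundle sensible sets. A few illustrative ceph auth calls, with client names as placeholders:)

        # read-only monitor access: enough for status queries and fetching maps
        ceph auth get-or-create client.readonly mon 'allow r'
        # a typical RBD client using the bundled profiles
        ceph auth get-or-create client.rbd-user mon 'profile rbd' osd 'profile rbd pool=vmpool'
        # inspect the caps an existing key actually has
        ceph auth get client.rbd-user
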
  11. How to upgrade Ceph 18.2.2 to 18.2.4

    Hi, I'm trying to figure out how to upgrade Ceph from 18.2.2 to 18.2.4. I've found this document describing going from Reef to Squid, but I want to get to the latest Reef, 18.2.4, first. I know I can use that procedure to get to 18.2.4 if I don't update my Ceph repo file to Squid, but then I'll...
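
    (A minor-version bump within Reef is normally just a package upgrade plus rolling restarts; a sketch, run one node at a time with the Ceph repo still pointing at reef:)

        ceph osd set noout                 # avoid rebalancing while daemons restart
        apt update && apt full-upgrade     # pulls 18.2.4 from the reef repo
        systemctl restart ceph-mon.target  # then mgr, then OSDs, node by node
        systemctl restart ceph-mgr.target
        systemctl restart ceph-osd.target
        ceph versions                      # confirm all daemons report 18.2.4
        ceph osd unset noout
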
  12. Migration from VMware to Proxmox: Reusing Hardware Proxmox-SAN

    What are the best storage configurations in Proxmox to access a SAN with multiple nodes while ensuring high performance, redundancy, and simultaneous access? How do options like ZFS over iSCSI, ZFS over NVMe/TCP, and Ceph compare in terms of latency, scalability, and complexity? The goal is to...
  13. Ceph and LSI 3108 with JBOD

    I’ve read many threads but haven’t quite figured it out… I know one should not use RAID with Ceph. Do drives connected as JBOD/single drives on a RAID controller (3108) still fall into that warning category? We’re hoping to reuse some older hardware.
  14. Replace node in cluster

    Hi all! I have a PVE cluster and a Ceph storage of 3 nodes. On one node, both OSDs are in the down and out status. I need to replace this node with another node. I added a new node to the Proxmox cluster and installed Ceph. Tell me, what are my next steps? Do I need to delete the OSD on the node I...
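
    (One plausible sequence, with placeholder OSD ids and node names; broadly: purge the dead OSDs, remove the old node's monitor and CRUSH bucket, drop it from the Proxmox cluster, then create OSDs on the new node:)

        # purge each dead OSD that lived on the old node
        ceph osd purge <osd-id> --yes-i-really-mean-it
        # remove the old node's monitor, if it had one
        pveceph mon destroy <oldnode>
        # remove the now-empty host bucket from the CRUSH map
        ceph osd crush rm <oldnode>
        # with the old node powered off, remove it from the Proxmox cluster
        pvecm delnode <oldnode>
        # finally, create OSDs on the new node
        pveceph osd create /dev/sdX
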
  15. Help moving ceph network

    I tried to move my Ceph network to another subnet, and now all the OSDs are not picking up the new network and are staying down. I fear I may have hosed Ceph but, as a learning experience, would like to see if I can recover them. This is what they look like in the cluster: this is the contents of...
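
    (On a healthy cluster, the usual route is to change the network settings in /etc/pve/ceph.conf and then recreate the monitors one at a time so the monmap learns the new addresses; with all OSDs already down, this may instead require editing the monmap with monmaptool. A rough sketch:)

        # 1. edit /etc/pve/ceph.conf: point public_network (and cluster_network)
        #    at the new subnet
        # 2. recreate each monitor, one at a time, waiting for quorum in between
        pveceph mon destroy <node>
        pveceph mon create
        # 3. once the mons are healthy on the new subnet, restart the OSDs
        systemctl restart ceph-osd.target
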
  16. [SOLVED] Help! Ceph access totally broken

    We've got quite a serious problem over here. We added a new node to our cluster. However, when installing Ceph on the new node, there was a problem due to the fact that the VLANs for Ceph and the OSDs could not communicate correctly (a network problem). As a result, we tried to uninstall Ceph...
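
    (For reference, the supported way to strip Ceph from a node that is being removed from Ceph entirely is pveceph purge; a sketch, destructive to that node's Ceph state, so only appropriate after careful reading of its man page:)

        # stop the node's Ceph services, then purge its Ceph state
        systemctl stop ceph-mon.target ceph-osd.target
        pveceph purge
        # afterwards Ceph can be reinstalled cleanly via pveceph install
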
  17. Ceph install failing on 8.3.3

    Good day, I'm seeing a somewhat weird issue specifically with 8.3.3. I am trying to install Ceph to convert a cluster running off of local storage to a hyper-converged cluster running on Ceph. The issue I'm seeing is timeouts when going through the Ceph install wizard. On the first page I'm...
  18. Proper versions for upgrading Reef to Squid

    Hi, I'm going to be upgrading my Proxmox hosts from 8.2.4 up to 8.3.1 and my Ceph from Reef 18.2.2 to Squid. I'm following the directions here (https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid) for Ceph, and I'm wondering if I can go from Reef 18.2.2 up to Squid directly? In the Introduction...
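
    (The wiki's procedure boils down to roughly the following per-node flow; the sed line assumes the no-subscription repo in /etc/apt/sources.list.d/ceph.list:)

        ceph osd set noout
        # switch the Ceph repo from reef to squid
        sed -i 's/reef/squid/' /etc/apt/sources.list.d/ceph.list
        apt update && apt full-upgrade
        # restart mons first, then mgrs, then OSDs, one node at a time
        systemctl restart ceph-mon.target
        systemctl restart ceph-mgr.target
        systemctl restart ceph-osd.target
        ceph versions                       # all daemons should report squid
        ceph osd require-osd-release squid  # once every OSD runs squid
        ceph osd unset noout
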
  19. Error when starting an LXC container

    Hello everyone, I am kind of new to all this, so be patient with me. There is a particular container named CT213 in my organization, and I have the task of bringing it back up. Of the PVE HA cluster we have, it's the only container that doesn't want to start up. When I try to start...
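
    (A common first step is to reproduce the failure in the foreground with debug logging; a sketch for the CT213 from this post:)

        # check what Proxmox thinks of the container first
        pct status 213
        pct config 213
        # start it in the foreground with verbose LXC logging
        lxc-start -n 213 -F -l DEBUG -o /tmp/lxc-213.log
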
  20. Ceph recovery: Wiped out 3-node cluster with OSDs still intact

    This 3-node cluster also had a 4th node (r730), which didn't have any OSDs assigned. This is what I have to recover: /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring, available from a previous node that was part of the cluster, and the /var/lib/pve-cluster/config.db file from the r730 node. Now...
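
    (Given intact OSDs plus an old ceph.conf and admin keyring, the upstream "recovery using OSDs" procedure rebuilds the monitor store with ceph-objectstore-tool; a heavily condensed sketch, with the OSD path as a placeholder and the OSD stopped:)

        # for each surviving OSD, fold its view of the maps into a new mon store
        ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
            --no-mon-config --op update-mon-db --mon-store-path /root/mon-store
        # the rebuilt store is then installed as a fresh monitor's database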