ceph

  1. S

    [SOLVED] Unable to create OSD

    I am relatively new to Proxmox. I had Ceph installed and fully configured, and for learning purposes I tried removing it. I removed the Ceph install and rebuilt the whole Ceph setup. I have all the nodes online, but I can't create an OSD. The drives are wiped. NAME MAJ:MIN RM...
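    A common cause here is leftover LVM or partition signatures from the previous Ceph install blocking OSD creation even on "wiped" drives. A minimal cleanup sketch, assuming the target disk is /dev/sdX (a placeholder; confirm with lsblk before wiping anything):

        # zap leftover LVM volumes and signatures from the old install
        ceph-volume lvm zap /dev/sdX --destroy
        # or clear filesystem/LVM signatures directly
        wipefs -a /dev/sdX
        # then retry the OSD creation through Proxmox
        pveceph osd create /dev/sdX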
  2. P

    removing OSD on failed Hardware leaves the OSD service

    Hello all, here is the situation: we have a Ceph cluster on top of Proxmox on Dell hardware. One of the Dell virtual disks failed, hence the corresponding OSD failed. This is an HDD, not NVMe, and "thankfully" the BlueStore was not split out onto local NVMe disks. Anyway, we followed the...
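    If only the systemd unit is left behind after the OSD itself is gone, disabling the instance unit is usually enough. A sketch, assuming the failed OSD was osd.12 (a placeholder ID):

        systemctl stop ceph-osd@12.service
        systemctl disable ceph-osd@12.service
        # remove the leftover state directory, if present
        rm -rf /var/lib/ceph/osd/ceph-12
        systemctl reset-failed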
  3. L

    Local Ephemeral Storage for HA VM

    Hello, just after a bit of implementation advice for planning out a cluster. Scenario: VMs running on Ceph as a high-availability cluster, but attached to each VM is an additional drive which is just for storing ephemeral data as a high-speed cache. This data does not need to survive a...
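    One caveat worth sketching: Proxmox HA cannot recover a VM whose disks live on node-local storage, so an ephemeral local cache disk has to be excluded from replication and accepted as lost on failover. A hypothetical example, assuming VM 100 and a node-local storage named local-lvm:

        # attach a 100 GiB scratch disk from local storage, excluded from backups
        qm set 100 --scsi1 local-lvm:100,backup=0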
  4. C

    Restoring a failed backup

    The title is already a promising start. We have a small problem: we need to restore a VM from backup, and unfortunately our backups are failed. Why are we asking anyway? The backup contains 5 SCSI images which originally came from a Ceph cluster. Only the...
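    If the archive itself is still readable, the disk images can sometimes be pulled out manually with the vma tool. A sketch with placeholder filenames (the extracted image names may differ):

        # decompress, then unpack the raw disk images and VM config
        zstd -d vzdump-qemu-100.vma.zst
        vma extract vzdump-qemu-100.vma /tmp/extract
        # re-import a recovered image into a VM, e.g.:
        qm importdisk 100 /tmp/extract/disk-drive-scsi0.raw local-lvm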
  5. C

    Cleanup leftover Ceph OSD services/files after unsuccessful removal of the OSD

    Hello, I had OSD 23 on one of my hosts, but the drive failed, so we stopped the services and removed it. We did not need to replace it immediately (lots of OSDs in the cluster, so not a lot of capacity lost), so I didn't pay much attention to it. Sometime in the last couple of months another disk...
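    A sketch of a fuller cleanup for a stale OSD ID (using osd.23 from the post):

        # remove osd.23 from the CRUSH map, auth database, and OSD map in one step
        ceph osd purge 23 --yes-i-really-mean-it
        # clean up the host-side unit and directory
        systemctl disable --now ceph-osd@23.service
        rm -rf /var/lib/ceph/osd/ceph-23
        systemctl reset-failed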
  6. D

    Ceph monitor (mon caps) permissions

    I hope this message finds you well. I would like to gain a clearer understanding of the Monitor permissions (mon caps) in Ceph. Specifically, could you explain what actions these permissions allow, and what exactly can be executed or written with them? Thank you in advance :)
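    For context, mon caps gate what a client may do against the monitors: read the cluster maps and status, or also modify them (auth, CRUSH, OSD flags). A small sketch with hypothetical client names:

        # read-only: can query status and maps, cannot change anything
        ceph auth get-or-create client.readonly mon 'allow r'
        # the rbd profile grants just what an RBD consumer needs from the mons
        ceph auth get-or-create client.rbduser mon 'profile rbd' osd 'profile rbd pool=vms'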
  7. B

    How to upgrade Ceph 18.2.2 to 18.2.4

    Hi, I'm trying to figure out how to upgrade Ceph from 18.2.2 to 18.2.4. I've found this document describing going from Reef to Squid, but I want to get to the latest Reef, 18.2.4, first. I know I can use that procedure to get to 18.2.4 if I don't update my Ceph repo file to Squid, but then I'll...
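    Staying on the Reef repository, a point release is normally just a package upgrade per node; a sketch, not the official procedure:

        apt update && apt full-upgrade
        # then restart the Ceph daemons node by node
        systemctl restart ceph-mon.target
        systemctl restart ceph-osd.target
        # confirm every daemon is on 18.2.4
        ceph versions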
  8. V

    Migration from VMware to Proxmox: Reusing Hardware Proxmox-SAN

    What are the best storage configurations in Proxmox to access a SAN with multiple nodes while ensuring high performance, redundancy, and simultaneous access? How do options like ZFS over iSCSI, ZFS over NVMe/TCP, and Ceph compare in terms of latency, scalability, and complexity? The goal is to...
  9. S

    Ceph and LSI 3108 with JBOD

    I’ve read many threads but haven’t quite figured it out… I know one should not use RAID with Ceph. Do drives connected as JBOD/single drives on a RAID controller (3108) still fall into that warning category? We’re hoping to reuse some older hardware.
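    One practical check (a sketch; /dev/sdX is a placeholder): with true JBOD/passthrough the OS sees the physical drive's own model and serial, whereas a single-drive RAID0 shows up as an LSI/MegaRAID virtual disk and hides SMART behind the controller.

        lsblk -o NAME,MODEL,SERIAL
        smartctl -i /dev/sdX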
  10. I

    replace node in cluster

    Hi all! I have a PVE cluster and a Ceph storage of 3 nodes. On one node, both OSDs are in the down and out state. I need to replace this node with another node. I added a new node to the Proxmox cluster and installed Ceph. Tell me, what are my next steps? Do I need to delete the OSDs on the node I...
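    The usual order is roughly: purge the dead node's OSDs, remove its monitor, then drop it from the Proxmox cluster. A sketch, assuming the dead OSDs are 4 and 5 and the old node is named node3 (all placeholders):

        ceph osd purge 4 --yes-i-really-mean-it
        ceph osd purge 5 --yes-i-really-mean-it
        # remove its monitor, if it had one
        ceph mon remove node3
        # finally drop the node from the PVE cluster
        pvecm delnode node3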
  11. C

    Help moving ceph network

    I tried to move my Ceph network to another subnet, and now all OSDs are not picking up the new network and are staying down. I fear I may have hosed Ceph, but as a learning experience I would like to see if I can recover them. This is what they look like in the cluster: this is the contents of...
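    OSDs bind to whatever public_network/cluster_network say at start, so the conf has to change before the restart. A sketch of the relevant /etc/ceph/ceph.conf fragment (subnet and addresses are placeholders); note that monitor addresses live in the monmap, not just the conf, so the mons usually need to be re-created or the monmap edited:

        [global]
            public_network = 10.10.10.0/24
            cluster_network = 10.10.10.0/24
            mon_host = 10.10.10.11 10.10.10.12 10.10.10.13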
  12. Sebi-S

    [SOLVED] Help! Ceph access totally broken

    We've got a quite serious problem over here. We added a new node to our cluster. However, when installing Ceph on the new node, there was a problem because the VLANs for Ceph and the OSDs could not communicate correctly (network problem). As a result, we tried to uninstall Ceph...
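    When a half-removed install leaves a node in a broken state, one reset path is to purge and reinstall Ceph on just that node; a sketch, and only safe if the node holds no data you still need:

        # remove the local Ceph installation and config from this node
        pveceph purge
        # reinstall, then re-create the node's services from a healthy cluster
        pveceph install
        pveceph mon create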
  13. D

    Ceph install failing on 8.3.3

    Good day, I'm seeing a somewhat weird issue specifically with 8.3.3. I am trying to install Ceph to convert a cluster running off of local storage to a hyper-converged cluster running on Ceph. The issue I'm seeing is timeouts when running the Ceph install wizard. On the first page I'm...
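    Wizard timeouts are often just the node failing to reach the Ceph package repository. A sketch of checking that from the shell (repo URL shown for Reef on Bookworm as an example):

        # can the node reach the repository at all?
        curl -sI http://download.proxmox.com/debian/ceph-reef/dists/bookworm/Release
        # CLI equivalent of the wizard, which prints errors verbatim
        pveceph install --repository no-subscription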
  14. B

    proper versions for upgrading Reef to Squid

    Hi, I'm going to be upgrading my Proxmox hosts from 8.2.4 up to 8.3.1 and my Ceph from Reef 18.2.2 to Squid. I'm following the directions here (https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid) for Ceph, and I'm wondering if I can go from Reef 18.2.2 up to Squid directly? In the Introduction...
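    The wiki procedure boils down to switching the repository and upgrading on each node, in the documented mon-then-osd restart order; a compressed sketch:

        # point the Ceph repo from reef to squid
        sed -i 's/ceph-reef/ceph-squid/' /etc/apt/sources.list.d/ceph.list
        apt update && apt full-upgrade
        # after restarting daemons per the wiki, confirm versions
        ceph versions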
  15. C

    Error when starting an LXC container

    Hello everyone, I am kind of new to all this, so be patient with me. I have a particular container named CT213 in my organization, and I have the task of bringing it back up. Of all the containers in our HA PVE cluster, it's the only one that doesn't want to start up. When I try to start...
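    The usual first diagnostic is a foreground start with debug logging, which names the exact step that fails. A sketch for container 213:

        pct start 213
        # if that fails, run the container in the foreground with full logging
        lxc-start -n 213 -F -l DEBUG -o /tmp/lxc-213.log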
  16. S

    Ceph recovery: Wiped out 3-node cluster with OSDs still intact

    This 3-node cluster also had a 4th node (an R730), which didn't have any OSDs assigned. This is what I have to recover: /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring, available from a previous node that was part of the cluster, and the /var/lib/pve-cluster/config.db file from the R730 node. Now...
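    With all OSDs intact, the monitor store can be rebuilt from them; this is the "recovery using OSDs" procedure in the Ceph disaster-recovery docs. A heavily compressed sketch (paths and IDs are placeholders; the real procedure has more steps, including rebuilding the keyring):

        # for each OSD on each host, accumulate cluster state into a fresh mon store
        ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
            --op update-mon-db --mon-store-path /tmp/mon-store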
  17. S

    HELP! Deleted everything under /var/lib/ceph/mon on one node in a 4 node cluster

    I'm stupid :/ and I really need your help. I was following this thread to clear a dead monitor: https://forum.proxmox.com/threads/ceph-cant-remove-monitor-with-unknown-status.63613/post-452396 And as instructed, I deleted the folder named "ceph-nuc10" (where nuc10 is my node name) under...
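    If the other three monitors still have quorum, the deleted one can be dropped from the monmap and re-created. A sketch using the node name from the post:

        # from a healthy node: remove the dead mon from the cluster
        ceph mon remove nuc10
        # on nuc10: re-create the monitor the Proxmox way
        pveceph mon create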
  18. D

    Ceph Restful permissions and security concerns

    Hello, I recently configured Ceph monitoring with Zabbix. To make the monitoring work, I applied the following permissions on each mgr: ceph auth caps $mgr mds 'allow *' mon 'allow *' osd 'allow *' However, I'm concerned about the excessive permissions. Should I be worried about the...
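    For comparison, the stock caps for a mgr daemon are already broad but scoped through the mgr profile; a sketch of restoring something closer to the defaults (the entity name is a placeholder; check ceph auth ls for yours):

        ceph auth caps mgr.pve1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'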
  19. TwiX

    Proxmox Ceph cluster - mlag switches choice -

    Hi, I plan to build a new Proxmox Ceph cluster (3 nodes): LACP 2x25 Gbps for Ceph, LACP 2x10 Gbps for VMs. My question is related to the switches; I do want MLAG switches. Mikrotik was my first choice due to its functionality and low budget as well. I have 2 CRS310s and have settled a very...
  20. C

    VM stuck on starting after moving nodes

    Hello all, I am trying to set up a 3-node cluster with failover. Nodes 1 and 2 have my OSDs, and nodes 1, 2, and 3 are monitors; nodes 1 and 2 are my production servers and node 3 is just there for quorum. If I gracefully power off node 1, the VM will move to node 2 (using HA failover) and start...
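    One thing worth checking in this layout: with OSDs on only two nodes, a replicated pool with size 3 can never place three copies on distinct hosts, and losing one OSD node can drop PGs below min_size, which blocks I/O and leaves the VM stuck in "starting". A sketch (the pool name is a placeholder):

        ceph osd pool get vm-pool size
        ceph osd pool get vm-pool min_size
        ceph health detail    # look for undersized/inactive PGs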