Ceph

  1. Sebi-S

    [SOLVED] Help! Ceph access totally broken

    We've got quite a serious problem over here. We have added a new node to our cluster. However, when installing Ceph on the new node, there was a problem because the VLANs for the Ceph and OSD networks could not communicate correctly (a network problem). As a result, we tried to uninstall ceph...
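
    A minimal cleanup sketch for re-installing Ceph on a single node, assuming the node holds no OSDs or monitors that are still needed (double-check before deleting anything):

        # stop any Ceph services still running on this node
        systemctl stop ceph-mon.target ceph-mgr.target ceph-osd.target
        # remove the local Ceph configuration from this node
        pveceph purge
        # optionally drop the packages before re-installing
        apt purge ceph-mon ceph-mgr ceph-osd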
  2. D

    Ceph install failing on 8.3.3

    Good day, I'm seeing a somewhat weird issue specifically with 8.3.3. I am trying to install Ceph to convert a cluster running off of local storage into a hyper-converged cluster running on Ceph. The issue I'm seeing is timeouts when running the Ceph install wizard. On the first page I'm...
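
    If the wizard keeps timing out, the same installation can be attempted from a shell on the node, which at least surfaces the underlying apt error; the repository choice below is an assumption:

        # CLI equivalent of the GUI install wizard
        pveceph install --repository no-subscription
        # watch the apt output directly to see what actually times out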
  3. B

    proper versions for upgrading Reef to Squid

    Hi, I'm going to be upgrading my Proxmox hosts from 8.2.4 up to 8.3.1 and my Ceph from Reef 18.2.2 to Squid. I'm following the directions here (https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid) for Ceph, and I'm wondering if I can go from Reef 18.2.2 up to Squid directly? In the Introduction...
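
    The linked wiki page does describe a direct Reef-to-Squid upgrade; for reference it boils down to roughly this (the repo file name is the usual one, but verify on your systems):

        ceph osd set noout                       # once, before touching any node
        # on each node in turn: switch the repo and upgrade packages
        sed -i 's/reef/squid/' /etc/apt/sources.list.d/ceph.list
        apt update && apt full-upgrade
        # then follow the wiki's restart order: all mons first, then mgrs, then OSDs
        systemctl restart ceph-mon.target        # on each mon node
        ceph osd unset noout                     # once everything is upgraded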
  4. C

    Error when starting an LXC container

    Hello everyone, I am kinda new to all this, so be patient with me. I have a particular container named CT213 in my organization, and I have the task of bringing it back up. In the PVE HA cluster we have, it's the only container that doesn't want to start up. When I try to start...
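
    A couple of standard first steps for a container that refuses to start, assuming CT213 means VMID 213:

        pct start 213 --debug                    # verbose start, prints the lxc error
        # or go one level lower for a full debug log:
        lxc-start -n 213 -F -l DEBUG -o /tmp/lxc-213.log
        journalctl -u pve-container@213          # what systemd saw on the last attempt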
  5. S

    Ceph recovery: Wiped out 3-node cluster with OSDs still intact

    This 3-node cluster also had a 4th node (r730), which didn't have any OSDs assigned. This is what I have to recover: /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring, available from a previous node that was part of the cluster, and the /var/lib/pve-cluster/config.db file from the r730 node. Now...
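
    A hedged sketch of the usual direction for this kind of recovery: restore the cluster config database on a freshly installed node, then let the intact OSDs come back up (the backup path is a placeholder; verify each step against current docs before running):

        systemctl stop pve-cluster corosync
        cp /root/backup/config.db /var/lib/pve-cluster/config.db   # the copy saved from r730
        systemctl start pve-cluster
        # once /etc/pve and /etc/ceph/ceph.conf are back, re-activate the OSDs
        ceph-volume lvm activate --all

    Rebuilding the monitor store from the surviving OSDs (ceph-objectstore-tool with --op update-mon-db) is the harder step; it is covered in the Ceph disaster-recovery documentation.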
  6. S

    HELP! Deleted everything under /var/lib/ceph/mon on one node in a 4 node cluster

    I'm stupid :/, and I really need your help. I was following this thread to clear a dead monitor: https://forum.proxmox.com/threads/ceph-cant-remove-monitor-with-unknown-status.63613/post-452396 And as instructed, I deleted the folder named "ceph-nuc10", where nuc10 is my node name, under...
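
    The usual way out, sketched under the assumption that the other three monitors still have quorum:

        # from a healthy node: drop the dead monitor from the monmap
        ceph mon remove nuc10
        # on nuc10: make sure the old unit is gone, then recreate the monitor
        systemctl disable --now ceph-mon@nuc10
        pveceph mon create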
  7. D

    Ceph Restful permissions and security concerns

    Hello, I recently configured Ceph monitoring with Zabbix. To make the monitoring work, I applied the following permissions on each mgr: ceph auth caps $mgr mds 'allow *' mon 'allow *' osd 'allow *' However, I'm concerned about the excessive permissions. Should I be worried about the...
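
    The mon cap above is broader than the mgr daemons' own documented default. A sketch of restoring the default mgr caps (the key name mgr.pve1 is a placeholder):

        ceph auth caps mgr.pve1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
        ceph auth get mgr.pve1        # verify what the key can do now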
  8. TwiX

    Proxmox Ceph cluster - mlag switches choice -

    Hi, I plan to build a new Proxmox Ceph cluster (3 nodes): LACP 2x25 Gbps for Ceph, LACP 2x10 Gbps for VMs. My question is related to the switches: I want MLAG switches. MikroTik was my first choice due to its feature set and low budget as well. I have 2 CRS310s and have settled on a very...
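
    Whichever switches end up in the MLAG pair, the host side stays a plain 802.3ad bond; a fragment of /etc/network/interfaces as a sketch (interface names are assumptions):

        auto bond0
        iface bond0 inet manual
            bond-slaves enp1s0f0 enp1s0f1
            bond-mode 802.3ad
            bond-xmit-hash-policy layer3+4
            bond-miimon 100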
  9. C

    VM stuck on starting after moving nodes

    Hello all, I am trying to set up a 3-node cluster with failover. Nodes 1 and 2 have my OSDs, and nodes 1, 2, and 3 are monitors; nodes 1 and 2 are my production servers and node 3 is just for quorum. If I gracefully power off node 1, the VM will move to node 2 (using HA failover) and start...
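
    For a VM stuck in "starting" after an HA move, the state the HA stack thinks it is in is usually more telling than the VM itself:

        ha-manager status                        # CRM view of every HA resource
        ha-manager config                        # the resource definitions
        journalctl -u pve-ha-lrm -u pve-ha-crm   # on the node the VM landed on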
  10. N

    My Proxmox VE 8 does not seem to map all the RBD upon container start

    Good day all, Short description of the issue: inability to start containers consistently. Context in which the problem arose: 7-node cluster recently upgraded from version 7 to 8 (I believe the issue did not occur before the upgrade); no PVE subscription. Scope: All the containers (three in...
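
    A few checks that narrow down whether the RBD mapping itself is failing, sketched with a placeholder container ID:

        rbd showmapped                   # which images the kernel currently has mapped
        pct start 101 --debug            # placeholder ID; prints where the start fails
        dmesg | grep -i rbd              # kernel-side mapping errors, if any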
  11. Y

    Expanding Cluster and ceph with other node

    Hi, I have a 3-node cluster with Ceph; all current nodes have 4x 3.4TB NVMe in the Ceph OSDs. I want to add a node; the host is the same hardware but with 3x 1.9TB NVMe. My questions are: what is the best way of expanding the OSDs, and can it be done with different storage, or is it better to...
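
    Mixing 3.4TB and 1.9TB NVMe in one pool is fine as far as CRUSH is concerned: each OSD gets a weight proportional to its size, so the smaller disks simply hold less data. Adding the new node's disks is the standard loop (device names are assumptions):

        pveceph osd create /dev/nvme0n1      # repeat for each of the 3 disks
        ceph osd df tree                     # confirm weights and distribution afterwards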
  12. P

    How can I configure ceph to completely disable msgr1 and use only msgr2?

    Hello, I want to configure Ceph to use msgr2 and not msgr1, to encrypt Ceph traffic. So I first set ms_bind_msgr1 = false and ms_bind_msgr2 = true in /etc/ceph/ceph.conf under the [global] section, and changed the IP addresses to v2-only addresses. The full configuration is: [global]...
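
    For comparison, a v2-only [global] sketch; note that msgr2 by itself does not encrypt anything until the secure modes are selected as well (addresses are placeholders):

        [global]
            ms_bind_msgr1 = false
            ms_bind_msgr2 = true
            mon_host = [v2:10.0.0.1:3300],[v2:10.0.0.2:3300],[v2:10.0.0.3:3300]
            # encryption is chosen per connection class:
            ms_cluster_mode = secure
            ms_service_mode = secure
            ms_client_mode = secure
            ms_mon_cluster_mode = secure
            ms_mon_service_mode = secure
            ms_mon_client_mode = secure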
  13. D

    What is the best practice for CephFS in VMs?

    I am looking to let Proxmox manage Ceph so that I can mount CephFS in a bunch of VMs. However, I don't know what the best practice is for authorization. I would like to follow least privilege, so that a VM only has r/w access to CephFS. How would I generate a key that only has access to one...
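
    Ceph has a purpose-built command for exactly this; a sketch with placeholder names (filesystem "cephfs", client "vm101"):

        # key that can only read/write one subtree of the filesystem
        ceph fs authorize cephfs client.vm101 /vm101 rw
        # print the generated key to hand to the VM
        ceph auth get client.vm101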
  14. K

    Ceph upgrade

    Hello, After adding the new node to the cluster, I installed Ceph on it. Ceph installed at version 18.2.4; on the others it is version 18.2.2. I would like to do a Ceph update on the other nodes. What is the safest way to do this? If there is data on the OSDs, can it be done live, or should...
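
    For a point release like 18.2.2 to 18.2.4 this is normally done live, node by node; a hedged sketch of the usual pattern:

        ceph osd set noout                   # keep CRUSH from rebalancing during restarts
        # on each node in turn:
        apt update && apt full-upgrade
        systemctl restart ceph-mon.target ceph-mgr.target ceph-osd.target
        # wait for HEALTH_OK before moving to the next node, then finally:
        ceph osd unset noout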
  15. 4

    2+q cluster with shared/replicated storage

    I am building a cluster with 2 PVE hosts and a quorum host. Each PVE host is running local storage with hardware RAID. I would like to have it set up in such a way that, in the event of a node failure, all VMs running on one host can be restarted on another. Ideally with clustering, but replication...
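
    With only two data nodes, Ceph is effectively off the table; the built-in alternative is pvesr storage replication, which requires ZFS-backed guest disks on both nodes, plus a QDevice for quorum. A sketch with placeholder names and addresses:

        pvecm qdevice setup 192.168.1.50             # the quorum host
        # replicate guest 100 to the other node every 15 minutes
        pvesr create-local-job 100-0 pve2 --schedule '*/15'

    On failover the VM restarts from the last replicated state, so up to one replication interval of data can be lost.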
  16. K

    Shared storage over Fiber channel

    Hi all, I am looking for fast storage for an HA cluster. Currently I use Ceph, but it is terribly slow (I have 3 nodes with enterprise-class NVMe drives). I read many articles about it and its performance. In my opinion, the bottleneck is the network synchronization. So I am looking for another...
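
    Over Fibre Channel the usual Proxmox pattern is multipath plus a shared LVM volume group; a /etc/pve/storage.cfg fragment as a sketch (the VG name is an assumption):

        lvm: san
            vgname vg_san
            content images
            shared 1

    "shared 1" tells PVE the volume group is visible from every node; the trade-off is that plain shared LVM offers no snapshots.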
  17. K

    Ceph install via offline mirror (POM) fails

    Dear all, I am new to Proxmox, so please bear with me. I am currently setting up an offline 4-node test cluster with PVE in order to convince my boss to move away from standalone ESXi servers to one decent PVE cluster in the future. We work in an offline environment, so I installed a Proxmox Offline...
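
    One thing worth ruling out: pveceph install needs the Ceph repository itself, not just the PVE one, so the offline mirror must carry it and the node's apt sources must point there. A purely hypothetical sources entry (host and path depend on your POM setup):

        # /etc/apt/sources.list.d/ceph.list
        deb http://pom.example.local/mirrors/ceph-squid-bookworm bookworm no-subscription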
  18. R

    Best way to separate traffic and configure the PVE network

    Hi, we're building a 4-node PVE cluster with NVMe Ceph storage. Available NICs: we have several NICs available: NIC1: 2x10G + 2x1G; NIC2: 2x10G; NIC3: 2x100G. Traffic/Networks: now we need (I think) the following traffic separations: PVE Management, PVE Cluster & Corosync, Ceph (public)...
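
    A sketch of one common split, assuming the 100G pair carries Ceph and corosync gets a dedicated 1G port (interface names and addresses are placeholders):

        # NIC3 (2x100G): Ceph public + cluster
        auto bond1
        iface bond1 inet static
            address 10.10.10.1/24
            bond-slaves enp65s0f0 enp65s0f1
            bond-mode 802.3ad
        # one of NIC1's 1G ports: dedicated corosync link
        auto eno3
        iface eno3 inet static
            address 10.10.20.1/24

    Corosync cares about latency, not bandwidth, which is why a quiet 1G link of its own beats sharing the fast bonds.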
  19. O

    Ceph - Most OSDs down and all PGs unknown after P2V migration

    I run a small single-node Ceph cluster (not via Proxmox) for home file storage (deployed by cephadm). It was running bare-metal, and I attempted a physical-to-virtual migration to a Proxmox VM (I am passing the PCIe HBA that is connected to all the disks through to the VM). After doing so, all...
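
    Since this is a cephadm deployment, the first things to check from inside the VM are whether the disks arrived with the passthrough and whether the containerized daemons actually started:

        lsblk                            # did the HBA passthrough expose all the disks?
        cephadm ls                       # daemons cephadm knows about on this host
        ceph orch ps                     # the orchestrator's view, if a mgr is up
        systemctl --failed 'ceph-*'      # per-fsid units that died on boot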
  20. R

    CEPH CRUSH map assignment

    Title: Ceph: use only HDDs for the data pool and SSDs for the nvme pool. Problem: I want to ensure in my Ceph cluster (Proxmox) that HDD OSDs are used only in the data pool and SSD OSDs are used only in the nvme pool. Current situation: OSDs: ID CLASS WEIGHT TYPE...
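
    This is what CRUSH device classes and per-class replicated rules are for; these are standard Ceph commands, with the pool names taken from the post:

        # one rule per device class
        ceph osd crush rule create-replicated data-rule default host hdd
        ceph osd crush rule create-replicated nvme-rule default host ssd
        # pin each pool to its rule
        ceph osd pool set data crush_rule data-rule
        ceph osd pool set nvme crush_rule nvme-rule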