ceph

  1. S

    Second CEPH pool for SSD

    Hi! I'm currently running a 3-node PVE cluster with a CEPH pool based on HDDs (80TB total). Now I have added 2 SSDs to each node and want to create a second, separate pool (10TB total). I read https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_device_classes but some things are not clear to me...
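    The device-class approach from the linked chapter boils down to a CRUSH rule that only selects SSD OSDs, plus a pool bound to that rule. A minimal sketch, where the rule and pool names below are placeholders:

        # CRUSH rule that places data only on OSDs with device class "ssd"
        ceph osd crush rule create-replicated replicated-ssd default host ssd

        # New pool bound to that rule; the existing HDD pool keeps its own rule
        pveceph pool create ssd-pool --crush_rule replicated-ssd

        # Verify how the OSDs were classified
        ceph osd crush tree --show-shadow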
  2. X

    Ceph / Cluster Networking Question

    We've been using a traditional SAN with iSCSI for over 10 years and it has been ultra reliable. Now we're looking at ceph and have built a 3-server ceph cluster with Dell R740xds. Each server has six interfaces, three to one switch and three to another. One port is public internet, one port is public ceph...
  3. N

    Ceph OSD using wrong device identifier

    Hello! I have been messing around with ceph to see if it will properly augment my NAS in a small subset of tasks, but I noticed that if a disk is removed and put back in, the ceph cluster doesn't detect it until a reboot. This is because it is defined using the /dev/sdX format instead of the...
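    One way around the /dev/sdX ambiguity described above is to reference disks by a persistent identifier when creating OSDs. A rough sketch, with the by-id name as a placeholder:

        # Stable device names that survive re-plugging and reboots
        ls -l /dev/disk/by-id/

        # Create the OSD against the persistent path instead of /dev/sdX
        pveceph osd create /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL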
  4. C

    Small Business Cluster with 2 New Servers and QDevice

    BACKGROUND: I work for a small business that provides a 24-hour service on our servers and requires as close to 100% uptime as possible. Our old IT company sold us 2 identical Dell R420 servers several years ago, each with a single 6-core processor, 4x 3.5" 600GB 10K SAS HDDs in RAID10, and 16GB RAM and...
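    For a two-server setup like this, a QDevice on any third machine supplies the tie-breaking vote. A rough sketch, assuming the QDevice host runs Debian and is reachable over SSH as root:

        # On the QDevice machine (any small box or VM outside the cluster)
        apt install corosync-qnetd

        # On one of the two PVE nodes
        apt install corosync-qdevice
        pvecm qdevice setup <QDEVICE-IP>

        # Confirm the extra vote
        pvecm status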
  5. L

    Windows Server Performance

    Hello, we have had a Proxmox cluster running for some time now. We operate 4 servers with 128 x AMD EPYC 7543 32-Core Processor (2 sockets), 512GB RAM, and Ceph with 16 OSDs in total. The performance of Linux servers is optimal; there we have no problems with...
  6. G

    Ceph OSD crash loop, RocksDB corruption

    Hi, yesterday one OSD went down and dropped out of my cluster; systemd stopped the service after it "crashed" 4 times. I tried restarting the OSD manually, but it continues to crash immediately; the OSD looks effectively dead. Here's the first ceph crash info (the later ones look the same): {...
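    In a crash loop like this, an offline consistency check of the OSD's BlueStore can help decide whether the OSD is worth salvaging or should be destroyed and backfilled from the remaining replicas. A sketch, assuming OSD ID 12 is the affected one:

        # Stop the crashing service first
        systemctl stop ceph-osd@12

        # Offline check of the BlueStore / RocksDB metadata
        ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-12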
  7. A

    Out-of-RAID disks are not being detected

    Hello, I'm new to Proxmox, so if I get any term wrong, help me out. I have an HPE ProLiant DL380 G6 with 7 disks in my lab. I have installed Proxmox on two of the disks with RAID 10. I have left the other ones out of the RAID, as I want to test Ceph for my environment. The other five disks are not...
  8. J

    Moving CEPH to a separate network

    On an old installation I had PVE and CEPH in the same network. To improve performance and security, I'm currently separating the networks more strictly. The first step I'm trying is to separate the cluster network from the PVE network. I was following the...
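    The Ceph side of such a separation is mostly the public_network and cluster_network settings in /etc/pve/ceph.conf; monitors and OSDs then have to be restarted one by one. A minimal sketch with placeholder subnets:

        [global]
            # Clients, monitors and Proxmox VE reach Ceph on this network
            public_network = 10.10.10.0/24
            # OSD replication and heartbeat traffic moves to the dedicated network
            cluster_network = 10.10.20.0/24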
  9. D

    Ceph RBD on Windows .. documentation seems lacking?

    I don't really know where to ask and post this, but I am trying to get my Ceph RBD image mounted again on a new Windows OS install. Due to unfortunate disk errors (both SSDs died at the same time) I cannot access my previously working Ceph config and keyring for Windows. I did have it working quite...
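    Since the config and keyring can be re-exported from the cluster at any time, a re-setup along these lines may be enough; the client name, pool and image below are placeholders:

        # On a cluster node: re-create credentials for the Windows client
        ceph auth get-or-create client.windows mon 'profile rbd' osd 'profile rbd pool=rbd' > client.windows.keyring

        # On Windows: place ceph.conf and the keyring under C:\ProgramData\ceph\
        # and map the image through the WNBD driver
        rbd device map rbd/my-image --id windows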
  10. G

    Problem with trim/discard on Ceph storage

    Hello, I have problems with various VMs that don't seem to release unused space. Here is the VM config: Here is the fstab: Here is the LVM config: Here is the filesystem free disk space: I've also issued fstrim manually after a poweroff/poweron of the VM: But when I investigate du on...
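    For space to come back, discard has to be enabled along the whole chain: virtual disk, controller and guest. A sketch, assuming VM ID 100 with a SCSI disk on a Ceph pool (names are placeholders):

        # Enable discard on the virtual disk (VirtIO SCSI controller recommended)
        qm set 100 --scsi0 ceph-pool:vm-100-disk-0,discard=on,ssd=1

        # Inside the guest: trim once manually to verify it works
        fstrim -av

        # On the host: check what the RBD image actually occupies now
        rbd du ceph-pool/vm-100-disk-0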
  11. J

    SMART/Health failure on Ceph install

    I have been running a small (3-node) homelab Proxmox cluster with Ceph for almost 5 months. It has been running great and I have learned a lot on many levels. So far so good, but yesterday I noticed that two of the three 1TB Samsung Pro NVMe drives were in degraded mode because of percentage...
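    The wear figures can be read straight from the drives, which helps distinguish real wear from a reporting quirk. A sketch, assuming the first NVMe device:

        # Full SMART/health output for the NVMe drive
        smartctl -a /dev/nvme0

        # "percentage_used" is the drive's own wear estimate; values near or
        # past 100% mean the rated endurance is used up
        nvme smart-log /dev/nvme0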
  12. M

    ceph status "authenticate timed out after 300" after attempt to migrate to new network

    Hey dear Proxmox Pros, I have an issue with my Ceph cluster after attempting to migrate it to a dedicated network. I have a 3-node Proxmox cluster with Ceph enabled. Since I only had one network connection on each of the nodes, I wanted to create a dedicated and separate network only for the...
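    A timeout like this usually means clients are still looking for monitors on the old addresses; the relevant entries live in /etc/pve/ceph.conf. A sketch of the lines to check, with placeholder addresses:

        [global]
            # Must match the network the monitors actually listen on now
            public_network = 192.168.50.0/24
            mon_host = 192.168.50.11 192.168.50.12 192.168.50.13

    If the monitors were created on the old addresses, they generally have to be destroyed and re-created on the new network one at a time, keeping quorum.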
  13. C

    1 ceph or 2 ceph clusters for fast and slow pools?

    Hi all, are there any recommendations for configuring pools with differing goals? I am setting up two pools with opposite goals: a fast NVMe pool for VMs and a slow CephFS using spinning drives. Considering the different objectives, I was thinking of setting up two ceph clusters. Both would be...
  14. C

    Ceph unavailable from single node

    Hi all, I have a 3-node cluster with ceph storage. I want to configure the ceph cluster to keep working with only one node (in the event of a double failure). Currently ceph continues working correctly with the failure of one node, but as soon as two nodes are down, ceph becomes unavailable and you...
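    Two separate limits apply here: the monitors need a majority for quorum (2 of 3), and each pool needs min_size replicas available for I/O. The pool side can be relaxed, at the cost of running on a single replica; the quorum side cannot. A sketch with a placeholder pool name:

        # Allow I/O with a single remaining replica -- only as a temporary measure
        ceph osd pool set vm-pool min_size 1

        # Monitor quorum still requires 2 of 3 monitors to be up
        ceph quorum_status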
  15. L

    Proxmox Ceph multiple roots that have the same host items but different weights

    I'm trying to create two ceph pools across 3 nodes (Site A) and 3 nodes (Site B) with two different rules. The first rule is a stretched Ceph across Site A and Site B, and the second rule is a standalone ceph rule on Site A only. Each of the 6 nodes has 3 OSDs, so there are 18 OSDs in total. I'm...
  16. B

    Mini PC Proxmox cluster with ceph

    Hello, I am trying to create a high-availability Proxmox cluster with three nodes and three SSDs, however I can't seem to get it working. The monitoring on two of the three nodes isn't working and I cannot actually create the VM on the share
  17. F

    Swapping Boot Drives

    I'm operating a Dell PowerEdge R740 server as a Ceph node. The current system boot configuration uses a single SAS SSD for the OS drive. I'm planning to migrate to a Dell BOSS M.2 SATA drive via PCIe adapter (two drives as RAID 1) (https://www.ebay.com/itm/296760576190) to optimize the...
  18. N

    Ceph Reef for Windows x64 Install Leads to Automatic Repair Mode

    Hey everyone, I’m running into a serious issue while trying to install Ceph Reef (Windows x64) on my Windows laptop. The installation seems to go smoothly until it prompts me to restart the PC. However, after the restart, my computer goes into automatic repair mode, and I can't seem to get it...
  19. F

    Advice on configuring a cluster w/o external NAS

    I have 3 HPE ProLiant DL360 Gen10 servers that I would like to configure as a 3-node Proxmox cluster. Each node has 6 x 2TB data drives, for a total of 12TB per node, plus 2 system drives in a mirror. Most configurations I have seen rely on an external NAS as the shared storage space. I am especially...
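    With that hardware, Ceph on the data drives is the usual way to get shared storage without a NAS. A rough per-node sketch, assuming a dedicated 10.10.10.0/24 Ceph network and /dev/sdb, /dev/sdc as example data drives:

        # On every node
        pveceph install
        # On the first node only
        pveceph init --network 10.10.10.0/24
        # On every node: one monitor, then one OSD per data drive
        pveceph mon create
        pveceph osd create /dev/sdb
        pveceph osd create /dev/sdc
        # Finally, a replicated pool usable from all nodes
        pveceph pool create vm-pool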
  20. t.lamprecht

    Ceph 19.2 Squid Available as Technology Preview and Ceph 17.2 Quincy soon to be EOL

    Hi Community! The recently released Ceph 19.2 Squid is now available on the Proxmox Ceph test and no-subscription repositories to install or upgrade. Upgrades from Reef to Squid: You can find the upgrade how-to here: https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid New Installation of Reef...
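    For reference, the upgrade is essentially a repository switch followed by a rolling restart of the daemons; a condensed sketch for PVE 8 on Debian Bookworm with the no-subscription repository (the wiki article above has the full procedure):

        # Switch the Ceph repository from Reef to Squid on every node
        echo "deb http://download.proxmox.com/debian/ceph-squid bookworm no-subscription" > /etc/apt/sources.list.d/ceph.list

        # Keep the cluster from rebalancing during the upgrade
        ceph osd set noout
        apt update && apt full-upgrade

        # Restart monitors, then managers, then OSDs, node by node; afterwards
        ceph osd unset noout
        ceph versions   # everything should report 19.2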
