Ceph cluster

  1. [SOLVED] Swapping SSDs in a 4-node cluster with Ceph

    Hello, I am facing the challenge that, according to SMART, some of the disks are near the end of their life (SSD wearout). I have now moved the VMs to the other nodes and shut the server down. Then I swapped the disk and started the server again. Is there a...
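
    For reference, the usual flow for swapping out a worn OSD disk looks roughly like this; a sketch only, assuming OSD id 2 and a new device /dev/sdX (neither is named in the thread):

      ceph osd set noout                           # keep Ceph from rebalancing during the swap
      ceph osd out 2                               # take the worn OSD out of service
      systemctl stop ceph-osd@2
      ceph osd destroy 2 --yes-i-really-mean-it    # free the OSD id for reuse
      # physically replace the SSD, then create a new OSD on it
      pveceph osd create /dev/sdX
      ceph osd unset noout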
  2. Compute and Ceph Storage Cluster

    Hi everyone, I want to ask about best practice for the 4 nodes that I have: node-1, a DL380, dual-proc Xeon Gold with 256 GB RAM; node-2, a DL380, dual-proc Xeon Silver with 96 GB RAM; node-3, a DL380, single-proc Xeon Silver with 32 GB RAM and 3x 1.92 TB SSD; node-4, a DL380, single-proc Xeon Silver with 32 GB RAM and 4x 1.92 TB SSD...
  3. CEPH ERROR

    Hi everyone, I have a controller and 3 nodes on which to install Ceph: controller (ceph_yonetici 192.168.122.10), nodes (ceph_osdx 192.168.122.11, ceph_osdy 192.168.122.12, ceph_osdz 192.168.122.13). At first everything worked wonderfully, but when I added the nodes as hosts I saw this error. I can SSH to the nodes...
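
    If this is a cephadm-managed cluster (an assumption; the thread does not say), host-add failures despite working SSH are often just the orchestrator's key missing on the target, e.g.:

      # copy the cluster's SSH key to each node, then register the host
      ssh-copy-id -f -i /etc/ceph/ceph.pub root@192.168.122.11
      ceph orch host add ceph_osdx 192.168.122.11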
  4. [SOLVED] Ceph health warning: unable to load:snappy

    Hello, after a server crash I was able to repair the cluster. The health check looks OK, but there is this warning for 68 OSDs: unable to load:snappy. All affected OSDs are located on the same cluster node, so I checked the version of the related library package libsnappy1v5, which was 1.1.9. Comparing this file...
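
    A plausible way to verify and repair the library on the affected node (a sketch, not the thread's confirmed fix):

      dpkg -s libsnappy1v5 | grep Version       # confirm which version is installed
      apt install --reinstall libsnappy1v5      # restore the package's files
      systemctl restart ceph-osd.target         # restart OSDs so they reload the library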
  5. Multi-region Ceph + mixed host HW?

    I'm facing a bit of a challenge with my current project and I'm hoping that someone here might have some wisdom to share. For context, my end goal is to have a self-hosted S3 service replicated across 3 data centers (1 West coast, 2 East coast). I have 6 storage servers (2 for each DC) that are...
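
    The usual building block for this is RGW multisite; a minimal sketch, with realm/zonegroup/zone/endpoint names (myrealm, us, us-west, rgw-west) invented for illustration:

      radosgw-admin realm create --rgw-realm=myrealm --default
      radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw-west:8080 --master --default
      radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west --endpoints=http://rgw-west:8080 --master --default
      radosgw-admin period update --commit      # publish the new topology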
  6. ceph: 5 nodes with 16 drives vs 10 nodes with 8 drives

    Hi, I'm designing a Ceph cluster for our VFX studio. We have about 32 artist seats, and I need high sequential read and write speeds, not so much IOPS. I will use whatever it takes to put the best possible hardware inside each node, but I have to decide now if I go with many nodes with fewer...
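
    As a back-of-envelope check (assumed numbers, not from the thread): 10 nodes x 8 drives x ~500 MB/s is ~40 GB/s of raw write bandwidth, but with 3x replication clients see at most ~13 GB/s of that, and each node must also carry roughly twice its share of the client write traffic on the cluster network for replication. Spreading the same drive count over more nodes eases both the per-node network load and the rebalance impact when a node fails.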
  7. Extend LVM of Ceph DB/WAL Disk

    I have a 3-node Ceph cluster running Proxmox 7.2. Each node has 4x HDD OSDs, and on each node the 4 OSDs share an Intel enterprise SSD for the Ceph OSD database (DB/WAL). I am going to be adding a 5th OSD HDD to each node and will also add an additional Intel enterprise SSD on each node for use with...
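
    Growing an existing DB logical volume is typically a two-step job; a sketch assuming OSD 0 and a hypothetical VG/LV name:

      lvextend -L +60G /dev/ceph-db/db-osd-0       # grow the LV backing block.db
      systemctl stop ceph-osd@0
      ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0   # let BlueFS claim the new space
      systemctl start ceph-osd@0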
  8. create 2 pools on 5 nodes proxmox with 20 OSD

    Hi all, I have been running a 5-node Proxmox cluster for a while now. I have installed Ceph and have come to configuring the Ceph pools: I need two pools, one for VMs and the other for containers. I need help with the configuration of those pools (size, min size, number of PGs), as pgcalc said to put it at 512 and from...
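
    On Proxmox this can be done per pool from the CLI; a sketch with hypothetical pool names and PG counts (validate the counts against pgcalc for your OSD total):

      pveceph pool create vm-pool --size 3 --min_size 2 --pg_num 512
      pveceph pool create ct-pool --size 3 --min_size 2 --pg_num 256

    On recent releases the PG autoscaler can also manage pg_num automatically.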
  9. proxmox 6.2 ceph issue

    Hi all! I really hope someone can guide me with this issue, because I don't know Ceph well and I am not sure about the next steps. I have a 3-node cluster, same hardware. Ceph was working without any issue until a reboot I had to do because of a network issue. Now, on one node, Ceph is...
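
    The first diagnostic steps on a node where Ceph stops working after a reboot are usually:

      ceph -s                                         # overall cluster health
      ceph osd tree                                   # which OSDs are down, and on which host
      journalctl -u ceph-mon@$(hostname) -b           # monitor log since the reboot
      journalctl -u ceph-osd@3 --since "1 hour ago"   # per-OSD log (id 3 is a placeholder)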
  10. Changing Ceph configuration

    Hello, I want to change the Ceph config to use the bridge instead of the VLANs I have created. But when I change the ceph.conf file to the IP network of the bridge, my Ceph doesn't work anymore. Any suggestions on how I can do that? My main issue is that I am not getting enough read/write on the...
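
    The relevant settings live in /etc/pve/ceph.conf; a sketch with made-up subnets:

      [global]
          public_network  = 10.10.10.0/24   # where clients and MONs talk
          cluster_network = 10.10.20.0/24   # OSD replication traffic

    Note that monitor addresses are also recorded in the monmap, so changing the networks in ceph.conf alone is not enough; the MONs must be recreated on the new network (or the monmap edited).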
  11. Ceph block.db and block.wal

    Hello, I'm looking over the Proxmox documentation for building a Ceph cluster here... https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster There is a small section entitled block.db and block.wal which says... I was wondering if anyone knows how much of a performance advantage...
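
    For context, placing the DB on a faster device is a per-OSD creation option; a sketch assuming /dev/sdd as the data disk and an NVMe as the DB device (if no separate WAL device is given, the WAL is placed on the DB device as well):

      pveceph osd create /dev/sdd --db_dev /dev/nvme0n1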
  12. VM freezes for about 15 seconds, Ceph storage

    Hello everyone, I wanted to familiarize myself with Ceph in Proxmox and built a small 3-node cluster for testing purposes. In it I installed 2x SSDs per node for Ceph (3x2 in total). I split the Ceph network into public and...
  13. How to re-add ceph disk

    Hi guys! I experimented with Ceph's behavior when one of the disks fails. The disk was "hot" removed from the server. After a couple of days, the disk was inserted back; its status was down/out. I clicked the IN button, but the down status didn't change. The server has been rebooted, but...
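
    When a pulled disk reappears, the OSD usually has to be re-activated before marking it "in" does anything; a sketch assuming OSD id 7:

      ceph-volume lvm activate --all     # re-activate OSDs whose devices came back
      systemctl start ceph-osd@7
      ceph osd in 7

    If the OSD still won't start, the common route is to destroy it and re-create it on the same disk.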
  14. Ceph cluster - OSDs thrown off

    Hello all, we are running a cluster of 5 nodes in our environment, with Ceph storage managed in the cluster. Each node has 4 OSDs, so 20 OSDs in total. Each node is bare metal with Proxmox VE 6.4-8, configured with CPU(s): 72 x Intel(R) Xeon(R) CPU E5-2699 v3 @...
  15. Ceph new rule for HDD storage.

    Hi guys, in some free time I got to think about how to extend and add new resources (storage) to our cloud. At the moment I have storage based on Ceph, with SSD-type OSDs only. I was reading the Ceph docs and I can say it's possible; I even have a plan. The problem is I have no idea if the actions I...
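
    With device classes, separate HDD and SSD rules take only a few commands; a sketch (the pool name hdd-pool is hypothetical):

      ceph osd crush rule create-replicated replicated_hdd default host hdd
      ceph osd crush rule create-replicated replicated_ssd default host ssd
      ceph osd pool set hdd-pool crush_rule replicated_hdd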
  16. Ideal HA node config on a budget + Ceph nodes as two separate data pools?

    After spending some time learning that SAN is the traditional storage route, and that Ceph + "hyperconverged" is actually encouraged now that we have the technology, I am a little lost on how to proceed with my home lab. I don't know if using Ceph would be like trying to shoehorn a new...
  17. [SOLVED] CEPH MON fail after upgrade

    Hi, on my test cluster I upgraded all my nodes from 7.0 to 7.1; Ceph went to Pacific 16.2.7 (was 16.2.5?). Now the monitors and managers won't start. I had a pool and CephFS configured with an MDS. I've read somewhere that a pool in combination with an old CephFS (I came from PVE 6) could...
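
    The monitor's own log is usually the quickest way to see why it refuses to start after an upgrade:

      systemctl status ceph-mon@$(hostname)
      journalctl -u ceph-mon@$(hostname) -b --no-pager | tail -50
      ceph-mon --version                 # confirm the binary really is 16.2.7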
  18. Advice Regarding Proxmox Cluster Hardware Setup

    Hello everyone, for quite some time I have been looking into building myself a server cluster to run my virtual environment, which will host various types of services, both internal to my homelab and some that are externally accessible. I have been looking into using Proxmox to achieve this...
  19. Proxmox ceph CPU general question

    Hello, I am new to using Proxmox and Ceph. I recently set up a homelab using 3 nodes (3 CPUs, 4 cores per socket). It was nice to have a small cluster at my house. The Ceph storage is distributed across all 3 nodes, and I have set it up as RBD. So, is RBD (block device) a way to spread the...
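
    Briefly, yes: an RBD image is split into objects (4 MiB by default) that are distributed over the pool's placement groups, and therefore across the OSDs on all nodes. A minimal sketch (pool and image names are hypothetical):

      ceph osd pool create rbd-pool 128
      ceph osd pool application enable rbd-pool rbd
      rbd create rbd-pool/test-image --size 10G
      rbd info rbd-pool/test-image       # "order 22" means 4 MiB objects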
  20. Ceph Error

    Good morning everyone, I have a cluster of 3 Proxmox servers on version 6.4-13. Last Friday I updated Ceph from Nautilus to Octopus, since that is one of the requirements for upgrading Proxmox to version 7. At first everything worked fine, but today when I checked I found that it is giving me the...
