ceph

  1. installation of 2 servers for school. Solid advice welcome

    So instead of ordering 1 expensive Dell/HP server, I decided to buy 2 identical, more consumer-grade servers. The purpose is that if one fails, the other takes over. Main components: MSI MPG X670E CARBON WIFI, AMD Ryzen 9 7950X3D processor (32 threads), 192 GB DDR5 RAM, Crucial MX500 250GB SSD (boot), 2...
  2. Proxmox VE 8.1.3 HCI deployment with Reef

    Hi, I have deployed an HCI platform using Proxmox 8.1.3 and Ceph Reef. My deployment uses two networks for Ceph: a public net 10.0.0.0/24 and a cluster net 30.0.0.0/24. I have noticed something in the Ceph configuration file: [global] auth_client_required = cephx...
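    For reference, a minimal sketch of what the [global] section looks like for a split setup like the one described (the subnets are taken from the post; the option names are the standard Ceph settings Proxmox writes):

        [global]
            auth_client_required = cephx
            public_network = 10.0.0.0/24
            cluster_network = 30.0.0.0/24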
  3. Proxmox Ceph cluster and external clients

    I was looking to see if anyone was aware of any modifications in the Proxmox Ceph release, compared to the upstream repository, that could affect how you would get an external client to access RBD volumes and CephFS pools. I recently built a test cluster of all PVE 8.1.3 (3 machines, 1 SSD...
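    As a rough sketch of what an external RBD client usually needs (the client and pool names below are invented for illustration), assuming the monitor addresses are reachable from outside the cluster:

        # on the cluster: create a key for the external client
        ceph auth get-or-create client.external mon 'profile rbd' osd 'profile rbd pool=vmpool'
        # on the external host, with /etc/ceph/ceph.conf pointing at the mons
        # and the exported keyring in place:
        rbd --id external -p vmpool ls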
  4. Recover Data from Ceph OSDs

    Is there a way to recover data from OSDs if the monitors and managers don't work anymore but the OSDs still start up? Just for clarification: I don't want to rebuild the cluster; I just want to copy data from the OSDs to another HDD.
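    Upstream Ceph documents a procedure for rebuilding the monitor store from surviving OSDs, which is usually the route when only the OSDs are left; very roughly, and best rehearsed on copies of the disks first (paths are examples):

        # with the OSD stopped, repeat for every OSD:
        ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
            --op update-mon-db --mon-store-path /root/mon-store
        # the accumulated store is then used to bring up a fresh monitor,
        # after which the data can be copied off normally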
  5. 2 Node Proxmox Ceph Cluster

    Hello, we recently got new servers for our organization and haven't really had experience with Proxmox before. We migrated everything from VMware to Proxmox; it worked really well. Now we are facing the problem that we don't have node failover since our pool min_size is 2. A little bit about our...
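    For context, the pool settings in question can be read directly, as sketched below. With two nodes and size=2/min_size=2, losing a node stops writes by design, and lowering min_size to 1 trades that availability for a real risk of data loss; the usual answer is a third node (even a small one) for Ceph, or at least a QDevice for Proxmox quorum.

        ceph osd pool get <pool> size
        ceph osd pool get <pool> min_size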
  6. Ceph monitor address unknown

    Hello, today I added another node to the cluster. Before attaching it, I installed Ceph. Ceph is version 17.2.6, while on the other nodes it is version 17.2.5. On the new server, I can't enable the Ceph monitor, even though when I try to do so, I get a response that everything is OK. The message...
  7. syslog error in ceph after upgrade to 8.1

    Recently I upgraded the server to Proxmox version 8.1.x from 7.x and found this error message in syslog; kindly advise if this is some kind of error during the upgrade: 2023-12-07T18:35:26.242453+05:30 pve-3 ceph-crash[2036]: 2023-12-07T18:35:26.236+0530 7f83573ff6c0 -1 auth: unable to find a...
  8. Ceph view only in dashboard

    Hi all, I have set up a Ceph cluster using the cephadm dashboard. (The Proxmox dashboard did not have the features I needed.) Is there any way I can enable the Ceph section in the Proxmox dashboard as read-only, just so I can look at one dashboard to see if there are any alerts? It actually does...
  9. Hyper-converged Cluster Networking

    What is the best practice for the corosync network? I can't imagine it having much in the way of bandwidth requirements, and I'd hate to use a separate NIC just for this; it seems to just need low latency. Since I have the Ceph public and cluster networks separated on separate 10G NICs, can I put corosync on the...
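    Corosync indeed needs very little bandwidth but is sensitive to latency spikes, and Proxmox supports several corosync links so traffic can fail over if one network saturates; a sketch of the relevant part of /etc/pve/corosync.conf (addresses invented):

        node {
            name: pve1
            ring0_addr: 192.168.10.11   # management network, used as link0
            ring1_addr: 192.168.20.11   # e.g. the Ceph public network as a fallback link
            ...
        }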
  10. Recommendation Request: Dual Node Shared Storage

    Had a client request a fully redundant dual-node setup, and most of my experience has been either with single nodes (ZFS FTW) or lots of nodes (Ceph FTW). Neither of those seems to work well in a dual-node fully redundant setup. Here's my thinking; I wanted to see what the wisdom of the...
  11. To separate ceph cluster or not?

    I have a 6-node hyper-converged cluster where 3 nodes handle compute with SSD OSDs and the other 3 nodes have HDD OSDs (they handle no compute). In the years since the first deployment a few patterns have emerged: the HDD pool is used exclusively for CephFS, and the SSD pool is used exclusively for RBD (HA VM...
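    Whether or not the clusters get split, the SSD/HDD separation itself is normally expressed with device-class CRUSH rules, so each pool stays on its own kind of disk; roughly (rule and pool names are placeholders):

        ceph osd crush rule create-replicated rule-ssd default host ssd
        ceph osd crush rule create-replicated rule-hdd default host hdd
        ceph osd pool set <rbd-pool> crush_rule rule-ssd
        ceph osd pool set <cephfs-data-pool> crush_rule rule-hdd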
  12. Partial replication of virtual machine disk to ceph

    Is it possible to partially replicate a virtual machine disk on Ceph? For example, not to replicate /mnt/...?
  13. Failed OSD for ceph stale pg

    Hi guys, long shot, but we have an old 2/1 pool on our hyperconverged Proxmox install and have lost an OSD. I now have 3 stale PGs, funnily enough showing as on this OSD. Is there any way I can try to recover data from the failed disk and import it back in? The disk shows in the node but just...
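    If the only copy of those PGs is on the failed disk, the usual last-resort route is exporting them with the offline objectstore tool and importing them into a healthy OSD; the PG id and OSD paths below are just examples:

        # on the node with the failed disk, OSD stopped:
        ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
            --pgid 2.1a --op export --file /root/pg.2.1a.export
        # on a node with a healthy OSD, also stopped:
        ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
            --op import --file /root/pg.2.1a.export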
  14. Ceph size looks like shrinking as filling up

    Hello, today we noticed our Ceph pool looks like it is shrinking as it fills up. Is this normal, a visual bug, or do we need to change something? It started with a size of 5TB; after putting 1.6TB of data on it, it looks like it's reduced to 3.6TB. root@pxcl-3:~# ceph status cluster: id: health...
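    If the 5TB figure is the pool's MAX AVAIL (the free space still usable by that pool at its replication level) rather than its total capacity, this is expected behaviour: after storing about 1.6TB, roughly 5 - 1.6 ≈ 3.4TB remains available, close to the 3.6TB now shown, with the small difference coming from overhead and rounding. ceph df detail separates STORED, USED and MAX AVAIL and makes this easier to read.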
  15. Proxmox cluster architecture choice

    Hello, my company wants to set up a Proxmox infrastructure, but we're hesitating between two choices (we can adjust the hardware configuration of each server). We want to have at least 5TB of usable storage. Depending on the choice, we invest in a backup server (PBS or TrueNAS replication) a...
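    As a rough sizing example (numbers invented): with Ceph's default 3-way replication, 5TB of usable storage needs at least 15TB of raw OSD capacity spread across three or more nodes, e.g. 3 nodes x 2 x 4TB = 24TB raw ≈ 8TB usable at size=3, which also leaves headroom below the ~85% nearfull warning.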
  16. OSD rebalance at 1Gb/s over 10Gb/s network?

    Hi, I'm trying to build a hyper-converged 3-node cluster with 4 OSDs each on Proxmox, but I'm having some issues with the OSDs... The first one is the rebalance speed: I've noticed that, even over a 10Gbps network, Ceph rebalances my pool at a maximum of 1Gbps, but iperf3 confirms that the link is effectively...
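    Backfill speed is throttled by the OSDs themselves rather than by the network, so a mostly idle 10Gbps link during rebalance is common; the knobs usually involved look like this (values are only examples, and on Quincy/Reef the mClock scheduler profiles take precedence over some of them):

        ceph config set osd osd_max_backfills 4
        ceph config set osd osd_recovery_max_active 8
        # on releases using the mClock scheduler:
        ceph config set osd osd_mclock_profile high_recovery_ops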
  17. [SOLVED] Single Node Unavailable - Cluster Not Ready

    Hi, we have a 3-node Proxmox VE 5.2-9 cluster (with Ceph) that is having issues with one of its nodes synchronizing. Connecting to its GUI, the node is online but all VMs are offline, and starting them results in the message: "Cluster not ready - no quorum". pvecm status (node 1) Quorum...
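    The usual first checks for a single node dropping out like this are whether it can still reach the other nodes on the corosync network and what corosync itself is logging, for example:

        pvecm status                      # on a healthy node: quorum state and members
        systemctl status corosync pve-cluster
        journalctl -u corosync -b         # on the affected node: look for link/token timeouts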
  18. Ceph OSD creation error

    Setting up Ceph on a three-node cluster; all three nodes are fresh hardware and fresh installs of PVE. I'm getting an error on all three nodes when trying to create the OSD, either via the GUI or the CLI: create OSD on /dev/sdc (bluestore) wiping block device /dev/sdc 200+0 records in 200+0 records out 209715200...
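    Since the error output is truncated here this is only the most common cause, but OSD creation often fails when the target disk still carries old partition, LVM or filesystem signatures; a typical (destructive) cleanup before retrying, assuming /dev/sdc really is the intended disk, would be:

        ceph-volume lvm zap /dev/sdc --destroy
        # then retry the creation, e.g. via the Proxmox CLI:
        pveceph osd create /dev/sdc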
  19. Hyperconverged Proxmox + Ceph Cluster - how to reconnect the right disk to nodes

    Hi, I had created a 3-node Proxmox cluster with 3 Lenovo M720Qs (for simplicity I call the nodes N1, N2 and N3). Then I added 4 disks (D1, D2, D3 and D4). All was working fine. Then I moved all the SFF PCs and the disks from my desk to the rack, but unfortunately I did not write down the...
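    The disk-to-node mapping can usually be recovered from the disks themselves, because every OSD records its ID and fsid in its LVM metadata; for example:

        ceph-volume lvm list    # per device: osd id and osd fsid
        ceph osd tree           # shows which host each osd id is expected under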
  20. Empty Ceph Mgr Dashboard after latest apt upgrade

    On PVE 8, the "Cluster utilization" charts are now blank; the funny part is that the Ceph UI works now, which it didn't before :cool: - any hints?
