Hello there,
I want to create two separate pools in my Ceph cluster. At the moment I have a configuration on 4 nodes with M.2 NVMe drives as OSDs. My nodes also have SATA SSD drives which I'd like to use for a second pool, but I don't see any option to select which OSDs a pool should use; you just add them and that's it...
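The usual way to split fast and slow devices into separate pools is with CRUSH device classes and per-class rules rather than picking OSDs per pool. A minimal sketch, assuming the NVMe OSDs should end up in class "nvme" and the SATA SSDs in class "ssd" (OSD IDs, rule and pool names here are only placeholders; NVMe drives are sometimes auto-classified as "ssd", so check first):

# show the current class of each OSD (CLASS column)
ceph osd tree
# reclassify the NVMe OSDs if needed (osd.0/osd.1 are placeholders)
ceph osd crush rm-device-class osd.0 osd.1
ceph osd crush set-device-class nvme osd.0 osd.1
# one replicated rule per device class, host as failure domain
ceph osd crush rule create-replicated nvme-rule default host nvme
ceph osd crush rule create-replicated ssd-rule default host ssd
# bind each pool to its rule
ceph osd pool set nvme-pool crush_rule nvme-rule
ceph osd pool set ssd-pool crush_rule ssd-rule

The Proxmox pool creation dialog should also let you pick the CRUSH rule, so once the rules exist you can do the last step from the GUI.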
Greetings community!
After a few months of using Ceph from Proxmox I decided to add a new disk and got stuck with this issue.
ceph version 17.2.7 (2dd3854d5b35a35486e86e2616727168e244f470) quincy (stable)
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster...
Setting up Ceph on a three-node cluster; all three nodes are fresh hardware with fresh installs of PVE. I'm getting an error on all three nodes when trying to create the OSD, either via the GUI or the CLI.
create OSD on /dev/sdc (bluestore)
wiping block device /dev/sdc
200+0 records in
200+0 records out
209715200...
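The actual error is cut off above, but when OSD creation fails even on freshly installed nodes, the disk often still carries old partition, filesystem or LVM signatures. A hedged first-pass check along those lines (device name taken from the post; the zap step is destructive):

# list anything still living on the disk
lsblk -o NAME,SIZE,FSTYPE,TYPE /dev/sdc
wipefs --no-act /dev/sdc
# if leftovers show up, clear them and retry
ceph-volume lvm zap /dev/sdc --destroy
pveceph osd create /dev/sdc
# the full failure reason usually ends up in /var/log/ceph/ceph-volume.log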
Hello,
this has probably been discussed often, but here is my question as well:
since we set up our Ceph cluster we have seen uneven usage across all OSDs.
4 nodes with 7x 1TB SSDs (1U, no space left)
3 nodes with 8x 1TB SSDs (2U, some space left)
= 52 SSDs
pve 7.2-11
All Ceph nodes show us the same, like...
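The post is truncated here, but for uneven OSD utilisation the usual first checks are the PG distribution and the balancer. A short sketch, assuming the CRUSH weights themselves are correct:

# per-OSD utilisation and PG counts
ceph osd df tree
# let the balancer even out PG placement (upmap needs all clients on Luminous or newer)
ceph balancer status
ceph balancer mode upmap
ceph balancer on
# one-off alternative instead of the balancer
ceph osd reweight-by-utilization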
I am trying to add a new OSD without success.
The logs and information follow:
---------------------------------------------------------------------------------------------------
# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve)
pve-manager: 6.4-15 (running version...
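The output above is truncated before the actual failure, so only as a hedged starting point: both the GUI and pveceph drive ceph-volume under the hood, and its log usually contains the real error (the device name below is a placeholder):

pveceph osd create /dev/sdX
# see why ceph-volume bailed out
tail -n 100 /var/log/ceph/ceph-volume.log
# list what ceph-volume already knows about existing OSDs/LVs
ceph-volume lvm list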
Hi,
I'm having trouble adding a new OSD to an existing Ceph cluster.
-----------------------------------------------------------------------------------------------------------------
pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve)
pve-manager: 6.4-15 (running version...
Hi,
I have a 3-node cluster running Proxmox 6.4-8 with Ceph. Two of the 3 nodes have 1.2TB for Ceph (each node has one 1.2TB disk for the OSD and one 1.2TB disk for the DB); the third node has the same configuration but with 900GB disks. I decided to stop, out, and destroy the 900GB OSD to replace it with a 1.2TB...
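For reference, a sketch of that replace sequence with the Proxmox tooling, assuming the old disk is osd.2 and the new data/DB devices are /dev/sdX and /dev/sdY (all placeholders):

ceph osd out osd.2
systemctl stop ceph-osd@2
# remove the OSD and clean up the old disk's partitions
pveceph osd destroy 2 --cleanup
# after swapping in the 1.2TB disks, recreate with a separate DB device
pveceph osd create /dev/sdX --db_dev /dev/sdY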
Hello! I have a 10-node hyperconverged cluster. Each node has 4 OSDs (40 OSDs total). I have 2 different questions:
QUESTION 1 - OSD REPLACEMENT (with identical SSD)
Since I need to replace an SSD (one OSD has crashed 3 times in recent months, so I prefer to replace it with a brand-new one)...
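For a like-for-like swap it can make sense to keep the OSD ID so the CRUSH map barely changes; a minimal sketch, assuming the failing OSD is osd.17 on /dev/sdX (both placeholders):

ceph osd out osd.17
systemctl stop ceph-osd@17
# "destroy" keeps the ID and CRUSH entry so the replacement can reuse them
ceph osd destroy 17 --yes-i-really-mean-it
# after physically swapping the SSD, create the new OSD with the same ID
ceph-volume lvm create --osd-id 17 --data /dev/sdX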