Hi,
after adding an OSD to Ceph it is advisable to create the corresponding entry in the CRUSH map with a weight that depends on the disk size.
Example:
ceph osd crush set osd.<id> <weight> root=default host=<hostname>
Question:
How is the weight defined depending on disk size?
Which algorithm can be...
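For reference, the usual convention is that the CRUSH weight is simply the disk capacity expressed in TiB (1 TiB = weight 1.0), which is also what a freshly created OSD gets assigned automatically. A rough sketch with placeholder id/host names:

# a 4 TB disk is 4 * 10^12 / 2^40 ≈ 3.64 TiB, so weight ≈ 3.64
ceph osd crush set osd.<id> 3.64 root=default host=<hostname>
# adjust the weight later without touching the CRUSH location:
ceph osd crush reweight osd.<id> 3.64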
Hi,
I have created OSDs on HDDs without putting the DB on a faster drive.
In order to improve performance I now have a single 3.8 TB SSD.
Questions:
How can I add a DB device on this new SSD for every single OSD?
Which parameter in ceph.conf defines the size of the DB?
Can you confirm that...
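For what it's worth, the DB size applied at OSD creation time is controlled by bluestore_block_db_size in ceph.conf, and moving the DB of an existing OSD usually means re-creating that OSD with a DB device. A hedged sketch, assuming PVE 6 / Nautilus; ids, device names and the 280 GiB size are only examples:

# ceph.conf: DB size used when new OSDs/DB volumes are created (280 GiB = 300647710720 bytes)
[osd]
bluestore_block_db_size = 300647710720

# re-create one OSD with its DB on the SSD (repeat per OSD)
pveceph osd destroy <id> --cleanup
pveceph osd create /dev/sd<X> --db_dev /dev/sd<ssd> --db_size 280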
Hi,
I finished the upgrade to Proxmox 6 + Ceph Nautilus on a 4-node cluster.
On 2 nodes I found that all /var/lib/ceph/osd/ceph-<id>/ directories are empty after rebooting.
Normally the content of this directory looks like this:
root@ld5508:~# ls -l /var/lib/ceph/osd/ceph-70/
total 60...
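In case it helps others: OSDs that were originally created with ceph-disk need a one-time scan under Nautilus so they get mounted again at boot. A hedged sketch, following the Nautilus upgrade notes:

# record the metadata of the existing ceph-disk OSDs under /etc/ceph/osd/*.json
ceph-volume simple scan
# enable activation at boot, which mounts /var/lib/ceph/osd/ceph-<id> again
ceph-volume simple activate --all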
Hi guys, I noticed that OSD encryption is available under Create OSD. Is there any mechanism to show which OSDs are encrypted, in the UI or on the command line? I created some encrypted and some non-encrypted OSDs to gauge performance but am unable to differentiate which are which.
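One possible way to check on the command line, assuming the OSDs were created with ceph-volume (the exact field names may differ per release):

# the per-OSD report contains an "encrypted" flag
ceph-volume lvm list
# or look at the LVM tags ceph-volume sets on the OSD volumes
lvs -o lv_name,lv_tags | grep ceph.encrypted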
Hello everyone,
I have a 3-node cluster with 3 OSDs in each of these nodes.
My Ceph version is: 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable).
The pool has replica 3/2 with 128 PGs. As soon as I restore a VM from a backup that resides on an NFS share, the...
Dear Proxmox Team,
we have 3 Dell servers running Proxmox 6.0. Unfortunately we encountered an issue with the setup of Ceph OSDs on individual drives. The main problem is that the PERC H710 Mini adapter does not allow "IT mode" / JBOD passthrough of individual drives, and so we're stuck...
Hello everyone!
We have about 20 VMs running in a three-node cluster with Ceph.
Half of the VMs were migrated with Clonezilla (P2V); the other half I converted to raw from VMware.
Now the problem: a few weeks ago an Ubuntu 18.04 VM hung; it was from VMware...
Hi,
First things first: I know it is not recommended to use RAID 0 disks underneath Ceph; however, that's what I did on 4 Dell R430 servers with a Perc 730 (with six 15k SAS drives). I have pretty decent performance with it and absolutely no issues for the last 2 years. With full SSD nodes I don't use...
# ceph-volume simple scan
stderr: lsblk: /var/lib/ceph/osd/ceph-2: not a block device
stderr: Bad argument "/var/lib/ceph/osd/ceph-2", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument
Running...
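A hedged note: ceph-volume simple scan only understands ceph-disk style OSDs, so it stumbles over OSDs mounted from somewhere else (e.g. LVM-based ones). Pointing it at the data partition directly sometimes helps; the device name below is only an example:

ceph-volume simple scan /dev/sdb1
# LVM-based OSDs need no scan at all and show up here instead:
ceph-volume lvm list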
Hello there, I recently upgraded from Proxmox 5 to 6 as well as from Ceph Luminous to Nautilus. I wanted to go through and re-create the OSDs I have in my cluster. I ran into an issue with the second OSD I wanted to convert (the first went fine). Here's what I get after I zap the disk:
pveceph...
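The error output is cut off here, but a common culprit when re-creating OSDs after the Luminous to Nautilus upgrade is leftover LVM/partition metadata. A hedged, destructive sketch with a placeholder device name:

# wipe old LVM volumes and signatures on the disk (destroys all data on it!)
ceph-volume lvm zap /dev/sd<X> --destroy
# then create the OSD again through PVE
pveceph osd create /dev/sd<X>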
Hi Everyone,
We have been running Proxmox 5.4 with Ceph for a while now without any issues. We recently built a brand new 5-node cluster using Proxmox 6.0. We had no issues getting our OSDs up and running. However, whenever we physically pull a drive, the associated OSD does not show down...
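A few hedged checks that might narrow this down; <id> is a placeholder:

# what does the cluster itself think after the drive is pulled?
ceph osd tree
ceph -s
# the OSD daemon may keep running until it actually hits an I/O error; its log usually shows why
journalctl -u ceph-osd@<id> -f
# as a workaround the OSD can be marked down/out manually
ceph osd down <id>
ceph osd out <id>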
Just upgraded a 3-node cluster to PVE 6.0 last night. I followed the excellent upgrade docs for the PVE and Ceph Nautilus upgrades. I added a new OSD using a new hard drive. I initialized the disk with GPT, and the disk appeared to have a GPT partition table per the "Disks" menu of the web GUI...
Hi,
I want to stop several OSDs to start node maintenance.
However, I get an error message indicating a communication error. Please check the attached screenshot for details.
Please note that I have defined different storage types (NVMe, HDD, etc.) in the CRUSH map in order to use different...
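For reference, the usual CLI sequence for planned node maintenance (OSD ids are placeholders):

# prevent rebalancing while the node is down
ceph osd set noout
# stop the OSD daemons on the node under maintenance
systemctl stop ceph-osd@<id>.service
# ...do the maintenance, then bring everything back
systemctl start ceph-osd@<id>.service
ceph osd unset noout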
How do you define the Ceph OSD disk partition size?
It is always created with only 10 GB of usable space.
Disk size = 3.9 TB
Partition size = 3.7 TB
Using *ceph-disk prepare* and *ceph-disk activate* (see below)
The OSD is created, but only with 10 GB, not 3.7 TB
Commands Used
root@proxmox:~#...
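A hedged suggestion: ceph-disk is deprecated as of Nautilus, and creating the OSD via ceph-volume/pveceph should consume the whole device. The device name below is an example and the zap is destructive:

ceph-volume lvm zap /dev/sd<X> --destroy
pveceph osd create /dev/sd<X>
# verify the resulting size and weight
ceph osd df tree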
We've split users into groups and allocated different roles to each group (essentially admins and non-admin users). Non-admin users are limited to performing VM functions, while users in the admin group have the built-in role "Administrator" applied. This should give them the ability to perform...
Hello,
I want to understand whether I am reaching a bottleneck in my hyperconverged Proxmox + Ceph cluster.
In moments of high load (multiple LXC containers with heavy I/O on small files) I see one node with:
- IO delay at 40%
- around 50% CPU usage
- load at 200 (40 total cores / 2 x Intel(R) Xeon CPU E5-2660...
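Some hedged starting points for narrowing down whether the disks or the CPUs are the limit:

# per-OSD commit/apply latency as Ceph sees it
ceph osd perf
# per-device utilisation and await on the node with the high IO delay
iostat -x 5
# client vs. recovery throughput at a glance
ceph -s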
Hello,
I am looking for updated documentation on the correct procedure for Ceph OSD disk replacement.
The current Proxmox docs only cover OSD creation and are lacking management procedures using pveceph commands.
I found this Ceph document, but some commands do not give the same output (ie...
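For comparison, a hedged sketch of a disk replacement using pveceph, assuming PVE 6; ids and device names are placeholders:

ceph osd out <id>                      # drain the failing OSD
# wait until the cluster has rebalanced and is HEALTH_OK again
systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id> --cleanup     # removes auth/CRUSH entries and wipes the disk
pveceph osd create /dev/sd<X>          # create the replacement on the new disk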
Hi there,
I am running a 2-node Proxmox cluster and have mounted RBD images from a remote Ceph cluster (latest Mimic release). Currently we are using the RBD image mount as backup storage for our VMs (mounted in /var/lib/backup).
It all works fine unless an OSD or an OSD host (we have 3, each...
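A hedged guess at what to check first: if min_size equals size, client I/O blocks as soon as one replica is unavailable. <pool> is a placeholder:

ceph osd pool get <pool> size
ceph osd pool get <pool> min_size
# a common setting for a 3-host, size-3 pool
ceph osd pool set <pool> min_size 2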
Hello dear Proxmox friends,
I have a question.
After I removed two OSDs (the ceph-8 and ceph-13 OSDs) and then added them back (the ceph-21 & ceph-22 OSDs), the Ceph GUI status now shows Total: 23, as well as two OSDs being "down". From which OSD table does it get...
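The total usually comes from entries that are still present in the OSD/CRUSH map. A hedged cleanup sketch, using the ids from the post as examples:

# see which ids still linger
ceph osd tree
# remove the leftovers of one old OSD (repeat for the second one)
ceph osd crush remove osd.8
ceph auth del osd.8
ceph osd rm 8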
Hi, on one of the Ceph cluster nodes the message "1 osds down" appeared. It shows up and then disappears, and the status constantly flaps between down and up. What can be done about it?
The SMART status of the disk shows OK.
Package version:
proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve)
pve-manager...
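A few hedged checks for a flapping OSD; <id> is a placeholder:

# the OSD log usually says why it was marked down (heartbeat timeouts, I/O errors, ...)
journalctl -u ceph-osd@<id> --since "1 hour ago"
# which peers reported it down, and the overall cluster view
ceph health detail
ceph osd tree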