# ceph-volume simple scan
Hello there, I recently upgraded from Proxmox 5 to 6, as well as Ceph Luminous to Nautilus. I wanted to go through and re-create the OSDs I have in my cluster. I ran into an issue with the second OSD I wanted to convert (the first went fine). Here's what I get after I zap the disk:

```
stderr: lsblk: /var/lib/ceph/osd/ceph-2: not a block device
stderr: Bad argument "/var/lib/ceph/osd/ceph-2", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument
Running...
```

pveceph...
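Judging from the error above, `ceph-volume simple scan` appears to have been given the OSD's mount directory rather than a block device. A minimal sketch of the expected usage, assuming /dev/sdb1 is the OSD's data partition (a hypothetical device name):

```bash
# scan one ceph-disk OSD by its data partition (a device path, not the mount point)
ceph-volume simple scan /dev/sdb1

# on Nautilus, with no arguments it scans every running ceph-disk OSD
ceph-volume simple scan
```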
Hi Everyone,
We have been running Proxmox 5.4 with Ceph for a while now without any issues. We recently built a brand-new 5-node cluster using Proxmox 6.0. We had no issues getting our OSDs up and running. However, whenever we physically pull a drive, the associated OSD does not show down...
Just upgraded a 3-node cluster to PVE 6.0 last night. I followed the excellent upgrade docs for the PVE and Ceph Nautilus upgrades. I added a new OSD using a new hard drive. I initialized the disk with GPT, and the disk appeared to have a GPT partition table per the "Disks" menu of the web GUI...
Hi,
I want to stop several OSDs to start node maintenance.
However, I get an error message indicating a communication error. Please check the attached screenshot for details.
Please note that I have defined different storage types (NVMe, HDD, etc.) in the CRUSH map in order to use different...
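For planned node maintenance, a common pattern (a generic sketch, separate from whatever the screenshot shows) is to set the noout flag so Ceph does not rebalance while the OSDs are stopped:

```bash
# keep Ceph from marking stopped OSDs "out" and rebalancing
ceph osd set noout

# stop the node's OSDs, e.g. osd.4 (placeholder ID)
systemctl stop ceph-osd@4

# ...do the maintenance, then bring everything back
systemctl start ceph-osd@4
ceph osd unset noout
```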
How do you define the Ceph OSD disk partition size?
It always creates the OSD with only 10 GB of usable space.
Disk size = 3.9 TB
Partition size = 3.7 TB
Using *ceph-disk prepare* and *ceph-disk activate* (see below)
The OSD is created, but only with 10 GB, not 3.7 TB.
Commands Used
root@proxmox:~#...
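One plausible cause, assuming ceph-disk ended up backing BlueStore's block with a file instead of the raw partition, is the 10 GiB default of bluestore_block_size. A hedged check, with osd.0 as a placeholder ID:

```bash
# the block device should be a symlink to a partition, not a regular 10 GB file
ls -l /var/lib/ceph/osd/ceph-0/block

# default is 10737418240 bytes (10 GiB), used when block is file-backed
ceph daemon osd.0 config get bluestore_block_size
```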
We've split users into groups and allocated different roles to each group (essentially admins and non-admin users). Non-admin users are limited to performing VM functions, while users in the admin group have the built-in role "Administrator" applied. This should give them the ability to perform...
Hello,
I want to understand if I am reaching a bottleneck in my hyperconverged Proxmox + Ceph cluster.
In moments of high load (multiple LXC with high I/O on small files) I see one node with:
- IO delay at 40%
- around 50% CPU usage
- load at 200 (40 total cores / 2 x Intel(R) Xeon CPU E5-2660...
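To pin down whether the disks or the CPUs are the limit, a few generic first checks (a sketch; nothing here is specific to this cluster):

```bash
# per-OSD commit/apply latency as Ceph sees it
ceph osd perf

# per-device utilisation and await times (sysstat package), refreshed every 5 s
iostat -x 5

# current client and recovery I/O for the whole cluster
ceph -s
```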
Hello,
I am looking for up-to-date documentation on the correct procedure for Ceph OSD disk replacement.
The current Proxmox docs only cover OSD creation and lack management procedures using pveceph commands.
I found this Ceph document, but some commands do not give the same output (i.e...
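As a rough sketch of the pveceph-based replacement flow on PVE 6/Nautilus (osd.7 and /dev/sdX are placeholders; not an official procedure):

```bash
# take the failing OSD out and let the cluster rebalance
ceph osd out osd.7

# once `ceph -s` is healthy again, stop and remove it
systemctl stop ceph-osd@7
pveceph osd destroy 7 --cleanup

# after swapping the physical disk, create the replacement
pveceph osd create /dev/sdX
```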
Hi there,
I am running a 2-node Proxmox cluster with RBD images mounted from a remote Ceph cluster (latest Mimic release). Currently we are using the RBD image mount as backup storage for our VMs (mounted in /var/lib/backup).
It all works fine unless an OSD or an OSD host (we have 3, each...
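One thing worth ruling out when client I/O stalls during a single OSD or host failure is the pool's min_size. A sketch, with 'backup' as a placeholder pool name:

```bash
# a pool with min_size equal to size blocks writes as soon as one replica is down
ceph osd pool get backup size
ceph osd pool get backup min_size
```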
Hello dear Proxmox friends,
I have a question.
After I removed two OSDs (the ceph-8 and ceph-13 OSDs) and then added them back (the ceph-21 & ceph-22 OSDs), the Ceph GUI now shows Total: 23 in the status, along with two OSDs being "down". From which OSD table does it pull...
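If the total includes OSD IDs that no longer have a daemon, the usual cause is leftover CRUSH and auth entries. A hedged cleanup sketch, using osd.8 as a placeholder ID:

```bash
# purge leftover entries for an OSD ID that no longer has a daemon, e.g. 8
ceph osd crush remove osd.8
ceph auth del osd.8
ceph osd rm osd.8
```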
Hi, on one of the Ceph cluster nodes a message appeared: "1 osds down". It appears and then disappears; the status constantly flips between down and up. What can be done about it?
The SMART status of the disk shows OK.
Package version:
```
proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve)
pve-manager...
```
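For an OSD flapping between up and down, a few generic places to look (osd.3 is a placeholder ID):

```bash
# which OSD is affected and where it lives
ceph osd tree

# recent failure reports and heartbeat complaints
ceph health detail

# the OSD's own log usually says why it was marked down
journalctl -u ceph-osd@3 --since "1 hour ago"
```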
Good afternoon. I'm looking for a way to create superuser-type accounts, similar to root, that authenticate against Active Directory. I have AD authentication working, but I am unable to fine-tune the permissions to allow some users/groups to create, stop, out, and destroy OSDs...
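A hedged sketch of the ACL side, assuming a synced AD group named 'ceph-admins' (a hypothetical name); the exact privilege set required for OSD operations is an assumption and may need adjusting:

```bash
# grant the group full administration on the whole tree
pveum aclmod / -group ceph-admins -role Administrator

# or define a narrower custom role first (this privilege list is an assumption)
pveum roleadd CephOps -privs "Sys.Modify Sys.Audit Sys.Console"
pveum aclmod /nodes -group ceph-admins -role CephOps
```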
Hi all,
I'm new to Ceph and ran into an interesting warning message which I don't think I can interpret correctly. I would appreciate any suggestion/comment on where to start unravelling the case, and/or whether I am missing something.
First of all, the configuration is structured as follows:
3 node...
Hi,
I have configured a 3-node Ceph cluster.
Each node has 2 RAID controllers, 4 SSDs and 48 HDDs.
I used this syntax to create an OSD:
```
pveceph osd create /dev/sdd -bluestore -journal_dev /dev/sdv1
pveceph osd create /dev/sde -bluestore -journal_dev /dev/sdw1
pveceph osd create /dev/sdf...
```
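For reference, on PVE 6/Nautilus the separate-device options are --db_dev and --wal_dev rather than -journal_dev. A sketch with placeholder device names:

```bash
# BlueStore OSD with its RocksDB (and WAL) on a separate fast device
pveceph osd create /dev/sdd --db_dev /dev/sdv

# optionally cap the DB partition size (in GiB)
pveceph osd create /dev/sde --db_dev /dev/sdv --db_size 30
```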
Hello,
I have added an extra disk to each node as an OSD; I now have a total of 3 disks per node across 3 nodes.
I am now getting the following error in my Ceph status: "too few PGs per OSD (21 < min 30)".
Is there a way to resolve this?
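With 9 OSDs and a replicated pool of size 3, raising pg_num is the usual fix: 128 PGs would give roughly 128 × 3 / 9 ≈ 43 PGs per OSD. A sketch, with 'rbd' as a placeholder pool name:

```bash
# raise the placement group count; pgp_num must follow pg_num
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128
```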
I have a repeating failure when installing an OSD on one node.
- Installing through the GUI seems to work... but the OSD is not visible.
- Installing through the command line seems to work... but the OSD is not visible either.
e.g.:
```
# ceph-disk zap /dev/sdb
Caution: invalid backup GPT header, but valid main header...
```
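When ceph-disk zap trips over a corrupt GPT, wiping the partition tables first usually clears it. A sketch (destructive, double-check the device name):

```bash
# destroy both GPT headers and the protective MBR on /dev/sdb
sgdisk --zap-all /dev/sdb

# on Luminous and later, ceph-volume can do the full wipe, LVM metadata included
ceph-volume lvm zap /dev/sdb --destroy
```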
Preface:
I have a hybrid Ceph environment using 16 SATA spinners and 2 Intel Optane NVMe PCIe cards (intended for DB and WAL). Because of enumeration issues on reboot, the NVMe cards can flip their /dev/{names}. This will cause a full cluster rebalance if the /dev/{names} flip. The...
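A common mitigation, rather than relying on enumeration-dependent /dev names, is to reference the stable symlinks under /dev/disk/by-id when creating the OSDs. A sketch, with a made-up ID string:

```bash
# list persistent names for the NVMe cards
ls -l /dev/disk/by-id/ | grep nvme

# reference the stable path instead of /dev/nvmeXn1 (ID string is made up)
pveceph osd create /dev/sdb --db_dev /dev/disk/by-id/nvme-INTEL_SSDPE21D480GA_PHM12345000ABC
```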
Hi Everyone,
I'm in a bit of a situation here.
We identified a bad drive (still running, but failing) and decided we needed to remove it. We therefore followed these instructions believing that it would work without a hitch and all our containers/VMs would continue to run. Unfortunately, that was not the case...
Hi,
Today there was an unexpected power outage where my servers are co-located; the entire datacenter went dark. Luckily, I had fresh backups to restore, for the most part.
However, I have an issue with one OSD on one server: the OSD is stuck in "active+recovery_wait+degraded". I have...
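A few commands that usually show why recovery is not progressing (the PG ID is a placeholder):

```bash
# which PGs are stuck, and in which state
ceph health detail
ceph pg dump_stuck unclean

# detailed state of one specific PG, e.g. 2.1f
ceph pg 2.1f query
```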