I ran the updates, which installed a new kernel. After the reboot the monitor did not start. I attempted to start it from the command line:
systemctl status ceph-mon@<hostname>.service
● ceph-mon@<hostname>.service - Ceph cluster monitor daemon
Loaded: loaded (/lib/systemd/system/ceph-mon@.service; enabled...
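When a monitor unit fails to start after a kernel upgrade, the systemd journal usually says why. A generic troubleshooting sketch (the `<hostname>` instance name is a placeholder for the actual monitor unit):

```shell
# Inspect recent log output of the monitor unit (replace <hostname>)
journalctl -u ceph-mon@<hostname> -n 50 --no-pager

# Try restarting it and check the resulting state
systemctl restart ceph-mon@<hostname>
systemctl status ceph-mon@<hostname> --no-pager
```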
I have a big question about my Ceph cluster and need your help or your opinion.
I installed a simple 3-node setup with Ceph.
Each node has 2×146 GB in HW RAID 1 plus 18×600 GB 10k SAS drives without RAID.
(In summary: we have 54 OSD devices and need to buy 3 SSDs for journals.)
And my big...
Here I describe one of the two major issues I'm currently facing in my 8-node Ceph cluster (2x MDS, 6x OSD).
The issue is that I cannot start any KVM virtual machine or LXC container; the boot process just hangs after a few seconds.
All these KVMs and LXCs have in common that their virtual...
I need help creating an OSD on a partition.
Our server comes with 2 NVMe drives in RAID 1. This is the partition layout:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1 259:0 0 1.8T 0 disk
├─nvme1n1p1 259:1 0 511M 0 part...
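ceph-volume can build an OSD directly on a partition rather than a whole disk. A minimal sketch, assuming a spare partition exists (the name /dev/nvme1n1p4 below is hypothetical); note that the Proxmox GUI and pveceph generally expect whole disks:

```shell
# Create a BlueStore OSD on a single partition (partition name is an example)
ceph-volume lvm create --data /dev/nvme1n1p4
```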
After adding an OSD to Ceph, it is advisable to create a corresponding entry in the CRUSH map with a weight that depends on the disk size:
ceph osd crush set osd.<id> <weight> root=default host=<hostname>
How is the weight defined depending on disk size?
Which algorithm can be...
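By convention the CRUSH weight is simply the device capacity expressed in TiB, so a 1 TiB disk gets weight 1.0 and a 600 GB disk roughly 0.55. A small sketch of the arithmetic (the byte count is an example value):

```shell
# CRUSH weight convention: weight = size in TiB = bytes / 2^40
size_bytes=600127266816   # example: a nominal 600 GB SAS disk
weight=$(awk -v b="$size_bytes" 'BEGIN { printf "%.5f", b / (1024^4) }')
echo "$weight"            # → 0.54581
```

That value is what would go into the `ceph osd crush set osd.<id> <weight> ...` command.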
I have created OSDs on HDDs without putting the DB on a faster drive.
To improve performance I now have a single 3.8 TB SSD drive.
How can I add DB device for every single OSD to this new SSD drive?
Which parameter in ceph.conf defines the size for the DB?
Can you confirm that...
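Recent Ceph releases provide `ceph-volume lvm new-db` for attaching a DB volume to an existing OSD, and `bluestore_block_db_size` in ceph.conf sets the size used when new DB volumes are created. A sketch under the assumption that the SSD is /dev/sdX and roughly 300 GiB per OSD is wanted (device names, sizes, and ids are placeholders; verify your Ceph version actually ships `new-db` before relying on this):

```shell
# ceph.conf, [osd] section: size for newly created DB volumes (~300 GiB)
#   bluestore_block_db_size = 322122547200

# Carve one logical volume per OSD out of the shared SSD, then attach it
vgcreate ceph-db /dev/sdX
lvcreate -L 300G -n db-0 ceph-db
systemctl stop ceph-osd@0
ceph-volume lvm new-db --osd-id 0 --osd-fsid <osd-fsid> --target ceph-db/db-0
systemctl start ceph-osd@0
```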
I finished the upgrade to Proxmox 6 + Ceph Nautilus on a 4-node cluster.
On 2 nodes I have identified that all directories /var/lib/ceph/osd/ceph-<id>/ are empty after rebooting.
Typically the content of this directory is this:
root@ld5508:~# ls -l /var/lib/ceph/osd/ceph-70/
Hi guys, I notice that OSD encryption is available under "Create OSD". Is there any mechanism to show which OSDs are encrypted, in the UI or on the command line? I created some encrypted and some unencrypted OSDs to gauge performance, but I am unable to tell which are which.
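The GUI has no encryption column, but on each node `ceph-volume lvm list` reports per-OSD metadata that includes an encryption flag. A sketch, assuming the OSDs were created with ceph-volume (the exact output layout may differ between versions):

```shell
# Show the osd id and encryption flag for every OSD on this node
ceph-volume lvm list | grep -E 'osd id|encrypted'
```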
I have a 3-node cluster with 3 OSDs in each node.
My Ceph version is: 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable).
The pool has replica 3/2 with 128 PGs. As soon as I restore a VM from a backup that resides on an NFS share, the...
Dear Proxmox Team,
we have 3 Dell servers running Proxmox 6.0. Unfortunately we encountered an issue with the setup of Ceph OSDs on individual drives. The main problem is that the given Perc H710 Mini adapter does not allow "IT mode" / JBOD passthrough of individual drives, and so we're stuck...
We have about 20 VMs running in a three-node cluster with Ceph.
Half of the VMs were migrated with Clonezilla (P2V); the other half I converted to raw from VMware.
Now the problem: a few weeks ago an Ubuntu 18.04 VM hung; this one came from VMware...
First things first: I know it is not recommended to use RAID 0 disks underneath Ceph, but that's what I did on 4 Dell R430 servers with Perc 730 controllers (with 6 15k SAS drives). I have pretty decent performance with it and absolutely no issues for the last 2 years. With full SSD nodes I don't use...
# ceph-volume simple scan
stderr: lsblk: /var/lib/ceph/osd/ceph-2: not a block device
stderr: Bad argument "/var/lib/ceph/osd/ceph-2", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument
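This message typically appears when `ceph-volume simple scan` ends up pointed at the OSD's mount directory; scanning the data partition directly usually works. A sketch (the device name is a placeholder):

```shell
# Scan the ceph-disk OSD by its data partition, not its mount point
ceph-volume simple scan /dev/sdX1
# Persist the result so the OSD is activated at boot
ceph-volume simple activate --all
```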
Hello there, I recently upgraded from Proxmox 5 to 6, as well as from Ceph Luminous to Nautilus. I wanted to go through and re-create the OSDs I have in my cluster. I ran into an issue with the second OSD I wanted to convert (the first went fine). Here's what I get after I zap the disk:
We have been running Proxmox 5.4 with Ceph for a while now without any issues. We recently built a brand-new 5-node cluster using Proxmox 6.0. We had no issues getting our OSDs up and running. However, whenever we physically pull a drive, the associated OSD does not show down...
Just upgraded a 3-node cluster to PVE 6.0 last night. I followed the excellent upgrade docs for the PVE and Ceph Nautilus upgrades. I added a new OSD using a new hard drive. I initialized the disk with GPT, and the disk appeared to have a GPT partition table per the "Disks" menu of the web GUI...
I want to stop several OSDs to start node maintenance.
However, I get an error message indicating a communication error. Please check the attached screenshot for details.
Please note that I have defined different storage types (NVME, HDD, etc.) in the crush map in order to use different...
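Independent of the communication error, the usual sequence for planned node maintenance is to set the `noout` flag first, so Ceph does not start rebalancing while the OSDs are down. A sketch with placeholder OSD ids:

```shell
ceph osd set noout                      # suppress rebalancing during maintenance
systemctl stop ceph-osd@12 ceph-osd@13  # stop this node's OSD daemons
# ... perform maintenance, then bring everything back:
systemctl start ceph-osd@12 ceph-osd@13
ceph osd unset noout
```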
How do you define the Ceph OSD disk partition size?
It always creates the OSD with only 10 GB of usable space.
Disk size = 3.9 TB
Partition size = 3.7 TB
Using *ceph-disk prepare* and *ceph-disk activate* (see below)
The OSD is created, but only with 10 GB, not 3.7 TB.
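A 10 GB OSD on a multi-terabyte disk is often a sign that BlueStore ended up with a file-backed block device, whose size defaults to 10 GiB via the `bluestore_block_size` option; when ceph-disk is handed the raw device correctly, the whole partition is used and that option is irrelevant. For reference, the file-backed size could be raised in ceph.conf (the 100 GiB value below is only an example):

```
[osd]
# Only consulted when the BlueStore "block" is created as a file (default 10 GiB)
bluestore_block_size = 107374182400
```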
We've split users into groups and allocated different roles to each group (essentially admins and non-admin users). Non-admin users are limited to performing VM functions, while users in the admin group have the built-in role "Administrator" applied. This should give them the ability to perform...
I want to understand whether I am reaching a bottleneck in my hyperconverged Proxmox + Ceph cluster.
In moments of high load (multiple LXC with high I/O on small files) I see one node with:
- IO delay at 40%
- around 50% CPU usage
- load at 200 (40 total cores / 2 x Intel(R) Xeon CPU E5-2660...