Does this mean that the command pveceph osd create /dev/sdb -bluestore -journal_dev /dev/sdc will create multiple partitions on block device /dev/sdc if this block device is used multiple times as DB device for different main devices?
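For illustration, the case I mean is repeated invocations that all point at the same DB device (device names are only examples):

pveceph osd create /dev/sdb -bluestore -journal_dev /dev/sdc
pveceph osd create /dev/sdd -bluestore -journal_dev /dev/sdc
pveceph osd create /dev/sde -bluestore -journal_dev /dev/sdc

My expectation would be that /dev/sdc ends up with one DB partition per OSD.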
What you say is 100% correct.
However, you did not consider a setup where block.db resides on a faster disk (SSD) than the main device (HDD).
In that case block.db is a symlink to the device node and not to a UUID:
root@ld4257:/etc/ceph# ls -lah /var/lib/ceph/osd/ceph-0/
total 60K
drwxr-xr-x 2 ceph ceph 271...
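To make clear what I mean by "link to the device": the listing contains an entry of roughly this shape (purely illustrative, not copied from the host above):

lrwxrwxrwx 1 ceph ceph    9 Jan 24 10:00 block.db -> /dev/sdc1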
I fully understand that using a RAID controller is not recommended and that an HBA / JBOD should be used.
However this does not solve the issue.
Let's assume I have a server that provides 20 slots for SAS devices, but I only have 10 disks available.
When I finish the Ceph setup with these 10 disks and add...
The client requires the following caps to work as expected; the block_name_prefix must be retrieved with rbd info backup/gbs.
root@ld4257:/etc/ceph# ceph auth get client.gbsadm
exported keyring for client.gbsadm
[client.gbsadm]
key = AQBd0klcFknvMRAAwuu30bNG7L7PHk5d8cSVvg==...
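For completeness, this is how I retrieve the prefix and then restrict the client; the actual image ID is replaced by the placeholder <id> below, and the exact cap string may need adjusting:

root@ld4257:/etc/ceph# rbd info backup/gbs | grep block_name_prefix
        block_name_prefix: rbd_data.<id>

ceph auth caps client.gbsadm \
        mon 'allow r' \
        osd 'allow rwx pool=backup object_prefix rbd_data.<id>; allow rwx pool=backup object_prefix rbd_header.<id>; allow rx pool=backup object_prefix rbd_id.gbs'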
Hi,
I have created a pool + image using these commands:
rbd create --size 500G backup/gbs
Then I modified the features:
rbd feature disable backup/gbs exclusive-lock object-map fast-diff deep-flatten
Latest step was to create a client to get access to the cluster:
ceph auth get-or-create...
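For reference, the whole sequence follows this pattern; the pool create parameters and the cap string shown here are only a sketch of what I ran, exact values may differ:

ceph osd pool create backup 128 128
rbd create --size 500G backup/gbs
rbd feature disable backup/gbs exclusive-lock object-map fast-diff deep-flatten
ceph auth get-or-create client.gbsadm mon 'profile rbd' osd 'profile rbd pool=backup'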
The Proxmox WebUI is the place to modify monitors.
In my case I simply deleted the entries with the cluster network IP and added a new monitor, which uses the public IP automatically.
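After that change the monitor entries in /etc/ceph/ceph.conf point at the public network, roughly like this (addresses are placeholders):

[mon.ld4257]
         host = ld4257
         mon addr = 192.168.1.11:6789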
Hi,
I have identified a major issue with my cluster setup consisting of 3 nodes:
all monitors are connected to the cluster network.
Here's my /etc/ceph/ceph.conf:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx...
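For comparison, the layout I am aiming for separates the two networks roughly like this, with the monitors bound to the public network (subnets and addresses are placeholders):

cluster network = 10.10.10.0/24     # OSD replication traffic only
public network = 192.168.1.0/24     # clients and monitors

[mon.ld4257]
         mon addr = 192.168.1.11:6789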
Hi,
my use case for Ceph is providing a central backup storage.
This means I will back up multiple databases in the Ceph storage cluster, mainly using librados.
There's a security demand that should be considered:
DB-owner A can only modify the files that belong to A; other files (owned by B, C or D)...
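To make the requirement concrete, here is a sketch of the kind of separation I mean, with made-up client and pool names, using either one pool per owner or one RADOS namespace per owner inside a shared pool:

ceph auth get-or-create client.dbA mon 'allow r' osd 'allow rwx pool=backup-dbA'
ceph auth get-or-create client.dbB mon 'allow r' osd 'allow rwx pool=backup-dbB'
# or, with a shared pool and one RADOS namespace per owner:
ceph auth get-or-create client.dbA mon 'allow r' osd 'allow rwx pool=backup namespace=dbA'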
Hi,
I have configured a 3-node Ceph cluster.
Each node has 2 RAID controllers, 4 SSDs and 48 HDDs.
I used this syntax to create an OSD:
pveceph osd create /dev/sdd -bluestore -journal_dev /dev/sdv1
pveceph osd create /dev/sde -bluestore -journal_dev /dev/sdw1
pveceph osd create /dev/sdf...
Nope.
I shared it with the other 2 nodes of the cluster.
However I assume this was not a good idea; thin LVM should never be shared and should only be used locally.
Yeah, mounting was just a stupid idea to fix the issue.
Anyway, I stored the root disks of several LXCs on this storage.
Please check the example in the attached screenshot.
I cannot start the related LXCs anymore because the resource is missing.
Hi,
after rebooting my PVE node the content of the LVM-Thin data storage is unavailable.
However the logical volume is active and visible:
root@ld4257:~# lvscan
ACTIVE '/dev/vg_backup_r5/backup' [305,63 TiB] inherit
ACTIVE '/dev/pve/swap' [8,00 GiB] inherit
ACTIVE...
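In case it helps with the diagnosis, this is how I inspect/activate the thin pool (VG and LV names as in the output above; this is just the inspection side, not a fix):

lvs -a vg_backup_r5                 # thin pool plus its _tdata/_tmeta volumes
vgchange -ay vg_backup_r5           # activate everything in the VG
lvchange -ay vg_backup_r5/backup    # or activate the pool LV explicitly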
Hi,
I have set up a 3-node cluster that is working like a charm, meaning I can migrate any VM or CT from one node to the other.
The same nodes use shared storage provided by Ceph.
I followed instructions and created HA groups + resources:
root@ld4257:~# more /etc/pve/ha/groups.cfg...
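For context, the group definitions follow this general shape (names here are placeholders, the real file is quoted above), and resources can then be added with ha-manager:

group: ha-group1
        nodes ld4257,ld4258,ld4259
        nofailback 0
        restricted 0

ha-manager add vm:100 --group ha-group1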
Hi,
after rebooting a single PVE node (no cluster) I get an error that the Proxmox VE Cluster is not started.
Checking the related service I found that directory /etc/pve is empty.
Unfortunately I cannot identify the root cause and fix this.
I tried to reinstall packages pve-cluster pve-manager...
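For reference, the checks on the cluster filesystem service look like this (this is just how I inspected it, not a fix):

systemctl status pve-cluster
journalctl -u pve-cluster -b    # pmxcfs errors from the current boot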
OK... this means there's no functional reason?
Must I expect a malfunction if I disable this parameter?
Will Proxmox VE + Ceph still work, considering the fact that Proxmox stores specific keyrings in /etc/pve/priv/ceph/?
root@ld4257:~# ls -l /etc/pve/priv/ceph
total 2
-rw------- 1 root...
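To show why I am asking: my understanding is that these keyrings are referenced per storage entry, roughly like the following in /etc/pve/storage.cfg (the storage name is just an example; the matching keyring would then be /etc/pve/priv/ceph/<storage-id>.keyring):

rbd: ceph-rbd
        pool rbd
        content images
        krbd 0
        username admin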