Hi,
in the Ceph configuration file /etc/ceph/ceph.conf you define a non-standard path for the parameter "keyring" in the [global] section:
root@ld4257:/# more /etc/ceph/ceph.conf
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx...
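For comparison, a hedged sketch of what a keyring entry in [global] can look like; the path below is only a placeholder, and $cluster / $name are Ceph metavariables that are expanded at runtime:

[global]
keyring = /some/custom/path/$cluster.$name.keyring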
Hello Dietmar,
this actually meets my expectation.
However, it is not working.
To be precise, these are the steps I executed:
pveum groupadd vmadmin -comment "VM Administrators"
pveum usermod d038783@pam -group vmadmin
pveum aclmod /pool/test -group test -role PVEVMAdmin
This is confirmed in...
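Side note: in the steps above the user is added to the group vmadmin, while the ACL is set for a group named test. Assuming vmadmin is the intended group, the matching ACL call would look like this (pool path taken from the post):

pveum aclmod /pool/test -group vmadmin -role PVEVMAdmin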
Hello!
I have started user management with this guide.
However, I'm facing this issue:
No user other than the system administrator (root) can resize the virtual disks of any VM.
Can you please advise which role must be assigned to a user to grant this permission?
THX
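For context, resizing a virtual disk requires the VM.Config.Disk privilege, which is included for example in the PVEVMAdmin role. A hedged sketch of granting it, with a placeholder VM ID and user:

pveum aclmod /vms/100 -user someuser@pam -role PVEVMAdmin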
Hi,
I have the following question regarding guest LAN and host bridge network:
Can I set up a guest LAN in the network segment
10.68.88.0/21
if my host network configuration has default bridge
auto vmbr0
iface vmbr0 inet static
address 10.96.131.9
netmask 255.255.255.0
gateway...
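One common pattern for such a setup, sketched here only as an assumption about what is wanted, is to give the guest segment its own bridge in /etc/network/interfaces; vmbr1 and the address 10.68.88.1 are placeholders, and 255.255.248.0 is the netmask for the /21 segment from the question:

auto vmbr1
iface vmbr1 inet static
        address 10.68.88.1
        netmask 255.255.248.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

Whether traffic can leave that segment then depends on routing/NAT on the host, which the question leaves open.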
Hello!
I have migrated / copied over a virtual disk (qcow2) created with Virsh.
This disk has 2 partitions and uses UEFI.
When I start a VM in Proxmox using this virtual disk, SeaBIOS is loaded and fails to boot the UEFI disk.
How can I fix this?
THX
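Assuming the disk really contains a UEFI installation, the usual approach is to switch the VM firmware from SeaBIOS to OVMF and add an EFI vars disk; VM ID 100 and the storage name are placeholders:

qm set 100 -bios ovmf
qm set 100 -efidisk0 local-lvm:1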
I'm sorry, but I don't get this.
The pool is obviously pve.
If pve_ct and pve_vm are just pointers, then this is not a Ceph thing, is it?
Checking the properties of the storages pve_ct and pve_vm, the difference is clear: pve_ct has KRBD enabled.
Does it mean PVE is creating an image with attribute KRBD for...
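For reference, KRBD is a flag on the PVE storage definition, not an attribute of the Ceph image itself. A hedged sketch of how the two storages might look in /etc/pve/storage.cfg (monitor/authentication lines omitted, names taken from the post):

rbd: pve_ct
        pool pve
        content rootdir
        krbd 1

rbd: pve_vm
        pool pve
        content images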
Hello!
After creating a pool + storage via WebUI I have created a container.
The virtual disk of this container is defined as a block device image in Ceph:
root@ld4257:~# rbd ls pve
vm-100-disk-1
However, when I check the content of the available storages
pve_ct
pve_vm
I can see this image...
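To cross-check what each side reports, something like the following can be used (storage and pool names taken from the post):

# what PVE lists on each storage
pvesm list pve_ct
pvesm list pve_vm
# what Ceph has in the pool
rbd ls pve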
Hello!
I have created a pool + storage with WebUI.
This worked well, i.e. both pool and storage are available.
In the "storage view" I can see:
<poolname>_ct
<poolname>_vm
Question:
From a Ceph point of view, what do <poolname>_ct and <poolname>_vm represent, respectively?
It's not a RBD...
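One hedged way to see the distinction is to compare the two layers directly; pvesm shows the PVE storage definitions (where <poolname>_ct and <poolname>_vm live), while Ceph itself typically only knows the pool:

# PVE storages
pvesm status
# Ceph pools
ceph osd pool ls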
Hi,
I have configured a 3-node-cluster with currently 10 OSDs.
root@ld4257:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-10 43.66196 root hdd_strgbox
-27 0 host ld4257-hdd_strgbox
-28 21.83098 host ld4464-hdd_strgbox
3...
So.
All issues with the creation of OSDs have been sorted out.
In the meantime I created 2 pools in order to benchmark the different disks available in the cluster.
One pool is intended to be used for PVE storage (VM and CT), and the relevant storage type "RBD (PVE)" was created automatically...
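For the raw benchmarking part, a common approach is rados bench against each pool; the pool name below is a placeholder, and the write data is kept so the read test has something to read:

# 60s write benchmark, keep objects for the following read test
rados bench -p <poolname> 60 write --no-cleanup
# sequential read benchmark, then remove the benchmark objects
rados bench -p <poolname> 60 seq
rados -p <poolname> cleanup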
I have modified the CRUSH map and created 2 different buckets for the 2 different HDD types:
one bucket for all HDDs of size 1TB and one bucket for all HDDs of size 8TB.
root@ld4257:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-11 0...
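To actually use the two buckets, each pool is tied to a CRUSH rule whose root is the corresponding bucket; all names below are placeholders for the ones created above:

# replicated rule that chooses hosts underneath the 1TB bucket
ceph osd crush rule create-replicated rule_1tb <bucket_for_1tb_hdds> host
# point a pool at that rule
ceph osd pool set <poolname> crush_rule rule_1tb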
Hi,
if I want to create an SSD-only pool that is separated from the HDDs, I need to modify the CRUSH map and add another root.
Is my assumption correct?
THX
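As a hedged alternative: if the cluster runs Luminous or newer, SSDs can also be separated via device classes without adding another root, e.g.:

# replicated rule restricted to OSDs of device class ssd under the default root
ceph osd crush rule create-replicated ssd_only default host ssd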
Hi,
I have configured Ceph on a 3-node-cluster.
Then I created OSDs as follows:
Node 1: 3x 1TB HDD
Node 2: 3x 8TB HDD
Node 3: 4x 8TB HDD
This results in following OSD tree:
root@ld4257:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 54.20874 root default
-3...
Understood.
That means I should put the WAL on a dedicated partition / drive only if this drive is faster than the journal drive, i.e.
data on HDD
Journal (= Block.db) on SSD
WAL (= Block.wal) on NVMe
If I have only 2 different devices (in my case HDD + SSD) I must not use option -wal_dev when...
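A sketch of the corresponding OSD creation under these assumptions; device paths are placeholders and the option names are those of the PVE 5.x pveceph tooling that the -wal_dev option above belongs to:

# data on HDD, Block.db on SSD, Block.wal on NVMe
pveceph createosd /dev/sdX -journal_dev /dev/sdY -wal_dev /dev/nvme0n1
# with only HDD + SSD, leaving out -wal_dev keeps the WAL on the Block.db device
pveceph createosd /dev/sdX -journal_dev /dev/sdY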