Search results

  1. Proxmox + Ceph: why a non-standard keyring parameter in /etc/ceph/ceph.conf?

    Hi, in the Ceph configuration file /etc/ceph/ceph.conf you define a non-standard path for the parameter "keyring" in section global:

        root@ld4257:/# more /etc/ceph/ceph.conf
        [global]
        auth client required = cephx
        auth cluster required = cephx
        auth service required = cephx
        ...
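    For context, a sketch of Ceph's stock default keyring entry, which the thread's non-standard value deviates from (the actual path in question is truncated above and not reproduced here):

        [global]
        # first entry in Ceph's default keyring search path
        keyring = /etc/ceph/$cluster.$name.keyring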
  2. [SOLVED] Allow users to resize virtual disks or: which privileges are included in role PVEVMAdmin?

    Hello Dietmar, this actually meets my expectation. However, it is not working. I have executed the following steps:

        pveum groupadd vmadmin -comment "VM Administrators"
        pveum usermod d038783@pam -group vmadmin
        pveum aclmod /pool/test -group test -role PVEVMAdmin

    This is confirmed in...
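    A minimal sketch of the same sequence with a consistent group name; note that the quoted aclmod references -group test while the group created above is vmadmin, which is worth ruling out first:

        pveum groupadd vmadmin -comment "VM Administrators"
        pveum usermod d038783@pam -group vmadmin
        pveum aclmod /pool/test -group vmadmin -role PVEVMAdmin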
  3. [SOLVED] Allow users to resize virtual disks or: which privileges are included in role PVEVMAdmin?

    Hello! I have started user management with this guide. However, I'm facing this issue: no user other than the system administrator (= root) can resize virtual disks of any VM. Can you please advise which role must be assigned to a user to grant this permission? THX
  4. Question regarding guest LAN + host bridge network

    Hi, I have the following question regarding guest LAN and host bridge network: Can I set up a guest LAN in network segment 10.68.88.0/21 if my host network configuration has the default bridge

        auto vmbr0
        iface vmbr0 inet static
            address 10.96.131.9
            netmask 255.255.255.0
            gateway ...
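    One common pattern for such a guest segment (a hedged sketch, not the thread's resolution; the bridge name vmbr1 and address 10.68.88.1 are illustrative) is a second bridge without physical ports:

        auto vmbr1
        iface vmbr1 inet static
            address 10.68.88.1
            netmask 255.255.248.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0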
  5. [SOLVED] Migrated KVM Virtual Disk fails to boot with UEFI

    + adding EFI disk
    + modifying EFI boot options in BIOS
  6. [SOLVED] Migrated KVM Virtual Disk fails to boot with UEFI

    Hello! I have migrated / copied over a virtual disk (qcow2) created with Virsh. This disk has 2 partitions and uses UEFI. When I start a VM in Proxmox using this virtual disk, SeaBIOS loads and fails to boot UEFI. How can I fix this? THX
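    The fix summarized in the previous result (EFI disk + boot options) corresponds roughly to switching the VM firmware to OVMF and attaching an EFI vars disk; a sketch, where VM ID 100 and storage local-lvm are placeholders:

        qm set 100 -bios ovmf
        qm set 100 -efidisk0 local-lvm:1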
  7. Ceph: CT virtual drive in both storages <pool>_ct and <pool>_vm

    OK. The storage definition is clear.

        root@ld4257:# cat /etc/pve/storage.cfg
        dir: local
            path /var/lib/vz
            content vztmpl,backup,iso

        rbd: pve_vm
            content images
            krbd 0
            pool pve

        rbd: pve_ct
            content rootdir
            krbd 1
            pool pve

    However imo...
  8. Ceph: CT virtual drive in both storages <pool>_ct and <pool>_vm

    I'm sorry, but I don't get this. The pool is obviously pve. If pve_ct and pve_vm are pointers, then this is not a Ceph thing? Checking the properties of storages pve_ct and pve_vm, the difference is clear: pve_ct has KRBD enabled. Does it mean PVE is creating an image with attribute KRBD for...
  9. Ceph: CT virtual drive in both storages <pool>_ct and <pool>_vm

    Hello! After creating a pool + storage via WebUI I have created a container. The virtual disk of this container is defined as a block device image in Ceph:

        root@ld4257:~# rbd ls pve
        vm-100-disk-1

    However, when I check the content of the available storages pve_ct and pve_vm I can see this image...
  10. Ceph: creating pool with storage

    Hello! I have created a pool + storage with the WebUI. This worked well, meaning both pool and storage are available. In the "storage view" I can see:

        <poolname>_ct
        <poolname>_vm

    Question: From the Ceph point of view, what is represented by <poolname>_ct and <poolname>_vm respectively? It's not a RBD...
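    What the WebUI generates is visible in the storage.cfg quoted in result 7: two PVE-side storage definitions over the same Ceph pool, differing only in content type and KRBD. A condensed, annotated sketch:

        rbd: pve_vm
            pool pve
            content images    # VM disks, accessed via librbd
            krbd 0
        rbd: pve_ct
            pool pve
            content rootdir   # container volumes, mapped via the kernel RBD driver
            krbd 1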
  11. Ceph: creating RBD image hangs

    Hi, I have configured a 3-node cluster with currently 10 OSDs.

        root@ld4257:~# ceph osd tree
        ID  CLASS WEIGHT   TYPE NAME                    STATUS REWEIGHT PRI-AFF
        -10       43.66196 root hdd_strgbox
        -27              0     host ld4257-hdd_strgbox
        -28       21.83098     host ld4464-hdd_strgbox
          3 ...
  12. How to benchmark ceph storage

    So. All issues with the creation of OSDs have been sorted out. In the meantime I created 2 pools in order to benchmark the different disks available in the cluster. One pool is intended to be used for PVE storage (VM and CT), and the relevant storage type "RBD (PVE)" was created automatically...
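    A common way to exercise a pool directly is rados bench (a sketch; the pool name pve and the 60-second duration are illustrative):

        rados bench -p pve 60 write --no-cleanup   # sequential writes, keep the objects
        rados bench -p pve 60 seq                  # sequential reads of those objects
        rados -p pve cleanup                       # remove the benchmark objects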
  13. Ceph HEALTH_WARN: Degraded data redundancy: 512 pgs undersized

    I have modified the CRUSH map and created 2 different buckets for the 2 different HDD types: one bucket for all HDDs of size 1TB and one for all HDDs of size 8TB.

        root@ld4257:~# ceph osd tree
        ID  CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
        -11       0 ...
  14. [SOLVED] Ceph: creating pool for SSD only

    Hi, if I want to create a pool with SSDs only, separated from the HDDs, I need to manipulate the CRUSH map and enter another root. Is my assumption correct? THX
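    Since Ceph Luminous there is an alternative to maintaining a separate CRUSH root: device classes. A hedged sketch (the rule name, pool name, and PG count are illustrative):

        # replicated rule restricted to OSDs of device class ssd
        ceph osd crush rule create-replicated ssd-only default host ssd
        # pool placed on that rule
        ceph osd pool create ssd-pool 128 128 replicated ssd-only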
  15. Ceph HEALTH_WARN: Degraded data redundancy: 512 pgs undersized

    After stopping the ntp service I ran ntpd -gq on each node. Then I started ntp again.
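    Spelled out per node (a sketch; the service name may be ntp or ntpd depending on the distribution):

        systemctl stop ntp
        ntpd -gq        # -g allows a large initial step, -q exits after setting the clock
        systemctl start ntp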
  16. Ceph HEALTH_WARN: Degraded data redundancy: 512 pgs undersized

    I posted the output of ceph osd df in my initial post. There's no data stored.
  17. Ceph HEALTH_WARN: Degraded data redundancy: 512 pgs undersized

    Hi, I have configured Ceph on a 3-node cluster. Then I created OSDs as follows:

        Node 1: 3x 1TB HDD
        Node 2: 3x 8TB HDD
        Node 3: 4x 8TB HDD

    This results in the following OSD tree:

        root@ld4257:~# ceph osd tree
        ID CLASS WEIGHT   TYPE NAME    STATUS REWEIGHT PRI-AFF
        -1       54.20874 root default
        -3 ...
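    With replicated pools of size 3 and the default host failure domain, every PG needs OSDs on three distinct hosts within its CRUSH root, which an asymmetric layout like the one above can fail to provide. Illustrative checks (real subcommands, placeholder pool name):

        ceph osd pool get <pool> size    # replica count per PG
        ceph pg dump_stuck undersized    # list the PGs that cannot reach full size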
  18. [SOLVED] Ceph OSD activation fail

    Understood. That means I should put the WAL on a dedicated partition / drive only if that drive is faster than the journal drive, i.e.

        data on HDD
        journal (= block.db) on SSD
        WAL (= block.wal) on NVMe

    If I have only 2 different devices (in my case HDD + SSD) I must not use option -wal_dev when...
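    In pveceph terms this might look as follows (a hedged sketch based on the options mentioned in the thread; device paths are placeholders). With only HDD + SSD, block.db goes to the SSD and the WAL stays with it automatically, so -wal_dev is omitted:

        # data on the HDD, block.db (and implicitly the WAL) on the SSD
        pveceph createosd /dev/sdX -bluestore -journal_dev /dev/sdY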
