Search results

  1. Why is storage type rbd only for Disk-Image + Container

    Hello! Can you please share some information on why storage type rbd is only available for Disk-Image and Container? I would prefer to dump a backup to another RBD. THX
  2. Ceph OSD failure after host reboot

    Does this mean the command pveceph osd create /dev/sdb -bluestore -journal_dev /dev/sdc will create multiple partitions on block device /dev/sdc if this block device is used multiple times as the DB device for different main devices?
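
    A minimal sketch of what that would look like, reusing the syntax from the question; the main-device names below are illustrative, and lsblk can then show whether separate DB partitions were created on the shared device:

      # create several OSDs that all point at the same DB/journal device (device names are examples)
      pveceph osd create /dev/sdb -bluestore -journal_dev /dev/sdc
      pveceph osd create /dev/sdd -bluestore -journal_dev /dev/sdc
      pveceph osd create /dev/sde -bluestore -journal_dev /dev/sdc

      # check whether /dev/sdc now carries one partition per OSD
      lsblk /dev/sdc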
  3. Ceph OSD failure after host reboot

    What you say is 100% correct. However you did not consider a setup where block.db resides on a faster disk (SSD) than the main device (HDD). Then block.db is a link to the device and not to a UUID:
    root@ld4257:/etc/ceph# ls -lah /var/lib/ceph/osd/ceph-0/
    total 60K
    drwxr-xr-x 2 ceph ceph 271...
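
    One way to check what that block.db symlink actually resolves to, and whether a stable by-partuuid name exists for the target (a sketch; output and device names will differ, and by-partuuid only applies if the target is a GPT partition):

      # where does block.db point, and what device does that resolve to?
      ls -l /var/lib/ceph/osd/ceph-0/block.db
      readlink -f /var/lib/ceph/osd/ceph-0/block.db

      # look for a stable by-partuuid alias of the resolved device
      ls -l /dev/disk/by-partuuid/ | grep "$(basename "$(readlink -f /var/lib/ceph/osd/ceph-0/block.db)")"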
  4. Ceph OSD failure after host reboot

    I fully understand that using a RAID controller is not recommended and that an HBA / JBOD should be used. However this does not solve the issue. Let's assume I have a server that provides 20 slots for SAS devices, but I only have 10 disks available. When I finish the Ceph setup with these 10 disks and add...
  5. [SOLVED] Mapping image fails with error: rbd: sysfs write failed

    The client requires the following caps to work as expected, where the block_name_prefix must be retrieved with rbd info backup/gbs.
    root@ld4257:/etc/ceph# ceph auth get client.gbsadm
    exported keyring for client.gbsadm
    [client.gbsadm]
         key = AQBd0klcFknvMRAAwuu30bNG7L7PHk5d8cSVvg==...
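
    For reference, the prefix mentioned above can be read as shown below; the second command is not the exact caps from the post, only a simpler option Ceph supports via its built-in rbd cap profile:

      # read the block_name_prefix of the image (pool/image names taken from the thread)
      rbd info backup/gbs | grep block_name_prefix

      # alternative: grant the client the generic rbd profile restricted to the backup pool
      ceph auth caps client.gbsadm mon 'profile rbd' osd 'profile rbd pool=backup'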
  6. [SOLVED] Mapping image fails with error: rbd: sysfs write failed

    Hi, I have created a pool + image using these commands:
    rbd create --size 500G backup/gbs
    Then I modified the features:
    rbd feature disable backup/gbs exclusive-lock object-map fast-diff deep-flatten
    The last step was to create a client to get access to the cluster:
    ceph auth get-or-create...
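
    Once the client keyring is in place, mapping the image would typically look like the following (a sketch; the keyring path is an assumption, not taken from the post):

      # map the image with the client created above, pointing at its keyring
      rbd map backup/gbs --id gbsadm --keyring /etc/ceph/ceph.client.gbsadm.keyring

      # confirm the mapping
      rbd showmapped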
  7. [SOLVED] How to modify ceph.mon network

    The Proxmox WebUI is the place to modify monitors. In my case I simply deleted the entries with the cluster network IP and added a new monitor, which uses the public IP automatically.
  8. [SOLVED] How to modify ceph.mon network

    Hi, I have identified a major issue with my cluster setup consisting of 3 nodes: all monitors are connected to the cluster network. Here's my /etc/ceph/ceph.conf:
    [global]
         auth client required = cephx
         auth cluster required = cephx
         auth service required = cephx...
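
    For context, the relevant parts of a ceph.conf where the monitor binds to the public network usually look roughly like this (a sketch; the networks and addresses are made up for illustration, only the node name ld4257 comes from the thread):

      [global]
           auth client required = cephx
           auth cluster required = cephx
           auth service required = cephx
           cluster network = 192.168.100.0/24   # hypothetical cluster network
           public network = 10.97.206.0/24      # hypothetical public network

      [mon.ld4257]
           host = ld4257
           mon addr = 10.97.206.91:6789         # monitor address on the public network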
  9. Ceph OSD failure after host reboot

    I'm wondering if anybody else is affected by this issue and, if so, why no solution has been provided.
  10. Howto define Ceph pools for use case: central DB backup storage

    Hi, my use case for Ceph is providing a central backup storage. This means I will back up multiple databases in a Ceph storage cluster, mainly using librados. There's a security requirement that should be considered: DB-owner A can only modify the files that belong to A; other files (owned by B, C or D)...
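
    One way Ceph can express that kind of separation (a sketch, not necessarily what was chosen in the thread) is to give each DB owner its own RADOS namespace inside the backup pool and restrict each client's caps to it; the pool and client names below are illustrative:

      # client for DB-owner A, limited to namespace "dba" in pool "backup"
      ceph auth get-or-create client.dba mon 'allow r' osd 'allow rw pool=backup namespace=dba'

      # client for DB-owner B, limited to its own namespace
      ceph auth get-or-create client.dbb mon 'allow r' osd 'allow rw pool=backup namespace=dbb'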
  11. Ceph OSD failure after host reboot

    Hi, I have configured a 3-node Ceph cluster. Each node has 2 RAID controllers, 4 SSDs and 48 HDDs. I used this syntax to create the OSDs:
    pveceph osd create /dev/sdd -bluestore -journal_dev /dev/sdv1
    pveceph osd create /dev/sde -bluestore -journal_dev /dev/sdw1
    pveceph osd create /dev/sdf...
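
    After a reboot, the usual first checks for failed OSDs would be along these lines (a sketch; OSD id 0 is just an example):

      # overall OSD tree, showing which OSDs are down
      ceph osd tree

      # status and recent boot log of a specific OSD service
      systemctl status ceph-osd@0
      journalctl -u ceph-osd@0 -b --no-pager | tail -n 50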
  12. LVM-Thin data storage content lost after reboot

    I do believe there's an issue with Thin LVM. There was an error during bootup related to this, something like "not supported by CPU".
  13. LVM-Thin data storage content lost after reboot

    OK. Let's recap: /mnt/pve/data/ is a directory storage; pve/data is a thin LVM. But where is the data? CT207 was running before the host reboot.
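
    A quick way to answer "where is the data" for that container (a sketch; CT id 207 is from the post, the storage name is whatever pct reports):

      # show which storage and volume the container's root disk is configured on
      pct config 207 | grep rootfs

      # list what the referenced storage actually contains
      pvesm list <storage-from-rootfs-line>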
  14. LVM-Thin data storage content lost after reboot

    Nope. I shared it with the other 2 nodes of the cluster. However I assume this was not a good idea; a thin LVM should never be shared and should only be used locally.
  15. LVM-Thin data storage content lost after reboot

    root@ld4257:~# more /etc/pve/storage.cfg
    dir: local
         path /var/lib/vz
         content iso,vztmpl
         maxfiles 1
         shared 1

    rbd: pve_vm
         content images
         krbd 0
         pool pve

    rbd: pve_ct
         content rootdir
         krbd 1
         pool pve

    dir: data_ld4257...
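
    For comparison, a thin pool such as pve/data would normally show up in storage.cfg as an lvmthin entry along these lines (a sketch; the storage name "data" is an assumption):

      lvmthin: data
           thinpool data
           vgname pve
           content rootdir,images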
  16. LVM-Thin data storage content lost after reboot

    Yeah, mounting was just a stupid idea to fix the issue. Anyway, I stored the root disks of several LXCs in this storage. Please check the example in the attached screenshot. I cannot start the related LXCs anymore because the resource is missing.
  17. LVM-Thin data storage content lost after reboot

    Hi, after rebooting my PVE node with LVM-Thin data storage, the content is unavailable. However the logical volume is active and visible:
    root@ld4257:~# lvscan
      ACTIVE   '/dev/vg_backup_r5/backup' [305,63 TiB] inherit
      ACTIVE   '/dev/pve/swap' [8,00 GiB] inherit
      ACTIVE...
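
    To see whether the thin pool and its thin volumes survived the reboot, something like this would be the next step (a sketch; the column selection is just a suggestion):

      # show thin pools and thin volumes with their data usage
      lvs -a -o lv_name,vg_name,lv_size,pool_lv,data_percent

      # check what Proxmox currently sees on each storage
      pvesm status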
  18. HA is not working with 3-node-cluster - resources are NOT failing over

    Hi, I have set up a 3-node cluster that is working like a charm, meaning I can migrate any VM or CT from one node to the other. The same nodes use shared storage provided by Ceph. I followed the instructions and created HA groups + resources:
    root@ld4257:~# more /etc/pve/ha/groups.cfg...
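
    Typical contents of those HA files look roughly like this (a sketch; the group name, the node names other than ld4257, and the VMID are made up):

      # /etc/pve/ha/groups.cfg
      group: ha-all
           nodes ld4257,ld4258,ld4259
           restricted 0
           nofailback 0

      # /etc/pve/ha/resources.cfg
      vm: 100
           group ha-all
           state started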
  19. [SOLVED] Missing /etc/pve file system after reboot

    Adapting the network settings so that name resolution works.
  20. [SOLVED] Missing /etc/pve file system after reboot

    Hi, after rebooting a single PVE node (no cluster) I get an error that the Proxmox VE Cluster service is not started. Checking the related service, I found that the directory /etc/pve is empty. Unfortunately I cannot identify the root cause and fix this. I tried to reinstall the packages pve-cluster pve-manager...
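
    Given the fix mentioned in result 19 (working name resolution), the usual checks look like this (a sketch; the hostname and any /etc/hosts entry are placeholders):

      # the node's hostname must resolve to its real IP, not to a loopback address
      hostname
      getent hosts "$(hostname)"

      # after correcting /etc/hosts or DNS, restart the cluster filesystem service
      systemctl restart pve-cluster
      systemctl status pve-cluster
      ls /etc/pve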