Threads tagged "rbd"

  1. Network File Sharing; permissions/UID/GID

    Hi everyone, I've run into a particular issue. I have a ClearOS VM in Proxmox acting as a domain controller with roaming profiles for some Windows PCs. I have a 3TB disk in the Proxmox machine that I'd like to share to the ClearOS VM and other VMs in the future. At the moment I'm exporting the...
  2. [SOLVED] Kubernetes - Ceph storage not mounting

    Hello guys, I am trying to use a persistent volume claim dynamically after defining a storage class to use Ceph storage on a Proxmox VE 6.0-4 one-node cluster. The persistent volume gets created successfully on ceph storage, but pods are unable to mount it. It throws the error below. I am not sure...
  3. Is it possible to auto trim for LXC disks?

    I have a cluster with relatively heavy IO, and consequently free space on the ceph storage is constantly constrained. I'm finding myself performing fstrim at increasingly frequent intervals. Is there a way to auto trim a disk for an LXC container?
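Until an automatic option turns up, a minimal cron-based sketch using PVE's `pct fstrim` subcommand could cover this; the schedule, script path, and status-column parsing below are assumptions, and this presumes your PVE version ships `pct fstrim`:

```shell
#!/bin/sh
# Hypothetical weekly cron job (e.g. /etc/cron.weekly/lxc-fstrim).
# Trims every container currently running on this node.
for vmid in $(pct list | awk 'NR > 1 && $2 == "running" { print $1 }'); do
    pct fstrim "$vmid"
done
```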
  4. Cannot connect to ceph after install

    Fresh single-node install currently, with the intent to add 2 more nodes later that will use the Ceph storage from the first node. I installed the first node and installed ceph. I have 2 virtual bridges:
    - vmbr0, 10.0.1.2/16 - general network
    - vmbr3, 10.10.10.2/24 - ceph network
    I...
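With a split setup like this, Ceph usually needs to be told which network is which. A hedged ceph.conf fragment, derived only from the addresses in the post (note that 10.0.1.2/16 lies in the 10.0.0.0/16 network), might look like:

```
[global]
    # client/monitor traffic over vmbr0 (10.0.1.2/16 -> 10.0.0.0/16)
    public_network  = 10.0.0.0/16
    # OSD replication traffic over vmbr3
    cluster_network = 10.10.10.0/24
```

Whether this matches the poster's actual intent (which bridge carries client vs. replication traffic) is an assumption.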
  5. QemuServer.pm subroutines are not called under GUI?

    There is a Proxmox VE 5.3-11 server running several VMs. Disk images are stored on Ceph RBD. I modified the "print_drivedevice_full" subroutine in /usr/share/perl5/PVE/QemuServer.pm for RBD tuning as described in https://pve.proxmox.com/pipermail/pve-devel/2018-June/032787.html Here is my patch...
  6. RBD storage 100% full

    Hello! I have successfully set up a PVE cluster with Ceph. After creating ceph pools and a related RBD storage I moved the VM's drive to this newly created RBD storage. Due to some issues I needed to reboot all cluster nodes one after the other. Since then the PVE storage reports that all RBD is...
  7. ceph rbd slow down read/write

    Summary: pve-manager/5.3-5/97ae681d (running kernel: 4.15.18-9-pve), ceph version 12.2.8 (6f01265ca03a6b9d7f3b7f759d8894bb9dbb6840) luminous (stable). 4 nodes (per node: 4 NVMe SSD & 2 SAS SSD, bluestore) + 1 node with 4 SATA SSD; interconnect: 2x 10Gbps. Created pool (512 PGs, replicated 3/2) on...
  8. RBD hangs with remote Ceph OSDs

    Hi there, I am running a 2-node Proxmox-Cluster and mounted RBD images on a remote Ceph cluster (latest Mimic release). Currently we are using the RBD image mount as backup storage for our VMs (mounted in /var/lib/backup). It all works fine unless an OSD or an OSD-Host (we have 3, each...
  9. Problems resizing rbd

    Just noticed this on one of my clusters; disk resize is failing with the following error message:
    Resizing image: 100% complete...done.
    mount.nfs: Failed to resolve server rbd: Name or service not known
    Failed to update the container's filesystem: command 'unshare -m -- sh -c 'mount...
  10. Problem with RBD on Ceph IPv6 external cluster

    Hi all, so I have two external Ceph clusters - one is using IPv4 and the other is using IPv6. When I'm using the IPv4 cluster, the dashboard shows the status and content of RBD and I have no problem creating new images for VMs. But when I try to configure storage information for the IPv6 cluster, I get...
  11. Why is storage type rbd only for Disk-Image + Container

    Hello! Can you please share some information on why storage type rbd is only available for Disk-Image and Container? I would prefer to dump a backup to another rbd. THX
  12. [SOLVED] Mapping image fails with error: rbd: sysfs write failed

    Hi, I have created a pool + image using these commands:
    rbd create --size 500G backup/gbs
    Then I modified the features:
    rbd feature disable backup/gbs exclusive-lock object-map fast-diff deep-flatten
    The last step was to create a client to get access to the cluster:
    ceph auth get-or-create...
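For context, the usual workflow behind this error can be sketched as follows; the pool/image names are from the post, but the client name in the final step is an assumption (the original `ceph auth get-or-create` command is truncated):

```shell
# Create the image (as in the post):
rbd create --size 500G backup/gbs

# The kernel rbd client does not support all image features, so
# disable the ones it cannot handle before mapping:
rbd feature disable backup/gbs exclusive-lock object-map fast-diff deep-flatten

# Map the image. "rbd: sysfs write failed" typically means either an
# unsupported feature is still enabled or the client lacks the needed
# caps; client.gbs is a hypothetical name for illustration.
rbd map backup/gbs --name client.gbs
```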
  13. [SOLVED] ceph trouble with non-standard object size

    Hi Community, creating an rbd image of 1T with an object size of 16K is easy. I did it like this:
    rbd create -s 1T --object-size 16K --image-feature layering --image-feature exclusive-lock --image-feature object-map --image-feature fast-diff --image-feature deep-flatten -p Poolname vm-222-disk-4...
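As a quick sanity check, the object size of an existing image can be read back; the pool and image names below are taken from the post:

```shell
# Inspect the image; `rbd info` reports the object size as an "order",
# a power of two: order 14 corresponds to 2^14 bytes = 16 KiB objects.
rbd info Poolname/vm-222-disk-4
```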
  14. get nodes/$node/storage showing 0 byte for ceph pool

    I have this intermittent problem with storage returning 0 values for a specific rbd pool. It's only happening on one cluster, and there doesn't seem to be a correlation to which node context is being called...
  15. Mounting an existing RBD image

    Recently I had to re-install Proxmox on my SSDs since replication is not supported in LVM, and had to make "sort of a backup" of some files from a container that are around 250GB. To achieve that I mounted a disk using ceph storage, transferred the files to the storage, unmounted the disk...
  16. adventures with snapshots

    I have a new problem (well, it could be old and I just noticed it). I have a number of containers that show a number of snapshots, but when I look at the disk those snapshots don't exist. Example:
    pvesh get /nodes/sky12/lxc/16980/snapshot
    200 OK
    [ { "description" : "Automatic snapshot...
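A hedged way to confirm such a mismatch is to compare both sides directly; the node and VMID are from the post, but the image name is an assumption following PVE's usual vm-<vmid>-disk-N convention, and <pool> is a placeholder:

```shell
# Snapshots according to the PVE config/API:
pvesh get /nodes/sky12/lxc/16980/snapshot

# Snapshots actually present on the backing RBD image
# (replace <pool> with the Ceph pool backing the container):
rbd snap ls <pool>/vm-16980-disk-0
```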
  17. Backup hangup with Ceph/rbd

    I use Ceph/rbd as storage and operate a container environment, but backups occasionally fail. Has anyone seen a similar situation?
    [304948.926528] EXT4-fs error (device rbd5): ext4_lookup:1575: inode #2621882: comm tar: deleted inode referenced: 2643543
    [304948.927428]...
  18. Ceph: creating RBD image hangs

    Hi, I have configured a 3-node cluster with currently 10 OSDs.
    root@ld4257:~# ceph osd tree
    ID   CLASS  WEIGHT    TYPE NAME                 STATUS  REWEIGHT  PRI-AFF
    -10         43.66196  root hdd_strgbox
    -27                0  host ld4257-hdd_strgbox
    -28         21.83098  host ld4464-hdd_strgbox
    3...
  19. Proxmox several ceph's with same pool name issue

    Found an issue when a cluster has several RBD (external) storages with the same pool name. Creating and deleting rbd images works without any issue, but "move disk", or qemu having the same disk names on different storages, causes an error - Proxmox doesn't understand which storage is currently in use. Moving...
  20. [SOLVED] Unable to start VM on Ceph RBD External After Recent Update

    Hi Proxmox, I am unable to start a VM on external Ceph RBD after a recent update. From inside Proxmox, I can perform `rbd ls`. What just went wrong?
    kvm: -drive...
