rbd

  1. H

    Proxmox ceph rbd disk mount error

    I am encountering the following error while mounting a VM disk from the Proxmox Ceph pool via rbd; can you help with the problem and its solution? mount: /mnt/test: wrong fs type, bad option, bad superblock on /dev/rbd3, missing codepage or helper program, or other error.
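    A common cause of this particular mount error is that the mapped RBD image carries no filesystem yet, or one the kernel cannot identify. A minimal troubleshooting sketch, assuming the image is already mapped as /dev/rbd3 (device name and mountpoint are from the post; the mkfs step is an assumption and destroys any existing data):

    ```shell
    # Check whether the device carries a recognizable filesystem signature
    blkid /dev/rbd3

    # The kernel log usually states the concrete reason behind the generic mount error
    dmesg | tail

    # If the image turns out to be blank, create a filesystem first
    # (WARNING: this erases anything already on the image)
    mkfs.ext4 /dev/rbd3
    mount /dev/rbd3 /mnt/test
    ```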
  2. R

    Ceph RBD image encryption

    Hi there! Has anyone used Ceph's RBD image encryption, or had experience activating it? RBD image encryption What I want is to have encrypted disks for some VMs. OSD encryption doesn't solve this case, as it doesn't protect against an attacker gaining access to the host. I also had a look...
  3. C

    [SOLVED] lvm.conf filter for when we use rbd in the hypervisor and lvm inside virtual machines

    Hello, I'm posting this here in case it helps anyone else. I am using Ceph RBD to store the disk images for my virtual machines, and some of the virtual machines use LVM inside their own virtual disks. When I start such virtual machines, Proxmox's LVM scans their volumes, and these volumes...
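    The fix such threads usually converge on is an lvm.conf filter that hides kernel RBD devices from the hypervisor's LVM scan. A sketch of what that fragment might look like (the exact filter pattern is an assumption; adjust it to your device layout before use):

    ```
    # /etc/lvm/lvm.conf -- reject /dev/rbd* so the host's LVM does not
    # scan volume groups that live inside guest virtual disks
    devices {
        global_filter = [ "r|/dev/rbd.*|" ]
    }
    ```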
  4. A

    Default cephfs and librbd volumes statuses are "Unknown"

    After following the wiki to create a hyperconverged cluster, my `cephfs` and `librbd` volumes on my host servers have a little grey question mark next to them, and a "Status: Unknown" help text when I hover over them. Since they're there by default, I figured that once I got down to the pool...
  5. C

    [SOLVED] rbd: sysfs write failed on TPM disks

    Hello everyone, we are running a 4-node PVE cluster with 3 nodes in a hyper-converged setup with Ceph and the 4th node just for virtualization without its own OSDs. After creating a VM with a TPM state device on a Ceph pool, it fails to start with the error message: rbd: sysfs write failed TASK...
  6. D

    Ceph + secure communications + TPM disk ⇒ scary looking kernel error, 'no match of type 1 in addrvec', 'corrupt full osdmap', even when krbd not set

    [ This follows on from my previous comment on a different thread, https://forum.proxmox.com/threads/pverados-segfault.130628/post-574807 ] I've just figured out the whats and whys of a problem I've been having trying to create a new VM that uses RBD disks hosted by an external Ceph cluster...
  7. N

    [SOLVED] Ceph Pool listing VM and CT Disks "rbd error: rbd: listing images failed: (2) No such file or directory (500)"

    Hello guys, I updated my PVE Cluster yesterday to the newest Ceph Version and did not notice any issues at first. The VMs and CTs do run normally, I can read and write to the virtual disks, even create new ones, just fine! Migrating between Nodes also works, as well as deleting CTs... But...
  8. M

    Ceph LXC volume migration to another pool

    Hello, I have a Linux container with a large Ceph RBD volume that I need to move to another pool. From the GUI, however, this is only possible with the container shut down. Copying would take about 2 days, so I looked for alternatives and came across the...
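    One alternative worth knowing here is Ceph's own live image migration (available since Nautilus), which moves an image between pools while clients keep using it. A hedged sketch, assuming the volume is named vm-101-disk-0 and the pools are pool-a and pool-b (all placeholder names); the Proxmox storage configuration still has to be pointed at the new pool afterwards:

    ```shell
    # Stage the migration: the source becomes a read-only parent and
    # clients are transparently redirected to the target image
    rbd migration prepare pool-a/vm-101-disk-0 pool-b/vm-101-disk-0

    # Copy the data in the background (run against the target image)
    rbd migration execute pool-b/vm-101-disk-0

    # Finalize and remove the source once the copy is complete
    rbd migration commit pool-b/vm-101-disk-0
    ```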
  9. powersupport

    RBD error

    Hi, when I list with rbd ls -l cephpool I get the error below: rbd: error opening vm-121-disk-0: (2) No such file or directory. This VM does not exist on the server. How can I delete the image? Thank you.
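    Errors like this often mean a half-deleted image: the directory entry still exists, but the image header it points at is gone, so the long listing (which opens every image) fails. A cautious sketch, using the pool and image names from the post:

    ```shell
    # Plain listing only reads the directory, so it should still succeed
    rbd ls cephpool

    # Try a normal delete of the stale entry first
    rbd rm cephpool/vm-121-disk-0

    # If that also fails, inspect which objects of the image remain
    # before attempting any lower-level cleanup -- proceed carefully
    rados -p cephpool ls | grep vm-121-disk-0
    ```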
  10. S

    RBD mount timeout

    Problem: Can't add external RBD storage. journalctl -fu pvestatd Mar 30 13:08:47 host1 pvestatd[24782]: got timeout Mar 30 13:08:47 host1 pvestatd[24782]: status update time (5.462 seconds) Mar 30 13:08:57 host1 pvestatd[24782]: got timeout proxmox storage: time pvesm status ceph rbd...
  11. S

    [SOLVED] Failed to add unused disk from external ceph storage

    Hi all, I'm using 3x PVE 7.1-10 nodes as a cluster. I already connected a Synology box as shared storage; all features are working properly at the moment. I'm working on moving some workloads from a failing Red Hat OpenStack cluster with Ceph storage to the PVE cluster. As I did many times...
  12. C

    fail to create vm with rbd error locked command timed out

    # ENV - pve nodes: 10.0.4.44 and 10.0.4.45 - external ceph mons: 10.0.4.40, 10.0.4.41 and 10.0.4.42 - external ceph version: 15.2.9 # ISSUE - When I create a VM, it fails every time with the error message below: ``` TASK ERROR: unable to create VM 100 - rbd error: 'storage-boya-ceph'-locked...
  13. J

    Cannot start VM - /dev/rbd/rbd missing, but /dev/rbd0 present?

    /dev/rbd1 kvm: -drive file=/dev/rbd/rbd/vm-150-disk-0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap: Could not open '/dev/rbd/rbd/vm-150-disk-0': No such file or directory TASK ERROR: start failed: QEMU exited with code 1 I suspect this issue has...
  14. T

    [SOLVED] CEPH Storage Usage Confusion

    Hi everyone, I'm getting into using the Ceph rbd command because I need to identify storage usage in my Ceph cluster. When issuing the command below, it almost appears as though my snapshots are using an immense amount of space: rbd -p pool du Excerpt of one VM...
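    For context, `rbd du` reports provisioned versus actually used space, and each snapshot line shows the data referenced at that point in time, so per-snapshot figures overlap and can look inflated when read as a sum. A sketch of narrowing the numbers down (pool and image names are placeholders):

    ```shell
    # Per-image breakdown, including a line for each snapshot
    rbd du -p pool vm-100-disk-0

    # List the snapshots of a single image to see what is accumulating
    rbd snap ls pool/vm-100-disk-0
    ```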
  15. O

    Ceph rbd mirroring Snapshot-based not working :'(

    Hello, I'm trying to set up snapshot-based Ceph rbd mirroring between 2 Ceph clusters, each installed from a different PVE cluster. I call them pve-c1 and pve-c2. Has anyone here already set this up successfully? At the moment I am only trying a one-way replication from pve-c1 to pve-c2. Proxmox VE 6.3-2...
  16. I

    [SOLVED] Problem add external rbd storage

    Good morning, I have a cluster with 16 Proxmox hosts and an external Ceph cluster configured to store the VMs from Proxmox. I was using it normally, but recently we had to do maintenance on the storage: we moved all the VMs to another storage, I allocated the RBD storage from...
  17. S

    Proxmox external Ceph disk cache recommendation

    Hello, I searched this forum and Google but I cannot find the final answer. We have a Proxmox cluster with a remote Ceph Luminous cluster. I see I get much faster writes with cache=writeback in the disk options in Proxmox (random 4K up to 16x faster and sequential up to 10x faster) than with cache=none...
  18. S

    [SOLVED] Migrate instance from openstack to proxmox with ceph storage backend

    Hi all, we are using OpenStack for most of our production instances, with a Ceph storage backend. Recently we added additional hardware, set up Proxmox v6, and attached it to the same Ceph storage cluster. With the Ceph storage integration we tested a couple of instances and it works perfectly fine...
  19. C

    [SOLVED] Starting qm fails with "got timeout"

    Hi, I have defined a storage of type "RBD" with only SSD drives connected to the relevant pool. All RBDs are available: root@ld3955:~# rbd ls -l ssd NAME SIZE PARENT FMT PROT LOCK vm-100-disk-0 1 MiB...
  20. elurex

    Ceph rbd error - sysfs write failed

    I am running PVE 6.0, and on this PVE node I have already started 2 VMs with rbd disks on my Ceph storage. However, sometimes when I need to start a third VM using an rbd disk, PVE errors with the following message. It can only be solved by rebooting the PVE node, after which I can start 3 VMs or more using rbd...
