rbd

  1. G

    Mounting an existing RBD image

    Recently I had to re-install Proxmox on my SSDs since replication is not supported in LVM, and had to make "sort of a backup" of some files from a container that are around 250GB. To achieve that I mounted a disk using ceph storage, transferred the files to the storage, unmounted the disk...
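
    A rough sketch of that workflow from the host, assuming a pool named "rbd" and an image named "vm-100-disk-1" (both placeholder names), might look like this:

      rbd map rbd/vm-100-disk-1 --id admin --keyring /etc/ceph/ceph.client.admin.keyring   # map the image to a /dev/rbdX device
      mount /dev/rbd0 /mnt/rbd-backup                                                      # mount its filesystem
      # ... copy the files off ...
      umount /mnt/rbd-backup
      rbd unmap /dev/rbd0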
  2. A

    adventures with snapshots

    I have a new problem (well, it could be old and I just noticed it). I have a number of containers that show any number of snapshots, but when I look at the disk those snapshots don't exist. Example: pvesh get /nodes/sky12/lxc/16980/snapshot 200 OK [ { "description" : "Automatic snapshot...
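
    A quick way to compare what PVE thinks exists with what Ceph actually has is to list the snapshots on the container's volume directly; a sketch, assuming a pool named "rbd" and the conventional volume name "vm-16980-disk-1" (placeholders):

      pvesh get /nodes/sky12/lxc/16980/snapshot   # snapshots PVE knows about
      rbd snap ls rbd/vm-16980-disk-1             # snapshots that actually exist on the rbd image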
  3. K

    Backup hangup with Ceph/rbd

    I use Ceph / rbd as storage and operate the container environment, but backups occasionally fail. Is anyone in a similar situation? [304948.926528] EXT4-fs error (device rbd5): ext4_lookup:1575: inode #2621882: comm tar: deleted inode referenced: 2643543 [304948.927428]...
  4. C

    Ceph: creating RBD image hangs

    Hi, I have configured a 3-node cluster with currently 10 OSDs. root@ld4257:~# ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -10 43.66196 root hdd_strgbox -27 0 host ld4257-hdd_strgbox -28 21.83098 host ld4464-hdd_strgbox 3...
  5. A

    Proxmox several ceph's with same pool name issue

    Found an issue when a cluster has several RBD (external) storages with the same pool name. Creating and deleting rbd images works without any issue, but "move disk", or when qemu has the same disk names on different storages, causes an error - Proxmox doesn't understand which storage is currently in use. Moving...
  6. C

    [SOLVED] Unable to start VM on Ceph RBD External After Recent Update

    Hi Proxmox, I am unable to start a VM on external Ceph RBD after a recent update. From inside Proxmox, I can perform `rbd ls`. What went wrong? kvm: -drive...
  7. A

    snapshot reversion fails via vzdump; succeeds using rbd snap

    I have a situation where a snapshot reversion for a container (rbd-backed) is failing with the error "unable to read tail (got 0 byte)" in the task log. Doing a manual reversion using rbd snap works fine. proxmox-ve: 5.1-42 (running kernel: 4.15.3-1-pve) pve-manager: 5.1-46 (running version...
  8. A

    snapshots through api fail. manual snapshots working

    I have a weird problem with specific containers failing to snapshot; the end result is that scheduled vzdump tasks effectively fail and disallow any further vzdump jobs until manually killed. The thing is, if I MANUALLY take a snapshot using rbd snap create it works fine - it's only via the API that...
  9. S

    [SOLVED] VM backup/restore and RBD additional disk

    I have some trouble understanding backup/restore for a (CentOS) VM with Ceph storage, used as an NFS server. Let's say it is VM no. 100. I've added a hard drive to this VM (virtio1) and it is located on my Ceph RBD storage. I do not want to back up this additional disk when I back up the VM as it is a...
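
    One way to exclude a single disk from vzdump is the per-disk backup flag; a sketch, assuming the storage is called "ceph-rbd" and the volume keeps its current name (both placeholders), using the VM id 100 and virtio1 slot from the question:

      qm set 100 --virtio1 ceph-rbd:vm-100-disk-1,backup=0   # backup=0 tells vzdump to skip this disk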
  10. grin

    [px5] new CT fail to start: mknod: …/rootfs/dev/rbd3: Operation not permitted

    [proxmox5] Newly created unprivileged lxc container fails to start. The failure is rather ugly, since there is basically no info on it: Aug 16 00:25:25 elton lxc-start[39248]: lxc-start: tools/lxc_start.c: main: 366 The container failed to start. Aug 16 00:25:25 elton lxc-start[39248]...
  11. S

    Proxmox 5.0/Ceph Luminous "rbd error: rbd: couldn't connect to the cluster!"

    Hi all, I'm getting the error "rbd error: rbd: couldn't connect to the cluster!" when I create a VM with rbd storage from the GUI. I have installed the client keyring in /etc/pve/priv/ceph/my-ceph-storage.keyring. # rbd ls -p images #returns no error. "images" is the pool I have created. #...
  12. D

    [SOLVED] rbd error: rbd: couldn't connect to the cluster! (500)

    Hey, I found many threads with the same issue, but they all have the same solution (missing keyring). I checked them but I didn't find my mistake; maybe someone can give me a hint. storage.cfg: I did: cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/rbd.keyring - the file is now present...
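
    For an external cluster the keyring file name has to match the storage ID from storage.cfg, not the pool name; a sketch, assuming a storage entry "rbd: my-rbd-storage" (placeholder name):

      cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-rbd-storage.keyring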
  13. I

    Backup vs rbd export

    Hi there, I'm wondering whether there is a faster way to do backups from a Ceph rbd pool. I have ZFS as backup storage, connected to Proxmox over NFS on a 10G network. The ZFS pool has 12 SATA drives in raidz. When I do a backup via the Proxmox tool, the write speed to ZFS is only 10-15 MB/s, but when I use rbd...
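
    For comparison, a raw export of a single image to the NFS-mounted ZFS target might look like this (pool, image and target path are placeholders):

      rbd export rbd/vm-101-disk-1 /mnt/zfs-backup/vm-101-disk-1.raw
      # or stream to stdout and compress on the fly:
      rbd export rbd/vm-101-disk-1 - | gzip > /mnt/zfs-backup/vm-101-disk-1.raw.gz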
  14. J

    Recommended storage system???

    I am about to configure a NAS that's supposed to supply the storage for 3 virtualization servers, and I need some advice regarding which base system to choose. The NAS has good processors, sufficient RAM, good disks and 10G connections between the servers… So I am mostly looking for general...
  15. grin

    Ceph doesn't seem to be able to do linked clones, and full cloning is slow

    The GUI only offers Full Clone for VMs on ceph volumes; ceph should be able to create linked clones quickly and easily (as well as light snapshots, by the way). Full Clone creation doesn't seem to use ceph and is thus extremely slow. A PVE full clone was created in 43 minutes, while doing it in the...
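
    At the rbd level a linked (copy-on-write) clone is just a protected snapshot plus a clone of it; a sketch with placeholder image names:

      rbd snap create rbd/base-100-disk-1@base
      rbd snap protect rbd/base-100-disk-1@base
      rbd clone rbd/base-100-disk-1@base rbd/vm-102-disk-1   # completes in seconds, data is shared with the snapshot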
  16. S

    Ceph cache tier and disk resizing

    Hello, I'm currently running a ceph cluster (Hammer); last weekend I implemented a cache tier (writeback mode) of SSDs for better performance. Everything seems fine except for disk resizing. I have a Windows VM with a raw RBD disk; I powered off the VM, resized the disk, verified that both ceph...
  17. M

    What is the correct method to change rbd monhost?

    Hello. I have a production 3-node Proxmox 4.1 cluster with external RBD storage. The external RBD cluster has 3 nodes, each one with a ceph monitor. The hostnames are ceph0{1,2,3}.mynetwork.com. The ceph cluster is healthy and everything works fine. My /etc/pve/storage.cfg is this: dir: local...
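
    The monitor addresses live in the monhost line of the rbd entry in /etc/pve/storage.cfg; a sketch, with the storage ID and pool as placeholders (the separator convention for monhost varies between PVE versions):

      rbd: my-ceph-storage
           monhost ceph01.mynetwork.com ceph02.mynetwork.com ceph03.mynetwork.com
           pool rbd
           content images
           username admin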
  18. G

    [SOLVED] Mount RBD pool in LXC CT?

    I am trying to map and mount an RBD pool (from a ceph cluster) into an LXC container, without success…: /usr/bin/rbd map --pool rbd test --id test --keyring /etc/ceph/ceph.client.test.keyring rbd: sysfs write failed rbd: map failed: (30) Read-only file system. The Ceph configuration file seems good...
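
    Since an unprivileged container cannot write to sysfs, a common workaround is to map and mount the image on the Proxmox host and hand the path to the container as a mount point; a sketch reusing the pool, image and keyring from the question, with a placeholder CT id and paths:

      rbd map --pool rbd test --id test --keyring /etc/ceph/ceph.client.test.keyring   # on the host
      mount /dev/rbd0 /mnt/rbd-test
      pct set 116 -mp0 /mnt/rbd-test,mp=/mnt/test                                      # bind the host path into CT 116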
