rbd

  1. A

    snapshot reversion fails via vzdump; succeeds using rbd snap

I have a situation where a snapshot reversion for a container (RBD-backed) is failing with the error "unable to read tail (got 0 byte)" in the task log. Doing a manual reversion using rbd snap works fine (see the snapshot sketch after this list). proxmox-ve: 5.1-42 (running kernel: 4.15.3-1-pve) pve-manager: 5.1-46 (running version...
  2. A

Snapshots through the API fail; manual snapshots work

I have a weird problem with specific containers failing to snapshot; the end result is that scheduled vzdump tasks effectively fail and prevent any further vzdump jobs until manually killed. The thing is, if I MANUALLY take a snapshot using rbd snap create, it works fine (see the snapshot sketch after this list); it's only via the API that...
  3. S

[SOLVED] VM backup/restore and RBD additional disk

I have some trouble understanding backup/restore for a (CentOS) VM with Ceph storage, used as an NFS server. Let's say it is VM no. 100. I've added a hard drive to this VM (virtio1), and it is located on my Ceph RBD storage. I do not want to back up this additional disk when I back up the VM (see the backup-exclusion sketch after this list), as it is a...
  4. grin

[px5] new CT fails to start: mknod: …/rootfs/dev/rbd3: Operation not permitted

[proxmox5] A newly created unprivileged LXC container fails to start. The failure is rather ugly, since there is basically no info on it: Aug 16 00:25:25 elton lxc-start[39248]: lxc-start: tools/lxc_start.c: main: 366 The container failed to start. Aug 16 00:25:25 elton lxc-start[39248]...
  5. S

Proxmox 5.0/Ceph Luminous "rbd error: rbd: couldn't connect to the cluster!"

Hi all, I'm getting the error "rbd error: rbd: couldn't connect to the cluster!" when I create a VM with RBD storage from the GUI. I have installed the client keyring in /etc/pve/priv/ceph/my-ceph-storage.keyring (see the keyring sketch after this list). # rbd ls -p images # returns no error. "images" is the pool I have created. #...
  6. D

    [SOLVED] rbd error: rbd: couldn't connect to the cluster! (500)

Hey, I found many threads with the same issue, but they all have the same solution (missing keyring). I checked them but didn't find my mistake; maybe someone can give me a hint (see the keyring sketch after this list). storage.cfg: I've done: cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/rbd.keyring. The file is now present...
  7. I

    Backup vs rbd export

Hi there, I'm wondering: is there a faster way to do backups from a Ceph RBD pool (see the rbd export sketch after this list)? I have ZFS as backup storage, connected to Proxmox over NFS on a 10G network. The ZFS box has 12 SATA drives in raidz. When I do a backup via the Proxmox tool, the write speed to ZFS is only 10-15 MB/s, but when I use rbd...
  8. J

    Recommended storage system???

I am about to configure a NAS that's supposed to supply the storage for 3 virtualization servers, and I need some advice regarding which base system to choose. The NAS has good processors, sufficient RAM, good disks, and 10G connections between the servers… So I am mostly looking for general...
  9. grin

Ceph doesn't seem to be able to do linked clones, and full cloning is slow

The GUI only offers Full Clone for VMs on Ceph volumes; Ceph should be able to create linked clones quickly and easily (as well as lightweight snapshots, by the way; see the clone sketch after this list). Full Clone creation doesn't seem to use Ceph and is thus extremely slow. A PVE full clone was created in 43 minutes, while doing it in the...
  10. S

    Ceph cache tier and disk resizing

Hello, I'm currently running a Ceph cluster (Hammer); last weekend I implemented an SSD cache tier (writeback mode) for better performance. Everything seems fine except for disk resizing (see the resize sketch after this list). I have a Windows VM with a raw RBD disk; I powered off the VM, resized the disk, and verified that both Ceph...
  11. M

    What is the correct method to change rbd monhost?

Hello, I have a production 3-node Proxmox 4.1 cluster with external RBD storage. The external RBD cluster has 3 nodes, each with a Ceph monitor; the hostnames are ceph0{1,2,3}.mynetwork.com. The Ceph cluster is healthy and everything works fine. My /etc/pve/storage.cfg is this (see the storage.cfg sketch after this list): dir: local...
  12. G

    [SOLVED] Mount RBD pool in LXC CT?

I'm trying to map and mount an RBD pool (from a Ceph cluster) into an LXC container, without success (see the LXC mapping sketch after this list): /usr/bin/rbd map --pool rbd test --id test --keyring /etc/ceph/ceph.client.test.keyring rbd: sysfs write failed rbd: map failed: (30) Read-only file system. The Ceph configuration file seems good...
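
Command sketches (referenced from the threads above)

Snapshot sketch (threads 1 and 2): a minimal illustration of the manual rbd snapshot workflow those posts fall back on, assuming a hypothetical pool "rbd" and image "vm-101-disk-1" (substitute your own pool and volume names); this is the manual path only, not the vzdump/API code path that is failing.

    rbd snap ls rbd/vm-101-disk-1                    # list existing snapshots of the image
    rbd snap create rbd/vm-101-disk-1@pre-change     # take a snapshot by hand
    rbd snap rollback rbd/vm-101-disk-1@pre-change   # revert the image (with the CT/VM stopped)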
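
Backup-exclusion sketch (thread 3): Proxmox VE disks carry a backup flag that vzdump honours, so the extra disk can be excluded from the backup. A sketch assuming VM 100 and a hypothetical storage ID "ceph-rbd" with volume "vm-100-disk-2":

    # Mark virtio1 so vzdump skips it (storage ID and volume name are examples)
    qm set 100 --virtio1 ceph-rbd:vm-100-disk-2,backup=0
    # Equivalent line in /etc/pve/qemu-server/100.conf:
    #   virtio1: ceph-rbd:vm-100-disk-2,backup=0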
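
Keyring sketch (threads 5 and 6): for an external Ceph cluster, Proxmox VE looks for the keyring under /etc/pve/priv/ceph/ named after the storage ID from storage.cfg, not after the pool. A sketch assuming a storage entry called "my-ceph-storage" (all values are examples):

    # /etc/pve/storage.cfg (excerpt)
    #   rbd: my-ceph-storage
    #       monhost 192.168.1.11 192.168.1.12 192.168.1.13
    #       pool images
    #       username admin
    # The keyring file name must match the storage ID "my-ceph-storage":
    cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring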
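
rbd export sketch (thread 7): the faster path being compared against vzdump is a raw image export straight from the pool. A sketch assuming pool "rbd", image "vm-100-disk-1", and an NFS-mounted target /mnt/backup (all hypothetical); note this produces a plain raw copy, not a vzdump archive restorable from the GUI.

    rbd export rbd/vm-100-disk-1 /mnt/backup/vm-100-disk-1.raw                   # full image copy to the NFS/ZFS target
    rbd export rbd/vm-100-disk-1 - | gzip > /mnt/backup/vm-100-disk-1.raw.gz     # or stream and compress on the fly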
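
Clone sketch (thread 9): at the Ceph level, a linked clone is a copy-on-write clone of a protected snapshot (RBD format 2 / layering), which is near-instant. A sketch with hypothetical image names; whether the PVE GUI offers a linked clone depends on the source being a template/base image.

    rbd snap create rbd/base-100-disk-1@base               # snapshot the base image
    rbd snap protect rbd/base-100-disk-1@base              # clones require a protected snapshot
    rbd clone rbd/base-100-disk-1@base rbd/vm-101-disk-1   # copy-on-write clone, no full data copy
    rbd children rbd/base-100-disk-1@base                  # list clones that depend on the snapshot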
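
Resize sketch (thread 10): the steps referenced there, assuming pool "rbd" and image "vm-102-disk-1" (hypothetical). Growing the disk from the PVE GUI normally issues the rbd resize itself; the sketch below is the manual equivalent, and the filesystem inside the guest still has to be extended separately.

    rbd resize --size 102400 rbd/vm-102-disk-1   # grow the image to 100 GB (size is given in MB)
    rbd info rbd/vm-102-disk-1                   # confirm the new size on the Ceph side
    qm rescan --vmid 102                         # refresh the disk size recorded in the PVE VM config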
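
storage.cfg sketch (thread 11): the monitor list for an external RBD storage is the monhost line of /etc/pve/storage.cfg and can be edited there (or through the Datacenter > Storage GUI); already-running guests keep using the monitor list they were started with until restarted or migrated. Hypothetical values:

    rbd: my-ceph-storage
        monhost ceph01.mynetwork.com ceph02.mynetwork.com ceph03.mynetwork.com
        pool rbd
        content images
        username admin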
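
LXC mapping sketch (thread 12): an unprivileged container generally cannot map RBD devices itself (no write access to the host's sysfs / kernel rbd module), which matches the read-only file system error. A common workaround, sketched with hypothetical names, is to map and mount on the PVE host and pass the mount into the container as a mount point:

    # On the host: map the image (rbd map prints the device, e.g. /dev/rbd0) and mount it
    rbd map rbd/test --id test --keyring /etc/ceph/ceph.client.test.keyring
    mkfs.ext4 /dev/rbd0                                      # first use only -- destroys existing data
    mkdir -p /mnt/rbd-test && mount /dev/rbd0 /mnt/rbd-test

    # Bind-mount the host path into container 105
    pct set 105 -mp0 /mnt/rbd-test,mp=/srv/data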
