I have a new problem (well, it could be old and I just noticed it). I have a number of containers that list snapshots, but when I look at the disk those snapshots don't exist.
Example:
pvesh get /nodes/sky12/lxc/16980/snapshot
200 OK
[
{
"description" : "Automatic snapshot...
I use Ceph / RBD as storage and run a container environment, but backups occasionally fail. Is anyone else in a similar situation?
[304948.926528] EXT4-fs error (device rbd5): ext4_lookup:1575: inode #2621882: comm tar: deleted inode referenced: 2643543
[304948.927428]...
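Not a fix, but when tar hits deleted-inode errors like this, one thing worth checking is the container's filesystem itself; a sketch, assuming CT 123 (placeholder ID) can be stopped briefly:
pct stop 123
pct fsck 123        # runs fsck on the container's volume
pct start 123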
Hi,
I have configured a 3-node cluster that currently has 10 OSDs.
root@ld4257:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-10 43.66196 root hdd_strgbox
-27 0 host ld4257-hdd_strgbox
-28 21.83098 host ld4464-hdd_strgbox
3...
I found an issue when the cluster has several external RBD storages with the same pool name.
Creating and deleting RBD images works without any issue, but "move disk", or when QEMU has disks with the same name on different storages, causes an error - Proxmox doesn't understand which storage is currently in use.
Moving...
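To illustrate the kind of setup meant here (storage IDs and monitor addresses are made up), two storage.cfg entries can legitimately point at pools with the same name on different clusters:
rbd: ceph-a
        pool rbd
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        content images
        username admin
rbd: ceph-b
        pool rbd
        monhost 10.1.0.1 10.1.0.2 10.1.0.3
        content images
        username admin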
Hi Proxmox,
I am unable to start a VM on external Ceph RBD after a recent update. From inside Proxmox I can run `rbd ls` just fine. What went wrong?
kvm: -drive...
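It can help to look at the exact -drive argument PVE generates and to repeat the access test with the storage's own keyring rather than the default admin one; a sketch, where VM ID, pool, monitor and storage ID are placeholders:
qm showcmd <vmid>                     # prints the full kvm command line, including the rbd -drive string
rbd ls -p <pool> -m <monhost> --id admin --keyring /etc/pve/priv/ceph/<storage-id>.keyring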
I have a situation where a snapshot reversion for a container (RBD-backed) is failing with the error "unable to read tail (got 0 byte)" in the task log. Doing a manual reversion using rbd snap works fine.
proxmox-ve: 5.1-42 (running kernel: 4.15.3-1-pve)
pve-manager: 5.1-46 (running version...
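For reference, the manual reversion mentioned above would look roughly like this (pool, CT ID and snapshot name are placeholders):
rbd snap ls <pool>/vm-<ctid>-disk-1
rbd snap rollback <pool>/vm-<ctid>-disk-1@<snapname>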
I have a weird problem with specific containers failing to snapshot; the end result is that scheduled vzdump tasks effectively fail and block any further vzdump jobs until the stuck task is manually killed.
The thing is, if I manually take a snapshot using rbd snap create it works fine - it's only via the API that...
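A way to narrow it down is to compare the two paths side by side (CT ID, pool and snapshot name are placeholders):
rbd snap create <pool>/vm-<ctid>-disk-1@testsnap    # talks to Ceph directly
pct snapshot <ctid> testsnap                        # goes through the PVE API and storage layer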
I'm having some trouble understanding backup/restore for a (CentOS) VM with Ceph storage, used as an NFS server. Let's say it is VM no. 100.
I've added a hard drive to this VM (virtio1) and it is located on my Ceph RBD storage.
I do not want to back up this additional disk when I back up the VM, as it is a...
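If the goal is simply to exclude virtio1 from vzdump, the disk can be marked with backup=0 (the same thing the "Backup" checkbox in the GUI toggles); a sketch, where the storage name, volume and size are assumptions:
# in /etc/pve/qemu-server/100.conf, add backup=0 to the virtio1 line, e.g.:
virtio1: <storage>:vm-100-disk-2,backup=0,size=100G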
[proxmox5]
A newly created unprivileged LXC container fails to start. The failure is rather ugly, since there is basically no info on it:
Aug 16 00:25:25 elton lxc-start[39248]: lxc-start: tools/lxc_start.c: main: 366 The container failed to start.
Aug 16 00:25:25 elton lxc-start[39248]...
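Not a fix, but starting the container in the foreground with debug logging usually surfaces the actual error (CT ID 108 is a placeholder):
lxc-start -n 108 -F -l DEBUG -o /tmp/lxc-108.log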
Hi all,
I'm getting the error "rbd error: rbd: couldn't connect to the cluster!" when I create a VM with RBD storage from the GUI.
I have installed the client keyring in /etc/pve/priv/ceph/my-ceph-storage.keyring.
# rbd ls -p images    # returns no error; "images" is the pool I have created.
#...
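One thing worth verifying is that the monitors and the keyring the storage definition points at also work from the CLI, since a plain `rbd ls` uses /etc/ceph/ceph.conf and the admin keyring instead; a sketch, with the monitor address as a placeholder:
rbd ls -p images -m <monhost> --id admin --keyring /etc/pve/priv/ceph/my-ceph-storage.keyring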
Hey,
I found many threads with the same issue, but they all have the same solution (a missing keyring).
I checked that, but I didn't find my mistake; maybe someone can give me a hint.
storage.cfg:
I've done:
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/rbd.keyring
file is now present...
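For comparison, the keyring file name has to match the storage ID, so with a file called rbd.keyring the corresponding storage.cfg entry would look roughly like this (pool and monitors are placeholders):
rbd: rbd
        pool <pool>
        monhost <mon1> <mon2> <mon3>
        content images
        username admin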
Hi There,
I'm wondering, is there a faster way to do backups from a Ceph RBD pool?
I have ZFS as backup storage, connected to Proxmox over NFS on a 10G network. The ZFS pool has 12 SATA drives in raidz.
When I do a backup via the Proxmox tool, the write speed to ZFS is only 10-15 MB/s.
But when I use rbd...
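For reference, a direct export of a single disk bypasses the vzdump pipeline, which is usually where such comparisons come from (image name and target path are placeholders); note that this produces a raw image only, without the VM configuration:
rbd export <pool>/vm-100-disk-1 /mnt/nfs-backup/vm-100-disk-1.raw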
I am about to configure a NAS that is supposed to supply the storage for 3 virtualization servers, and I need some advice regarding which base system to choose. The NAS has good processors, sufficient RAM, good disks and 10G connections between the servers… So I am mostly looking for general...
The GUI only offers Full Clone for VMs on Ceph volumes; Ceph should be able to create linked clones quickly and easily (as well as lightweight snapshots, by the way).
Full Clone creation doesn't seem to use Ceph's native copying and is thus extremely slow.
The PVE full clone was created in 43 minutes, while doing it in the...
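What Ceph itself offers for this is snapshot-based cloning, roughly as follows (pool, image and snapshot names are placeholders; PVE only exposes linked clones for templates, which may be why a regular VM shows Full Clone only):
rbd snap create <pool>/base-100-disk-1@base
rbd snap protect <pool>/base-100-disk-1@base
rbd clone <pool>/base-100-disk-1@base <pool>/vm-101-disk-1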
Hello,
I'm currently running a Ceph cluster (Hammer); last weekend I implemented an SSD cache tier (writeback mode) for better performance.
Everything seems fine except for disk resizing.
I have a Windows VM with a raw RBD disk. I powered off the VM, resized the disk, and verified that both ceph...
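A rough sequence to check where the new size is (or is not) visible, assuming pool rbd and image vm-100-disk-1:
rbd info rbd/vm-100-disk-1        # size as Ceph sees it
qm rescan --vmid 100              # lets PVE update the size recorded in the VM config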
Hello.
I have a production 3-node Proxmox 4.1 cluster with external RBD storage. The external RBD cluster has 3 nodes, each one with a Ceph monitor. The hostnames are ceph0{1,2,3}.mynetwork.com. The Ceph cluster is healthy and everything works fine. My /etc/pve/storage.cfg is this:
dir: local...
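For context, the external RBD entry that would go along with those monitors typically looks like this (storage ID and pool are placeholders, not the poster's actual config):
rbd: <storage-id>
        monhost ceph01.mynetwork.com ceph02.mynetwork.com ceph03.mynetwork.com
        pool <pool>
        content images
        username admin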
I am trying to map and mount an RBD image (from a Ceph cluster) inside an LXC container, without success:
/usr/bin/rbd map --pool rbd test --id test --keyring /etc/ceph/ceph.client.test.keyring
rbd: sysfs write failed
rbd: map failed: (30) Read-only file system
The Ceph configuration file seems good...
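A common workaround, since a container usually can't write to sysfs to map RBD devices itself, is to map and mount on the host and bind-mount the result into the container; a sketch, where CT ID, mount paths and names are placeholders:
rbd map rbd/test --id test --keyring /etc/ceph/ceph.client.test.keyring   # on the PVE host
mount /dev/rbd0 /mnt/rbd-test
pct set <ctid> -mp0 /mnt/rbd-test,mp=/mnt/test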