Copy the keyring into /etc/pve/priv/ceph and rename it to match the storage ID (the name after "rbd:" in storage.cfg).
Ex.:
storage.cfg:

rbd: ceph-ssd
        monhost x.x.x.x:6789;y.y.y.y:6789;z.z.z.z:6789
        content rootdir,images
        username admin
        pool ssd

rbd: ceph-sata
        monhost...
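The copy-and-rename step above can be sketched as follows. The source path and the storage ID "ceph-ssd" come from the example; the commands run against a scratch directory so the sketch is safe to try anywhere (drop the $ROOT prefix on a real node):

```shell
# On a real Proxmox node the two paths would be:
#   source: /etc/ceph/ceph.client.admin.keyring
#   target: /etc/pve/priv/ceph/<storage-id>.keyring
ROOT=$(mktemp -d)                     # scratch root standing in for /
mkdir -p "$ROOT/etc/ceph" "$ROOT/etc/pve/priv/ceph"
printf '[client.admin]\n' > "$ROOT/etc/ceph/ceph.client.admin.keyring"  # dummy keyring
# copy and rename so the file name matches the storage ID "ceph-ssd"
cp "$ROOT/etc/ceph/ceph.client.admin.keyring" \
   "$ROOT/etc/pve/priv/ceph/ceph-ssd.keyring"
ls "$ROOT/etc/pve/priv/ceph"
```

Note the file is named after the storage ID ("ceph-ssd"), not the Ceph pool ("ssd").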
Got it... stupid error on my side. I did a cut-and-paste in the qemu VM conf file to try to save 5 seconds... and actually lost about two days.
The virtio1 line pointed at base-105-disk-1/vm-107-disk-2 instead of just vm-107-disk-2...
Back to the drawing board... it just doesn't work.
[root@dev-pcmk-1 ~]# parted /dev/vdd -a optimal
GNU Parted 3.1
Using /dev/vdd
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mkpart primary 0 100%
Warning: The resulting partition is not properly aligned for best...
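A likely cause of the warning (an assumption, since the output is truncated): `mkpart primary 0 100%` pins the start of the partition to byte 0, which can never land on a 1 MiB boundary; percentage or MiB units let parted round the start to an aligned sector. `/dev/vdd` is the device from the session above:

```shell
# Aligned variants of the same command (run against the real disk with care):
#   parted -a optimal /dev/vdd mkpart primary 0% 100%
#   parted -a optimal /dev/vdd mkpart primary 1MiB 100%
# The alignment test, sketched in shell arithmetic
# (512-byte sectors, 1 MiB boundary assumed):
start_sector=2048                                  # what 0% / 1MiB resolves to
echo $(( (start_sector * 512) % (1024 * 1024) ))   # 0 means aligned
```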
Thank you for your reply.
Am I doing it wrong?
root@r7101:~# ceph --version
ceph version 0.94.9 (fe6d859066244b97b24f09d46552afc2071e6f90)
root@r7101:~# rbd feature disable vm-107-disk-3 exclusive-lock, object-map, fast-diff
rbd: error parsing command 'feature'; -h or --help for usage
I guess...
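A guess at the cause rather than a confirmed fix: 0.94.x is Hammer, and the `rbd feature disable` subcommand (along with dynamic management of exclusive-lock/object-map/fast-diff) appeared in later releases, which is why Hammer's rbd fails to parse `feature` at all. On versions that do have it, the features are space-separated with no commas (`rbd feature disable vm-107-disk-3 exclusive-lock object-map fast-diff`). A trivial sketch of the version gate, with the Jewel cutoff being an assumption:

```python
# Hypothetical helper: assume `rbd feature disable` is available from
# Jewel (10.x) onward and absent in Hammer (0.94.x).
def supports_feature_disable(version: str) -> bool:
    major = int(version.split(".")[0])
    return major >= 10

print(supports_feature_disable("0.94.9"))   # the Hammer build from the session above
print(supports_feature_disable("10.2.11"))
```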
Hello, I'm trying to set up a CentOS Pacemaker cluster. I want to mount a GFS2 filesystem on a volume shared between two VMs. I created the volume on proxmox1 for vm1, then modified vm2's conf file on proxmox2. Both VMs start without issue, and both then see the same disk. The problem is that...
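For reference, the shared-disk line added to the second VM's /etc/pve/qemu-server/<vmid>.conf might look like the following; the storage name and volume are hypothetical stand-ins, and `cache=none` is the usual choice here so that both guests see each other's writes:

```
virtio1: ceph-ssd:vm-101-disk-1,cache=none
```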