Can't create LXC container on Ceph (krbd) storage

acidrop

Renowned Member
Jul 17, 2012
Hello,

I tried to create an LXC container after applying the latest PVE (no-subscription) updates, but the 'Create CT' wizard no longer seems to offer me the option to select the Ceph RBD storage.

I have a single Ceph pool which I expose as two separate storages on PVE (one with krbd enabled for LXC and one with krbd disabled for KVM).
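
For reference, I originally added the two storages roughly like this (from memory, so the exact options may differ slightly from what I actually typed):

Code:
pvesm add rbd cephstor1 --pool rbdpool1 --monhost "192.168.149.115;192.168.149.95;192.168.148.65" --content images --username admin
pvesm add rbd cephstorlxc2 --pool rbdpool1 --monhost "192.168.149.115;192.168.149.95;192.168.148.65" --content rootdir --krbd 1 --username admin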

This configuration was working fine before the updates. The strange thing is that the LXC containers I had created before the updates still work fine, which leads me to believe this may be a recent change in pve-manager?

In the wizard I can select between the local, NFS and GlusterFS storages, but not Ceph.

Here is some info:

Code:
pveversion -v
proxmox-ve: 4.3-75 (running kernel: 4.4.35-1-pve)
pve-manager: 4.3-14 (running version: 4.3-14/3a8c61c7)
pve-kernel-4.4.35-1-pve: 4.4.35-75
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.24-1-pve: 4.4.24-72
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-101
pve-firmware: 1.1-10
libpve-common-perl: 4.0-83
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-70
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.3-19
pve-qemu-kvm: 2.7.0-9
pve-container: 1.0-87
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.6-2
lxcfs: 2.0.5-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
openvswitch-switch: 2.6.0-2
ceph: 0.94.9-1~bpo80+1

Code:
cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

zfspool: local-zfs
    pool rpool/data
    content images,rootdir
    sparse 1

rbd: cephstor1
    monhost 192.168.149.115;192.168.149.95;192.168.148.65
    content images
    pool rbdpool1
    username admin

rbd: cephstorlxc2
    monhost 192.168.149.115;192.168.149.95;192.168.148.65
    content rootdir
    krbd 1
    pool rbdpool1
    username admin

nfs: nfs2
    server nfs2
    export /nfs2
    path /mnt/pve/nfs2
    options vers=3
    content vztmpl,iso,rootdir,images,backup
    maxfiles 1

glusterfs: glustervol1
    volume vol1
    path /mnt/pve/glustervol1
    content images,vztmpl
    server 127.0.0.1
    server2 192.168.149.115
    maxfiles 1

zfs: zfsoveriscsi1
    blocksize 4k
    target iqn.2010-09.org.napp-it:1476178738
    portal 10.1.2.8
    pool zpool1
    iscsiprovider comstar
    sparse 1
    content images
 
Are you sure you are on the right page of the wizard? The template page would not let you select the Ceph storage, but would let you select the GlusterFS storage (as per your storage.cfg -> vztmpl), whereas for the disks it is the other way around: we do not allow GlusterFS there.

Maybe you can also post a screenshot?
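
If it helps, you can also check from the CLI which storages accept container root disks (content type 'rootdir'); note that the --content filter may not be available on every pvesm version:

Code:
# list only the storages that can hold container root disks
pvesm status --content rootdir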
 
You are absolutely right! Sorry about this, I don't create CTs very often and I was a bit confused...
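
For anyone else hitting this: the krbd storage does show up on the disk page once you get past the template page. Creating the CT from the CLI also worked for me, with something like the following (the template file name is just an example, use whatever is in your vztmpl storage):

Code:
pct create 200 local:vztmpl/debian-8.0-standard_8.4-1_amd64.tar.gz --hostname testct --storage cephstorlxc2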
 
