Container settings on Ceph: no option to enable quota

Lucian Lazar

Member
Apr 23, 2018
Hi there,
Today we removed the default pool: we had replaced some disks and there was no data on it anyway, so we decided to start from scratch (we did not run pveceph purge; we simply removed the pool, removed all OSDs, reformatted them as new, and re-added them to the Ceph cluster).
After removing the existing pool, creating the new one with the same settings went without error, and the "create storage" option was also flagged.
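For reference, the CLI equivalent of these steps would be roughly the following (only a sketch; it assumes the pool is named CEPH_SATA, PVE 5.x / Luminous tooling, and example device paths and OSD IDs):

# remove the old pool and its OSDs (repeat destroyosd/zap per OSD, on the node that owns it)
pveceph destroypool CEPH_SATA
pveceph destroyosd 6
ceph-volume lvm zap /dev/sdX

# re-create the OSDs and the pool, letting PVE add the storage entry
pveceph createosd /dev/sdX
pveceph createpool CEPH_SATA --size 3 --min_size 2 --pg_num 512 --add_storages 1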
However, when I now attempt to create a new container, the option to enable quota is disabled, as seen in the screenshots. I also remember that previously, creating a new Ceph pool created two storage entries, one for VMs and one for CTs; now there is only one.
Did I do something wrong?
Thank you all in advance for your help.


proxmox-ve: 5.3-1 (running kernel: 4.15.18-8-pve)
pve-manager: 5.3-11 (running version: 5.3-11/d4907f84)
pve-kernel-4.15: 5.3-2
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph: 12.2.11-pve1
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-47
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-12
libpve-storage-perl: 5.0-39
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-23
pve-cluster: 5.0-33
pve-container: 2.0-35
pve-docs: 5.3-3
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-18
pve-firmware: 2.0-6
pve-ha-manager: 2.0-8
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-2
pve-xtermjs: 3.10.1-2
qemu-server: 5.0-47
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1





pveceph pool ls
Name size min_size pg_num %-used used
CEPH_SATA 3 2 512 0.00 0



ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 10.34933 root default
-3 2.27338 host cloud01
6 hdd 0.45470 osd.6 up 1.00000 1.00000
7 hdd 0.45470 osd.7 up 1.00000 1.00000
8 hdd 0.45470 osd.8 up 1.00000 1.00000
9 hdd 0.45470 osd.9 up 1.00000 1.00000
10 ssd 0.22729 osd.10 up 1.00000 1.00000
11 ssd 0.22729 osd.11 up 1.00000 1.00000
-5 2.69199 host cloud02
0 hdd 0.45470 osd.0 up 1.00000 1.00000
1 hdd 0.45470 osd.1 up 1.00000 1.00000
2 hdd 0.45470 osd.2 up 1.00000 1.00000
3 hdd 0.45470 osd.3 up 1.00000 1.00000
4 ssd 0.43660 osd.4 up 1.00000 1.00000
5 ssd 0.43660 osd.5 up 1.00000 1.00000
-7 2.69199 host cloud03
12 hdd 0.45470 osd.12 up 1.00000 1.00000
13 hdd 0.45470 osd.13 up 1.00000 1.00000
14 hdd 0.45470 osd.14 up 1.00000 1.00000
15 hdd 0.45470 osd.15 up 1.00000 1.00000
16 ssd 0.43660 osd.16 up 1.00000 1.00000
17 ssd 0.43660 osd.17 up 1.00000 1.00000
-9 2.69199 host cloud04
18 hdd 0.45470 osd.18 up 1.00000 1.00000
19 hdd 0.45470 osd.19 up 1.00000 1.00000
20 hdd 0.45470 osd.20 up 1.00000 1.00000
21 hdd 0.45470 osd.21 up 1.00000 1.00000
22 ssd 0.43660 osd.22 up 1.00000 1.00000
23 ssd 0.43660 osd.23 up 1.00000 1.00000






Ceph CRUSH map:


# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class ssd
device 5 osd.5 class ssd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class hdd
device 9 osd.9 class hdd
device 10 osd.10 class ssd
device 11 osd.11 class ssd
device 12 osd.12 class hdd
device 13 osd.13 class hdd
device 14 osd.14 class hdd
device 15 osd.15 class hdd
device 16 osd.16 class ssd
device 17 osd.17 class ssd
device 18 osd.18 class hdd
device 19 osd.19 class hdd
device 20 osd.20 class hdd
device 21 osd.21 class hdd
device 22 osd.22 class ssd
device 23 osd.23 class ssd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host cloud01 {
id -3 # do not change unnecessarily
id -2 class hdd # do not change unnecessarily
id -11 class ssd # do not change unnecessarily
# weight 2.273
alg straw2
hash 0 # rjenkins1
item osd.6 weight 0.455
item osd.7 weight 0.455
item osd.8 weight 0.455
item osd.9 weight 0.455
item osd.10 weight 0.227
item osd.11 weight 0.227
}
host cloud02 {
id -5 # do not change unnecessarily
id -4 class hdd # do not change unnecessarily
id -12 class ssd # do not change unnecessarily
# weight 2.692
alg straw2
hash 0 # rjenkins1
item osd.0 weight 0.455
item osd.1 weight 0.455
item osd.2 weight 0.455
item osd.3 weight 0.455
item osd.4 weight 0.437
item osd.5 weight 0.437
}
host cloud03 {
id -7 # do not change unnecessarily
id -6 class hdd # do not change unnecessarily
id -13 class ssd # do not change unnecessarily
# weight 2.692
alg straw2
hash 0 # rjenkins1
item osd.12 weight 0.455
item osd.13 weight 0.455
item osd.14 weight 0.455
item osd.15 weight 0.455
item osd.16 weight 0.437
item osd.17 weight 0.437
}
host cloud04 {
id -9 # do not change unnecessarily
id -8 class hdd # do not change unnecessarily
id -14 class ssd # do not change unnecessarily
# weight 2.692
alg straw2
hash 0 # rjenkins1
item osd.18 weight 0.455
item osd.19 weight 0.455
item osd.20 weight 0.455
item osd.21 weight 0.455
item osd.22 weight 0.437
item osd.23 weight 0.437
}
root default {
id -1 # do not change unnecessarily
id -10 class hdd # do not change unnecessarily
id -15 class ssd # do not change unnecessarily
# weight 10.349
alg straw2
hash 0 # rjenkins1
item cloud01 weight 2.273
item cloud02 weight 2.692
item cloud03 weight 2.692
item cloud04 weight 2.692
}

# rules
rule replicated_rule {
id 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}

# end crush map

Attachments

  • Screenshot 2019-03-14 at 11.06.23.png (165.6 KB)
  • Screenshot 2019-03-14 at 11.06.06.png (196.9 KB)
You need a privileged container for quota support (the default changed recently to unprivileged). As for the two storage entries: these have since been merged, so only one entry is needed now.
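A minimal CLI sketch of what that means (CT ID, hostname, template path and storage name below are only examples; it assumes the RBD storage is called CEPH_SATA):

# privileged container (--unprivileged 0) with user quotas enabled on the root disk
pct create 200 local:vztmpl/<your-template>.tar.gz \
    --hostname quota-test \
    --unprivileged 0 \
    --rootfs CEPH_SATA:8,quota=1

In the GUI this corresponds to unticking "Unprivileged container" in the create wizard, after which the quota checkbox should become selectable.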
 
You need a privileged container for quota support (the default changed recently to unprivileged). As for the two storage entries: these have since been merged, so only one entry is needed now.
Thanks a lot, so the KRBD flag is no longer required (although it is still selectable in storage)?
Thanks
 
The KRBD option is used for VMs; CTs need to communicate with Ceph through the kernel in any case.
 
The KRBD option is used for VMs; CTs need to communicate with Ceph through the kernel in any case.
Thank you, but I don't understand: in the documentation the KRBD option is only mentioned for CTs, not VMs. Also, with the option disabled I can still create both CTs and VMs without problems. Can I safely leave it disabled then? Are there any security/performance implications?
Thank you again
 
The docs need an update. ;) You can leave KRBD off; this is the default.

Can I safely leave it disabled then? Are there any security/performance implications?
For VMs it is a choice between going through the kernel (KRBD) or through librbd. Either way has its pros & cons.
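For illustration, the relevant storage entry in /etc/pve/storage.cfg might look like this (a sketch using the names from this thread; a pveceph-managed setup normally needs no monhost line):

rbd: CEPH_SATA
        content images,rootdir
        pool CEPH_SATA
        krbd 0

With krbd 0, VM disks go through librbd while containers always use the kernel RBD client; setting krbd 1 forces VM disks through the kernel module as well.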
 
