Hi,
I am trying to understand the usable space shown in Proxmox under Ceph storage. I tried searching but could not find a direct answer. I would appreciate it if someone more experienced here could guide me on how to calculate the usable space; my own attempt at the math is below the storage summary.
I referred to https://forum.proxmox.com/threads/newbie-need-your-input.24176/page-2, but my setup seems different from the example given by Q-wulf.
Current setup:
4 nodes, each with 4 x 1TB OSDs, 1 x 120GB SSD for the journal, and 1 x 500GB HDD for the OS
Pool:
Size: 3
Min: 1
pg_num: 1024
In ceph storage summary:
Type: RBD
Size: 14.55TB
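Here is my own attempt at the math, so you can see where I am confused. This is just a rough sketch assuming the GUI reports raw capacity in binary units (TiB) and that usable space is simply raw capacity divided by the pool size:

```python
# My attempt at the calculation (assumptions in comments; please correct me):
raw_tb = 4 * 4 * 1.0            # 4 nodes x 4 OSDs x 1 TB = 16 TB raw
raw_bytes = raw_tb * 10**12     # drive vendors count in decimal TB
raw_tib = raw_bytes / 2**40     # Ceph/Proxmox report in binary units
print(round(raw_tib, 2))        # 14.55 -> matches the "14.55TB" in the GUI?

size = 3                        # pool replica count
usable_tib = raw_tib / size
print(round(usable_tib, 2))     # 4.85 -> is this the space I can actually use?
```

If that is right, the 14.55TB shown is the raw cluster capacity, and with Size: 3 only about a third of it is actually writable. Can someone confirm?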
Ceph Configuration:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.50.51.0/24
filestore xattr use omap = true
fsid = bf5d56ae-xxx-4db1-xxx-b11ddxxxcbd6a
keyring = /etc/pve/priv/$cluster.$name.keyring
osd journal size = 5120
osd pool default min size = 1
public network = 10.50.51.0/24
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
osd max backfills = 1
osd recovery max active = 1
filestore flusher = false
[mon.1]
host = node2
mon addr = 10.50.51.16:6789
[mon.0]
host = node1
mon addr = 10.50.51.15:6789
[mon.2]
host = node3
mon addr = 10.50.51.17:6789
Crush map rules:
# rules
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
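If I read the rule correctly, `step chooseleaf firstn 0 type host` means each of the pool's 3 replicas must land on an OSD in a different host, which works here because there are 4 hosts. A small sketch of that reasoning (host names are placeholders for my nodes):

```python
# Placement sketch under my assumption that "type host" forces the
# 3 copies of every object onto OSDs in 3 distinct hosts.
hosts = {f"node{i}": 4 * 1.0 for i in range(1, 5)}  # 4 x 1TB OSDs per host
raw_tb = sum(hosts.values())                        # 16 TB across the cluster
replicas = 3
assert replicas <= len(hosts)   # otherwise the rule could not place all copies
print(raw_tb / replicas)        # ~5.33 decimal TB, i.e. ~4.85 TiB usable?
```

Does this mean that as long as the replica count does not exceed the number of hosts, the usable space is just raw / size?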