Hello everyone,
So, I've decided to give a Proxmox cluster a go and got some nice little NUC-like devices to run Proxmox on.
Cluster is as follows:
- Cluster name: Magi
- Host 1: Gaspar
  - vmbr0 IP is 10.0.2.10 and runs on the eno1 network device
  - vmbr1 IP is 10.0.3.11 and runs on the enp1s0 network device
- Host 2: Melchior
  - vmbr0 IP is 10.0.2.11 and runs on the eno1 network device
  - vmbr1 IP is 10.0.3.12 and runs on the enp1s0 network device
- Host 3: Balthasar
  - vmbr0 IP is 10.0.2.12 and runs on the eno1 network device
  - vmbr1 IP is 10.0.3.13 and runs on the enp1s0 network device
- VLAN 20: 10.0.2.0/25
- VLAN 30: 10.0.3.0/26
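For completeness, gaspar's /etc/network/interfaces looks roughly like this (reconstructed from the layout above rather than copy-pasted, so treat it as a sketch; the gateway address in particular is from memory):

auto lo
iface lo inet loopback

iface eno1 inet manual

iface enp1s0 inet manual

# VLAN 20 network (10.0.2.0/25)
auto vmbr0
iface vmbr0 inet static
        address 10.0.2.10/25
        gateway 10.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# VLAN 30 network (10.0.3.0/26), used as the Ceph public/cluster network
auto vmbr1
iface vmbr1 inet static
        address 10.0.3.11/26
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0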
All devices have a 2TB M.2 SSD drive partitioned as follows:
Device             Start        End    Sectors  Size Type
/dev/nvme0n1p1        34       2047       2014 1007K BIOS boot
/dev/nvme0n1p2      2048    2099199    2097152    1G EFI System
/dev/nvme0n1p3   2099200  838860800  836761601  399G Linux LVM
/dev/nvme0n1p4 838862848 4000796671 3161933824  1.5T Linux LVM
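The 4.4 TiB of raw capacity in the Ceph status below works out to roughly 3 x the 1.5T partition, so the OSDs sit on nvme0n1p4. If the exact OSD layout matters, ceph-volume shows the mapping; the creation command here is only a sketch from memory, not a verbatim copy of what was run:

# show which device/partition backs the OSD on each node
ceph-volume lvm list

# creating an OSD directly on a partition is normally along these lines
# (sketch only):
# ceph-volume lvm create --data /dev/nvme0n1p4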
Ceph status is as follows:
  cluster:
    id:     4429e2ae-2cf7-42fd-9a93-715a056ac295
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum gaspar,balthasar,melchior (age 81m)
    mgr: gaspar(active, since 83m)
    osd: 3 osds: 3 up (since 79m), 3 in (since 79m)

  data:
    pools:   2 pools, 33 pgs
    objects: 7 objects, 641 KiB
    usage:   116 MiB used, 4.4 TiB / 4.4 TiB avail
    pgs:     33 active+clean
pveceph pool ls shows the following pools available:
┌──────┬──────┬──────────┬────────┬─────────────┬────────────────┬───────────────────┬──────────────────────────┐
│ Name │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │
╞══════╪══════╪══════════╪════════╪═════════════╪════════════════╪═══════════════════╪══════════════════════════╡
│ .mgr │    3 │        2 │      1 │           1 │              1 │ on                │                          │
├──────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┤
│ rbd  │    3 │        2 │     32 │             │             32 │ on                │                          │
└──────┴──────┴──────────┴────────┴─────────────┴────────────────┴───────────────────┴──────────────────────────┘
(the remaining columns were cut off by the terminal width)
ceph osd pool application get rbd shows the following:
{
    "rados": {}
}
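One thing I did notice is that the application tag on the pool is "rados" rather than "rbd". I'm assuming that, if this matters, it could be re-tagged with something like the commands below, but I haven't run them yet and would rather check first:

# tag the pool named "rbd" with the rbd application
# (may need --yes-i-really-mean-it since an application is already enabled)
ceph osd pool application enable rbd rbd

# or the rbd-level equivalent, which also tags the application:
# rbd pool init rbd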
rbd ls -l rbd shows:
NAME     SIZE   PARENT  FMT  PROT  LOCK
myimage  1 TiB          2
This is what's contained in the ceph.conf file:
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.0.3.11/26
fsid = 4429e2ae-2cf7-42fd-9a93-715a056ac295
mon_allow_pool_delete = true
mon_host = 10.0.3.11 10.0.3.13 10.0.3.12
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 10.0.3.0/26
cluster_network = 10.0.3.0/26
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[client.crash]
keyring = /etc/pve/ceph/$cluster.$name.keyring
[mon.balthasar]
public_addr = 10.0.3.13
[mon.gaspar]
public_addr = 10.0.3.11
[mon.melchior]
public_addr = 10.0.3.12
All of this seems to show that I should have a pool named rbd with a 1 TiB image in it. Yet when I try to add storage via Datacenter > Storage > Add > RBD, the pool doesn't appear in the drop-down menu, and I can't type "rbd" into the Pool field either.
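For reference, this is roughly what I was expecting to end up with, whether through the GUI or pvesm ("ceph-rbd" is just a placeholder storage ID):

# CLI equivalent of Datacenter > Storage > Add > RBD
pvesm add rbd ceph-rbd --pool rbd --content images,rootdir

# which should produce an entry like this in /etc/pve/storage.cfg:
# rbd: ceph-rbd
#         pool rbd
#         content images,rootdir
#         krbd 0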
Any ideas what I could do to salvage this situation?