Ceph mount a PG/pool for "Images & ISOs"

liszca

I wanted to mount a Ceph PG for images and ISOs, just so that all images and ISOs are the same on every node.
I named the PG "proxmox".

To check and place the mount somewhere, I edited the file /etc/pve/storage.cfg:

Code:
cephfs: proxmox
        path /mnt/pve/proxmox
        content iso,images
        fs-name proxmox
        monhost 192.168.0.10  192.168.0.11  192.168.0.12  192.168.0.13

The result is that it doesn't work on all the nodes.
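(To narrow down where it fails, the storage status and the actual mount can be checked on each node, roughly like this; a sketch only, using the storage name and mount path from the config above:)

Code:
# run on every node
pvesm status --storage proxmox     # does Proxmox VE consider the storage active?
findmnt /mnt/pve/proxmox           # is the CephFS kernel mount actually present?
dmesg | grep -i ceph | tail        # any mount errors from the kernel client?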
 
Did you create an MDS (metadata server), ideally at least two, and a Ceph FS?

Please provide the output of the following commands within [CODE][/CODE] tags for better readability.

Code:
ceph -s
pveceph pool ls --noborder


Also, to get the terminology right: the hierarchy is, from the top down: Pool -> Placement Group (PG) -> Objects. The PGs exist to make the accounting for all the objects easier, so that it can be done at the PG level and not for each individual object.
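To make that hierarchy concrete, it can be walked top-down on the CLI, for example (a sketch; the pool name proxmox is just the one from this thread):

Code:
ceph osd pool ls                    # pools (top level)
ceph pg ls-by-pool proxmox | head   # placement groups belonging to one pool
rados -p proxmox ls | head          # objects stored in that pool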

There should be two pools present for a Ceph FS, if created through the Proxmox VE tooling (GUI or pveceph fs create): one {fsname}_metadata and one {fsname}_data.
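To check which pools actually back an existing Ceph FS, ceph fs ls can be used; the output below is only illustrative, with a hypothetical file system name:

Code:
# ceph fs ls
name: myfs, metadata pool: myfs_metadata, data pools: [myfs_data ]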
 
Did you create an MDS (metadata server), ideally at least two, and a Ceph FS?
Yes I did, but I am not sure if I did it right:
[screenshot of the MDS list]
Note "(ceph1)" happend while trying to do something different: " ceph fs volume create test1" Later I renamed it. I is that needed? before both where in standby.

Code:
  cluster:
    id:     ddfe12d5-782f-4028-b499-71f3e6763d8a
    health: HEALTH_OK

  services:
    mon: 4 daemons, quorum aegaeon,anthe,atlas,calypso (age 2h)
    mgr: calypso(active, since 3h), standbys: aegaeon
    mds: 1/1 daemons up, 1 standby
    osd: 4 osds: 4 up (since 2h), 4 in (since 2h)

  data:
    volumes: 1/1 healthy
    pools:   5 pools, 113 pgs
    objects: 39.99k objects, 152 GiB
    usage:   455 GiB used, 3.3 TiB / 3.7 TiB avail
    pgs:     113 active+clean

  io:
    client:   38 KiB/s wr, 0 op/s rd, 7 op/s wr


pveceph pool ls --noborder
Code:
Name             Size Min Size PG Num min. PG Num Optimal PG Num PG Autoscale Mode PG Autoscale Target Size PG Autoscale
.mgr                3        2      1           1              1 on
ceph                3        2     32                         32 on
cephfs.ceph.data    3        2     32                         32 on
cephfs.ceph.meta    3        2     16          16             16 on
proxmox             3        2     32                         32 on

There should be two pools present for a Ceph FS, if created through the Proxmox VE tooling (GUI or pveceph fs create): one {fsname}_metadata and one {fsname}_data.
I am confused by the volume "ceph1" created earlier. What's the deal with a volume?

Code:
 # ceph fs volume ls
[
    {
        "name": "ceph1"
    }
]
 
Looks like some things got a bit messy while playing around. I recommend that you remove the Ceph file system "ceph1" and its associated pools, then also the pool "proxmox" and the storage config for it, and set up the Ceph FS through either the Proxmox VE GUI or with the pveceph fs create command.

This way, you will also get the correct storage config on Proxmox VE.
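A rough cleanup sequence could look like this (a sketch only; the names are the ones from this thread, and removing pools destroys the data in them, so double-check first):

Code:
# remove the file system created via "ceph fs volume create", incl. its pools
# (may require mon_allow_pool_delete to be enabled)
ceph fs volume rm ceph1 --yes-i-really-mean-it

# remove the manually created pool and its storage definition
pveceph pool destroy proxmox
pvesm remove proxmox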
 
Does it make sense to have multiple MDS on each node?
[screenshot of the MDS list]

How about the managers, does it make sense to have multiple in standby?
[screenshot of the manager list]
 
My conclusion is that it makes sense to have multiple MDS and managers on standby in case one dies because its node is down.
 
Having at least one more MGR or MDS on another node is a good idea, as you said, so they can take over if the node the currently active one runs on fails.

The MONs work similarly to the Proxmox VE nodes themselves, by forming a quorum (majority), while MGR and MDS work in active/standby mode.

If you have multiple CephFS instances, you might even want to run multiple MDS per node.
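That difference is also visible on the CLI (a quick sketch):

Code:
ceph mon stat     # MONs currently in quorum
ceph mgr stat     # active MGR and whether a standby is available
ceph fs status    # active and standby MDS per file system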
 
If you run CephFS on the local, hyperconverged Ceph cluster, you don't need to do it manually like this. Remove all remnants of the former attempts and then use the GUI to create a new CephFS, making sure that the checkbox "Add Storage" is enabled. It will handle it all for you and you will see the created CephFS mounted under /mnt/pve/{cephfs name}.
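On the CLI, that workflow corresponds roughly to the following (a sketch; the name isostore is just an example, and at least one MDS must already exist):

Code:
pveceph fs create --name isostore --add-storage
# creates the isostore_data / isostore_metadata pools, the Ceph FS itself,
# and the matching storage entry in /etc/pve/storage.cfg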
 
If you run CephFS on the local, hyperconverged Ceph cluster, you don't need to do it manually like this. Remove all remnants of the former attempts and then use the GUI to create a new CephFS, making sure that the checkbox "Add Storage" is enabled.
I managed to get it to work, but not with the name "proxmox"; it was complaining that it is already used. I was unable to figure out where cleanup is needed.
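(For reference, the places where an old name can typically still be claimed are the pools, the file systems and the storage config; a sketch of what could be checked:)

Code:
ceph osd pool ls | grep proxmox            # leftover pools
ceph fs ls                                 # leftover file systems
grep -A5 'proxmox' /etc/pve/storage.cfg    # leftover storage definition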

By the way, many thanks!
 
