Trying to set up Ceph RADOSGW for S3 access

ErkDog

I followed the guide here with appropriate changes for my environment:
https://base64.co.za/enable-amazon-s3-interface-for-ceph-inside-proxmox/

For example, I don't have node1, node2, and node3; I have four nodes named pveclua, b, c, and d, so I replaced every instance of node# accordingly.

However, when I ran the commands to set up the permissions, it complained that a bunch of pools didn't exist, so I created them manually and re-ran the permissions commands with --yes-i-am-sure or whatever the flag was.
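
For reference, this is roughly what I ran for each missing pool (reconstructed from memory, so the pg count and exact names may be off):
Code:
ceph osd pool create default.rgw.buckets.data 32
ceph osd pool application enable default.rgw.buckets.data rgw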

I get this error on this command:
Code:
root@pveclua:~# radosgw-admin pools list
could not list placement set: (2) No such file or directory
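
In case it's relevant, my plan was to look at the zone's placement configuration next, since I'm guessing that's where the "placement set" it can't find is supposed to live (commands taken from the radosgw-admin man page, zone/zonegroup names are my assumption):
Code:
radosgw-admin zone get --rgw-zone=default
radosgw-admin zonegroup get --rgw-zonegroup=default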

Listing users works, but I can't remove them so that I can fix things:
Code:
root@pveclua:~# radosgw-admin user list
[
    "ecansol",
    "ecan2",
    "ecan"
]
root@pveclua:~# radosgw-admin user rm -uid-"ecan"
ERROR: invalid flag -uid-ecan

These are the pools I have:
Code:
root@pveclua:~# ceph osd pool ls
.mgr
mainceph_data
mainceph_metadata
RDBceph
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
default.rgw.data.root
default.rgw.gc
default.rgw.users.uid
default.rgw.users.email
default.rgw.users.keys
default.rgw.buckets.index
default.rgw.buckets.data
default.rgw.lc

How can I delete those 3 users?
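
I'm guessing I just mangled the flag above and it's supposed to be --uid, something like this, but I wanted to check before running anything destructive:
Code:
radosgw-admin user rm --uid=ecan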

I also don't know what DNS record I'm supposed to set in order for rgw_dns_name to work properly.
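
My best guess, based on the guide, is that rgw_dns_name just has to match whatever hostname S3 clients hit, plus a wildcard record so bucket-style URLs (bucket.s3.whatever) resolve too. The hostname, section name, and IP below are placeholders:
Code:
# /etc/ceph/ceph.conf -- section name guessed to match the guide's [client.radosgw.<node>] style
[client.radosgw.pveclua]
    rgw_dns_name = s3.mydomain.example

# DNS zone entries (both pointing at the gateway node / VIP)
s3.mydomain.example.    A   192.0.2.10
*.s3.mydomain.example.  A   192.0.2.10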

I also don't know how I'd actually access the storage over S3, assuming I had all of those other things working.

Do I create a bucket with a CLI tool and then access that to store files?
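
Something like this is what I have in mind (using s3cmd; the bucket name, file, and keys are made up):
Code:
# point s3cmd at the gateway with the access/secret key from 'radosgw-admin user create'
s3cmd --configure
# create a bucket and push a file into it
s3cmd mb s3://test-bucket
s3cmd put backup.tar.gz s3://test-bucket/
s3cmd ls s3://test-bucket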

Quota: is that set on a user? So if I gave someone access, would the quota control their usage across whatever buckets they create?

Is the quota counted against raw usage on the cluster (so a 10TB quota would only be ~3.33TB usable, since everything is stored as 3 copies), or against the data they store (so someone with a 10TB quota could end up using up to 30TB of raw space)?
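
For the record, these are the commands I was planning to use for the quota, mostly guessed from the radosgw-admin help output (the uid and size are placeholders, and I'm not sure whether --max-size takes a suffix like 10T or wants plain bytes):
Code:
radosgw-admin quota set --quota-scope=user --uid=ecan --max-size=10T
radosgw-admin quota enable --quota-scope=user --uid=ecan
radosgw-admin user info --uid=ecan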

Thanks,
Matt
 
I'm thinking the fact that I have the rbd application enabled on some of these pools may also be part of the problem, but I don't know how to remove it:

Code:
pool 9 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 6323 lfor 0/0/5569 flags hashpspool stripe_width 0 application rgw
pool 10 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 6320 lfor 0/0/5569 flags hashpspool stripe_width 0 application rgw
pool 11 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 5572 lfor 0/0/5570 flags hashpspool stripe_width 0 pg_autoscale_bias 4 application rgw
pool 13 'default.rgw.data.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 6663 lfor 0/6663/6661 flags hashpspool stripe_width 0 application rbd,rgw
pool 14 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 6724 lfor 0/6724/6722 flags hashpspool stripe_width 0 application rbd,rgw
pool 15 'default.rgw.users.uid' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 6805 lfor 0/6805/6803 flags hashpspool stripe_width 0 application rbd,rgw
pool 16 'default.rgw.users.email' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 6807 lfor 0/6807/6805 flags hashpspool stripe_width 0 application rbd,rgw
pool 17 'default.rgw.users.keys' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 6820 lfor 0/6820/6818 flags hashpspool stripe_width 0 application rbd,rgw
pool 18 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 6861 lfor 0/6861/6859 flags hashpspool stripe_width 0 application rbd,rgw
pool 19 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 6870 lfor 0/6870/6868 flags hashpspool stripe_width 0 application rbd,rgw
pool 20 'default.rgw.lc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 6853 lfor 0/6853/6851 flags hashpspool stripe_width 0 application rbd,rgw

I tried:

Code:
root@pveclua:~# ceph osd pool application disable default.rgw.buckets.data rdb --yes-i-really-mean-it
application 'rdb' is not enabled on pool 'default.rgw.buckets.data'
root@pveclua:~#

Which of course is a lie, because it absolutely is enabled :-/.
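
Edit: looking at that again, I typed "rdb" instead of "rbd", so the error is probably fair. I assume the command I actually want is:
Code:
ceph osd pool application disable default.rgw.buckets.data rbd --yes-i-really-mean-it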
 
