Weird behavior with radosGW

hm-trustinfo

Hi,

We used the following tutorials to get radosGW working:

https://devopstales.github.io/home/proxmox-ceph-radosgw/
https://pve.proxmox.com/wiki/User:Grin/Ceph_Object_Gateway

Everything seems to work perfectly: I can navigate, create a bucket, and add objects with the following tool (using V2 signatures):

https://s3browser.com/
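
To rule out the client side, a rough command-line equivalent of what S3 Browser does is a V2-signed ListBuckets via s3cmd (the access keys here are placeholders, not our real ones):

Code:
s3cmd --signature-v2 \
      --host=s3.acces1.trustinfo.fr \
      --host-bucket='%(bucket)s.s3.acces1.trustinfo.fr' \
      --access_key=PLACEHOLDER --secret_key=PLACEHOLDER \
      ls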

But when I try to use HyperBackup for customer backups, we get a “network error”:


[Screenshot: HyperBackup “network error” dialog]


The weird part is that during the initial setup HyperBackup can list all the buckets, which makes me think the configuration is all right.

[Screenshot: HyperBackup setup wizard listing the buckets]
The error window appears when we choose the bucket.
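
Since listing buckets is just a GET on the root, but per-bucket operations can switch to virtual-hosted-style addressing (rgw_dns_name is set, see the config below), one way to narrow it down is to hit both URL styles by hand; mybucket is a placeholder name:

Code:
# Path-style request (same shape as the ListBuckets call that works):
curl -vk https://s3.acces1.trustinfo.fr/mybucket

# Virtual-hosted-style request (what some clients use once a bucket is chosen):
curl -vk https://mybucket.s3.acces1.trustinfo.fr/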

We use the following radosgw configuration:

Code:
[client.radosgw.PROX-190-STOR-1]
host = PROX-190-STOR-1
keyring = /etc/pve/priv/ceph.client.radosgw.keyring
log file = /var/log/ceph/client.radosgw.$host.log
rgw frontends = beast endpoint=192.168.190.26:80
rgw_dns_name = s3.acces1.trustinfo.fr
rgw_curl_tcp_keepalive = 1
rgw_relaxed_s3_bucket_names = true
rgw_trust_forwarded_https = true
debug rgw = 20
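
One thing to note: with rgw_dns_name set, any client that switches to virtual-hosted-style requests needs <bucket>.s3.acces1.trustinfo.fr to resolve, and the certificate has to cover that name. A quick sanity check (test-bucket is a placeholder bucket name):

Code:
# Both should resolve to the HAProxy frontend:
dig +short s3.acces1.trustinfo.fr
dig +short test-bucket.s3.acces1.trustinfo.fr

# The certificate must also match the per-bucket name:
openssl s_client -connect s3.acces1.trustinfo.fr:443 \
        -servername test-bucket.s3.acces1.trustinfo.fr </dev/null 2>/dev/null \
        | openssl x509 -noout -subject -ext subjectAltName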


In front, we use HAProxy for load balancing with a Let's Encrypt certificate.
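
For context, the HAProxy side is shaped roughly like the sketch below (the certificate path and timeouts are placeholders, not our exact values); the X-Forwarded-Proto header it sets is one of the headers rgw_trust_forwarded_https trusts:

Code:
frontend s3_https
    bind :443 ssl crt /etc/haproxy/certs/s3.acces1.trustinfo.fr.pem
    mode http
    http-request set-header X-Forwarded-Proto https
    default_backend rgw

backend rgw
    mode http
    server rgw1 192.168.190.26:80 check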

I enabled logging, and a HyperBackup connection looks like this:


Code:
2024-06-18T09:56:37.865+0200 70d08e2006c0 15 req 3892795138370987458 0.001999907s s3:list_buckets server signature=LDyHpVQYK7kBK6wkgeFa8Vo26wI=
2024-06-18T09:56:37.865+0200 70d08e2006c0 15 req 3892795138370987458 0.001999907s s3:list_buckets client signature=LDyHpVQYK7kBK6wkgeFa8Vo26wI=
2024-06-18T09:56:37.865+0200 70d08e2006c0 15 req 3892795138370987458 0.001999907s s3:list_buckets compare=0
2024-06-18T09:56:37.865+0200 70d08e2006c0 20 req 3892795138370987458 0.001999907s s3:list_buckets rgw::auth::s3::LocalEngine granted access
2024-06-18T09:56:37.865+0200 70d08e2006c0 20 req 3892795138370987458 0.001999907s s3:list_buckets rgw::auth::s3::AWSAuthStrategy granted access
2024-06-18T09:56:37.865+0200 70d08e2006c0  2 req 3892795138370987458 0.001999907s s3:list_buckets normalizing buckets and tenants
2024-06-18T09:56:37.865+0200 70d08e2006c0 10 req 3892795138370987458 0.001999907s s->object=<NULL> s->bucket=
2024-06-18T09:56:37.865+0200 70d08e2006c0  2 req 3892795138370987458 0.001999907s s3:list_buckets init permissions
2024-06-18T09:56:37.865+0200 70d08e2006c0 20 req 3892795138370987458 0.001999907s s3:list_buckets get_system_obj_state: rctx=0x70d2679aecb8 obj=default.rgw.meta:users.uid:005474 state=0x70d23c099500 s->prefetch_data=0
2024-06-18T09:56:37.865+0200 70d08e2006c0 10 req 3892795138370987458 0.001999907s s3:list_buckets cache get: name=default.rgw.meta+users.uid+005474 : hit (requested=0x16, cached=0x17)
2024-06-18T09:56:37.865+0200 70d08e2006c0 20 req 3892795138370987458 0.001999907s s3:list_buckets get_system_obj_state: s->obj_tag was set empty
2024-06-18T09:56:37.865+0200 70d08e2006c0 20 req 3892795138370987458 0.001999907s s3:list_buckets Read xattr: user.rgw.idtag
2024-06-18T09:56:37.865+0200 70d08e2006c0 10 req 3892795138370987458 0.001999907s s3:list_buckets cache get: name=default.rgw.meta+users.uid+005474 : hit (requested=0x13, cached=0x17)
2024-06-18T09:56:37.865+0200 70d08e2006c0  2 req 3892795138370987458 0.001999907s s3:list_buckets recalculating target
2024-06-18T09:56:37.865+0200 70d08e2006c0  2 req 3892795138370987458 0.001999907s s3:list_buckets reading permissions
2024-06-18T09:56:37.865+0200 70d08e2006c0  2 req 3892795138370987458 0.001999907s s3:list_buckets init op
2024-06-18T09:56:37.865+0200 70d08e2006c0  2 req 3892795138370987458 0.001999907s s3:list_buckets verifying op mask
2024-06-18T09:56:37.865+0200 70d08e2006c0 20 req 3892795138370987458 0.001999907s s3:list_buckets required_mask= 1 user.op_mask=7
2024-06-18T09:56:37.865+0200 70d08e2006c0  2 req 3892795138370987458 0.001999907s s3:list_buckets verifying op permissions
2024-06-18T09:56:37.865+0200 70d08e2006c0  2 req 3892795138370987458 0.001999907s s3:list_buckets verifying op params
2024-06-18T09:56:37.865+0200 70d08e2006c0  2 req 3892795138370987458 0.001999907s s3:list_buckets pre-executing
2024-06-18T09:56:37.865+0200 70d08e2006c0  2 req 3892795138370987458 0.001999907s s3:list_buckets check rate limiting
2024-06-18T09:56:37.865+0200 70d08e2006c0  2 req 3892795138370987458 0.001999907s s3:list_buckets executing
2024-06-18T09:56:37.866+0200 70d07ac006c0  2 req 3892795138370987458 0.002999860s s3:list_buckets completing
2024-06-18T09:56:37.866+0200 70d07ac006c0 20 req 3892795138370987458 0.002999860s get_system_obj_state: rctx=0x70d2679af760 obj=default.rgw.log:script.postrequest. state=0x70d254009590 s->prefetch_data=0
2024-06-18T09:56:37.866+0200 70d07ac006c0 10 req 3892795138370987458 0.002999860s cache get: name=default.rgw.log++script.postrequest. : expiry miss
2024-06-18T09:56:37.866+0200 70d077a006c0 10 req 3892795138370987458 0.002999860s cache put: name=default.rgw.log++script.postrequest. info.flags=0x0
2024-06-18T09:56:37.866+0200 70d077a006c0 10 req 3892795138370987458 0.002999860s adding default.rgw.log++script.postrequest. to cache LRU end
2024-06-18T09:56:37.866+0200 70d077a006c0  2 req 3892795138370987458 0.002999860s s3:list_buckets op status=0
2024-06-18T09:56:37.866+0200 70d077a006c0  2 req 3892795138370987458 0.002999860s s3:list_buckets http status=200
2024-06-18T09:56:37.866+0200 70d077a006c0  1 ====== req done req=0x70d2679b0700 op status=0 http_status=200 latency=0.002999860s ======
2024-06-18T09:56:37.866+0200 70d077a006c0  1 beast: 0x70d2679b0700: 192.168.190.239 - 005474 [18/Jun/2024:09:56:37.863 +0200] "GET / HTTP/1.1" 200 333 - "HyperBackup/3.0.2-2531 (RS1219+; DSM 7.1-42962) aws-sdk-php2/2.8.31 Guzzle/3.9.3 curl/7.79.1 PHP/7.3.3" - latency=0.002999860s
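
That request ends with http_status=200, so the failing call is presumably a later one. A quick way to pull anything other than 200s out of the rgw log (path per the config above):

Code:
grep 'req done' /var/log/ceph/client.radosgw.PROX-190-STOR-1.log | grep -v 'http_status=200'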


Going by the rgw options in the Ceph GitHub repository, I tried several modifications, but nothing conclusive:

https://github.com/ceph/ceph/blob/4...2a1d/src/common/options/rgw.yaml.in#L883-L894


Does HyperBackup need extra configuration to work? I can’t find more information on the internet.
I suspect an issue with the keepalive.
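
If it is the keepalive, the connection could just as well be dropped by HAProxy as by rgw; timeout settings along these lines (values are illustrative, not our actual config) would rule out idle-connection kills on the proxy side:

Code:
defaults
    mode http
    timeout connect 10s
    timeout client  5m
    timeout server  5m
    timeout http-keep-alive 30s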

Help is appreciated.
 
