[SOLVED] CEPH storage configuration problem

daruom13
Member
Aug 1, 2020
Hello,

I have configured a Proxmox cluster with 3 dedicated servers. Until now, everything has worked fine.

However, when I migrate a VM, it takes a long time.
So I rented a "cloud disk array" (the servers are at OVH) to get shared storage with Ceph.

I followed the OVH documentation, but it doesn't seem to work: docs OVH
This is what the web interface shows me:
ceph error.PNG

ceph error2.PNG


In the image, the key path does not match the one described in the docs.
The public network contains a private address.

Here is my configuration:
storage.cfg

storage.cfg.PNG

keyring :
keyring.PNG


Does anyone have an idea where the error comes from?

Thank you.
 
You connect Proxmox VE as a client to the Ceph cluster. No ceph.conf is needed, only the storage.cfg. And I suppose the pool and user name are different as well.
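For reference, an external-RBD entry in /etc/pve/storage.cfg looks roughly like the sketch below; the storage ID, monitor IPs, pool, and user name here are placeholders and must match the values OVH provides for the cloud disk array:

```
rbd: my-cda
        content images,rootdir
        krbd 0
        monhost 10.0.0.10 10.0.0.11 10.0.0.12
        pool mypool
        username myuser
```

Proxmox VE then expects the matching keyring at /etc/pve/priv/ceph/&lt;storage-id&gt;.keyring, i.e. the file name follows the storage ID (here, my-cda.keyring).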
 
Does the problem come from the fact that I installed Ceph via the interface? (example on another server):
ceph web interface.PNG

The username and pool in storage.cfg are correct; I entered the information myself.

Should I remove "Ceph-nautilus", or just remove the "ceph.conf"?

Thank you for your feedback.
 
Does the problem come from the fact that I installed Ceph via the interface? (example on another server):
Yes, this tab is for setting up a hyper-converged Ceph storage on the nodes. You don't need it for client mode.

Should I remove "Ceph-nautilus", or just remove the "ceph.conf"?
Best to remove the ceph.conf; it makes things easier. But the storage connection should already work.
 
Thank you for the pointer.
I renamed the ceph.conf to ceph.conf.bak.
Now, through the interface, here is what I have:
new ceph web interface.PNG

Could the problem be with the ceph.client.admin.keyring file?

I am not the one who created this file. As indicated in the OVH doc, I created a "ceph" folder and added the key to a file inside.

config ceph.PNG

Thank you.
 
Now, through the interface, here is what I have:
Forget that. That's for hyper-converged setups. That's not your case.

I am not the one who created this file. I (as indicated in the OVH doc) create a "ceph" folder and add the key to a file inside.
The key and username should be provided by OVH. If they are, then check your storage (GUI side panel) or pvesm status.
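As a sketch, the keyring file for a non-admin Ceph client contains a single section whose header must match the username from storage.cfg; the user name and key below are placeholders for the values OVH supplies:

```
[client.myuser]
        key = <base64-key-provided-by-OVH>
```

Once the keyring and storage.cfg agree, running pvesm status on the node should list the RBD storage as active.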
 
Hello,

There was a typo in the storage.cfg file.
But even once corrected, the status still shows as "inactive".

ceph correction.PNG
 
Hello,

I solved my problem. I had made 2 mistakes.
The first: I thought that the "monhost" in storage.cfg should be the IP of my own server, when it must be the Ceph monitor addresses.
The second: in the keyring file, I had put "client.admin" between the brackets. In fact, it should have been "client.myuser".
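To illustrate the two fixes side by side (all values below are placeholders): monhost must list the Ceph monitor addresses of the cloud disk array, not the Proxmox node's own IP, and the keyring section header must name the actual client user:

```
# /etc/pve/storage.cfg -- monhost points at the Ceph monitors, not the PVE node
rbd: my-cda
        monhost <monitor-ip-1> <monitor-ip-2> <monitor-ip-3>
        pool mypool
        username myuser
        content images,rootdir

# keyring file -- section header is the client user, not client.admin
[client.myuser]
        key = <base64-key-provided-by-OVH>
```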

Thank you again to Alwin for putting me on the right track.
 
Hi @daruom13

I am interested in cloud disk array solutions with proxmox. Now that it works for you, can you tell me if you can use the cloud disk array for your VM and CT (LXC) disks?

Could you use "cloud disk array" as disk storage ? I tested glusterFS recently but I can't put a CT storage on it, so can't use HA and migration features.


Second question: is performance good for an operating system with its storage across the OVH network?

Thank you for your help :)
 
Hello, sorry for answering only now.

Yes, I am using the cloud disk array to store the VMs' hard disks.
This allows migration in a few seconds, without interruption of service (or on the order of a few milliseconds).

The performance is excellent. The cloud disk array is located in Gravelines (France), as are 2 of the cluster nodes. The other 2 nodes are located in Roubaix (about 100 km away).
No communication problems between VMs, regardless of the server they run on; the same goes for migration.

I can only recommend it. I hope I was helpful to you.
 
