[SOLVED] Setting up Ceph, getting an error in the GUI

I'm not sure, but I guess you can't use the same network 192.168.0.0/24 for both Ceph networks.
Well, yes, you can - although there are really good reasons for two physically separated networks. I am currently testing it with
Code:
~# grep _network /etc/ceph/ceph.conf
         cluster_network = 10.3.16.7/16
         public_network = 10.3.16.7/16

...and it seems to work. Again: not recommended. And additionally: keep an eye on Corosync; I have that one on a different network cable.
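For comparison, a rough sketch of what the recommended split would look like - the two subnets below are just placeholders, not taken from my cluster:
Code:
~# grep _network /etc/ceph/ceph.conf
         public_network = 10.10.10.0/24      # client/monitor traffic (placeholder subnet)
         cluster_network = 10.10.20.0/24     # OSD replication traffic (placeholder subnet)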

Best regards
 
  • Like
Reactions: jsterr
I'm curious, why are you testing this?
 
  • Like
Reactions: rsr911
Nice! It's ok :) I always like connecting with people, so if you want, you can add me on LinkedIn (business) if you have one! Greetings
Request sent. I just use the free account.

Is changing the Ceph networks more involved than just changing the IPs now that it's up and working? I suppose it doesn't matter that one of mine is a public IP (oops), as it's on an isolated switch.
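From what I've read so far it looks like more than just editing the IPs - roughly something like this, though this is an untested sketch on my side and the node name is a placeholder:
Code:
~# nano /etc/pve/ceph.conf            # adjust public_network / cluster_network (cluster-wide copy of ceph.conf)
~# pveceph mon destroy <nodename>     # then recreate the monitors one at a time, keeping quorum
~# pveceph mon create
~# systemctl restart ceph-osd.target  # OSDs pick up the new cluster_network after a restart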
 
I'm curious, why are you testing this?
Because I can ;-)

I am definitely not a Ceph specialist; I use ZFS primarily. This is just a homelab: some HM80 nodes have a SATA/ZFS mirror for VM data. Then there is a single built-in NVMe which I cannot find a good way to utilize. So I set up Ceph with a single OSD per node. (Three nodes to start with, probably five nodes for Christmas...)
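Creating that single OSD was just one command per node - a sketch, where the device name is a placeholder for whatever the built-in NVMe shows up as:
Code:
~# lsblk                              # find the built-in NVMe first
~# pveceph osd create /dev/nvme0n1    # placeholder device name; one OSD per node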

And to be clear once again: this setup is definitely not recommended as it is probably just too small for a stable system - I am just gathering experience.

Best regards
 
  • Like
Reactions: rsr911 and jsterr
Nice! So you did not get the error from this post like @rsr911 did?
 
  • Like
Reactions: rsr911
I'd like to thank you guys for the help. My cluster is up and running, and I've got PBS on a separate PVE server doing backups. I'm hoping to order a fifth server this coming week so I can have a 5-node cluster (4 as data nodes).

I'm going to call this issue solved. I'm about to post another, likely easier question as a new thread.
 
  • Like
Reactions: jsterr
