Adding a Second Public Network to Proxmox VE with Ceph Cluster

b2a225

New Member
Jun 22, 2024
Hello Proxmox Community,

I am currently running a Proxmox VE cluster with Ceph storage, and I would like to add a second public network to my configuration. Here is my current setup:

Network interfaces:

Bash:
root@s1proxmox01:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:8e:38:2a brd ff:ff:ff:ff:ff:ff
    altname enp0s4
    inet6 fe80::5054:ff:fe8e:382a/64 scope link
       valid_lft forever preferred_lft forever
3: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:27:31:29 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet6 fe80::5054:ff:fe27:3129/64 scope link
       valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:3f:07:dd brd ff:ff:ff:ff:ff:ff
    altname enp0s6
    inet6 fe80::5054:ff:fe3f:7dd/64 scope link
       valid_lft forever preferred_lft forever
5: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 52:54:00:e4:9f:05 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:e4:9f:05 brd ff:ff:ff:ff:ff:ff
    inet 10.10.134.2/16 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fee4:9f05/64 scope link
       valid_lft forever preferred_lft forever
7: vlan70@ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:8e:38:2a brd ff:ff:ff:ff:ff:ff
    inet 192.168.134.10/26 scope global vlan70
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe8e:382a/64 scope link
       valid_lft forever preferred_lft forever
8: vlan72@ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:27:31:29 brd ff:ff:ff:ff:ff:ff
    inet 192.168.134.70/26 scope global vlan72
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe27:3129/64 scope link
       valid_lft forever preferred_lft forever
9: vlan73@ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:3f:07:dd brd ff:ff:ff:ff:ff:ff
    inet 192.168.134.140/26 scope global vlan73
       valid_lft forever preferred_lft forever

Current Ceph Configuration:
Bash:
root@s1proxmox01:~# more /etc/ceph/ceph.conf
[global]
    auth_client_required = cephx
    auth_cluster_required = cephx
    auth_service_required = cephx
    cluster_network = 192.168.134.70/26
    fsid = 358a2a4f-3b2b-4e20-9ad8-ddf136bc1e17
    mon_allow_pool_delete = true
    mon_host = 192.168.134.10 192.168.134.11 192.168.134.12
    ms_bind_ipv4 = true
    ms_bind_ipv6 = false
    osd_pool_default_min_size = 2
    osd_pool_default_size = 3
    public_network = 192.168.134.10/26

[client]
    keyring = /etc/pve/priv/$cluster.$name.keyring

[client.crash]
    keyring = /etc/pve/ceph/$cluster.$name.keyring

[mon.s1proxmox01]
    public_addr = 192.168.134.10

[mon.s1proxmox02]
    public_addr = 192.168.134.11

[mon.s1proxmox03]
    public_addr = 192.168.134.12

Ceph Status:

Bash:
root@s1proxmox01:~# ceph -s
  cluster:
    id:     358a2a4f-3b2b-4e20-9ad8-ddf136bc1e17
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum s1proxmox01,s1proxmox02,s1proxmox03 (age 51m)
    mgr: s1proxmox01(active, since 59m), standbys: s1proxmox02
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

I want to add the network 192.168.134.128/26 as a second Ceph public network. Could you please guide me on how to do this? What are the necessary steps to update the Ceph configuration in Proxmox VE 8, and which services need to be restarted?
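The change I have in mind so far is just a sketch (it assumes /etc/pve/ceph.conf is the cluster-wide file to edit on Proxmox, with /etc/ceph/ceph.conf linking to it, and that the second subnet is reachable from every node):

```shell
# Sketch only: subnets in public_network are comma-separated.
old_net="192.168.134.0/26"
new_net="192.168.134.128/26"
printf 'public_network = %s,%s\n' "$old_net" "$new_net"
# prints: public_network = 192.168.134.0/26,192.168.134.128/26
# After editing the file, restart the monitors one node at a time and
# check quorum between nodes (commands assumed, not yet tested here):
#   systemctl restart ceph-mon.target
#   ceph -s
```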

Thank you!
 
The documentation is fine, but I think there is something wrong with Proxmox. Below is my latest configuration update; it is still not working.

Bash:
root@s1proxmox01:~# cat /etc/ceph/ceph.conf
[global]
        auth_client_required = cephx
        auth_cluster_required = cephx
        auth_service_required = cephx
        cluster_network = 192.168.134.70/26
        fsid = 358a2a4f-3b2b-4e20-9ad8-ddf136bc1e17
        mon_allow_pool_delete = true
        mon_host = 192.168.134.10,192.168.134.11,192.168.134.12,192.168.134.140,192.168.134.141,192.168.134.142
        ms_bind_ipv4 = true
        ms_bind_ipv6 = false
        osd_pool_default_min_size = 2
        osd_pool_default_size = 3
        public_network = 192.168.134.10/26,192.168.134.140/26
[mon]
        public_network = 192.168.134.0/26,192.168.134.128/26
[client]
        keyring = /etc/pve/priv/$cluster.$name.keyring

[client.crash]
        keyring = /etc/pve/ceph/$cluster.$name.keyring

[mon.s1proxmox01]
        public_addr = 192.168.134.10,192.168.134.140

[mon.s1proxmox02]
        public_addr = 192.168.134.11,192.168.134.141

[mon.s1proxmox03]
        public_addr = 192.168.134.12,192.168.134.142

root@s1proxmox01:~#
root@s1proxmox01:~# systemctl restart ceph-mon.target
root@s1proxmox01:~#
root@s1proxmox01:~# ss -tulpn | grep 6789
tcp   LISTEN 0      512    192.168.134.10:6789      0.0.0.0:*    users:(("ceph-mon",pid=10289,fd=27))                 
root@s1proxmox01:~# ss -tulpn | grep ceph-mon
tcp   LISTEN 0      512    192.168.134.10:6789      0.0.0.0:*    users:(("ceph-mon",pid=10289,fd=27))                 
tcp   LISTEN 0      512    192.168.134.10:3300      0.0.0.0:*    users:(("ceph-mon",pid=10289,fd=26))                 
root@s1proxmox01:~#
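After the restart, the mon still listens only on 192.168.134.10. From what I read, a monitor binds the address stored in the monmap, not whatever public_addr says in ceph.conf, so a restart alone cannot add a second listener. What I would check next (a sketch, not yet tested on this cluster):

```shell
# Show the addresses each monitor actually has in the monmap:
ceph mon dump
# A mon carries a single v1/v2 address pair in the monmap. Changing
# public_addr in ceph.conf does not rewrite the monmap; moving a mon to
# another address normally means re-creating it, one monitor at a time
# (on Proxmox, something like: pveceph mon destroy / pveceph mon create).
```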
 
You have defined multiple mon networks. That is a completely wrong configuration.
 
Please be careful with what you write. Have a look here: https://www.ibm.com/docs/en/storage...-configuring-multiple-public-networks-cluster . In my cluster, I have three monitors. In Ceph, there are two types of networks: the public network and the cluster network (used for OSD data replication, rebalancing, and so on). For the public network you can define more than one subnet, and Ceph is designed for that; the MONs are connected to the public network. By design this configuration is right, but in the context of Proxmox, how can I adapt it, and what is the right configuration?
Best regards,
 
You should always be careful with storage networks.
With Ceph, you should have at least 3 monitors and a maximum of 5.

You should always be careful with storage networks.
With Ceph, you should have at least 3 monitors and a maximum of 5.
 
This is another subject. Please, what is your solution for my issue?
 
If you tell me what your original problem is, I can certainly help you.
Using multiple networks with Ceph is not trivial and is only done if you want to connect external routed clusters, for example.
 
If you tell me what your original problem is, I can certainly help you.
Using multiple networks with Ceph is not trivial and is only done if you want to connect external routed clusters, for example.
So, to sum up, you don't know how to do "Adding a Second Public Network to Proxmox VE with Ceph Cluster". Thanks.
 
You made two interfaces on the same network. Don't do that; it won't do what you think it does. Edit: I see you properly subnetted them, so it should be fine.
No, it is two different subnets:
Bash:
public_network = 192.168.134.0/26,192.168.134.128/26
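You can check this with plain bash: a /26 keeps the top two bits of the last octet (netmask 192 there), so the two ranges do not overlap. A quick sketch:

```shell
# /26 -> last-octet netmask is 192 (binary 11000000)
echo $(( 10 & 192 ))    # host .10  is in 192.168.134.0/26   -> prints 0
echo $(( 140 & 192 ))   # host .140 is in 192.168.134.128/26 -> prints 128
```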
 
So, to sum up, you don't know how to do "Adding a Second Public Network to Proxmox VE with Ceph Cluster". Thanks.
I know how to do it, and I also know about the problems involved, since storage traffic then has to be routed. I always try to avoid such setups, as they are potentially prone to major problems.

Such requests are extremely rare, and often the underlying goal is something like increasing bandwidth, which this approach is not suitable for. That's why I'm asking what you want to achieve with it.
 
Honestly, unless there is some reason to shove all your interfaces into the same /24, it would probably be a lot easier to troubleshoot if you gave each network its own /24. Your config should work; I'm guessing you have masking issues.
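To rule out masking issues quickly, something like this can confirm whether two hosts actually land in the same subnet (pure bash; the helper name is made up for illustration):

```shell
# Do two IPv4 hosts share a subnet under a given prefix length?
same_subnet() { # usage: same_subnet A.B.C.D A.B.C.D prefixlen -> yes/no
  local a1 a2 a3 a4 b1 b2 b3 b4 a b m
  IFS=. read -r a1 a2 a3 a4 <<<"$1"
  IFS=. read -r b1 b2 b3 b4 <<<"$2"
  a=$(( (a1<<24) | (a2<<16) | (a3<<8) | a4 ))
  b=$(( (b1<<24) | (b2<<16) | (b3<<8) | b4 ))
  m=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( a & m )) -eq $(( b & m )) ] && echo yes || echo no
}
same_subnet 192.168.134.10 192.168.134.70 26   # prints no
same_subnet 192.168.134.10 192.168.134.11 26   # prints yes
```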
 
Honestly, unless there is some reason to shove all your interfaces into the same /24, it would probably be a lot easier to troubleshoot if you gave each network its own /24. Your config should work; I'm guessing you have masking issues.
I'll try that.
 
