Hello Proxmox Community,
I am currently running a Proxmox VE cluster with Ceph storage, and I would like to add a second public network to my configuration. Here is my current setup:
Network interfaces:
Bash:
root@s1proxmox01:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:8e:38:2a brd ff:ff:ff:ff:ff:ff
altname enp0s4
inet6 fe80::5054:ff:fe8e:382a/64 scope link
valid_lft forever preferred_lft forever
3: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:27:31:29 brd ff:ff:ff:ff:ff:ff
altname enp0s5
inet6 fe80::5054:ff:fe27:3129/64 scope link
valid_lft forever preferred_lft forever
4: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:3f:07:dd brd ff:ff:ff:ff:ff:ff
altname enp0s6
inet6 fe80::5054:ff:fe3f:7dd/64 scope link
valid_lft forever preferred_lft forever
5: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
link/ether 52:54:00:e4:9f:05 brd ff:ff:ff:ff:ff:ff
altname enp0s3
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:e4:9f:05 brd ff:ff:ff:ff:ff:ff
inet 10.10.134.2/16 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fee4:9f05/64 scope link
valid_lft forever preferred_lft forever
7: vlan70@ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:8e:38:2a brd ff:ff:ff:ff:ff:ff
inet 192.168.134.10/26 scope global vlan70
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe8e:382a/64 scope link
valid_lft forever preferred_lft forever
8: vlan72@ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:27:31:29 brd ff:ff:ff:ff:ff:ff
inet 192.168.134.70/26 scope global vlan72
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe27:3129/64 scope link
valid_lft forever preferred_lft forever
9: vlan73@ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:3f:07:dd brd ff:ff:ff:ff:ff:ff
inet 192.168.134.140/26 scope global vlan73
valid_lft forever preferred_lft forever
Current Ceph Configuration:
Bash:
root@s1proxmox01:~# more /etc/ceph/ceph.conf
[global]
    auth_client_required = cephx
    auth_cluster_required = cephx
    auth_service_required = cephx
    cluster_network = 192.168.134.70/26
    fsid = 358a2a4f-3b2b-4e20-9ad8-ddf136bc1e17
    mon_allow_pool_delete = true
    mon_host = 192.168.134.10 192.168.134.11 192.168.134.12
    ms_bind_ipv4 = true
    ms_bind_ipv6 = false
    osd_pool_default_min_size = 2
    osd_pool_default_size = 3
    public_network = 192.168.134.10/26

[client]
    keyring = /etc/pve/priv/$cluster.$name.keyring

[client.crash]
    keyring = /etc/pve/ceph/$cluster.$name.keyring

[mon.s1proxmox01]
    public_addr = 192.168.134.10

[mon.s1proxmox02]
    public_addr = 192.168.134.11

[mon.s1proxmox03]
    public_addr = 192.168.134.12
Ceph Status:
Bash:
root@s1proxmox01:~# ceph -s
  cluster:
    id:     358a2a4f-3b2b-4e20-9ad8-ddf136bc1e17
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum s1proxmox01,s1proxmox02,s1proxmox03 (age 51m)
    mgr: s1proxmox01(active, since 59m), standbys: s1proxmox02
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
I want to add the network 192.168.134.128/26 as a second Ceph public network. Could you please guide me on how to do this? What are the necessary steps to update the Ceph configuration in Proxmox VE 8, and which services need to be restarted afterwards?
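For reference, my understanding from the Ceph documentation is that `public_network` accepts a comma-separated list of subnets, so I assume the change would look something like the fragment below. This is an untested sketch based on my current config, so please correct me if I am wrong:

```ini
# /etc/ceph/ceph.conf -- proposed [global] fragment (untested assumption)
[global]
    # existing public subnet plus the new one, comma-separated
    public_network = 192.168.134.10/26, 192.168.134.128/26
```

I would then expect to restart the monitors node by node (and possibly the managers and OSDs as well, once I have some), but I am not sure of the correct order or whether anything else in Proxmox needs to be told about the new subnet.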
Thank you!