Hi,
I read this sentence in the Ceph hardware recommendations: "Provision at least 10 Gb/s networking in your datacenter, both among Ceph hosts and between clients and your Ceph cluster."
This is my Ceph configuration:
Code:
[global]
        auth_client_required = cephx
        auth_cluster_required = cephx
        auth_service_required = cephx
        cluster_network = 192.168.4.34/24
        fsid = 9d805c38-f944-4e24-8e6c-6b278011c85f
        mon_allow_pool_delete = true
        mon_host = 192.168.3.34 192.168.3.35 192.168.3.36
        ms_bind_ipv4 = true
        ms_bind_ipv6 = false
        osd_pool_default_min_size = 2
        osd_pool_default_size = 3
        public_network = 192.168.3.34/24

[client]
        keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.1-34-0001]
        public_addr = 192.168.3.34

[mon.1-35-0001]
        public_addr = 192.168.3.35

[mon.1-36-0001]
        public_addr = 192.168.3.36
My Proxmox server "1-34-0001", /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

iface eno4 inet manual
iface eno3 inet manual
iface eno1 inet manual
iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.34/24
        gateway 192.168.1.1
        bridge-ports eno4
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.3.34/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet static
        address 192.168.4.34/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
My virtual machines use vmbr0, vmbr1 carries the Ceph public network, and vmbr2 carries the Ceph cluster network. However, only eno1 and eno2 are 10 Gb NICs.
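For reference, a quick check like this (assuming ethtool is installed on the node; only the Speed line matters) shows the negotiated speed of each bridge port:
Code:
# report the negotiated link speed of each bridge port
ethtool eno1 | grep Speed
ethtool eno2 | grep Speed
ethtool eno3 | grep Speed
ethtool eno4 | grep Speed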
Please give me some advice on the following questions:
1. Does it seriously hurt performance that the VM server talks to the Ceph public network over a 1 Gb NIC?
2. If the current configuration performs poorly, can the cluster network share vmbr1 with the public network, and the VM traffic move to vmbr2? (Roughly the change sketched below.)
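For question 2, this is just a sketch of what I mean, not something I have tested (<vmid> is a placeholder for one of my VMs):
Code:
# ceph.conf change: cluster traffic shares the 10 Gb public subnet on vmbr1
public_network = 192.168.3.34/24
cluster_network = 192.168.3.34/24

# per VM, move the guest NIC from vmbr0 to vmbr2
qm set <vmid> --net0 virtio,bridge=vmbr2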