Hello,
I currently have this Ceph configuration on a three-node Proxmox 5.1 cluster:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 172.16.0.0/12
fsid = xxxxxxxxxxxxxxx
keyring = /etc/pve/priv/$cluster.$name.keyring
mon allow pool delete = true
osd journal size = 5120
osd pg bits = 14
osd pgp bits = 14
osd pool default min size = 2
osd pool default size = 3
public network = 172.16.0.0/12
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mon.appcl02]
host = appcl02
mon addr = 172.17.0.2:6789
[mon.appcl01]
host = appcl01
mon addr = 172.17.0.1:6789
[mon.appcl03]
host = appcl03
mon addr = 172.17.0.3:6789
Fairly often I see messages like the following in dmesg on all three nodes:
[21587785.122771] libceph: mon2 172.17.0.3:6789 session lost, hunting for new mon
[21587786.730591] libceph: mon0 172.17.0.1:6789 session established
[21587815.847358] libceph: mon0 172.17.0.1:6789 session lost, hunting for new mon
[21587815.848485] libceph: mon2 172.17.0.3:6789 session established
[21587846.567845] libceph: mon2 172.17.0.3:6789 session lost, hunting for new mon
[21587846.569205] libceph: mon1 172.17.0.2:6789 session established
[21587877.284345] libceph: mon1 172.17.0.2:6789 session lost, hunting for new mon
[21587878.760801] libceph: mon0 172.17.0.1:6789 session established
[21587908.004866] libceph: mon0 172.17.0.1:6789 session lost, hunting for new mon
I've read somewhere here that it's bad practice to put the Ceph cluster network on the same network as the public network. But can this cause performance or other problems like the ones above?
And if that is the case, how could I migrate to a separate cluster network without causing problems? The cluster is in production, with about a hundred VMs running.
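For reference, I imagine the target configuration would look something like the sketch below, with the monitors and clients staying on the current public network and only OSD replication moved to a new dedicated subnet. The 10.10.10.0/24 network is just a placeholder for a new NIC/VLAN on each node, not something that exists yet:
[global]
# all other options unchanged
public network = 172.16.0.0/12
# placeholder subnet for a dedicated OSD replication network (would be a new NIC/VLAN)
cluster network = 10.10.10.0/24
Is it safe to change "cluster network" like this on a running cluster, restarting the OSDs one node at a time?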
Thanks