Proxmox 5.2 ceph cluster problem

basanisi

Active Member
Apr 15, 2011
40
2
28
Hello,

I have a cluster of 3 Proxmox 5.2 servers; everything was working correctly until 3 days ago.

The Ceph monitor and Ceph OSD on one server stopped working, and the most surprising thing is that on that Proxmox server, where Ceph is broken, the ceph CLI command does not respond.

Thanks for your help.
 

Attachments

  • Capture d’écran de 2018-06-19 08-07-46.png (146.1 KB)
  • Capture d’écran de 2018-06-19 08-07-53.png (162.7 KB)
  • Capture d’écran de 2018-06-19 08-08-11.png (164.8 KB)
  • Capture d’écran de 2018-06-19 08-08-19.png (158.3 KB)

fireon

Famous Member
Oct 25, 2010
3,906
324
103
40
Austria/Graz
iteas.at
Looks like a network/hardware problem. Please post your network configuration from all nodes, and describe what hardware you are using: memory/CPU/HDDs/SSDs/controller... Also post your syslog for the specific problem time.
 

basanisi

Hello,

Thanks for your quick answer.

All 3 of my servers are Xeon-based, with:

64 GB of RAM
a hardware RAID controller with 8 × 1 TB SATA HDDs:
  1 RAID 1 with 2 disks
  1 RAID 10 with 6 disks
4 network adapters.

Indeed, there is a problem in the syslog linked to a ping to 10.20.10.2:6803, but I don't know how to fix this.

Thanks for your answers.
 

Attachments

  • proxmox01_netconf.txt (1.5 KB)
  • proxmox05_netconf.txt (2.3 KB)
  • proxmox06_netconf.txt (1.8 KB)
Last edited:

basanisi

Thanks for your valuable information; the cluster is running normally again. The problem was on the Cisco switches: IGMP was not activated.
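For reference, on many Cisco IOS switches this kind of fix amounts to enabling IGMP snooping together with a snooping querier (multicast-based Corosync needs a querier on the VLAN when no multicast router is present). This is only a hypothetical sketch; exact commands vary by platform and IOS version:

```text
! Global configuration mode - hypothetical sketch, adapt to your platform
ip igmp snooping
ip igmp snooping querier
```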

But I did not understand your remark "no separate network for your Ceph and no separate network for your cluster itself", because the cluster management network is in the 10.165.2.1/24 range and the Ceph network in the 10.20.10.1/24 range.

May I use a VLAN for the Ceph network?

Thanks for your answer.
 

fireon

But I did not understand your remark "no separate network for your Ceph and no separate network for your cluster itself", because the cluster management network is in the 10.165.2.1/24 range and the Ceph network in the 10.20.10.1/24 range.
But these are bridges, and they are usable for VMs too. That's a very bad idea. What should you have instead?

One network bond/bridge, vmbr0, for your VMs. Its address is also the one used to reach the web interface. Every VLAN you would like to use in the VMs must be tagged on that bond directly on the switch. After that you can simply enter the VLAN ID in the VM config in the web interface, so normally only one bridge is needed.
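In practice, the per-VM part is then a single `tag` value on the guest NIC. A hypothetical line from a VM config (/etc/pve/qemu-server/<vmid>.conf; the MAC address and VLAN ID here are made up):

```text
# Hypothetical guest NIC on vmbr0 with VLAN 20 tagged
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,tag=20
```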

One network only for Ceph: a bond, ideally 10 Gigabit, with no bridge on top, just a plain interface, used only for Ceph communication.

One network, either a bond (Redundant Ring Protocol) or a single physical interface, for the cluster communication (Corosync). This network needs no gateway and no internet access.
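Put together, the three networks above could look roughly like this in /etc/network/interfaces. This is only a sketch: the interface names, bond mode, gateway, and the 10.30.10.x Corosync subnet are assumptions to adapt to your own hardware; the 10.165.2.x and 10.20.10.x ranges are taken from this thread.

```text
# /etc/network/interfaces sketch (names and addresses are examples only)

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup

# vmbr0: VMs + management/web interface; guest VLANs are tagged per VM
auto vmbr0
iface vmbr0 inet static
    address 10.165.2.1/24
    gateway 10.165.2.254
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

# bond1: Ceph only - a plain bond, no bridge on top
auto bond1
iface bond1 inet static
    address 10.20.10.1/24
    bond-slaves enp6s0f0 enp6s0f1
    bond-mode active-backup

# eno3: dedicated Corosync network - no gateway needed
auto eno3
iface eno3 inet static
    address 10.30.10.1/24
```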

post-up ip link set dev eno1 mtu 9000 && ip link set dev eno2 mtu 9000
post-up ip link set dev enp6s0f0 mtu 9000 && ip link set dev enp6s0f1 mtu 9000
post-up ip link set dev bond0 mtu 9000
post-up ip link set dev bond1 mtu 9000
post-up ip link set dev vmbr0 mtu 9000
post-up ip link set dev vmbr1 mtu 9000
Why do you need this?
 

basanisi

Hello, I followed your advice and modified my 3 Proxmox servers as you said.

To answer your question: these lines are in /etc/network/interfaces to activate jumbo frames on all network interfaces. It's the only solution I found to activate them at boot.

post-up ip link set dev eno1 mtu 9000 && ip link set dev eno2 mtu 9000
post-up ip link set dev enp6s0f0 mtu 9000 && ip link set dev enp6s0f1 mtu 9000
post-up ip link set dev bond0 mtu 9000
post-up ip link set dev bond1 mtu 9000
post-up ip link set dev vmbr0 mtu 9000
post-up ip link set dev vmbr1 mtu 9000
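To confirm that jumbo frames actually work end to end (switch ports included), you can ping with the largest unfragmented packet: the ICMP payload is the MTU minus the 20-byte IP header and the 8-byte ICMP header. The 10.20.10.2 target below is just the Ceph peer mentioned earlier in this thread; adjust it as needed.

```shell
# Largest ICMP payload for MTU 9000: 9000 - 20 (IP header) - 8 (ICMP header)
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "$PAYLOAD"   # prints 8972

# Hypothetical end-to-end check (-M do forbids fragmentation):
# ping -M do -s "$PAYLOAD" 10.20.10.2
```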
 
