Ceph monitor downtime when turning on the firewall

hitsword

Member
Nov 8, 2017
I have 3 networks:
vmbr0 172.16.0.0/16 for default
vmbr1 192.168.33.0/24 for the PVE cluster
vmbr2 192.168.66.0/24 for the Ceph cluster

I set the DC firewall:
Firewall = YES
Input Policy = ACCEPT
Output Policy = ACCEPT

Code:
root@pve6:~# ceph status
  cluster:
    id:     3ac8b2fe-a802-4b27-91b3-5f466bc25158
    health: HEALTH_WARN
            1/5 mons down, quorum pve2,pve3,pve4,pve6

  services:
    mon: 5 daemons, quorum pve2,pve3,pve4,pve5,pve6
    mgr: pve4(active), standbys: pve6, pve3, pve2, pve5
    osd: 36 osds: 36 up, 36 in

  data:
    pools:   1 pools, 1024 pgs
    objects: 794.83k objects, 2.82TiB
    usage:   8.97TiB used, 56.5TiB / 65.5TiB avail
    pgs:     1024 active+clean

  io:
    client:   309KiB/s rd, 629KiB/s wr, 12op/s rd, 57op/s wr

Why is one of the monitors down?
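From the quorum list above, mon.pve5 is the one that is down. A minimal check (monitor/host names taken from the ceph.conf below) could be:
Code:
# which monitors exist and which are in quorum
ceph mon stat

# on the node that hosts the missing monitor (pve5 here)
systemctl status ceph-mon@pve5.service
journalctl -u ceph-mon@pve5.service --since "1 hour ago"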
 
ceph.conf
Code:
[global]
     auth client required = cephx
     auth cluster required = cephx
     auth service required = cephx
     cluster network = 192.168.66.0/24
     fsid = 3ac8b2fe-a802-4b27-91b3-5f466bc25158
     keyring = /etc/pve/priv/$cluster.$name.keyring
     mon allow pool delete = true
     osd journal size = 5120
     osd pool default min size = 2
     osd pool default size = 3
     public network = 192.168.66.0/24
     mon_cluster_log_file_level = info

[osd]
     keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.pve3]
     host = pve3
     mon addr = 192.168.66.3:6789

[mon.pve4]
     host = pve4
     mon addr = 192.168.66.4:6789

[mon.pve5]
     host = pve5
     mon addr = 192.168.66.5:6789

[mon.pve2]
     host = pve2
     mon addr = 192.168.66.2:6789

[mon.pve6]
     host = pve6
     mon addr = 192.168.66.6:6789
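For reference: with this configuration the monitors listen on TCP 6789 only and the OSDs bind to TCP 6800-7300 by default, which is what the PVE "Ceph" firewall macro used further down is intended to cover. A quick reachability test across the storage network while the firewall is on might look like this (assuming nc is installed):
Code:
# can another node still reach a monitor port?
nc -z -v -w 2 192.168.66.2 6789

# which ports are the local ceph daemons actually listening on?
ss -tlnp | grep ceph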
 
The firewall rules are not picked up right away. Check with iptables -L whether the INPUT/OUTPUT chains are on ACCEPT, and restart the MONs if they didn't resume service already.
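On a stock PVE node that check could look like this (a sketch; adjust the monitor unit to the local host if you only want to restart a single MON):
Code:
# firewall service state and active chain policies
pve-firewall status
iptables -L INPUT -n | head
iptables -L OUTPUT -n | head

# restart the local monitor(s) if they did not rejoin on their own
systemctl restart ceph-mon.target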
 
The ceph status command gives no response when the firewall is turned on.
The iptables -L output is in the attached file.
 

Attachments

  • iptables.txt (14.4 KB)
Code:
root@pve2:/etc/pve# cat firewall/cluster.fw 
[OPTIONS]

enable: 1
policy_in: ACCEPT

[IPSET groupnet] # PVEGroup

192.168.33.0/24 # GroupNet

[IPSET storagenet] # PVEStorage

192.168.66.0/24 # StorageNet

[IPSET wannet] # PVEWAN

172.16.0.0/16 # WANNet

[RULES]

IN ACCEPT -i vmbr2 -log debug # TEST
GROUP pve_storage -i vmbr2 # Storage
GROUP pve_grouplan -i vmbr1 # GroupLAN
GROUP pve_wan -i vmbr0 # WAN

[group pve_grouplan] # GroupLAN

IN ACCEPT -p tcp -dport 111 -log nolog # PVE-rpcbind
IN Ping(ACCEPT) -log nolog # PING
IN SSH(ACCEPT) -log nolog # PVE-SSH
IN ACCEPT -p udp -dport 5405 -log nolog # PVE-Corosync-UDP
IN ACCEPT -p tcp -dport 5404 -log nolog # PVE-Corosync

[group pve_storage] # Storage

IN SSH(ACCEPT) -log nolog # PVE-SSH
IN Ping(ACCEPT) -log nolog # PING
IN ACCEPT -p tcp -dport 111 -log nolog # PVE-rpcbind
IN Ceph(ACCEPT) -log nolog # PVE-Ceph

[group pve_wan] # WAN

IN ACCEPT -p tcp -dport 111 -log nolog # PVE-rpcbind
IN Ping(ACCEPT) -log nolog # PING
IN ACCEPT -p tcp -dport 3128 -log nolog # PVE-SPICE
IN ACCEPT -p tcp -dport 8006 -log nolog # PVE-HTTPS
IN SSH(ACCEPT) -log nolog # PVE-SSH
IN HTTP(ACCEPT) -log nolog # PVE-HTTP
 
Yes, I'm sure.
The ceph -s and ceph -w commands give no response when the firewall is turned on.
First I added the Ceph macro in the firewall; Ceph did not respond.
Then I set ACCEPT on any INPUT; Ceph still did not respond.
When I turn the firewall off, Ceph is OK.
When I turn the firewall on again, Ceph stops responding again.
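One way to see whether the firewall is actually dropping Ceph traffic (the TEST rule above already logs on vmbr2) is to watch the firewall log and the rule counters while reproducing the hang:
Code:
# what does the PVE firewall log while ceph -s hangs?
tail -f /var/log/pve-firewall.log

# per-rule packet counters; rising counts on DROP/REJECT lines point at the culprit
iptables -L -v -n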
CEPH LOG:
Code:
2020-01-28 21:22:36.747643 mon.pve2 mon.0 192.168.66.2:6789/0 255 : cluster [INF] mon.pve2 calling monitor election
2020-01-28 21:22:38.241872 mon.pve3 mon.1 192.168.66.3:6789/0 102 : cluster [INF] mon.pve3 calling monitor election
2020-01-28 21:22:38.409037 mon.pve4 mon.2 192.168.66.4:6789/0 112 : cluster [INF] mon.pve4 calling monitor election
2020-01-28 21:22:39.296906 mon.pve6 mon.4 192.168.66.6:6789/0 124 : cluster [INF] mon.pve6 calling monitor election
2020-01-28 21:22:44.558286 mon.pve2 mon.0 192.168.66.2:6789/0 256 : cluster [INF] mon.pve2 calling monitor election
2020-01-28 21:22:49.635462 mon.pve2 mon.0 192.168.66.2:6789/0 257 : cluster [INF] mon.pve2 is new leader, mons pve2,pve3,pve4,pve5 in quorum (ranks 0,1,2,3)
2020-01-28 21:22:50.565385 mon.pve2 mon.0 192.168.66.2:6789/0 258 : cluster [INF] overall HEALTH_OK
2020-01-28 21:22:50.565456 mon.pve2 mon.0 192.168.66.2:6789/0 259 : cluster [INF] mon.pve2 calling monitor election
2020-01-28 21:22:50.906136 mon.pve2 mon.0 192.168.66.2:6789/0 260 : cluster [INF] mon.pve2 is new leader, mons pve2,pve3,pve4,pve5,pve6 in quorum (ranks 0,1,2,3,4)
2020-01-28 21:22:51.957342 mon.pve2 mon.0 192.168.66.2:6789/0 265 : cluster [INF] overall HEALTH_OK
2020-01-28 21:21:25.815902 osd.34 osd.34 192.168.66.3:6800/4537 1589 : cluster [WRN] slow request 30.704321 seconds old, received at 2020-01-28 21:20:55.111528: osd_op(client.55150877.0:27504209 2.2ea 2:57702b1a:::rbd_data.1a7e4d6b8b4567.00000000000016b3:head [write 2609152~139264] snapc 132=[132] ondisk+write+known_if_redirected e9176) currently sub_op_commit_rec from 15
2020-01-28 21:21:38.268633 osd.16 osd.16 192.168.66.5:6808/5389 1418 : cluster [WRN] slow request 30.115083 seconds old, received at 2020-01-28 21:21:08.153484: osd_op(client.55166360.0:5495520 2.29 2:94224c08:::rbd_data.bda8574b0dc51.000000000001147f:head [write 2613248~8192] snapc 13b=[13b] ondisk+write+known_if_redirected e9176) currently sub_op_commit_rec from 21
2020-01-28 21:21:54.738941 osd.34 osd.34 192.168.66.3:6800/4537 1590 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 59.627366 secs
2020-01-28 21:21:54.738949 osd.34 osd.34 192.168.66.3:6800/4537 1591 : cluster [WRN] slow request 30.516713 seconds old, received at 2020-01-28 21:21:24.222180: osd_op(client.47833248.0:27073759 2.2ea 2:57711360:::rbd_data.1703066b8b4567.0000000000017800:head [write 61440~4096] snapc 133=[133] ondisk+write+known_if_redirected e9176) currently op_applied
2020-01-28 21:21:55.736362 osd.34 osd.34 192.168.66.3:6800/4537 1592 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 60.624800 secs
2020-01-28 21:21:55.736370 osd.34 osd.34 192.168.66.3:6800/4537 1593 : cluster [WRN] slow request 60.624800 seconds old, received at 2020-01-28 21:20:55.111528: osd_op(client.55150877.0:27504209 2.2ea 2:57702b1a:::rbd_data.1a7e4d6b8b4567.00000000000016b3:head [write 2609152~139264] snapc 132=[132] ondisk+write+known_if_redirected e9176) currently sub_op_commit_rec from 15
2020-01-28 21:22:09.007044 osd.16 osd.16 192.168.66.5:6808/5389 1419 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 60.853522 secs
2020-01-28 21:22:09.007051 osd.16 osd.16 192.168.66.5:6808/5389 1420 : cluster [WRN] slow request 60.853522 seconds old, received at 2020-01-28 21:21:08.153484: osd_op(client.55166360.0:5495520 2.29 2:94224c08:::rbd_data.bda8574b0dc51.000000000001147f:head [write 2613248~8192] snapc 13b=[13b] ondisk+write+known_if_redirected e9176) currently sub_op_commit_rec from 21
2020-01-28 21:22:24.659675 osd.34 osd.34 192.168.66.3:6800/4537 1594 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 89.548109 secs
2020-01-28 21:22:24.659683 osd.34 osd.34 192.168.66.3:6800/4537 1595 : cluster [WRN] slow request 60.437457 seconds old, received at 2020-01-28 21:21:24.222180: osd_op(client.47833248.0:27073759 2.2ea 2:57711360:::rbd_data.1703066b8b4567.0000000000017800:head [write 61440~4096] snapc 133=[133] ondisk+write+known_if_redirected e9176) currently op_applied
2020-01-28 21:21:35.662993 osd.26 osd.26 192.168.66.6:6820/5451 1351 : cluster [WRN] slow request 30.154665 seconds old, received at 2020-01-28 21:21:05.508263: osd_op(client.55166360.0:5495518 2.315 2:a8d8f14d:::rbd_data.bda8574b0dc51.0000000000003f03:head [write 2568192~4096] snapc 13b=[13b] ondisk+write+known_if_redirected e9176) currently sub_op_commit_rec from 30
2020-01-28 21:21:55.505470 osd.26 osd.26 192.168.66.6:6820/5451 1352 : cluster [WRN] 3 slow requests, 2 included below; oldest blocked for > 49.997145 secs
2020-01-28 21:21:55.505477 osd.26 osd.26 192.168.66.6:6820/5451 1353 : cluster [WRN] slow request 30.683436 seconds old, received at 2020-01-28 21:21:24.821971: osd_op(client.55255138.0:33649600 2.2c3 2:c34e2ab6:::rbd_data.1b93874b0dc51.0000000000002c7e:head [write 3264512~4096] snapc 14b=[14b] ondisk+write+known_if_redirected e9176) currently sub_op_commit_rec from 2
2020-01-28 21:21:55.505481 osd.26 osd.26 192.168.66.6:6820/5451 1354 : cluster [WRN] slow request 30.683309 seconds old, received at 2020-01-28 21:21:24.822098: osd_op(client.55255138.0:33649601 2.2c3 2:c34e2ab6:::rbd_data.1b93874b0dc51.0000000000002c7e:head [write 3272704~12288] snapc 14b=[14b] ondisk+write+known_if_redirected e9176) currently sub_op_commit_rec from 2
2020-01-28 21:22:25.268966 osd.26 osd.26 192.168.66.6:6820/5451 1355 : cluster [WRN] 2 slow requests, 2 included below; oldest blocked for > 60.446940 secs
2020-01-28 21:22:25.268975 osd.26 osd.26 192.168.66.6:6820/5451 1356 : cluster [WRN] slow request 60.446940 seconds old, received at 2020-01-28 21:21:24.821971: osd_op(client.55255138.0:33649600 2.2c3 2:c34e2ab6:::rbd_data.1b93874b0dc51.0000000000002c7e:head [write 3264512~4096] snapc 14b=[14b] ondisk+write+known_if_redirected e9176) currently sub_op_commit_rec from 2
2020-01-28 21:22:25.268979 osd.26 osd.26 192.168.66.6:6820/5451 1357 : cluster [WRN] slow request 60.446813 seconds old, received at 2020-01-28 21:21:24.822098: osd_op(client.55255138.0:33649601 2.2c3 2:c34e2ab6:::rbd_data.1b93874b0dc51.0000000000002c7e:head [write 3272704~12288] snapc 14b=[14b] ondisk+write+known_if_redirected e9176) currently sub_op_commit_rec from 2
2020-01-28 21:23:03.815322 mon.pve2 mon.0 192.168.66.2:6789/0 267 : cluster [WRN] Health check failed: 3 slow requests are blocked > 32 sec. Implicated osds 16,26 (REQUEST_SLOW)
2020-01-28 21:23:13.443685 mon.pve2 mon.0 192.168.66.2:6789/0 271 : cluster [WRN] Health check update: 2 slow requests are blocked > 32 sec. Implicated osds 26 (REQUEST_SLOW)
2020-01-28 21:23:31.294486 mon.pve2 mon.0 192.168.66.2:6789/0 277 : cluster [INF] Health check cleared: REQUEST_SLOW (was: 2 slow requests are blocked > 32 sec. Implicated osds 26)
2020-01-28 21:23:31.294541 mon.pve2 mon.0 192.168.66.2:6789/0 278 : cluster [INF] Cluster is now healthy
 
First I added the Ceph macro in the firewall; Ceph did not respond.
Then I set ACCEPT on any INPUT; Ceph still did not respond.
When I turn the firewall off, Ceph is OK.
Yes. Since connection tracking is active, existing connections are not affected by a rule change.
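If the conntrack tool (from the conntrack package) is installed, the tracked monitor connections can be inspected directly while the firewall is on, for example:
Code:
# list tracked connections that involve the monitor port
conntrack -L 2>/dev/null | grep 6789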

When I turn the firewall on again, Ceph stops responding again.
Best, deactivate the firewall and restart all Ceph services. Ceph should then work as expected; if not, there is some other issue that is not related to the firewall.
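Restarting everything Ceph on one node can be done with the systemd target shipped with Ceph:
Code:
# restarts all local ceph daemons (mon, mgr, osd)
systemctl restart ceph.target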
 
I deactivated the firewall and did not need to restart any services; Ceph works as expected.
But when I activate the firewall, Ceph again gives no response.
What firewall rule did I miss?
 
What firewall rule did I miss?
Remove the bridge and use the bond directly. In any case, if you don't intend to run any VM/CT on that interface, you can remove the bridge. This will remove complexity and may improve latency.
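A sketch of what that could look like in /etc/network/interfaces, assuming the Ceph network currently sits on vmbr2 on top of a bond (bond name, slave NICs and bond mode are placeholders, adjust to the real hardware):
Code:
# before: Ceph address on a bridge over the bond
#auto vmbr2
#iface vmbr2 inet static
#        address 192.168.66.2
#        netmask 255.255.255.0
#        bridge_ports bond0
#        bridge_stp off
#        bridge_fd 0

# after: address on the bond directly, no bridge
auto bond0
iface bond0 inet static
        address 192.168.66.2
        netmask 255.255.255.0
        bond-slaves eno3 eno4
        bond-mode 802.3ad
        bond-miimon 100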
 
