How to Block VM to CEPH Network? PVE 5.4-5

HE_Cole

Hello Everyone!

I feel dumb here, but I just can't make this DENY rule work.

I have a 10 Node cluster.

vmbr0 is my WAN

vmbr1 and vmbr2 are my Ceph bond/LACP, 2x 10 Gbit SFP per node.

My vmbr0 is an entire /24 public IP range, no NAT.

vmbr0 is the only interface assigned to each VM in the cluster.

I want to block all traffic from any address in my public /24 range from reaching my Ceph networks, 10.10.11.0/24 and 10.10.12.0/24.

Right now any VM in the cluster can ping my Ceph networks 10.10.11.0/24 and 10.10.12.0/24.

I have the cluster-wide firewall running and I have existing rules in place.

But if I make a new rule at the TOP for

IN DENY "my public /24" to 10.10.11.0/24 ICMP, it does not work.

I simply want to block my public range on vmbr0 from being able to access my Ceph network.
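For context, a rule of this shape expressed in /etc/pve/firewall/cluster.fw syntax would look roughly like the sketch below; 23.136.0.0/24 stands in for the public /24 (inferred from the vmbr0 addresses further down), and note that the PVE firewall's action keywords are ACCEPT/DROP/REJECT rather than DENY.

Code:
[RULES]

IN DROP -source 23.136.0.0/24 -dest 10.10.11.0/24 -p icmp
IN DROP -source 23.136.0.0/24 -dest 10.10.12.0/24 -p icmp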

I would just use an ACL on my switch, but these are SVIs, a.k.a. layer 2 VLANs, so an ACL won't filter there and I need to do it at the VM host level.
 
Can you please post your network and Ceph configuration?
 
Sure, here they are:

CEPH

Code:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.10.13.0/24
fsid = fa2b7a5f-5aa0-4a5d-b929-3cea28f78613
keyring = /etc/pve/priv/$cluster.$name.keyring
mon allow pool delete = true
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
public network = 10.10.13.0/24

[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.he-s06-r01-pve02]
host = he-s06-r01-pve02
mds standby for name = pve

[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.he-s06-r01-pve02]
host = he-s06-r01-pve02
mon addr = 10.10.13.10:6789

[mon.he-s08-r01-pve02]
host = he-s08-r01-pve02
mon addr = 10.10.13.30:6789

[mon.he-s07-r01-pve02]
host = he-s07-r01-pve02
mon addr = 10.10.13.20:6789



NETWORK

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 78:2b:cb:6d:3b:5f brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 78:2b:cb:6d:3b:60 brd ff:ff:ff:ff:ff:ff
    inet 172.16.17.10/25 brd 172.16.17.127 scope global eno2
       valid_lft forever preferred_lft forever
    inet6 fe80::7a2b:cbff:fe6d:3b60/64 scope link
       valid_lft forever preferred_lft forever
4: enp5s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:10:18:aa:62:64 brd ff:ff:ff:ff:ff:ff
5: enp5s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:10:18:aa:62:66 brd ff:ff:ff:ff:ff:ff
6: enp3s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether b4:99:ba:f8:59:68 brd ff:ff:ff:ff:ff:ff
7: enp3s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether b4:99:ba:f8:59:68 brd ff:ff:ff:ff:ff:ff
8: enp4s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 98:4b:e1:33:60:70 brd ff:ff:ff:ff:ff:ff
9: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 98:4b:e1:33:60:74 brd ff:ff:ff:ff:ff:ff
10: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b4:99:ba:f8:59:68 brd ff:ff:ff:ff:ff:ff
11: bond0.40@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether b4:99:ba:f8:59:68 brd ff:ff:ff:ff:ff:ff
12: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b4:99:ba:f8:59:68 brd ff:ff:ff:ff:ff:ff
    inet 10.10.13.10/24 brd 10.10.13.255 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::b699:baff:fef8:5968/64 scope link
       valid_lft forever preferred_lft forever
13: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 98:4b:e1:33:60:70 brd ff:ff:ff:ff:ff:ff
    inet 23.136.0.9/24 brd 23.136.0.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 2606:8d80::9a4b:e1ff:fe33:6070/64 scope global dynamic mngtmpaddr
       valid_lft 2591938sec preferred_lft 604738sec
    inet6 2606:8d80:a001:0:9a4b:e1ff:fe33:6070/64 scope global dynamic mngtmpaddr
       valid_lft 2591944sec preferred_lft 604744sec
    inet6 2606:8d80:b001:0:9a4b:e1ff:fe33:6070/64 scope global dynamic mngtmpaddr
       valid_lft 2591944sec preferred_lft 604744sec
    inet6 fe80::9a4b:e1ff:fe33:6070/64 scope link
       valid_lft forever preferred_lft forever
14: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b4:99:ba:f8:59:68 brd ff:ff:ff:ff:ff:ff
    inet 10.10.14.10/24 brd 10.10.14.255 scope global vmbr2
       valid_lft forever preferred_lft forever
    inet6 fe80::b699:baff:fef8:5968/64 scope link
       valid_lft forever preferred_lft forever
15: bond0.50@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether b4:99:ba:f8:59:68 brd ff:ff:ff:ff:ff:ff
16: tap104i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr104i0 state UNKNOWN group default qlen 1000
    link/ether 12:3e:91:80:5a:65 brd ff:ff:ff:ff:ff:ff
17: fwbr104i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1a:cc:7d:ae:45:8a brd ff:ff:ff:ff:ff:ff
18: fwpr104p0@fwln104i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 8a:d8:47:48:57:ec brd ff:ff:ff:ff:ff:ff
19: fwln104i0@fwpr104p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr104i0 state UP group default qlen 1000
    link/ether 1a:cc:7d:ae:45:8a brd ff:ff:ff:ff:ff:ff
20: tap106i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr106i0 state UNKNOWN group default qlen 1000
    link/ether c2:2a:c6:77:2b:d3 brd ff:ff:ff:ff:ff:ff
21: fwbr106i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2a:40:ea:db:4f:1f brd ff:ff:ff:ff:ff:ff
22: fwpr106p0@fwln106i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 3a:ee:87:0f:e4:f4 brd ff:ff:ff:ff:ff:ff
23: fwln106i0@fwpr106p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr106i0 state UP group default qlen 1000
    link/ether 2a:40:ea:db:4f:1f brd ff:ff:ff:ff:ff:ff
24: tap105i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr105i0 state UNKNOWN group default qlen 1000
    link/ether 0a:b2:a7:89:6d:8d brd ff:ff:ff:ff:ff:ff
25: fwbr105i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:92:aa:b1:66:be brd ff:ff:ff:ff:ff:ff
26: fwpr105p0@fwln105i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 46:73:19:ec:59:fd brd ff:ff:ff:ff:ff:ff
27: fwln105i0@fwpr105p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr105i0 state UP group default qlen 1000
    link/ether c2:92:aa:b1:66:be brd ff:ff:ff:ff:



My goal is to block my VMs on vmbr0 (the public WAN) from accessing the Ceph networks at the host level.

I am unable to do it from my switch, as my VLANs are layer 2 and can't use an ACL, so I am confident it can be done with iptables.
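For illustration, a host-level iptables version of this would look roughly like the following (a sketch only; it assumes the public range is 23.136.0.0/24, and bridged VM traffic only traverses these rules when net.bridge.bridge-nf-call-iptables is enabled, which the PVE firewall normally turns on):

Code:
# Drop bridged/forwarded traffic from the public VM range to both Ceph subnets
iptables -I FORWARD -s 23.136.0.0/24 -d 10.10.11.0/24 -j DROP
iptables -I FORWARD -s 23.136.0.0/24 -d 10.10.12.0/24 -j DROP

# Bridged traffic only hits the FORWARD chain if this sysctl reports 1
sysctl net.bridge.bridge-nf-call-iptables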
 
[global]
fsid = fa2b7a5f-5aa0-4a5d-b929-3cea28f78613
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.10.13.0/24
keyring = /etc/pve/priv/$cluster.$name.keyring
mon allow pool delete = true
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
public network = 10.10.13.0/24

[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.he-s06-r01-pve02]
host = he-s06-r01-pve02
mds standby for name = pve

[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.he-s06-r01-pve02]
host = he-s06-r01-pve02
mon addr = 10.10.13.10:6789

[mon.he-s08-r01-pve02]
host = he-s08-r01-pve02
mon addr = 10.10.13.30:6789

[mon.he-s07-r01-pve02]
host = he-s07-r01-pve02
mon addr = 10.10.13.20:6789
As your config got mangled on posting, I am re-posting it. BTW, please use more than one MDS; if this MDS can't be recovered, your data on the CephFS will be gone.

I want to block all traffic from any address in my /24 public range from going to my CEPH networks 10.10.11.0/24 and 10.10.12.0/24 respectively.
From the config above, your Ceph traffic is on network 10.10.13.0/24 (vmbr1) and I don't see any network with 10.10.11.0/24 or 10.10.12.0/24.

Besides, I meant the network configuration (/etc/network/interfaces) in my previous question. ;)
 

Apologies, I pasted the config from the wrong cluster. We have 6 different PVE clusters.

Here is the correct one.

Code:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.10.11.0/24
fsid = 7b2f35a9-cd80-432b-9ef1-7b35259bc707
keyring = /etc/pve/priv/$cluster.$name.keyring
mon allow pool delete = true
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
public network = 10.10.11.0/24

[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring

[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mds.he-s01-r01-pve01]
host = he-s01-r01-pve01
mds standby for name = pve

[mds.he-s05-r01-pve01]
host = he-s05-r01-pve01
mds standby for name = pve

[mon.he-s03-r01-pve01]
host = he-s03-r01-pve01
mon addr = 10.10.11.5:6789

[mon.he-s05-r01-pve01]
host = he-s05-r01-pve01
mon addr = 10.10.11.7:6789

[mon.he-s02-r01-pve01]
host = he-s02-r01-pve01
mon addr = 10.10.11.4:6789

[mon.he-s01-r01-pve01]
host = he-s01-r01-pve01
mon addr = 10.10.11.3:6789

[mon.he-s04-r01-pve01]
host = he-s04-r01-pve01
mon addr = 10.10.11.6:6789

And the output of ip a:

Code:
root@he-s01-r01-pve01:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether a4:ba:db:34:f0:b5 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a4:ba:db:34:f0:b6 brd ff:ff:ff:ff:ff:ff
    inet 172.16.16.10/25 brd 172.16.16.127 scope global eno2
       valid_lft forever preferred_lft forever
    inet6 fe80::a6ba:dbff:fe34:f0b6/64 scope link
       valid_lft forever preferred_lft forever
4: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:15:17:15:21:46 brd ff:ff:ff:ff:ff:ff
5: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:15:17:15:21:47 brd ff:ff:ff:ff:ff:ff
6: enp3s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 28:92:4a:af:4a:10 brd ff:ff:ff:ff:ff:ff
7: enp3s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 28:92:4a:af:4a:10 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 28:92:4a:af:4a:10 brd ff:ff:ff:ff:ff:ff
9: bond0.20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether 28:92:4a:af:4a:10 brd ff:ff:ff:ff:ff:ff
10: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 28:92:4a:af:4a:10 brd ff:ff:ff:ff:ff:ff
    inet 10.10.11.3/24 brd 10.10.11.255 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::2a92:4aff:feaf:4a10/64 scope link
       valid_lft forever preferred_lft forever
11: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a4:ba:db:34:f0:b5 brd ff:ff:ff:ff:ff:ff
    inet 23.136.0.6/24 brd 23.136.0.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 2606:8d80::a6ba:dbff:fe34:f0b5/64 scope global dynamic mngtmpaddr
       valid_lft 2591968sec preferred_lft 604768sec
    inet6 2606:8d80:a001:0:a6ba:dbff:fe34:f0b5/64 scope global dynamic mngtmpaddr
       valid_lft 2591968sec preferred_lft 604768sec
    inet6 2606:8d80:b001:0:a6ba:dbff:fe34:f0b5/64 scope global dynamic mngtmpaddr
       valid_lft 2591968sec preferred_lft 604768sec
    inet6 fe80::a6ba:dbff:fe34:f0b5/64 scope link
       valid_lft forever preferred_lft forever
12: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 28:92:4a:af:4a:10 brd ff:ff:ff:ff:ff:ff
    inet 10.10.12.3/24 brd 10.10.12.255 scope global vmbr2
       valid_lft forever preferred_lft forever
    inet6 fe80::2a92:4aff:feaf:4a10/64 scope link
       valid_lft forever preferred_lft forever
13: bond0.30@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 28:92:4a:af:4a:10 brd ff:ff:ff:ff:ff:ff
root@he-s01-r01-pve01:~#

Thanks.


Again, the goal is to block the public network on vmbr0 from accessing 10.10.11.0/24 and 10.10.12.0/24 at the host level.
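Since /etc/network/interfaces was asked for rather than ip a, the relevant stanzas on this node would look roughly like the reconstruction below, derived from the ip a output above (a sketch only; the gateway address and bond options are assumptions, not taken from the actual file):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp3s0f0 enp3s0f1
        bond-miimon 100
        bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address 23.136.0.6
        netmask 255.255.255.0
        gateway 23.136.0.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.10.11.3
        netmask 255.255.255.0
        bridge_ports bond0.20
        bridge_stp off
        bridge_fd 0

auto vmbr2
iface vmbr2 inet static
        address 10.10.12.3
        netmask 255.255.255.0
        bridge_ports bond0.30
        bridge_stp off
        bridge_fd 0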
 
Again, the goal is to block the public network on vmbr0 from accessing 10.10.11.0/24 and 10.10.12.0/24 at the host level.
What test case are you using to verify this?

And I would still like a formatted output; it is very hard to read the config as one long line.
 
Here is the main ceph config.

Code:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.10.11.0/24
public network = 10.10.11.0/24

My goal, again, is to have an iptables rule at the host level to block VMs from accessing the 10.10.11.0/24 network.

Because my core is mainly L2, an ACL can't filter at the switch level, so even though the VMs and Ceph are on separate VLANs and separate interfaces, they can communicate.

If I make a rule on the hosts to block vmbr0 (the only VM interface) and DENY access to 10.10.11.0/24, it does not work.
 
If I make a rule on the hosts to block vmbr0 (the only VM interface) and DENY access to 10.10.11.0/24, it does not work.
Yet again, how do you test if the rule works or not?

And what is your routing table showing?
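For anyone following along, the test case and routing information being asked for could be gathered roughly like this (a sketch; commands as commonly used on PVE 5.x):

Code:
# On the PVE node: confirm the firewall is running and inspect the generated rules
pve-firewall status
iptables-save | grep 10.10.11

# Host routing table
ip route

# Inside a VM on vmbr0: test whether the Ceph network is still reachable
ping -c 3 10.10.11.3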
 
